Alert fatigue is costing you: Why your SOC misses 1% of real threats

3 February 2026 at 15:04

Introducing the 2026 Intezer AI SOC Report for CISOs

For years, security leaders have lived with an uncomfortable truth: it has simply been impossible to investigate every alert. As alert volumes exploded and teams failed to scale, SOCs, whether in-house or outsourced, normalized “acceptable risk” by deprioritizing low-severity and informational alerts.

Our latest research shows that this approach is no longer defensible.

Intezer has just released the 2026 AI SOC Report for CISOs, based on forensic analysis of more than 25 million security alerts across live enterprise environments. The findings reveal a critical disconnect between how security teams prioritize alerts and where real threats actually originate, and the cost of that gap is far higher than most organizations realize.

Why “acceptable risk” is no longer acceptable 

Across endpoint, cloud, identity, network, and phishing telemetry, Intezer found that nearly 1% of confirmed incidents originated from alerts initially labeled as low-severity or informational. On endpoints, that figure climbed to nearly 2%.

At enterprise scale, that percentage is not noise.

For a typical organization generating roughly 450,000 alerts per year, this translates to ~50 real threats annually, about one per week, never investigated by a SOC or MDR team. These are not theoretical risks. They are real compromises hiding in plain sight, dismissed not because they were benign, but because teams lacked the capacity to look.
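The arithmetic behind that estimate can be checked with a back-of-envelope sketch. The deprioritized-alert share and incident rate below are illustrative assumptions chosen to reproduce the ~50-threat figure, not values taken from the report:

```python
alerts_per_year = 450_000   # typical enterprise volume cited above
low_sev_share = 0.55        # assumed: share of alerts deprioritized, never reviewed
incident_rate = 0.0002      # assumed: real incidents among those alerts

missed_per_year = alerts_per_year * low_sev_share * incident_rate
print(round(missed_per_year))          # ~50 real threats never investigated
print(round(missed_per_year / 52, 1))  # roughly one per week
```

Whatever the exact split in a given environment, any nonzero incident rate among uninvestigated alerts compounds into a steady stream of missed compromises at this volume.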

What the data revealed across the attack surface

Because Intezer AI SOC investigates 100% of alerts using forensic-grade analysis, the report exposes how attackers actually operate once you remove triage bias from the equation.

Endpoint security is more fragile than reported

More than half of endpoint alerts were not automatically mitigated by endpoint protection tools. Of those, nearly 9% were confirmed malicious. Even more concerning, 1.6% of endpoints undergoing live forensic scans were still actively compromised despite being reported as “mitigated” by EDR tools.

See the full endpoint threat data → Download the 2026 AI SOC Report

Low-severity does not mean low-risk

Within endpoint alerts alone, 1.9% of low-severity and informational alerts were real incidents: precisely the alerts most SOCs never review.

Attackers favor stealth over noise

Cloud telemetry was dominated by defense evasion and persistence techniques, reflecting a shift toward long-term access, token abuse, and misuse of legitimate services rather than overt exploitation.

Phishing has moved into trusted platforms and browsers

Fewer than 6% of malicious phishing emails contained attachments. Most relied on links, language, and abuse of legitimate services, such as cloud file sharing, code sandboxes, and CAPTCHA mechanisms, where traditional controls have limited visibility.

Cloud misconfigurations persist as silent risk multipliers

Most cloud posture findings stemmed from legacy or default configurations, especially in Amazon S3, including missing encryption, weak access controls, and lack of logging—issues often classified as “low severity,” yet repeatedly exploited once attackers gain a foothold.

To read the full report and all the findings, download the CISO’s guide to AI SOC 2026 here.

Why traditional SOCs fail: capacity, fragmentation, and judging alerts by their severity

Modern SOC failures are rarely the result of a single broken tool or negligent team. They are the outcome of structural tradeoffs that every traditional SOC—internal or MDR—has been forced to make.

Capacity is the first constraint.
Human analysts do not scale linearly with alert volume. As telemetry expands across endpoint, cloud, identity, network, and SaaS, SOCs hit a hard ceiling. The only way to cope is aggressive triage: close most alerts automatically, investigate only what looks “important,” and hope severity labels align with reality. The 2026 AI SOC Report shows that this assumption is false at scale.

Tool fragmentation compounds the problem.
Most SOC stacks are collections of siloed detection tools (EDR, SIEM, identity, cloud posture, email), each optimized for a narrow signal. Severity is assigned locally, without cross-surface context or forensic validation. As a result, alerts are scored based on abstract rules, not evidence of compromise. When SOCs trust these labels blindly, they inherit the tools’ blind spots.

Process tradeoffs lock risk in place.
Once triage rules are defined, they become institutionalized. Low-severity alerts are ignored by design. MDR providers codify this into SLAs. Internal SOCs bake it into runbooks. Crucially, there is no closed-loop feedback: missed threats do not automatically improve detections, because they were never investigated in the first place.

The outcome is not an occasional failure. It is systematic, repeatable risk, embedded directly into how SOCs operate.

Real-world examples of missed threats hiding in plain sight

The data in the 2026 AI SOC Report makes clear that missed threats are not exotic edge cases. They are ordinary attacks progressing quietly through environments because no one looked.

Endpoints marked “mitigated” but still compromised
In over 1.6% of live forensic endpoint scans, Intezer found active malicious code running in memory even though the EDR had already reported the threat as resolved. These cases included stealers, RATs, and post-exploitation frameworks, often originating from low-severity alerts that never triggered deeper inspection. Without memory-level forensics, these compromises would have remained invisible.

Phishing hosted on trusted platforms
Attackers increasingly host phishing pages on legitimate developer platforms like Vercel and CodePen, or abuse trusted cloud services such as OneDrive and PayPal. The parent domains appear reputable, so alerts are downgraded or ignored. Yet behind them are live credential-harvesting pages that bypass email gateways and browser-based defenses alike.

Cloud misconfigurations as delayed breach accelerators
Many cloud posture findings, such as unencrypted S3 buckets, missing access logs, and permissive cross-account policies, rarely trigger action. But once an attacker gains any foothold, these long-standing misconfigurations dramatically accelerate lateral movement, persistence, and data exposure.

In every case, the failure was not detection. The signal existed. The failure was investigation.

How attackers deliberately exploit SOC blind spots

Attackers understand SOC economics better than most defenders.

They know which alerts generate fatigue.
They know which detections are noisy.
They know which categories are deprioritized by default.

As a result, modern attackers design their campaigns to blend into the backlog, not trigger alarms.

Stealth over speed
Cloud intrusions favor defense evasion, persistence, and token abuse over loud exploitation. These behaviors generate alerts, but rarely high-severity ones. The report shows cloud telemetry dominated by exactly these tactics, indicating attackers are optimizing for long-term access rather than immediate impact.

Living off trusted infrastructure
Phishing campaigns increasingly abuse legitimate brands, file-sharing services, CAPTCHA frameworks, and developer platforms. These environments inherit trust by default, allowing attackers to operate under severity thresholds that SOCs routinely ignore.

Multi-stage loaders and memory-only execution
On endpoints, attackers rely on layered loaders, in-memory payloads, and obfuscation techniques that evade static detections. Initial alerts may look benign or incomplete. Without forensic follow-through, SOCs miss the actual compromise entirely.

Attackers are not merely evading detection systems; they are exploiting SOC decision-making models.

What this means for your SOC operations

For CISOs and SOC leaders, the implication is stark:
Risk is no longer defined by what you detect, but by what you choose not to investigate.

If your SOC:

  • Ignores low-severity alerts by default
  • Relies on severity labels without forensic validation
  • Limits investigations based on human capacity
  • Operates without a feedback loop between outcomes and detections

Then missed threats are not anomalies; they are guaranteed.

The organizations that will reduce risk in 2026 are not adding more dashboards or rewriting triage rules. They are adopting operating models where investigation is no longer a scarce resource.

This is why AI-driven, forensic-grade SOC platforms fundamentally change the equation. When every alert is investigated:

  • Severity becomes evidence-based, not assumed
  • Detection quality improves through real-world validation
  • Attackers lose the ability to hide in “acceptable risk”
  • SOC teams regain control without scaling headcount

This is the shift behind the Intezer AI SOC model and why the concept of acceptable risk must be redefined for the modern threat landscape.

This all changes when you can investigate everything

The data in the 2026 AI SOC Report points to a different reality, one where AI-driven forensic analysis removes investigation capacity as a constraint.

When every alert is investigated:

  • “Low severity” stops being a proxy for “safe”
  • Detection quality improves through real-world validation
  • Missed threats drop from dozens per year to near zero
  • Escalations fall below 2%, without sacrificing coverage
  • Risk tolerance is defined by evidence, not exhaustion

This is the operating model behind Intezer AI SOC, powered by ForensicAI™, and it is why the definition of acceptable risk must be reset.

Download the report and join the discussion

The 2026 AI SOC Report for CISOs is grounded in:

  • 25 million alerts analyzed
  • 10 million monitored endpoints and identities
  • 82,000 forensic endpoint investigations, including live memory scans
  • Telemetry from 7 million IP addresses, 3 million domains and URLs, and over 550,000 phishing emails

All data was aggregated and anonymized across Intezer’s global enterprise customer base.

👉 Download the full report to explore the findings in detail, and
👉 Join Intezer’s research team on Wednesday, February 4th at 12 p.m. ET for a live webinar breaking down what this data means for SOC leaders and CISOs.

Because in 2026, the biggest risk is no longer what you detect; it’s what you choose not to investigate.

The post Alert fatigue is costing you: Why your SOC misses 1% of real threats appeared first on Intezer.

Building effective AI for the SOC: How Intezer Forensic AI SOC follows Anthropic’s best practices

14 January 2026 at 18:58

One of the most influential publications on real-world AI system design is Anthropic’s guide, Building Effective Agents. Its core message is simple:
Effective AI requires structure first, adaptability second.

Anthropic emphasizes that AI agents work best when:

  1. A deterministic workflow does all the structured work up front
  2. The agent only activates when uncertainty remains
  3. The agent begins with full context, not an empty slate
  4. Tool usage is controlled and evidence-driven
  5. Human-in-the-loop remains central for oversight and trust

These principles ensure accuracy, avoid hallucinations, and keep investigations reproducible, all critical requirements for cybersecurity.

Intezer Forensic AI SOC is built on exactly this philosophy. Our platform uses a dual-mode design with Intezer AI Workflow and AI Agent, completely aligning with Anthropic’s best practices to deliver fast, scalable and highly accurate investigations across a broad range of alerts, all while keeping analysts in the loop.

Here is how Intezer implements Anthropic’s best practices for agents.

Structured first: Intezer AI Workflow handles the majority of alerts

Anthropic advises that AI systems should begin with deterministic workflows instead of free-form reasoning. In cybersecurity, this is essential for accuracy, auditability, trust and scalability (when handling huge volumes of alerts).

Intezer’s AI Workflow mode is a structured triage process designed by security experts and executed with strict consistency. It applies AI only at key decision points, not as the driver of the entire investigation.

This approach provides:

  • Deterministic, reproducible results
  • High speed due to streamlined, parallelizable steps
  • Lower costs because heavy reasoning is used sparingly
  • No drift or unexpected branching
  • Clear human oversight points

Most alerts, especially well-defined ones, are fully resolved at this stage, giving SOCs broad alert coverage at low cost.
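As a rough illustration, the workflow-first pattern can be sketched in a few lines of Python. Everything here (the step names, the thresholds, the trivial classify function) is a hypothetical stand-in, not Intezer's actual pipeline:

```python
from dataclasses import dataclass, field

@dataclass
class Verdict:
    label: str            # "benign", "malicious", or "inconclusive"
    confidence: float
    evidence: list = field(default_factory=list)

# Hypothetical evidence-gathering steps; each returns a (name, fact) pair.
def check_signature(alert):
    return ("signed_binary", bool(alert.get("signed")))

def check_reputation(alert):
    return ("known_bad_hash", alert.get("sha256") in {"known-bad-example"})

def classify(evidence):
    """Stand-in for the AI decision point: a trivial rule over evidence."""
    facts = dict(evidence)
    if facts["known_bad_hash"]:
        return "malicious", 0.95
    if facts["signed_binary"]:
        return "benign", 0.90
    return "inconclusive", 0.40

def ai_workflow(alert):
    """Deterministic triage: fixed, expert-defined steps run in order,
    with AI consulted only at the final decision point."""
    evidence = [step(alert) for step in (check_signature, check_reputation)]
    label, confidence = classify(evidence)
    return Verdict(label, confidence, evidence)
```

Because the steps and their order are fixed, the same alert always produces the same verdict and the same evidence trail, which is what makes results reproducible and auditable.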

Adaptive only when needed: Intezer AI Agent extends the investigation

Anthropic states that agents should activate only when the structured workflow reaches uncertainty, and only after they inherit the full context. Intezer follows this exactly.

AI Agent mode activates only when the Workflow cannot reach a high-confidence verdict.

At that point, the agent:

  • Starts with all evidence collected so far
  • Avoids premature assumptions
  • Uses tools deliberately and contextually
  • Expands the investigation where human analysts would
  • Surfaces deeper behavioral patterns or cross-asset correlations

This ensures the agent is guided, not free-floating, and its decisions remain grounded in evidence, not guesswork.
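A minimal sketch of this hand-off, assuming a confidence threshold and stand-in functions (none of which are Intezer's actual API):

```python
CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for a "high-confidence" verdict

def run_workflow(alert):
    """Stand-in for AI Workflow: deterministic checks that return a
    verdict, a confidence score, and the evidence collected so far."""
    evidence = {"hash_reputation": alert.get("reputation", "unknown"),
                "parent_process": alert.get("parent", "unknown")}
    if evidence["hash_reputation"] == "known_bad":
        return "malicious", 0.95, evidence
    return "inconclusive", 0.40, evidence

def run_agent(alert, inherited):
    """Stand-in for AI Agent: starts from the inherited evidence set
    (never an empty slate) and records the extra tools it used."""
    evidence = dict(inherited, tools_used=["siem_query", "memory_scan"])
    return "benign", 0.90, evidence

def triage(alert):
    verdict, confidence, evidence = run_workflow(alert)
    if confidence >= CONFIDENCE_THRESHOLD:
        return verdict, confidence, evidence   # most alerts resolve here
    # Escalate with full context: the agent inherits all prior evidence.
    return run_agent(alert, evidence)
```

The key design point is the second argument to the agent: it always receives the workflow's full evidence set, so its reasoning extends the structured investigation rather than restarting it.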

Tools the AI Agent can leverage once activated

  • Dynamic SIEM queries
  • EDR/XDR telemetry lookups
  • Identity provider (IDP) investigation
  • Behavioral analysis of processes and command lines
  • User activity mapping
  • Process ancestry and parent-child correlation
  • Intezer’s historical alert database
  • Code DNA similarity and malware lineage tracking
  • Additional host, memory, or file-based forensics

The result is deeper investigation where it matters, without unnecessary cost.

Human-in-the-loop by design

Intezer keeps human analysts at the center so they can review and override conclusions, and trace every decision made by Intezer. Of course, all evidence and reasoning is grounded in forensic data and is fully transparent and explainable for beginners and advanced analysts alike.

This aligns with Anthropic’s principle that humans remain final decision-makers, especially in high-stakes domains like cybersecurity.

How this architecture improves SOC performance

Intezer’s adherence to Anthropic’s best practices produces measurable outcomes across the three most important SOC metrics: accuracy, coverage, and speed, while also reducing cost.

Accuracy

Intezer combines deterministic forensics with adaptive AI to deliver best-in-class verdict quality.

  • The structured workflow prevents hallucinations
  • The AI Agent only activates with strong guardrails
  • Context inheritance ensures consistent reasoning
  • Analysts always have visibility and control

This hybrid approach dramatically reduces false positives and prevents premature conclusions.

Triage of all alerts, including low-severity (where threats often hide)

Because AI Workflows handle the bulk of alerts inexpensively and AI Agents only run when needed, heavy and expensive reasoning calls are minimized.

This frees SOCs from cherry-picking which alerts to ingest, allowing them to triage and investigate them all.

This is crucial for:

  • High-volume enterprise environments
  • MSSPs with strict SLAs
  • Cloud-scale detection pipelines
  • 24/7 monitoring teams

You get broad alert coverage without inflating compute costs.
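The cost argument above can be sketched with rough numbers. Every figure below (per-investigation costs, escalation rate, alert volume) is invented for illustration, not an Intezer benchmark:

```python
alerts_per_year = 450_000   # assumed enterprise alert volume
agent_rate = 0.05           # assumed: ~5% of alerts escalate to the agent
workflow_cost = 0.001       # assumed cost of one structured investigation
agent_cost = 0.05           # assumed cost of one agentic investigation

cost_per_alert = workflow_cost + agent_rate * agent_cost
yearly_cost = cost_per_alert * alerts_per_year
print(f"${cost_per_alert:.4f} per alert, ${yearly_cost:,.0f} per year")
```

Under these assumptions, running the agent on every alert would cost more than an order of magnitude more than the blended model; escalating only the uncertain fraction is what makes full coverage affordable.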

Speed: Structured steps + adaptive depth

  • Workflow mode resolves most alerts within seconds
  • Agents accelerate investigations that normally take analysts hours
  • No bottlenecks, no backlog, no manual evidence gathering

The result is a SOC where every alert is investigated quickly, consistently, and with forensic depth.

How Intezer’s design reflects Anthropic’s guidance

  • Start with deterministic workflows: AI Workflow handles structured triage with predefined expert steps
  • Activate agents only when needed: AI Agent triggers only when confidence is insufficient
  • Give agents full context: the Agent inherits the entire Workflow evidence set
  • Control tool usage: the Agent selects tools based on evidence, not speculation
  • Maintain human-in-the-loop: analysts can verify, guide, and override conclusions
  • Prioritize safety and reproducibility: every action is logged, justified, and traceable

Conclusion: Anthropic’s Agent principles in a real SOC

Anthropic’s framework for building effective agents is now influencing industries far beyond general AI research. Intezer Forensic AI SOC might be one of the strongest real-world implementations of these practices in cybersecurity.

By combining:

  • Deterministic workflows for reliable baseline investigations
  • Adaptive agents for deeper reasoning when needed
  • Human oversight for trust and accountability
  • Cost efficiency enabling full-pipeline alert coverage

Intezer is able to deliver fast, accurate, and scalable triage that transforms SOC operations.

Learn more about how you can transform your SOC today.


Properly framing the AI SOC conversation 

2 November 2025 at 21:45

Gartner’s recent Innovation Insight: AI SOC Agents report is an encouraging signal that the concept of an “AI-powered SOC” has reached mainstream awareness. The report recognizes the potential of AI technologies to transform how security operations centers function, especially in augmenting analysts through automation and intelligent workflows.

Yet, while Gartner’s analysis succeeds in capturing the momentum of this space, it falls short in clarifying how and where AI actually fits within the security operations stack. By treating “AI SOC” as a monolithic, undifferentiated category, the report overlooks the crucial distinctions between detection, triage and response, each of which requires a very different kind of AI capability and delivers very different value.

A closer look at Gartner’s analysis 

Gartner’s report provides a valuable overview of how AI SOC can assist with detection, alert investigation, and even response recommendation. We wholeheartedly agree with Gartner’s advice that CISOs should evaluate which security activities are “volumetric, troublesome, or low-performing, and which would benefit the most from augmentation with the application of AI”. However, presenting all of the AI SOC functions (and vendors) as part of a single undifferentiated security ecosystem can be confusing.

This broad framing misses the fact that an AI model designed to improve SIEM detection logic operates on entirely different data, architecture, and feedback loops than one built to support analyst decision-making or response automation. The result is a flattening of a nuanced market into one monolithic category, useful for taxonomy, but not for decision-making.

For CISOs, this lack of segmentation makes it hard to answer the key strategic question: Where should we apply AI first to get tangible operational value?

By contrast, our view is that organizations should start by identifying which part of their operations needs augmentation most, then evaluate AI solutions purpose-built for that domain.

A clearer way to frame the AI SOC market

To understand where AI truly fits in and how it can deliver measurable outcomes, it helps to zoom out and look at the broader security operations stack. As we described in a previous blog post, “Making sense of the AI SOC market”, we see three main layers where AI can add value:

Detection (SIEM, XDR)

The first layer converts raw telemetry into actionable alerts. Here, AI can strengthen correlation logic, improve detection models, and reduce false positives. This is largely about data pattern recognition and automation of repetitive analysis.

Triage and Investigation (SOC / MDR)

The middle layer is where human analysts determine which alerts are real incidents worth escalating. This is where AI can truly emulate analyst reasoning, gathering context, cross-referencing intelligence, and presenting likely root causes. Done well, AI here acts as a co-analyst, not a replacement.

Response and Case Management (SOAR)

The final layer coordinates remediation and manages incident workflows. AI can accelerate playbook creation, automate routine case handling, and improve overall response time through dynamic decision logic.

Each layer offers opportunities for AI—but they are fundamentally different problems to solve. When vendors use the term “AI SOC” without specifying which layer they’re addressing, it creates confusion and unrealistic expectations.

A more practical evaluation framework

To move the conversation forward, we recommend a more structured approach to evaluating AI SOC solutions.

Step 1: Identify your target layer

Ask: Which layer of our operations needs the most improvement? Is it detection (SIEM/XDR/Cloud), triage (SOC/MDR), or response (SOAR)?

This helps narrow the field to the right class of solutions rather than chasing the broad “AI SOC” label.

Step 2: Define measurable outcomes

Especially for alert triage and investigation (which is usually handled by an internal SOC or external MDR), establish metrics to compare performance, such as:

  • Reduction in mean time to detect (MTTD)
  • Noise reduction rate
  • Scale of alert coverage
  • Consistency across SOC shifts or analyst tiers
  • Triage accuracy

These metrics allow organizations to compare vendors on tangible outcomes, not vague AI promises.
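As a sketch, two of these metrics can be computed from simple counts gathered during a proof of concept. The function names and figures below are illustrative, not standard formulas from any vendor:

```python
def noise_reduction_rate(total_alerts, escalated_to_humans):
    """Share of alerts closed without human escalation."""
    return 1 - escalated_to_humans / total_alerts

def triage_accuracy(ai_verdicts, analyst_verdicts):
    """Fraction of AI verdicts matching analyst-confirmed outcomes."""
    matches = sum(a == b for a, b in zip(ai_verdicts, analyst_verdicts))
    return matches / len(ai_verdicts)

# Hypothetical PoC numbers: 10,000 alerts, 180 escalated to analysts.
print(round(noise_reduction_rate(10_000, 180), 3))   # 0.982
print(round(triage_accuracy(["tp", "fp", "tp"],
                            ["tp", "fp", "fp"]), 2)) # 0.67
```

The value of framing metrics this concretely is that two vendors can be run against the same alert sample and compared on identical denominators, rather than on self-reported claims.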

Step 3: Evaluate transparency and integration

An effective AI SOC solution should clearly explain its reasoning, integrate easily with your existing tools, and allow human oversight. The goal is augmentation, not opacity.

Read more about why the “AI SOC agent” narrative misses the point.

The way forward

Gartner deserves credit for bringing visibility to an emerging market, but their analysis underscores how early and fluid this space still is. The future of the AI SOC isn’t one product category. It’s a set of AI capabilities applied intelligently across the detection–triage–response continuum.

Organizations that treat AI as a modular capability rather than a monolithic product will see the most success. The key is knowing your operational priorities and matching them to the layer where AI can have the greatest impact.

Conclusion

AI is not a magic “SOC-in-a-box.” It’s a set of technologies that, when properly targeted, can transform specific parts of security operations. Gartner’s latest report captures the enthusiasm, but not yet the structure, of this market.

At Intezer, we believe the path forward starts with clarity: understanding the distinct layers of the SOC, the role AI plays in each, and the outcomes that matter most. Only then can organizations cut through the noise and choose the right AI SOC partner for their needs.

Explore how Intezer delivers complete peace of mind for your security operations! 

