I recently participated in a security leader roundtable hosted by Cybersecurity Tribe. During this session, I got to hear firsthand from security leaders at major organizations including BNP Paribas, the NFL, ION Group, and half a dozen other global enterprises.
Across industries and maturity levels, their priorities were remarkably consistent. When it comes to AI-powered SOC platforms, these are the capabilities every CISO is asking for.
1. Trust and traceability
If there was one theme that came up more than anything else, it was trust. Security leaders don’t want “mysterious” AI. They want transparency.
They repeatedly insisted that AI outputs must be auditable, explainable, and reproducible. They need to show the work, for compliance auditors, for internal governance boards, and increasingly to address emerging legal and regulatory risk.
Black-box decisions won’t cut it. AI must generate evidence, not just conclusions.
2. Reduction of alert fatigue (operational efficiency)
Every leader I spoke with is wrestling with alert overload. Even mature SOCs are drowning in low-value notifications and pseudo-incidents.
A measurable reduction in alerts escalated to humans is now a top KPI for evaluating AI platforms. Leaders want an environment where analysts spend their time on exploitable, high-impact threats, not noise.
If AI can remove repetitive triage work, that’s not just helpful, it’s transformational.
4. Safe automation with human-in-the-loop for high-impact actions
Most leaders are open to selective autonomous remediation, but only in narrow, well-defined, high-confidence scenarios.
For example:
Rapid ransomware containment
Isolation of clearly compromised endpoints
Automatic execution of repeatable hygiene tasks
But for broader or higher-impact actions, CISOs still want human review. The tone was clear: AI should move fast where appropriate, but never at the expense of control.
5. Integration and practical telemetry coverage
Every leader emphasized that an AI platform is only as good as the data it can consume.
The must-have list included:
Cloud telemetry (AWS, Azure, GCP)
Identity providers (Okta, Entra ID, Ping)
EDR/XDR
SIEM logs
Ticketing/ITSM
Custom threat intelligence feeds
They don’t want a magical AI that promises answers without good data. They want a connected system that can see across the entire environment.
6. Executive & board alignment with demonstrable ROI
CISOs aren’t implementing AI in a vacuum. Their boards and executive leadership teams are pressuring them from two very different angles:
Some are mandating AI adoption as a strategic priority.
Others are slowing everything down with extensive governance, risk, and compliance processes.
To navigate this dynamic, CISOs need clear, defensible ROI:
Reduced operating costs
Faster mean-time-to-respond
Fewer escalations
More predictable outcomes
AI without measurable value is no longer acceptable. They need something they can put in front of the board and say, “Here’s the impact.”
7. Accountability and legal clarity
Before enterprises allow AI to autonomously take security actions, CISOs need a fundamental question answered:
“Who is accountable when the AI acts?”
This isn’t just a theoretical concern. It’s a gating requirement for adoption.
Until there is clear guidance on liability, responsibility, and governance, many organizations will keep AI on a tight leash.
Closing thoughts
Across all of these conversations, the message was consistent: AI in the SOC is inevitable, but it must be safe, transparent, integrated, and measurable.
CISOs aren’t looking for science fiction. They’re looking for credible, operational AI that enhances their teams, strengthens their defenses, and aligns with business realities.
An XLL is a native Windows DLL that Excel loads as an add-in, allowing it to execute arbitrary code through exported functions like xlAutoOpen. Threat actors have been abusing Microsoft Excel add-ins in the .XLL format since at least mid-2017; the earliest documented misuse is by the threat group APT10 (aka Stone Panda / Potassium), which injected backdoor payloads via XLLs.
Since 2021, a growing number of commodity malware families and cyber-crime actors have added XLL-based delivery to their arsenals. Notable examples include Agent Tesla and Dridex, both of which researchers have observed being dropped via malicious XLL add-ins with increasing frequency.
Attackers typically embed their malicious code in the standard add-in export functions, such as xlAutoOpen. When a user enables the add-in in Excel, this code executes automatically, dropping or downloading a malicious payload. Some malware families use legitimate frameworks to create XLL (Excel add-in) files; one common example is Excel-DNA, a popular open-source framework.
These frameworks make it easier for attackers to build and load malicious XLLs. In some cases, they also allow threat actors to pack and execute additional payloads directly in memory.
In late October 2025, a 64-bit DLL compiled as an XLL add-in was submitted to VirusTotal from two different countries. The first submission came from Ukraine on October 26, followed by three separate submissions from Russia beginning on October 27. The Russian-submitted samples were named Плановые цели противника.xll (“Enemy’s planned targets”) and Плановые цели противника НЕ ЗАПУСКАТЬ.xll (“Enemy’s planned targets: DO NOT RUN”), where запускать can mean either “launch” or “run” depending on context.
This DLL contains an embedded second-stage payload, a backdoor we named EchoGather. Once launched, the backdoor collects system information, communicates with a hardcoded command-and-control (C2) server, and supports command execution and file transfer operations. While it uses the XLL format for delivery, its execution chain and payload behavior differ from previously documented threats abusing Excel add-ins. Through pivoting on infrastructure and TTPs we were able to link this campaign to Paper Werewolf (aka GOFFEE), a group that has been targeting Russian organizations.
An XLL is an Excel add-in implemented as a DLL that Excel loads directly, usually with the .xll extension. Microsoft explicitly describes XLL files as a DLL-style add-in that extends Excel with custom functions.
When a user double-clicks a file with the .xll extension, Excel launches, loads the DLL, and calls its exported functions: xlAutoOpen (the initialization code) on load, and xlAutoClose when unloading. Malicious XLLs often embed their payload inside xlAutoOpen, or in a secondary loader, so that code runs immediately once Excel imports the DLL.
Excel XLL add-ins and macros differ mainly in how they execute and in the level of control they give an attacker. Macros, whether VBA or legacy XLM, run as scripts inside Excel’s macro engine and are constrained by Microsoft’s security model, which now includes blocking macros from the internet, signature requirements, and multiple user-facing warnings. XLLs, on the other hand, are compiled DLLs that Excel loads directly into its own process using LoadLibrary(), giving them the full power of native code without going through macro security checks. While macros rely on interpreted scripting and COM interactions, XLLs can call any Windows API, inject into other processes, or act as full-featured malware loaders. This makes XLLs far more capable and harder to analyze, and it may explain why some threat actors choose XLL-based delivery over macro-based delivery.
Loader behavior
The DLL exports two functions, xlAutoOpen and xlAutoClose, both of which simply return zero. This behavior differs from that of legitimate XLL add-ins, as well as from previously documented threats abusing the XLL format, such as those described in the most recent CERT-UA publication. Here the malicious logic is not tied to the typical export functions; instead, it is triggered through DllMain. The loader’s main function is called when fdwReason > 2, meaning that dllmain_dispatch was called with DLL_THREAD_DETACH (= 3). Essentially, the main function fires when any thread in Excel that previously called into the XLL (even one of Excel’s own threads) exits.
Triggering the malicious payload during DLL_THREAD_DETACH helps the malware evade detection by delaying execution until a thread exits. This bypasses typical behavior-based detection, which focuses on early-stage activity like PROCESS_ATTACH, making the execution appear benign at first and allowing the second-stage payload to activate covertly after the sandbox times out or AV heuristics complete.
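The trigger condition reduces to a single comparison. The sketch below is an illustration of that logic, not the malware's native code; the reason-code constants mirror the values defined in winnt.h:

```python
# Reason codes that Windows passes to DllMain (values mirror winnt.h).
DLL_PROCESS_DETACH = 0
DLL_PROCESS_ATTACH = 1
DLL_THREAD_ATTACH = 2
DLL_THREAD_DETACH = 3

def loader_should_fire(fdw_reason: int) -> bool:
    # The loader checks fdwReason > 2, which only DLL_THREAD_DETACH (3) satisfies,
    # so the payload runs on thread exit rather than at load time.
    return fdw_reason > 2
```

This is why sandbox traces that focus on DLL_PROCESS_ATTACH activity can miss the payload entirely.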
A call to the function that loads and executes the backdoor.
The embedded file is dropped as mswp.exe in %APPDATA%\Microsoft\Windows, then executed as a hidden process using CreateProcessW with CREATE_NO_WINDOW. Standard output and standard error are captured and redirected via anonymous pipes. If process creation succeeds, the function returns true; otherwise, it cleans up and returns false.
The backdoor: EchoGather
We refer to this backdoor as EchoGather due to its focus on system reconnaissance and repeated beaconing behavior.
The dropped payload is a 64-bit backdoor with hardcoded configuration and C2 address. It collects system information and communicates with the C2 over HTTP(S) using the WinHTTP API.
Main function of EchoGather.
The data collected by EchoGather consists of:
IPv4 addresses
OS type (“Windows”)
Architecture
NetBIOS name
Username
Workstation domain
Process ID
Executable path
Static version string: 1.1.1.1
Next, EchoGather encodes that data using Base64 and sends it to the C2 via an HTTP POST request. The C2 address is constructed from hardcoded strings. In the analyzed sample the C2 address was:
https://fast-eda[.]my:443/dostavka/lavka/kategorii/zakuski/sushi/sety/skidki/regiony/msk/birylievo
This transmission occurs in an infinite loop with randomized sleep intervals between 300 and 360 seconds.
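The beaconing behavior can be sketched as follows. This is an illustrative reconstruction, not the sample's code: the JSON serialization and field names are assumptions, and the real backdoor sends over WinHTTP; only the Base64 encoding, the static version string, and the 300-360 second sleep range come from the analysis.

```python
import base64
import json
import random

def build_beacon(info: dict) -> bytes:
    """Base64-encode the collected host profile for the POST body.
    The JSON layout here is hypothetical; the sample uses its own
    hardcoded serialization of the same fields."""
    return base64.b64encode(json.dumps(info).encode())

def next_sleep() -> int:
    # Randomized interval between beacons, 300-360 seconds as observed.
    return random.randint(300, 360)

beacon = build_beacon({
    "os": "Windows",
    "arch": "x64",
    "user": "analyst",        # NetBIOS name, domain, PID etc. omitted here
    "version": "1.1.1.1",     # static version string observed in the sample
})
```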
In all of its C2 communications, EchoGather uses the WinHTTP API. It supports various proxy configurations and is designed to ignore SSL/TLS certificate validation errors, allowing it to operate in environments with custom or misconfigured proxy and certificate settings.
Supported commands
EchoGather supports four commands.
All outgoing communication with the C2 is encoded using standard Base64. When a command is received from the C2, the first 36 bytes contain the request ID, a unique identifier used when the backdoor needs to send its response in several packets.
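The incoming message framing can be sketched as below. The 36-byte request ID matches the length of a textual UUID; treating the byte immediately after it as the opcode is an assumption made for illustration:

```python
def parse_frame(frame: bytes):
    """Split an incoming C2 message into request ID, opcode, and payload.
    Layout beyond the 36-byte request ID is assumed for this sketch."""
    request_id = frame[:36].decode()
    opcode = frame[36]
    payload = frame[37:]
    return request_id, opcode, payload

# The four opcodes observed in the sample.
HANDLERS = {
    0x54: "remote command execution",
    0x45: "return configuration",
    0x56: "file exfiltration",
    0x57: "remote file write",
}
```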
0x54 Remote Command Execution
EchoGather first extracts the request ID, followed by the command that needs to be executed. It then decrypts the string cmd.exe /C %s using a hardcoded XOR key (0xCA), which serves as a template for command execution. Using this template, it executes the specified command via cmd.exe. The output of the command is captured through a pipe and sent back to the C2 server, with the request ID prepended to the response.
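Single-byte XOR is its own inverse, so the template decryption can be reproduced directly. The encrypted bytes below are re-derived from the known plaintext for illustration, not copied from the binary:

```python
def xor_decrypt(data: bytes, key: int = 0xCA) -> str:
    # Single-byte XOR with key 0xCA; applying it twice restores the original.
    return bytes(b ^ key for b in data).decode()

# Re-encode the known plaintext to stand in for the bytes stored in the binary.
encrypted = bytes(b ^ 0xCA for b in b"cmd.exe /C %s")
template = xor_decrypt(encrypted)  # the command-execution template
```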
0x45 Return Configuration
Sends the embedded configuration structure to the C2.
0x56 File Exfiltration
The backdoor begins by extracting a request ID and the name of the file to be exfiltrated. It opens the specified file, determines its total size, and calculates how many 512 KB chunks are required for transmission. A transfer header containing metadata about the chunk count and size is then sent to the C2 server. In response, the backdoor receives the request ID used to identify the session. The file is read and transmitted in chunks, with each chunk containing the request ID, chunk index, file tag, data length, and raw file data.
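The chunking arithmetic behind the transfer header is straightforward ceiling division. The sketch below shows only that math and the iteration order; the per-chunk framing fields (request ID, file tag, data length) are carried alongside in the real protocol:

```python
CHUNK_SIZE = 512 * 1024  # 512 KB per chunk, as observed in the sample

def chunk_count(file_size: int) -> int:
    # Ceiling division: number of chunks reported in the transfer header.
    return (file_size + CHUNK_SIZE - 1) // CHUNK_SIZE

def iter_chunks(data: bytes):
    """Yield (index, chunk) pairs in transmission order. The real frames
    also prepend the request ID, file tag, and data length."""
    for offset in range(0, len(data), CHUNK_SIZE):
        yield offset // CHUNK_SIZE, data[offset:offset + CHUNK_SIZE]
```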
0x57 Remote File Write
EchoGather receives a filename from the C2 and writes the incoming data chunks to the system, reconstructing the file as the chunks arrive.
Infrastructure analysis
During our research we found two domains that were used by the threat actors.
IP Resolutions for fast-eda.my
The domain was registered on September 12, 2025.
Between September 12 and 14, the domain first resolved to 199.59.243[.]228.
After that, and until November 26, all resolutions pointed to Cloudflare instances; from September 18 to November 24, the domain resolved to 172.64.80[.]1.
On November 27, it resolved to 94.103.3[.]82, an address geolocated in Russia.
When we looked up the files related to this domain on VirusTotal, we found seven. Two of them are PowerShell scripts that load the backdoor: mswt.ps1, and a second script that wasn’t submitted with a name.
The two scripts are identical, including their execution flow. Both first decode two Base64-encoded files: a PDF document and the EchoGather payload. The PDF is opened, while the payload is executed in the background. The document appears to be an invitation, written in Russian, to a concert for high-ranking officers. However, the PDF is AI-generated and contains several noticeable inconsistencies. For instance, the stamp in the lower right corner appears to be an AI-generated attempt at recreating Russia’s national emblem, the double-headed eagle, but the result resembles a distorted or bird-like figure rather than the intended symbol. The text also includes several errors. Some Cyrillic letters are incorrect, for example, the letter Д is used in place of Л in multiple instances, and the word праздиика is a misspelled version of праздника. Additionally, the phrase «с глубоким уважением приглашает» (translated as “with deep respect invites (you)”) is unnatural and not idiomatic in the context of formal Russian invitations.
Decoy document: an invitation to a concert.
IP Resolutions for ruzede.com
The domain was first seen on 2025-05-21, resolving to 162.255.119[.]43 and later to 5.45.85[.]43 until October 2.
On October 2 it moved to Cloudflare IP addresses; from October 4 to November 26 it resolved to the same address seen for the previous domain: 172.64.80[.]1.
On November 26 it resolved to 193.233.18[.]137, geolocated in Russia. This IP address is linked to several other malicious domains.
Using VirusTotal, we pivoted on the domain ruzede[.]com, and we identified a RAR archive that exploits a known vulnerability, CVE-2025-8088, a vulnerability in WinRAR that involves the abuse of NTFS alternate data streams (ADSes) in combination with path traversal. This flaw allows attackers to embed malicious content within seemingly harmless filenames by appending ADSes that include relative path traversal sequences.
The archive contains a file named Вх.письмо_Мипромторг.lnk:.._.._.._.._.._Roaming_Microsoft_Windows_run.bat
When the archive is opened, WinRAR fails to properly sanitize these ADS paths and extracts the hidden data streams, placing them in unintended or sensitive locations such as %APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup.
Connected file to the domain ruzede[.]com
The phrase “письмо Мипромторг” is misspelled; the correct form is “письмо Минпромторга,” referring to an official letter or communication issued by the Ministry of Industry and Trade of the Russian Federation (Минпромторг России). The same misspelling appears in the archive file name: Вх.письмо_Мипромторг.rar.
Essentially the file in the archive is a batch script that launches a hidden PowerShell process. This process navigates to a user-specific AppData directory, then downloads a PowerShell script named docc1.ps1 from a remote URL (https://2k-linep[.]com/upload/docc1.ps1) and saves it to the current working directory. The script is then executed via a new PowerShell instance with execution policy restrictions bypassed.
The downloaded script (docc1.ps1) extracts both a PDF file and an EchoGather payload, using a technique similar to the one described previously. However, in this instance the embedded PDF differs from earlier samples. The document purports to be sent by a deputy minister of the Ministry of Industry and Trade of the Russian Federation, requesting price justification documentation under the state defense order and focusing on violations of deadlines and reporting on pricing approval processes.
The companies listed with their emails on the top right side of the first page (Almaz-Antey, Shvabe, and the United Instrument-Making Corporation) are major Russian defense-industry and high-technology enterprises, and they might be the intended recipients of this decoy document.
Decoy document, pages 1-3.
The same vulnerability was used by several threat actors, including RomCom (Russia-aligned) and Paper Werewolf, a cyberespionage group targeting Russian organizations that has been active since 2022. In early August, BI.ZONE Threat Intelligence published a report about an ongoing Paper Werewolf campaign exploiting CVE-2025-6218, a vulnerability that affects WinRAR versions up to and including 7.11 and enables directory traversal attacks, allowing malicious archives to extract files outside their intended directories. The report also describes a second vulnerability, a zero-day at the time, that abuses ADSes for path traversal. It doesn’t mention CVE-2025-8088 by name, but based on the description we assume it is the same vulnerability.
Interestingly, we can see similarities between the decoy documents from that report and the document above. First, the filename of the decoy document in the report is запрос Минпромторга РФ.pdf (“Request of the Ministry of Industry and Trade of the Russian Federation.pdf”), with no misspellings, and it refers to the same office. That document asks recipients to assess the impact of a specific government resolution on the production capacities of subsidy recipients. Next, both documents share the same template and structure: a red stamp on the left side, followed by the same information about the office, the date, and the request ID. Both also contain a request for information to be submitted to a government-affiliated organization.
Attribution
Based on the shared infrastructure, such as the ruzede[.]com domain, as well as notable similarities in decoy document construction and the exploitation of the WinRAR vulnerability that leverages ADSes, we attribute this campaign to the Paper Werewolf (aka GOFFEE) threat group. The recent use of XLL files suggests that the group is experimenting with new delivery methods while continuing to rely on established infrastructure, possibly in an attempt to evade detection. In addition, the use of a new, yet simple, backdoor may indicate an effort to improve and evolve their toolset.
Summary
It’s less common to see public reporting on threats targeting Russian organizations, which makes this campaign worth highlighting. The threat actor appears to be actively exploring new methods to evade detection, including the use of XLL-based delivery techniques and newly developed payloads. These changes suggest an effort to enhance their capabilities. However, there are still clear gaps in both technical execution and linguistic accuracy, indicating that their tradecraft is still developing.
Security teams that rely on Microsoft know the power of a deeply integrated security stack. Today, we’re proud to announce an important milestone that further strengthens that ecosystem.
Intezer has been named a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program (MAICPP), a designation reserved for solutions that meet Microsoft’s highest standards for security, architecture, and seamless cloud integration.
This recognition follows a successful Microsoft technical audit and certifies the Intezer Forensic AI SOC platform as trusted, Microsoft-validated software designed to deliver real security outcomes for modern SOC teams.
Join AI SOC Live on January 6th to see how to maximize your Microsoft Security investment with Forensic AI SOC. January 6th | 9am PT | 12pm EST.
Strengthening Microsoft-driven SOCs with Forensic AI
Microsoft security tools generate powerful signals, but signals alone don’t equal outcomes. SOC teams still face alert overload, limited context, and the constant risk that real threats hide in low- or medium-severity alerts.
The Intezer Forensic AI SOC platform was built to solve this problem.
Intezer strengthens the outcomes of Microsoft-driven SOCs by combining agentic AI with automated forensic investigation, enriching Microsoft alerts with deep technical evidence and cross-platform context. The platform investigates alerts from and across:
Microsoft Defender for Endpoint
Microsoft Defender for Identity (Entra ID)
Microsoft Defender for Office 365 and reported phishing
Microsoft Sentinel
Microsoft Defender for Cloud
Non-Microsoft security tools across endpoint, identity, cloud, email, and network environments
Instead of triaging only “high severity” alerts, Intezer investigates every alert with automated querying of Microsoft Sentinel, whenever needed, to enrich alerts, correlate logs, and validate activity. This provides visibility into every incident without manual lookups or switching tools.
How Intezer delivers better SOC outcomes on Microsoft
24/7 AI-powered triage and investigation
Intezer automatically triages and investigates 100% of alerts, including low- and medium-severity alerts that are commonly ignored. By mirroring how expert human analysts investigate incidents, using multiple AI models combined with deterministic forensics, Intezer delivers speed without sacrificing accuracy.
Less than 4% of alerts escalated, higher-confidence decisions
Across Microsoft and non-Microsoft alerts, fewer than 4% are escalated to human analysts. Each verdict is backed by forensic evidence, reducing noise, eliminating guesswork, and enabling analysts to focus only on what truly matters.
Faster response with native Microsoft actions
Intezer enables automated remediation directly through Microsoft tools, including:
Device isolation via Defender for Endpoint
User lockout through Entra ID
Email quarantine in Defender for Office 365
Interactive response via Microsoft Teams
This tight integration allows teams to move from alert to action in minutes, without switching tools or workflows.
Built to maximize the value of Microsoft security investments
“This designation reflects our commitment to helping organizations get the most out of their Microsoft security investments,” said Itai Tevet, CEO and co-founder of Intezer. “As a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program, we deliver AI-powered, forensic-grade investigations that strengthen the security outcomes of SOC teams using Defender, Sentinel, and the broader Microsoft Security Suite. We help teams move from alerts to clear, confident decisions in minutes.”
Intezer customers can also purchase directly through the Microsoft Azure Marketplace and apply existing Azure credits, simplifying procurement and accelerating time to value.
What the MAICPP designation means for security teams
The Microsoft AI Cloud Partner Program recognizes partners whose solutions are proven to work at scale across the Microsoft Cloud. Achieving top-tier Solutions Partner status signals that Intezer:
Meets Microsoft’s highest standards for security, reliability, and architectural excellence
Integrates deeply and natively across the Microsoft Security Suite
Delivers validated customer impact for organizations operating on Microsoft infrastructure
For customers, this designation provides confidence that Intezer is not just compatible with Microsoft security, but purpose-built to extend and elevate it.
Why this matters now
As SOCs face increasing alert volumes, tighter budgets, and a growing shortage of skilled analysts, automation alone is no longer enough. Security teams need forensic-grade AI that can explain why an alert matters, not just label it.
The MAICPP designation confirms that Intezer delivers exactly that:
Enterprise-grade accuracy
Microsoft-validated integrations
Proven SOC efficiency at scale
For organizations running on Microsoft, Intezer is now officially recognized as a trusted partner to help transform alerts into outcomes.
There’s a clear trend emerging with many organizations transitioning from legacy SIEMs to Google SecOps. While the Google SIEM platform is powerful, in our experience working with enterprise clients, that power only reveals itself when security leaders make three early decisions correctly:
Detection strategy: Whether to migrate existing rules or start fresh with a green-field approach.
Data onboarding: How to scale ingestion across multi-cloud environments without breaking pipelines.
Operating model: Building workflows that prevent “alert debt” from piling up on day one.
The strategic message is clear. Treat SIEM detection management with the same diligence you treat core security architecture, and augment your analysts with AI-powered triage so your humans can focus on higher-order investigations.
Here’s a practical checklist for discovery, migration, and operational success, designed for CISOs and SOC leaders evaluating a move to Google SecOps.
The tl;dr version of the Google SIEM migration checklist
Phase | Key focus
Pre-Migration | Inventory, pain-point assessment, business justification
Migration | Tool selection, data ingestion, rule/dashboard migration, integration, governance & risk
Post-Migration | Measurement of success, continuous improvement, cost optimization, governance & reporting
Full Google SecOps migration checklist
Let’s dive into the details for each phase of the migration process.
Pre-migration checklist: Establishing the baseline
Inventory current environment
Catalogue all data sources feeding Splunk: log types, volumes (GB/day), retention policies, on-prem vs cloud vs multi-cloud.
Map all current detections, dashboards, reports, playbooks, SOAR workflows.
Identify any compliance/regulatory retention obligations (audit logs, legal hold).
Establish current licensing costs, infrastructure (forwarders, indexers), staffing.
Assess SIEM performance & pain points
Are you seeing cost escalation vs benefit (slower detection, high false positives, low automation)?
Is the SIEM struggling with data volume growth, scalability, multi-cloud telemetry?
Are SOC analysts spending more time on infrastructure/configuration than investigations?
Are you able to integrate newer requirements (cloud workloads, containers, IoT/OT, multi-cloud) effectively? This 451 Research report indicates many orgs run multiple SIEMs due to tool sprawl.
Define business & security objectives
What do you hope to achieve? E.g., faster detection/response, lower cost, improved coverages, cloud alignment.
What are the key metrics: mean time to detect (MTTD), mean time to respond (MTTR), cost-per-alert, false positive rate, regulatory coverage, etc.
What is your target SOC maturity in e.g., 12-24 months? Are you planning a cloud-first strategy, heavier automation/AI, less on-prem infrastructure?
Build the migration justification
Prepare a comparative TCO/ROI: legacy SIEM vs cloud-native. Google SecOps materials claim e.g., “ingest and analyse your data at Google speed and scale” and highlight cost benefit.
Understand what it will cost to migrate: re-write detections, dashboards, data flows, training, potential downtime.
Present risk assessment: What happens if you don’t migrate (risk of obsolete tool, scaling failure, cost spirals)? The “Great SIEM Migration” guide argues that legacy tools may become “dinosaurs”.
Migration-phase checklist: Executing the transition
Select migration path & vendor/partner support
Decide: full rip & replace vs phased migration vs augmentation (run new platform in parallel).
Evaluate tooling for data-migration, rule conversion, playbook migration.
Data ingestion, normalization & compatibility
Ensure all of your log types and sources in Splunk are supported by the new platform. Google SecOps supports ingestion of Splunk CIM logs.
Plan for data mapping: Splunk field names, dashboards, custom fields → new schema.
Address historic data: Will you migrate archives? Will you keep Splunk as store-only? Community posts warn that mapping old archives can be complex.
Validate performance: test ingestion, query latency, retention policies on the new platform.
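The data-mapping work in the list above can start as a simple translation table from Splunk field names to the target schema. The UDM-style paths below are illustrative examples of the kind of mapping required, not an authoritative schema:

```python
# Illustrative Splunk-to-UDM field mapping; real parsers must follow
# Google SecOps' documented UDM field list, and most mappings are
# handled by platform parsers rather than hand-written tables.
FIELD_MAP = {
    "src_ip": "principal.ip",
    "dest_ip": "target.ip",
    "user": "principal.user.userid",
}

def translate_event(splunk_event: dict) -> dict:
    """Rename known fields; pass unknown fields through unchanged so
    gaps in the mapping are visible during validation."""
    return {FIELD_MAP.get(k, k): v for k, v in splunk_event.items()}
```

Passing unmapped fields through unchanged, rather than dropping them, makes mapping gaps easy to spot during the validation step.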
Detection rules, dashboards, SOAR workflows
Catalogue existing detection rules, dashboards, SOAR playbooks in Splunk.
Determine which can be reused and which need rewriting. Ensure parity: detection coverage, mapping to MITRE ATT&CK, and business use-cases. Splunk claims a strong out-of-the-box detection library.
Build and test new rules/playbooks in Google SecOps; validate they meet or exceed current performance (MTTD, MTTR, false positives).
Ensure analyst training and new workflows are adopted: new UI, new query language, new incident-investigation flows (Google SecOps offers “Gemini in security operations” natural-language assistant).
Integration & ecosystem fit
Ensure that Google SecOps integrates with your existing tool-stack (EDR, identity, network, cloud logs, SOAR, threat intel). Google advertises 300+ SOAR integrations.
Confirm multi-cloud/on-prem data ingestion: check vendor statements.
Validate APIs, custom connectors, forwarder architecture. Splunk vs Google SecOps comparison note: Splunk emphasizes hybrid flexibility.
Governance, compliance & retention
Check how historic data will be retained, archived, accessed, both for compliance (audits/regulators) and investigations.
Communicate to stakeholders: SOC analysts, business units, auditors. Ensure training and change-management.
Set benchmarks and metrics: Time to detect/resolve in new platform vs old; cost per alert; staff utilisation; alert volumes; false positives.
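The benchmarking in the list above can be as simple as computing MTTD and MTTR from incident timestamps before and after migration. The record shape below (minute offsets for created, detected, resolved) is hypothetical:

```python
from statistics import mean

def mttd_mttr(incidents):
    """Mean time to detect and mean time to resolve, in minutes, from
    records shaped as (created, detected, resolved) minute offsets.
    Both are measured from alert creation in this sketch."""
    mttd = mean(detected - created for created, detected, _ in incidents)
    mttr = mean(resolved - created for created, _, resolved in incidents)
    return mttd, mttr

# Hypothetical before/after samples for a side-by-side comparison.
baseline = [(0, 45, 240), (0, 60, 300), (0, 30, 180)]
post_migration = [(0, 10, 60), (0, 15, 90), (0, 5, 45)]
```

Tracking the same two numbers on both platforms gives a like-for-like basis for the business-case comparison.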
Post-migration checklist: Optimizing & sustaining value
Validate outcomes & measure success
Measure MTTD, MTTR, alert volumes, analyst productivity pre- and post-migration.
Compare actual cost savings vs business case.
Assess detection coverage: Are all critical use-cases still covered? Are any gaps emerging?
Run periodic health checks (some vendors like CardinalOps offer detection-rule health monitoring with MITRE ATT&CK coverage for Google SecOps).
Continuous improvement & SOC maturity evolution
SOC maturity doesn’t stop at migration. Use freed-up resources to focus on advanced use-cases (threat hunting, proactive detection, automation, investigations).
Ensure audit/compliance reports reflect the new tooling (document changes, validate controls).
Set up periodic reviews of tool performance, vendor roadmap, SOC maturity.
Final thoughts
Migrating to Google SecOps isn’t a simple platform swap; it’s a redesign of how your SOC operates. The upside is that gains in cost efficiency, scale, and automation can be immediate. The risks of migration complexity, content gaps, and operational disruption are real and must be managed deliberately.
As a CISO or SOC leader, treat this as a transformation program. Use the table and/or the full Checklist above to drive decisions; follow a strategic landing plan to sequence work; and anchor on the three non-negotiables outlined above:
A clear detection strategy (migrate only if the value is there; rebuild the rest in YARA-L),
Data onboarding at scale with a parser matrix and cost guardrails, and
An operating model that prevents alert debt from day one through automation and measurable KPIs.
If you want help getting there faster, we can provide a SIEM jumpstart (curated and bespoke YARA-L rules, MITRE gap analysis and coverage, detection reviews, and continuous improvement with Intezer engineers), a parser/ingestion plan for multi-cloud, and, of course, Intezer Forensic AI SOC’s triage to deliver 100% alert coverage with full auditability from day one, so your analysts focus on the few cases that truly need their context and expertise.
The Security Operations Center (SOC) has always been the heart of enterprise defense, but in 2026, it’s evolving faster than ever.
The rise of AI-driven SOC platforms, often referred to as Agentic AI SOCs, is redefining how enterprises detect, investigate, and respond to threats.
For years, security teams relied on a mix of SIEM, EDR, and MDR vendors to stay ahead of attacks. But these stacks often created their own problems: endless alert noise, long investigation times, and an overworked analyst team stuck in repetitive triage.
The new generation of AI SOC platforms changes that. They leverage large language models (LLMs), enabling SOCs to automatically triage and investigate every alert in minutes, not hours.
In this guide, we’ll break down the Top 15 AI SOC platforms to watch in 2026, ranked by how they balance speed, accuracy, explainability, and coverage across modern enterprise environments.
What is an Agentic AI SOC?
“Agentic” AI refers to systems that don’t just respond – they act. In cybersecurity, an Agentic AI SOC is capable of performing end-to-end investigations, drawing conclusions, and recommending (or executing) responses based on forensic evidence and reasoning.
These platforms are trained not only to summarize alerts but to understand their context, correlating data across endpoints, identities, networks, and cloud systems.
The best AI SOCs of 2026 are explainable, autonomous, and fast, providing the confidence enterprises need to trust machine-led decision-making.
Top AI SOC platforms in 2026 comparison table
| Platform | Best for | Key strength |
| --- | --- | --- |
| Intezer (Forensic AI SOC) | Large enterprises | Forensic-level, explainable investigations |
| 7AI | Enterprises exploring multi-agent automation | Multi-agent orchestration |
| AiStrike | Mid-market SOCs | Affordable automated triage |
| SentinelOne (Purple AI) | Enterprises using SentinelOne EDR | Integrated SOC automation |
| CrowdStrike (Charlotte AI) | Falcon ecosystem users | Generative AI for summaries |
| BlinkOps | Security automation teams | Playbook-based automation |
| Bricklayer AI | Startups | Lightweight triage and reporting |
| Conifers.ai | Cloud-native companies | Cloud-first visibility |
| Vectra AI | Mature SOCs | Network threat detection |
| Dropzone AI | SOC automation innovators | Human-in-the-loop design |
| Exaforce | Minimizing SIEM cost | Alert routing and prioritization |
| Legion Security | SOCs with expert analysts | Workflow management |
| Prophet.ai | Predictive threat modeling | Proactive threat detection |
| Qevlar AI | LLM-driven SOCs | AI triage experiments |
| Radiant Security | Mid-market enterprises | Response recommendations |
1. Intezer: Best AI SOC platform for enterprise SOCs
Best for: Large enterprises that prioritize speed, accuracy, and complete alert coverage.
Intezer Forensic AI SOC is built for enterprise and MSSPs, trusted by global brands including NVIDIA, Salesforce, MGM Resorts, Equifax, and Ferguson. Intezer investigates 100% of alerts in under two minutes with 98% accuracy.
Unlike other platforms that rely solely on LLM-generated heuristics, Intezer fuses human-like reasoning with multiple AI models and deterministic forensic methods, including code analysis, sandboxing, reverse engineering, and memory forensics. The result is evidence-backed, explainable verdicts that eliminate the guesswork for SOC analysts.
For enterprises managing millions of alerts across SIEM, EDR, cloud, and identity systems, Intezer delivers full alert coverage and eliminates the low-severity blind spots that MDRs often ignore.
With endpoint-based pricing, Intezer removes the “alert tax” of data-ingest models and helps SOC leaders prove ROI to their boards, without expanding headcount.
Why enterprises choose Intezer
100% alert investigation coverage across SIEM, EDR, phishing, identity, and cloud
2. 7AI: Best for multi-agent SOC architectures
7AI is one of the most experimental platforms in the 2026 AI SOC space. It focuses on multi-agent orchestration, where separate AI agents collaborate to triage, enrich, and investigate alerts across different domains.
While its architecture is impressive, 7AI is best suited for innovation-driven security teams that have strong engineering capacity and want to customize workflows. It performs well in large-scale EDR and cloud environments but requires fine-tuning for reliability.
Best for: Enterprises exploring multi-agent SOC architectures.
3. AiStrike: Best for mid-market SOCs
AiStrike targets the mid-market segment with a focus on cost-effective AI triage. It offers a simple, clean dashboard that connects with EDR and SIEM tools to automatically prioritize alerts. While its forensic depth is limited compared to enterprise-grade solutions, AiStrike delivers solid speed and automation for smaller SOCs.
Best for: Mid-market SOCs that want affordable, plug-and-play AI investigations.
4. SentinelOne (Purple AI): Best for endpoint-centric SOCs
SentinelOne’s Purple AI brings native AI investigation and response into the SentinelOne platform. It’s tightly integrated with SentinelOne’s EDR and XDR stack, which makes it a strong option for organizations already running SentinelOne.
While Purple AI provides quick, summarized threat analysis and remediation recommendations, it focuses heavily on endpoints rather than full enterprise coverage.
Best for: Enterprises deeply invested in SentinelOne’s ecosystem that want integrated AI triage.
5. CrowdStrike (Charlotte AI): Best for AI-driven summarization
CrowdStrike’s Charlotte AI is the generative assistant within the Falcon platform, built to help analysts ask natural-language questions and interpret alerts faster.
While not a fully autonomous SOC, Charlotte AI improves analyst experience and productivity by summarizing incidents and surfacing relevant insights. It’s ideal for teams that want to augment analysts rather than automate full investigations.
Best for: Enterprises using the CrowdStrike Falcon suite that want faster analyst assistance.
6. BlinkOps: Best for automation engineers
BlinkOps focuses on workflow automation, not investigations per se. It enables security teams to build playbooks and automation pipelines that connect multiple tools (SIEM, EDR, IAM, etc.).
While it doesn’t deliver forensic-level verdicts, BlinkOps is popular among DevSecOps teams that want custom automation flexibility.
Best for: Security engineers looking to automate existing SOC workflows.
7. Bricklayer AI: Best for startups and lean SOCs
Bricklayer AI provides lightweight alert triage and reporting capabilities. It’s built for smaller organizations that want to reduce alert fatigue without complex integrations. Its simplicity and affordability make it a solid entry point for teams without mature SOC processes.
Best for: Startups building early SOC capabilities on a budget.
8. Conifers.ai: Best for cloud-native companies
Conifers.ai specializes in cloud-first security visibility across AWS, Azure, and Google Cloud. Its AI models excel at correlating identity, network, and workload activity to flag potential breaches.
It’s not a full SOC replacement, but it significantly enhances cloud investigation and response.
Best for: Cloud-first organizations seeking AI-enhanced detection and context.
9. Vectra AI: Best for network and identity threat detection
Vectra AI has long been a leader in AI-driven network detection and response (NDR). Its platform now extends into AI SOC territory, combining real-time detection with contextual identity analysis.
Vectra is strong in hybrid environments but remains specialized in network telemetry rather than full-stack coverage.
Best for: Enterprises prioritizing network and identity visibility.
10. Dropzone AI: Best for SOC automation innovators
Dropzone AI represents the new wave of human-in-the-loop SOC automation. It allows analysts to supervise and approve actions initiated by AI, blending human expertise with autonomous investigation.
While not as proven in large enterprises as Intezer, Dropzone’s agentic architecture makes it an intriguing option for forward-thinking SOCs.
Best for: SOCs experimenting with supervised AI autonomy.
11. Exaforce: Best for minimizing SIEM costs
Exaforce uses a multi-model AI engine to reduce alert overload, accelerate investigations, and expand detection coverage without relying on a traditional SIEM. Its AI stack, combining data-ingestion models, behavioral machine learning, and large language models, analyzes real-time telemetry while cutting SIEM-related storage and licensing costs.
The platform adapts quickly through feedback loops and natural-language business context, continuously refining accuracy and reducing false positives. With investigative graph visualizations and flexible deployment options, Exaforce helps streamline complex investigations.
Best for: Companies struggling with excessive SIEM spend.
12. Legion Security: Best for companies with expert human analysts
Legion automates SOC investigations by capturing and operationalizing real analyst decision-making. Its browser-based agent records every step of an analyst’s workflow (data reviewed, actions taken, judgments made) and then creates reusable investigative logic.
These recordings evolve into living agents that can be replayed, tested, refined, and re-executed across new alerts. Legion offers flexible deployment options (cloud, hybrid, or customer-hosted) to support diverse security and compliance requirements.
Best for: Organizations with expert human analysts, looking to create custom AI agents that can mirror their in-house best practices and knowledge.
13. Prophet Security: Best for predictive SOCs
Prophet focuses on automated alert resolution using agentic reasoning that mirrors how experienced analysts assess user behavior, asset context, and threat indicators. It enriches alerts with data from endpoints, cloud systems, identity platforms, and threat intelligence to deliver high-confidence dispositions without relying on static rules. The platform supports flexible automation, from fully automated closure of benign alerts to analyst-in-the-loop escalation, and includes a copilot-style natural language interface for deeper investigation and threat hunting.
Best for: Enterprises investing in predictive threat modeling and trend forecasting.
14. Qevlar AI: Best for experimental SOCs
Qevlar is an AI-powered investigation co-pilot that enhances analyst workflows by replicating the reasoning and research steps of human investigators. It ingests alerts from various tools and produces structured, evidence-backed reports with clear verdicts, confidence levels, and referenced data sources. Instead of suppressing or prioritizing alerts, Qevlar enriches and interprets them while preserving full analyst oversight. It also offers an automated documentation engine and support for on-prem deployment.
Best for: SOCs experimenting with AI-based triage prototypes.
15. Radiant Security: Best for mid-market enterprises
Radiant Security positions itself as an AI SOC for the mid-market and differentiates itself with claims of adaptive AI that can learn how to handle never-seen-before alerts as well as a built-in, affordable logging solution leveraging customers’ own archive storage.
Best for: Mid-market companies looking to eliminate expensive SIEM costs.
The future of Agentic AI SOCs
The next evolution of SOC automation goes beyond alert management. In 2026 and beyond, Agentic AI SOCs will not only investigate but also take verified actions, quarantining hosts, isolating sessions, and orchestrating containment based on evidence and policy.
This shift demands trust, explainability, and speed. Enterprises can no longer afford “black-box” AI that delivers vague suggestions. They need platforms capable of forensic reasoning, auditability, and full coverage, exactly what Intezer Forensic AI SOC delivers.
SOC leaders who adopt these systems early will gain measurable efficiency, lower operational risk, and stronger security posture, without expanding headcount.
Final thoughts
AI SOC platforms are transforming how enterprises defend against modern threats. While each platform on this list has unique strengths, Intezer stands out as the clear enterprise choice for those who demand accuracy, speed, and complete visibility.
See how Fortune 500 SOCs cut through the noise, reduce risk, and reclaim their time with Intezer.
Modern SOC teams face some real challenges. They are drowning in alert volume, short on experienced analysts, and facing a new generation of AI-driven attacks that operate faster than humans can respond. This combination is eroding SOC effectiveness, slowing response times, and creating blind spots where real threats hide in low-severity alerts that teams no longer have the time or capacity to investigate.
To meet this moment, Intezer is proud to unveil Intezer Forensic AI SOC, the only AI SOC platform battle-tested inside some of the world’s most targeted and security-mature organizations. Already trusted by more than 150 enterprises, including 15 of the Fortune 500, the platform brings forensic-grade accuracy, full alert coverage, and sub-minute triage to modern security operations.
Why enterprises need a Forensic AI SOC
As attack surfaces grow, many organizations turn to MDR providers for 24/7 alert triage. But MDRs often operate as black boxes with inconsistent quality, high escalation rates, and limited visibility, leaving low-severity alerts unaddressed and creating gaps adversaries can exploit.
Most “AI SOC” tools depend entirely on AI agents for alert triage and investigation. This leads to surface-level results, slower performance, and higher compute usage, limiting their ability to process large alert volumes, especially low-severity signals where threats frequently hide.
The way forward requires an approach that removes SOC bottlenecks while delivering stronger, more reliable security outcomes.
Why this matters now
The recent Anthropic AI espionage report marks a turning point. Threat actors are now weaponizing AI agents to automate full intrusion chains at machine speed.
These attacks often leave behind subtle, low-severity breadcrumbs that traditional SOCs and MDRs overlook. Without full alert coverage and forensic-grade triage, organizations cannot detect or contain AI-driven campaigns before they escalate.
This is precisely the gap Intezer’s Forensic AI SOC was built to close.
Intezer Forensic AI SOC flips the AI SOC model on its head. Instead of relying solely on AI agents and LLMs, our platform combines AI agents with automated orchestration of deterministic forensic tools to mimic the triage and investigation methods used by elite responders and perform deep, accurate investigations at speed and scale.
Every alert is examined through a forensic lens using Intezer’s battle-tested capabilities, including endpoint forensics, reverse engineering, network artifact analysis, sandboxing, and other proprietary methods. These are paired with the adaptive research and reasoning of multiple LLMs to ensure both depth and flexibility in every investigation.
Intezer Forensic AI delivers:
100% alert coverage, including low-severity alerts often ignored by SOCs and MDRs
Fewer than 4% of alerts escalated for human review
98% accurate, consistent verdicts backed by deterministic evidence
1-minute median triage time
Predictable, scalable pricing tied to endpoints, not alert volume or costly model usage
Enterprises get both the intelligence of AI and the rigor of forensics, without sacrificing speed, cost, or accuracy.
Proven in the world’s most targeted enterprises
Intezer supports over 150 enterprises, including 15 of the Fortune 500, across verticals such as finance, tech, pharma, critical infrastructure, hospitality and more. These organizations operate some of the most complex and heavily targeted environments in the world and rely on Intezer to keep their businesses secure.
“Intezer’s AI-driven triage has been transformative for our SOC. It integrates seamlessly with our existing systems and delivers analyst-level investigations at scale, giving our team the confidence that every alert is handled with forensic accuracy.”
Branden Newman, CTO, MGM Resorts International
Built for the growing demands of enterprise SOCs
Enterprise SOCs must respond not only to rising alert volume, but also to increasing business pressure for speed, consistency, and measurable risk reduction. Companies using Intezer Forensic AI SOC enjoy:
Lower business risk – Every alert, including low-severity signals used by modern attackers, is investigated with dramatically shortened MTTR.
Predictable, cost-efficient pricing – Pricing aligned to endpoints avoids the unpredictable costs of LLM-heavy AI SOCs.
Instant time to value – Hundreds of integrations enable rapid deployment and immediate time-to-value without training models on customer data.
Doing more with less – Reduce MDR dependence and automate analyst workloads to optimize budgets and expand SOC output.
Built by security experts, for security experts
Intezer was founded and shaped by world-class SecOps leaders, security researchers and incident responders who have spent their careers defending some of the most targeted organizations and building foundational cybersecurity technologies.
Our leadership team includes pioneers who helped create and scale major cybersecurity companies. This firsthand experience responding to advanced threats, operating high-pressure SOC environments, and building products used by thousands of security teams worldwide directly informs how Intezer designs its technology.
We understand what analysts need (speed, accuracy, transparency, and trustworthy automation) because we’ve lived those challenges ourselves.
Intezer Forensic AI SOC reflects that operational DNA with a platform built not by generic AI engineers, but by practitioners who have spent years reverse engineering malware, hunting nation-state adversaries, leading global IR engagements, and building tools that analysts rely on every day.
Join the future of the SOC, today!
The SOC is entering a new era. Machine-scaled attacks demand an approach grounded in both forensic rigor and adaptive AI, enabling consistent, accurate investigations to defend the enterprise.
tl;dr Greater productivity ≠ greater security outcomes. Kinda like how being able to accelerate from 0-60 MPH doesn’t help when the ice is cracking under your wheels.
And now, the full version.
AI SOC shouldn’t just “augment workflows”; that’s a productivity-locked perspective. The goal, and the delivery capability that exists right now, is full-scale enterprise triage of 100% of alerts with forensically accurate verdicts. That looks like streamlined triage, explainable verdicts, measurable accuracy, and operational resilience. There’s already an AI SOC platform that has operationalized what Gartner calls “emerging”.
While recent Gartner reports on “AI SOC Agents” and “SecOps Workflow Augmentation” succeed in elevating the conversation, they also reveal how incomplete that conversation still is. Both documents frame AI in the SOC as a promising but premature experiment, a toolset meant to make analysts more productive, not organizations more secure. That framing misses the point. AI isn’t about automation for automation’s sake; it’s about turning expert knowledge, data, and context into repeatable, scalable decision-making that covers every alert with confidence and context.
The bias in today’s AI SOC conversation
Gartner’s reports argue that AI SOC agents should be treated as “workflow augmentation tools” to reduce analyst fatigue and improve response efficiency. They recommend cautious adoption, structured pilots, and human-in-the-loop validation. Pragmatic? When LLMs are relied upon solely, sure. But the underlying assumption that enterprise-proven AI is not yet mature enough to deliver reliable outcomes is outdated.
In practice, this mindset anchors the market in productivity metrics, not security performance. It evaluates how efficiently teams work, not how effectively they defend. The focus stays on “mean time to detect” and “mean time to respond,” rather than the more critical questions:
Are ALL alerts being triaged?
Are verdicts, not just investigations, consistently accurate?
Are we actually reducing risk, not just improving the process?
Are alerts triaged in seconds & minutes for true containment & response?
That’s where the emerging class of true AI SOC platforms breaks away from the Gartner lens.
Workflow augmentation isn’t security
The distinction matters. Augmentation is an operational improvement; outcomes are a security transformation. Most vendors today build tools that accelerate investigation but still depend on human oversight for every meaningful decision. Those are SOAR 2.0 platforms: automation-centric, workflow-obsessed, and still fundamentally enrichment, not triage.
A true AI SOC, by contrast, triages every alert across the stack autonomously, determines a verdict with auditable reasoning, and escalates only when necessary, typically less than four percent of the time. This isn’t a co-pilot; it’s a teammate that already performs at the level of a seasoned analyst and finds the needles without the haystack. That’s a welcome shift for SOC analysts, who get to focus on real alerts.
Security outcome execution is the critical requirement any true AI SOC should provide:
Resolve millions of alerts monthly across distributed environments with <4% escalation rates.
Deliver verdict accuracy above 97.7% through hybrid deterministic and AI reasoning.
Provide explainable decisions, validated by periodic human review and forensic evidence.
Uncover real threats in seconds & minutes, not hours.
The “emerging” technology that’s already operational
Gartner describes AI SOC agents as an “emerging technology” that promises to evolve beyond playbook-driven automation. The irony is that enterprise SOCs are already running on these systems today. Fortune 10 environments and thousands of organizations worldwide are triaging every single alert, not just the critical and high-severity ones, through AI that emulates human reasoning at scale.
These systems don’t “pilot” AI; they operationalize it. They deliver 24/7 SOC capability, instant triage, and consistent decision-making grounded in explainable logic, not black-box inference. They prove that an AI SOC is no longer a future-state concept. It’s production-grade infrastructure that’s rewriting what operational maturity means, and has been for years now.
The difference between Gartner’s caution and what’s happening in practice is simple: proof.
Measuring what actually matters
The reports fixate on efficiency (MTTD, MTTR, analyst satisfaction), but those metrics tell only half the story, especially for antiquated SOCs. The next generation of AI SOCs defines success through security outcome metrics, including:
Total alert coverage – Every alert analyzed, across all severities and sources.
Verdict accuracy – The supermajority of decisions must be right, consistently and explainably.
Escalation rate – Only the rarest cases should reach human review.
Explainability – Every verdict is clearly backed by evidence: memory scans, forensic traces, and contextual reasoning.
Feedback velocity – Every corrected verdict feeds back into the detection logic, closing the learning loop.
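The metrics above are straightforward to compute once triage records are captured. This is a hypothetical sketch, not any vendor's actual reporting pipeline; record fields and shapes are assumptions for illustration.

```python
# Hypothetical sketch: computing security-outcome metrics from triage records.
# Each record notes whether the AI escalated the alert and whether its verdict
# was later confirmed correct (e.g. by periodic human review).

def outcome_metrics(records, total_alerts):
    investigated = len(records)
    coverage = investigated / total_alerts                # share of alerts actually triaged
    escalation_rate = sum(1 for r in records if r["escalated"]) / investigated
    accuracy = sum(1 for r in records if r["correct"]) / investigated
    return {"coverage": coverage, "escalation_rate": escalation_rate, "accuracy": accuracy}

records = [
    {"escalated": False, "correct": True},
    {"escalated": True,  "correct": True},
    {"escalated": False, "correct": False},
    {"escalated": False, "correct": True},
]
print(outcome_metrics(records, total_alerts=4))
# {'coverage': 1.0, 'escalation_rate': 0.25, 'accuracy': 0.75}
```

Tracking these three numbers over time, rather than MTTD/MTTR alone, is what shifts evaluation from productivity to security outcomes.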
When you measure what truly matters (accuracy, coverage, trust), the difference between AI that “helps” and AI that defends becomes obvious.
Why “AI SOC Agent” ≠ “AI SOC Platform”
The reports conflate two very different things. An “AI SOC agent” is a single use case, an assistant. An “AI SOC platform” is a full operating model: triage, investigation, and response fused into a continuous feedback loop back to detection engineering. One optimizes efficiency; the other drives security transformation.
That’s the real inflection point the industry is standing at. SOCs that treat AI as a productivity booster will get marginal gains, which is still a net positive for the industry. SOCs that rebuild around AI as a core operating principle will experience exponential gains with real risk reduction.
In other words: this isn’t about speeding up analysts, it’s about scaling their expertise across the entire alert surface.
From AI promise to proof
The challenge now isn’t technology, it’s perception. The AI SOC has already proven it can outperform legacy models built on manual triage and brittle playbooks. It has shown that full alert coverage, explainable verdicts, and continuous learning can coexist with human oversight and compliance.
The industry doesn’t need another year of pilots to “validate the promise.” It needs a new standard of performance.
The next evolution of the SOC will be measured not by how well it augments workflows, but by how confidently it can:
Detect and triage every signal.
Deliver verdicts with explainable evidence.
Quantify accuracy in measurable, repeatable terms.
Strengthen analyst trust through transparency.
That’s the AI SOC outcome model, here today.
Final thoughts
Gartner’s perspective is valuable for shaping the taxonomy of an emerging market. But the reality on the ground has already overtaken the research. The world doesn’t need another whitepaper on “potential.” It needs proof of performance, and it exists.
The future SOC isn’t augmented.
It’s autonomous, accurate, and accountable for the strategic security outcomes that CISOs and leaders require, whether now or in the coming months as executive leadership pushes to operationalize AI.
The world’s largest enterprises today already benefit from the real market-defining traits of a forensic AI SOC.
This is the first time we have a public, detailed report of a campaign where AI was used at this scale and with this level of sophistication, moving the threat from a collection of AI-assisted tasks to a largely autonomous, orchestrated operation.
This report is a significant new benchmark for our industry. It’s not a reason to panic – it’s a reason to prepare. It provides the first detailed case study of a state-sponsored attack with three critical distinctions:
It was “agentic”: This wasn’t just an attacker using AI for help. This was an AI system executing 80-90% of the attack largely on its own.
It targeted high-value entities: The campaign was aimed at approximately 30 major technology corporations, financial institutions, and government agencies.
It had successful intrusions: Anthropic confirmed the campaign resulted in “a handful of successful intrusions” and obtained access to “confirmed high-value targets for intelligence collection”.
Together, these distinctions show why this case matters. A high-level, autonomous, and successful AI-driven attack is no longer a future theory. It is a documented, current-day reality.
2. What Actually Happened: A Summary of the Attack
The attack (designated GTG-1002) was a “highly sophisticated cyber espionage operation” detected in mid-September 2025.
AI Autonomy: The attacker used Anthropic’s Claude Code as an autonomous agent, which independently executed 80-90% of all tactical work.
Human Role: Human operators acted as “strategic supervisors”. They set the initial targets and authorized critical decisions, like escalating to active exploitation or approving final data exfiltration.
Bypassing Safeguards: The operators bypassed AI safety controls using simple “social engineering”. The report notes, “The key was role-play: the human operators claimed that they were employees of legitimate cybersecurity firms and convinced Claude that it was being used in defensive cybersecurity testing”.
Full Lifecycle: The AI autonomously executed the entire attack chain: reconnaissance, vulnerability discovery, exploitation, lateral movement, credential harvesting, and data collection.
Timeline: After detecting the activity, Anthropic’s team launched an investigation, banned the accounts, and notified partners and affected entities over the “following ten days”.
3. What Wasn’t New: Orchestration, Not Novelty
To have a credible discussion, we must also look at what wasn’t new. This attack wasn’t about secret, magical weapons.
The report is clear that the attack’s sophistication came from orchestration, not novelty.
No Zero-Days: The report does not mention the use of novel zero-day exploits.
Commodity Tools: The report states, “The operational infrastructure relied overwhelmingly on open source penetration testing tools rather than custom malware development”.
This matters because defenders often look for new exploit types or malware indicators. But the shift here is operational, not technical. The attackers didn’t invent a new weapon; they built a far more effective way to use the ones we already know.
4. The New Reality: Why This Is an Evolving Threat
So, if the tools aren’t new, what is? The execution model. And we must assume this new model is here to stay.
This new attack method is a natural evolution of technology. We should not expect it to be “stopped” at the source for two main reasons:
Commercial Safeguards are Limited: AI vendors like Anthropic are building strong safety controls – it’s how this was detected in the first place. But as the report notes, malicious actors are continually trying to find ways around them. No vendor can be expected to block 100% of all malicious activity.
The Open-Source Factor: This is the larger trend. Attackers don’t need to use a commercial, monitored service. With powerful open-source AI models and orchestration frameworks – such as LLaMA, self-hosted inference stacks, and LangChain/LangGraph agents – attackers can build private AI systems on their own infrastructure. This leaves no vendor in the middle to monitor or prevent the abuse.
The attack surface is not necessarily growing, but the attacker’s execution engine is accelerating.
5. Detection: Key Patterns to Hunt For
While the techniques were familiar, their execution creates a different kind of detection challenge. An AI-driven attack doesn’t generate one “smoking gun” alert, like a unique malware hash or a known-bad IP. Instead, it generates a storm of low-fidelity signals. The key is to hunt for the patterns within this noise:
Anomalous Request Volumes: The AI operated at “physically impossible request rates”; peak activity “included thousands of requests, representing sustained request rates of multiple operations per second”. This is a classic low-fidelity, high-volume signal that is often dismissed as noise.
Commodity and Open-Source Penetration Testing Tools: The attack utilized a combination of “standard security utilities” and “open source penetration testing tools”.
Traffic from Browser Automation: The report explicitly calls out “Browser automation for web application reconnaissance” to “systematically catalog target infrastructure” and “analyze authentication mechanisms”.
Automated Stolen Credential Testing: The AI didn’t just test one password; it “systematically tested authentication against internal APIs, database systems, container registries, and logging infrastructure”. This automated, broad, and rapid testing looks very different from a human’s manual attempts.
Audit for Unauthorized Account Creation: This is a critical, high-confidence post-exploitation signal. In one successful compromise, the AI’s autonomous actions included the creation of a “persistent backdoor user”.
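The first pattern above, request rates no human can sustain, lends itself to a simple sliding-window hunt over access logs. The sketch below is a starting point, not a production detection; the field shapes, window size, and thresholds are assumptions you would tune to your own baselines.

```python
# Hedged sketch: hunting "physically impossible" request rates in access logs.
# Flags sources that sustain multiple operations per second, a tempo no human
# operator maintains. Thresholds here are illustrative assumptions.

from collections import defaultdict

def flag_machine_speed_sources(events, window_s=10, max_requests=30):
    """events: list of (source_ip, unix_timestamp) pairs.
    Flags IPs exceeding max_requests inside any window_s-second sliding
    window (i.e. a sustained rate above 3 requests/second here)."""
    by_source = defaultdict(list)
    for ip, ts in events:
        by_source[ip].append(ts)
    flagged = set()
    for ip, times in by_source.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window until it spans at most window_s seconds.
            while times[end] - times[start] > window_s:
                start += 1
            if end - start + 1 > max_requests:
                flagged.add(ip)
                break
    return flagged

machine = [("10.0.0.5", i * 0.125) for i in range(40)]     # 40 requests in 5 seconds
human = [("10.0.0.9", t) for t in (0, 15, 30, 45, 60)]     # sporadic manual activity
print(flag_machine_speed_sources(machine + human))          # flags only 10.0.0.5
```

The same window logic generalizes to the credential-testing pattern: count distinct authentication targets per source instead of raw requests.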
6. The Defender’s Challenge: A Flood of Low-Fidelity Noise
The detection patterns listed above create the central challenge of defending against AI-orchestrated attacks. The problem isn’t just alert volume; it’s that these attacks generate a massive volume of low-fidelity alerts.
This new execution model creates critical blind spots:
The Volume Blind Spot: The AI’s automated nature creates a flood of low-confidence alerts. No human-only SOC can manually triage this volume.
The Temporal (Speed) Blind Spot: A human-led intrusion might take days or weeks. Here, the AI compressed a full database extraction – from authentication to data parsing – into just 2-6 hours. Our human-based detection and response loops are often too slow to keep up.
The Context Blind Spot: The AI’s real power is connecting many small, seemingly unrelated signals (a scan, a login failure, a data query) into a single, coherent attack chain. A human analyst, looking at these alerts one by one, would likely miss the larger pattern.
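The context blind spot is, at its core, a correlation problem: weak signals only become an attack chain when grouped by entity and time. The sketch below illustrates that idea under stated assumptions; the stage labels, window size, and three-stage threshold are hypothetical simplifications of what a real correlation engine would do.

```python
# Illustrative sketch of closing the "context blind spot": group weak,
# seemingly unrelated signals by entity and time window so a multi-stage
# chain (scan -> failed login -> data query) surfaces as one incident.
# Stage names, window, and threshold are assumptions for this example.

from itertools import groupby

KILL_CHAIN = {"recon_scan", "auth_failure", "data_query"}

def correlate(signals, window_s=6 * 3600):
    """signals: list of (entity, stage, unix_timestamp) tuples.
    Returns entities whose signals cover >= 3 distinct kill-chain
    stages inside a single window_s-second window."""
    signals = sorted(signals, key=lambda s: (s[0], s[2]))
    incidents = []
    for entity, group in groupby(signals, key=lambda s: s[0]):
        group = list(group)
        for i, (_, _, t0) in enumerate(group):
            # Collect the distinct stages seen within the window starting at t0.
            stages = {stage for _, stage, t in group[i:] if t - t0 <= window_s}
            if len(stages & KILL_CHAIN) >= 3:
                incidents.append(entity)
                break
    return incidents

signals = [
    ("host-a", "recon_scan", 0),
    ("host-a", "auth_failure", 1000),
    ("host-a", "data_query", 2000),
    ("host-b", "auth_failure", 500),   # isolated signal, not a chain
]
print(correlate(signals))  # ['host-a']
```

Viewed one by one, each of host-a's three alerts is dismissible noise; correlated, they are a coherent intrusion, which is exactly the pattern a human reviewing alerts serially tends to miss.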
7. The Importance of Autonomous Triage and Investigation
When the attack is autonomous, the defense must also have autonomous capabilities.
We cannot hire our way out of this speed and scale problem. The security operations model must shift. The goal of autonomous triage is not just to add context, but to handle the entire investigation process for every single alert, especially the thousands of low-severity signals that AI-driven attacks create.
An autonomous system can automatically investigate these signals at machine speed, determine which ones are irrelevant noise, and suppress them.
This is the true value: the system escalates only the high-confidence, confirmed incidents that actually matter. This frees your human analysts from chasing noise and allows them to focus on real, complex threats.
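The escalate-or-suppress decision at the heart of autonomous triage can be sketched in a few lines. The `investigate` callable, verdict labels, and confidence bar below are hypothetical placeholders for whatever investigation engine a platform actually provides:

```python
# Minimal sketch of an autonomous triage loop under assumed interfaces:
# every alert is investigated, and only confirmed, high-confidence incidents
# reach the human queue.
def triage(alerts, investigate, confidence_bar=0.9):
    """`investigate` is any callable returning (verdict, confidence).
    Escalate confirmed malicious findings; auto-close everything else."""
    escalated, suppressed = [], []
    for alert in alerts:
        verdict, confidence = investigate(alert)
        if verdict == "malicious" and confidence >= confidence_bar:
            escalated.append(alert)
        else:
            suppressed.append(alert)  # auto-closed, ideally with an audit record
    return escalated, suppressed
```

The structural point is that investigation runs on every alert at machine speed, and human attention is spent only on what survives the confidence bar.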
This is exactly the type of challenge autonomous triage systems like the one we’ve built at Intezer were designed to solve. As Anthropic’s own report concludes, “Security teams should experiment with applying AI for defense in areas like SOC automation, threat detection… and incident response.”
8. Evolving Your Offensive Security Program
To defend against this threat, we must be able to test our defenses against it. All offensive security activities (internal red teams, external penetration tests, and attack simulations) must evolve.
It is no longer enough for offensive security teams to manually simulate attacks. To truly test your defenses, your red teams or external pentesters must adopt agentic AI frameworks themselves.
The new mandate is to simulate the speed, scale, and orchestration of an AI-driven attack, similar to the one detailed in the Anthropic report. Only then can you validate whether your defensive systems and automated processes can withstand this new class of automated onslaught. Naturally, all such simulations must be done safely and ethically to prevent any real-world risk.
9. Conclusion: When the Threat Model Changes, Our Processes Must, Too
The Anthropic report doesn’t introduce a new magic exploit. It introduces a new execution model that we now need to design our defenses around.
Let’s summarize the key, practical takeaways:
AI-orchestrated attacks are a proven, documented reality.
The primary threat is speed and scale, designed to overwhelm manual security processes.
Security leaders must prioritize automating investigation and triage to suppress the noise and escalate what matters.
We must evolve offensive security testing to simulate this new class of autonomous threat.
This report is a clear signal: the threat model has officially changed. Your security architecture, processes, and playbooks must change with it. The same applies if you rely on an MSSP: verify that they’re evolving their detection and triage capabilities for this new model. This shift isn’t hype; it’s a practical change in execution speed. With the right adjustments and automation, defenders can meet this challenge.
Gartner’s recent Innovation Insight: AI SOC Agents report is an encouraging signal that the concept of an “AI-powered SOC” has reached mainstream awareness. The report recognizes the potential of AI technologies to transform how security operations centers function, especially in augmenting analysts through automation and intelligent workflows.
Yet, while Gartner’s analysis succeeds in capturing the momentum of this space, it falls short in clarifying how and where AI actually fits within the security operations stack. By treating “AI SOC” as a monolithic, undifferentiated category, the report overlooks the crucial distinctions between detection, triage and response, each of which requires a very different kind of AI capability and delivers very different value.
A closer look at Gartner’s analysis
Gartner’s report provides a valuable overview of how AI SOC can assist with detection, alert investigation, and even response recommendation. We wholeheartedly agree with Gartner’s advice that CISOs should evaluate which security activities are “volumetric, troublesome, or low-performing, and which would benefit the most from augmentation with the application of AI”. However, presenting all of the AI SOC functions (and vendors) as part of a single undifferentiated security ecosystem can be confusing.
This broad framing misses the fact that an AI model designed to improve SIEM detection logic operates on entirely different data, architecture, and feedback loops than one built to support analyst decision-making or response automation. The result is a flattening of a nuanced market into one monolithic category, useful for taxonomy, but not for decision-making.
For CISOs, this lack of segmentation makes it hard to answer the key strategic question: Where should we apply AI first to get tangible operational value?
By contrast, our view is that organizations should start by identifying which part of their operations needs augmentation most, then evaluate AI solutions purpose-built for that domain.
A clearer way to frame the AI SOC market
To understand where AI truly fits in and how it can deliver measurable outcomes, it helps to zoom out and look at the broader security operations stack. As we described in a previous blog post, “Making sense of the AI SOC market”, we see three main layers where AI can add value:
Detection (SIEM, XDR)
The first layer converts raw telemetry into actionable alerts. Here, AI can strengthen correlation logic, improve detection models, and reduce false positives. This is largely about data pattern recognition and automation of repetitive analysis.
Triage and Investigation (SOC / MDR)
The middle layer is where human analysts determine which alerts are real incidents worth escalating. This is where AI can truly emulate analyst reasoning, gathering context, cross-referencing intelligence, and presenting likely root causes. Done well, AI here acts as a co-analyst, not a replacement.
Response and Case Management (SOAR)
The final layer coordinates remediation and manages incident workflows. AI can accelerate playbook creation, automate routine case handling, and improve overall response time through dynamic decision logic.
Each layer offers opportunities for AI, but they are fundamentally different problems to solve. When vendors use the term “AI SOC” without specifying which layer they’re addressing, it creates confusion and unrealistic expectations.
A more practical evaluation framework
To move the conversation forward, we recommend a more structured approach to evaluating AI SOC solutions.
Step 1: Identify your target layer
Ask: Which layer of our operations needs the most improvement? Is it detection (SIEM/XDR/Cloud), triage (SOC/MDR), or response (SOAR)?
This helps narrow the field to the right class of solutions rather than chasing the broad “AI SOC” label.
Step 2: Define measurable outcomes
Especially for alert triage and investigation (which is usually handled by an internal SOC or external MDR), establish metrics to compare performance, such as:
Reduction in mean time to detect (MTTD)
Noise reduction rate
Scale of alert coverage
Consistency across SOC shifts or analyst tiers
Triage accuracy
These metrics allow organizations to compare vendors on tangible outcomes, not vague AI promises.
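As a back-of-the-envelope illustration, several of these metrics can be derived from just three triage counts. The function and the numbers in the usage note are made up for the example; they are not benchmarks for any vendor:

```python
# Illustrative sketch: derive comparison metrics from raw triage counts.
# Inputs: total alerts ingested, alerts escalated to humans, and how many
# of those escalations were confirmed true positives.
def triage_metrics(total_alerts: int, escalated: int, confirmed: int) -> dict:
    return {
        "noise_reduction": 1 - escalated / total_alerts,  # share auto-closed
        "escalation_rate": escalated / total_alerts,
        "triage_precision": confirmed / escalated,        # accuracy proxy
    }
```

For example, 1,000 alerts with 40 escalations, 30 of them confirmed, would yield a 96% noise reduction rate and 75% triage precision — concrete numbers you can compare across vendor proofs of concept.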
Step 3: Evaluate transparency and integration
An effective AI SOC solution should clearly explain its reasoning, integrate easily with your existing tools, and allow human oversight. The goal is augmentation, not opacity.
Gartner deserves credit for bringing visibility to an emerging market, but their analysis underscores how early and fluid this space still is. The future of the AI SOC isn’t one product category. It’s a set of AI capabilities applied intelligently across the detection–triage–response continuum.
Organizations that treat AI as a modular capability rather than a monolithic product will see the most success. The key is knowing your operational priorities and matching them to the layer where AI can have the greatest impact.
Conclusion
AI is not a magic “SOC-in-a-box.” It’s a set of technologies that, when properly targeted, can transform specific parts of security operations. Gartner’s latest report captures the enthusiasm, but not yet the structure, of this market.
At Intezer, we believe the path forward starts with clarity: understanding the distinct layers of the SOC, the role AI plays in each, and the outcomes that matter most. Only then can organizations cut through the noise and choose the right AI SOC partner for their needs.
There’s been an explosion of buzz around the AI SOC market. More than 40 vendors are now claiming to do something in this space, but as with many emerging technology categories, the result is a lot of excitement and a lot of confusion.
In this video and in the article below it, I want to provide some clarity. What exactly is “AI SOC”? Where did this category come from? And how can security teams cut through the noise to find real value?
The origins of the AI SOC: An old problem meets new tech
The rise of the AI SOC stems from two converging forces. A very old problem and a very new technology.
The old problem is the persistent talent shortage in cybersecurity combined with the overwhelming volume of security alerts. Security teams have been drowning in these alerts for years, struggling to keep up with investigation and response.
The new technology is AI, especially large language models (LLMs) and adjacent innovations, which open up an opportunity to finally address that shortage by automating some of the human decision-making process.
The 3 layers of security operations
To understand where AI fits in and how it can help, let’s zoom out and look at the broader security operations stack.
There are three main layers:
Detection (SIEM, XDR) is the first level which handles converting raw logs and other telemetry data into actionable alerts.
Triage and investigation (SOC) is the middle layer where human analysts determine which alerts are real incidents worth escalating.
Response and case management (SOAR) is the final layer that manages incident remediation with case assignment and workflow automation.
Each layer presents opportunities for AI. For example, in SIEM/XDR, AI can improve detection logic and reduce false positives. For SOC, AI can simulate the investigative reasoning of human analysts. And when applied to SOAR, AI can accelerate workflow creation and automate routine case handling.
In each of these areas, vendors are loosely using the term “AI SOC” to describe what they are doing. That is why it’s important to know what problem you are trying to solve and which “AI SOC” solution is appropriate for you.
All that said, when people refer to AI SOC, they’re usually talking about that middle layer: the part focused on automated alert triage, investigation, and escalation.
That’s where Intezer focuses: providing 24/7 managed alert triage, investigation, and response powered by a decade of deep forensic analysis tooling combined with flexible and adaptable LLMs.
Our system automatically investigates alerts, surfaces only what truly requires attention, and escalates only up to 4% of alerts to human analysts.
This is where the market’s energy, and customer need, are currently concentrated. Teams want to scale their response capabilities without adding headcount, and AI SOCs make that possible.
How to evaluate AI SOC vendors
With so many vendors entering the field, it’s important to evaluate them based on clear, measurable criteria. Some of the key metrics I’m hearing our customers and prospects consider include:
Accuracy: How precise are the AI-driven investigations?
Speed: How quickly can alerts be triaged?
Scale and coverage: Can the system handle all your alerts in a timely fashion?
Noise reduction: What percentage of alerts still require human review?
Context and transparency: Can you understand how the AI reached its conclusions, or is it a black box?
AI SOC is one of the most exciting and fast-evolving categories in cybersecurity. It’s also one of the messiest, but that’s often a sign of real innovation happening.
For years, the industry has been searching for a way to truly solve the alert overload and talent shortage problem. With the arrival of AI-driven investigation technology, we’re finally seeing that vision come to life.
A recent SACR market analysis report examined these metrics across leading AI SOC vendors and can be very helpful for evaluating which solution is right for you. And I definitely recommend reading about Intezer in the report 🙂.
At Intezer, we’re proud to help security teams reduce noise, focus on real threats, and scale their operations intelligently.
If you’re exploring this space, we’d love to be your partner in building a smarter SOC.