The enterprise security landscape has reached an inflection point. As organizations accelerate adoption of cloud, automation and artificial intelligence, identity has become the primary attack surface of the modern enterprise. Not because defenses have weakened, but because identities have multiplied and now operate continuously at machine speed, often with elevated access.
When attackers succeed today, it almost always starts with identity. Identity is now the number one attack vector. Eighty-seven percent of organizations experienced at least two successful, identity-centric breaches in the past 12 months. These breaches can lead to outages, regulatory exposure, financial loss and reputational damage.
This reality is why today marks such a pivotal moment. CyberArk is officially joining Palo Alto Networks. This step reflects a shared conviction that identity security is no longer a supporting function. To stay ahead of modern attackers, organizations need best-in-class identity security that is deeply integrated into their broader security strategy.
The Reality of the Modern Identity Attack Surface
For years, identity security focused on a relatively small population of human users, administrators and periodic access reviews. That model no longer matches reality.
Today’s enterprises depend on vast numbers of machine identities, including workloads, services, APIs and increasingly, autonomous AI agents. Machine identities now outnumber human identities by more than 80 to 1, while 75 percent of organizations acknowledge that their human identities are governed by outdated, overly permissive privileged models.
Attackers have adapted. Rather than breaking in through vulnerabilities, they increasingly log in using stolen credentials or by exploiting excessive, poorly governed access. Identity-based attacks have become the dominant breach vector because identity sprawl and standing privilege create opportunities that are difficult to detect with traditional tools.
Yet many identity programs remain fragmented. Access management, privileged access and governance often operate in silos, with delayed visibility and manual processes. Risk accumulates silently between reviews, leaving security teams reacting after the fact.
This is the problem CyberArk was built to solve.
Why Identity Security Must Be Continuous
Securing identities in this environment requires a fundamentally different approach. Identity risk changes constantly as new identities are created, permissions shift and systems scale dynamically. Controls must operate continuously, not episodically.
This means three things:
First, organizations need real-time visibility into who or what has access to critical systems across human, machine and AI identities.
Second, privilege must be applied dynamically. Access should be granted only when needed and removed automatically when it is no longer required. Standing privilege should be the exception, not the norm.
Third, governance must evolve from periodic compliance exercises to continuous enforcement that adapts as environments change.
This is the identity security vision that has guided CyberArk for decades and why joining Palo Alto Networks is such a natural next step.
Elevating Identity to a Core Platform
As part of Palo Alto Networks, CyberArk elevates identity security to a core platform pillar.
CyberArk’s Identity Security Platform is proven at enterprise scale and trusted to protect some of the world’s most critical environments. Our approach extends privileged access principles beyond a narrow set of administrators to every identity that matters.
By treating every identity as potentially privileged, organizations can dramatically reduce their attack surface. Excessive access is identified. Unnecessary privilege is removed. Attackers lose the ability to move laterally by using stolen credentials.
Elevating identity security to a platform level also enables tighter alignment with network security, cloud security and security operations. Identity becomes a powerful control plane that informs policy enforcement, detection and response across the enterprise, delivering a more complete and actionable view of risk.
Securing the AI-Driven Enterprise
This shift is especially critical as organizations deploy AI-driven systems and autonomous agents.
These systems often require persistent access to sensitive data and infrastructure, making them attractive targets for attackers and difficult to govern with legacy identity models. Most enterprises today lack effective identity security controls for machine and AI-driven systems, leaving these identities overprivileged and undergoverned.
Applying privileged access principles universally enables organizations to secure AI-driven environments without slowing innovation. Identity security becomes the trust layer that allows enterprises to scale AI responsibly, ensuring access is controlled, monitored and adjusted dynamically as systems evolve.
What This Means for Customers
For customers, elevating identity security to a core platform delivers tangible outcomes.
Organizations gain clearer insight into identity access and risk across human, machine and agentic identities. They gain stronger protection against credential-based attacks by limiting excessive privilege and reducing the paths that attackers rely on to move undetected. They also gain operational simplicity by replacing fragmented tools and manual governance with consistent, scalable controls.
Most importantly, customers gain confidence. Confidence to adopt cloud, automation and AI, knowing that identity risk is governed continuously. Confidence that security can keep pace with change rather than reacting after the fact.
Moving Forward
CyberArk’s Identity Security solutions will continue to be available as a standalone platform. Customers can rely on the solutions they trust today while benefiting from an accelerated roadmap focused on resilience, simplicity and improved security outcomes.
At the same time, integration is underway to bring CyberArk’s best-in-class identity security capabilities more deeply into the Palo Alto Networks security ecosystem. Our priority is to listen closely to customers, meet their immediate needs, and build the path forward together.
The AI era is redefining how enterprises operate and how attackers operate alongside them. Securing every identity, human, machine and AI agent is no longer optional. It is foundational.
By bringing CyberArk into Palo Alto Networks, we are taking a decisive step toward redefining identity security for the modern enterprise and helping our customers stay secure as they innovate at speed.
Microsoft today released updates to fix more than 50 security holes in its Windows operating systems and other software, including patches for a whopping six “zero-day” vulnerabilities that attackers are already exploiting in the wild.
Zero-day #1 this month is CVE-2026-21510, a security feature bypass vulnerability in Windows Shell wherein a single click on a malicious link can quietly bypass Windows protections and run attacker-controlled content without warning or consent dialogs. CVE-2026-21510 affects all currently supported versions of Windows.
The zero-day flaw CVE-2026-21513 is a security bypass bug targeting MSHTML, the proprietary engine of the default Web browser in Windows. CVE-2026-21514 is a related security feature bypass in Microsoft Word.
The zero-day CVE-2026-21533 allows local attackers to elevate their user privileges to “SYSTEM” level access in Windows Remote Desktop Services. CVE-2026-21519 is a zero-day elevation of privilege flaw in the Desktop Window Manager (DWM), a key component of Windows that organizes windows on a user’s screen. Microsoft fixed a different zero-day in DWM just last month.
The sixth zero-day is CVE-2026-21525, a potentially disruptive denial-of-service vulnerability in the Windows Remote Access Connection Manager, the service responsible for maintaining VPN connections to corporate networks.
Chris Goettl at Ivanti reminds us Microsoft has issued several out-of-band security updates since January’s Patch Tuesday. On January 17, Microsoft pushed a fix that resolved a credential prompt failure when attempting remote desktop or remote application connections. On January 26, Microsoft patched a zero-day security feature bypass vulnerability (CVE-2026-21509) in Microsoft Office.
Kev Breen at Immersive notes that this month’s Patch Tuesday includes several fixes for remote code execution vulnerabilities affecting GitHub Copilot and multiple integrated development environments (IDEs), including VS Code, Visual Studio, and JetBrains products. The relevant CVEs are CVE-2026-21516, CVE-2026-21523, and CVE-2026-21256.
Breen said the AI vulnerabilities Microsoft patched this month stem from a command injection flaw that can be triggered through prompt injection, or tricking the AI agent into doing something it shouldn’t — like executing malicious code or commands.
“Developers are high-value targets for threat actors, as they often have access to sensitive data such as API keys and secrets that function as keys to critical infrastructure, including privileged AWS or Azure API keys,” Breen said. “When organizations enable developers and automation pipelines to use LLMs and agentic AI, a malicious prompt can have significant impact. This does not mean organizations should stop using AI. It does mean developers should understand the risks, teams should clearly identify which systems and workflows have access to AI agents, and least-privilege principles should be applied to limit the blast radius if developer secrets are compromised.”
The SANS Internet Storm Center has a clickable breakdown of each individual fix this month from Microsoft, indexed by severity and CVSS score. Enterprise Windows admins involved in testing patches before rolling them out should keep an eye on askwoody.com, which often has the skinny on wonky updates. Please don’t neglect to back up your data if it has been a while since you’ve done that, and feel free to sound off in the comments if you experience problems installing any of these fixes.
Generative AI is transforming cyber defense. Technical expertise remains critical, but AI-driven threats demand more than individual skill – they require the collective intelligence of the organization’s SOC. To understand how businesses are adapting, Infinity Global Services analyzed training consumption trends from 2023 to 2025. The findings reveal a decisive shift from individual courses to team-based subscriptions, signaling a new approach to workforce development in the age of AI. The Data: A Shift in Mindset Infinity Global Services’ training data shows a clear change in procurement strategies. Individual course purchases have declined by 33%, while team-based subscription models have surged, […]
The rapid adoption of AI is transforming the enterprise, unlocking unprecedented productivity and accelerating workflows at a record pace. However, this velocity creates a new productivity paradox: The faster AI moves, the more it can expose the organization to entirely new categories of risk. Without specialized guardrails, unchecked AI can inadvertently bypass company policies, violate legal standards, or ignore ethical norms.
To bridge this gap, Glean, the Work AI platform, and Palo Alto Networks Prisma® AIRS have integrated to provide an essential security layer that empowers organizations to adopt generative AI with confidence, helping ensure that massive productivity gains never come at the cost of trust, security or compliance.
Glean and Prisma AIRS stop AI attacks in runtime.Prompt injection threat blocked in real time.
Real-Time Defense Against the Modern AI Threat Surface
Generic filters often fail to catch the sophisticated nuances of AI-driven attacks. The integration of Glean and Prisma AIRS provides a purpose-built defense that acts in real time across three critical areas:
1. Neutralizing Prompt Injection
Prompt injections are malicious instructions designed to trick AI models into ignoring their safety protocols, potentially leading to the exposure of sensitive data or the execution of unauthorized actions.
For instance, an attacker could craft a prompt that causes the AI to leak its own system instructions leading to data loss. Glean and Prisma AIRS instantly detect these sophisticated manipulation attempts, blocking the request and notifying the user before the organization's integrity is compromised.
2. Safeguarding Against Harmful and Toxic Content
AI interactions must remain professional, ethical and safe.
By scanning both user prompts and AI-generated responses against organizational policy, Glean and Prisma AIRS automatically block requests that contain toxic, biased, or otherwise harmful content. This enables AI to remain a positive and productive asset for the entire workforce.
3. Preventing Malicious Code and Unsafe URLs
AI models can sometimes generate unsafe code snippets, get data from a poisoned source, or provide harmful links that lead to phishing sites or malware downloads.
For example, a developer might ask an AI assistant for a code library to process data, and the model could inadvertently suggest a malicious package that compromises the application. The Glean and Palo Alto Networks integration provides a crucial safety net, inspecting all generated content for malicious patterns and preventing employees from interacting with risky URLs, keeping the entire AI-driven development and research lifecycle secure.
Secure AI in Minutes with Out of the Box Integration
The true power of the Glean and Palo Alto Networks partnership lies in its simplicity. We’ve removed the friction of complex security configurations, enabling organizations to realize value immediately through a seamless, out of the box integration.
Onboarding is completed in three simple steps within the Glean admin console:
Navigate to AI Security and select Palo Alto Networks AI Runtime Security.
Paste your Prisma AIRS Runtime Security API Key.
Click Save.
Activate Prisma AIRS from the Glean admin console.
With these three clicks, the integration is live, providing an invisible but invincible layer of defense across your AI chats and agent interactions.
Glean admin panel showcasing all findings.
Partnering for a Secure AI Future
As enterprises scale their AI initiatives, specialized security becomes non-negotiable. Prisma AIRS provides the advanced, granular protection needed to catch threats that standard vendors can often miss, and its integration with Glean delivers that protection exactly where work happens.
Drive productivity, foster innovation, and secure your future with Glean and Palo Alto Networks.
Key Takeaways
Real-Time Threat Mitigation: Instantly block prompt injections, toxic content, and malicious code, transforming AI from a risk factor into a secure asset.
Frictionless Deployment: Achieve comprehensive AI security in minutes with a simple, three-click API integration within the Glean console.
Time to value: Scale AI adoption across the enterprise by ensuring every interaction complies with internal policies and global safety standards.
The cybersecurity landscape continues to evolve at an extraordinary pace. AI-driven threats are expanding the attack surface, demanding faster, more precise responses and greater resilience. At the same time, customers want fewer vendors, deeper integrations and trusted advisers who can help them achieve positive, measurable outcomes and reduce unnecessary complexity.
Meeting these challenges and expectations requires a potent combination of world-class technology and world-class partnership. That’s why, in 2026, Palo Alto Networks is evolving our partner program and unifying it with our value exchange framework.
We are excited to share that we have rolled out new program features. The changes we’re introducing are designed to strengthen how we work with our ecosystem across every partner motion – from resale and cosell to delivery, support and managed services. The goal of this evolution is simple: Create clearer, more scalable paths for growth and mutual success.
Why We’re Evolving to Meet the Demands of a Changing Market
The same forces transforming the cybersecurity landscape are also changing what it means to be a successful partner. As customers reduce their reliance on disparate point solutions, choose to consolidate platforms and lean harder on AI-driven automation, they’re turning to partners for much more than technology procurement. They want design guidance, integration expertise and ongoing, outcome-focused support.
Our partners are also clear about what they need from us. They’ve asked Palo Alto Networks for a partner program that is simpler to engage with, more predictable in how it rewards impact, and more closely aligned with how they build and deliver value across resale, services and managed offerings. Our partners also seek less complexity and more room to differentiate through their own investments and innovation.
The evolution of our partner program is our response not only to feedback from our partners but also to extensive market research. It will bring greater structure where our partners seek consistency, greater flexibility in how and where they innovate, as well as greater transparency in how the value they deliver is recognized. These strategic changes will help ensure our mutual customers benefit the most when they work with our vast and diverse ecosystem in today’s platform-first, outcome-driven marketplace.
A Unified Growth Model = Partner Program + Value Exchange
Palo Alto Networks NextWave Partner Program and value exchange framework were designed to work together, not as separate tracks, but as one powerful engine for driving growth. This unified framework makes it easier for partners to engage with us and get the most from the partner program. It rewards impact, expertise and customer success rather than focusing narrowly on transactions.
This evolved model is built on the foundation of three guiding principles:
Predictability – Consistent expectations and program structures that support long-term planning.
Repeatability – Enablement and tools that help partners scale practices with confidence.
Profitability – Incentives, rebates and routes to growth tied directly to customer value.
The new framework can help partners build sustainable businesses while accelerating the adoption of platformized AI-powered security. Let’s take a look at the many benefits our partner ecosystem may experience through this reimagined program.
What Our Partners Can Expect
Our redesigned partner program enables greater alignment between the investments you make and the outcomes you achieve. Across Palo Alto Networks NextWave Partner Program, we’re strengthening how partners can scale, differentiate and grow their business with improvements in three key areas.
1. Access That Accelerates Scale
We’re expanding access to the tools and resources that can help partners reach customers faster and deliver solutions with confidence:
Broader on-demand learning and persona-based enablement.
Labs and demos that make it easier to showcase platform value.
Improved quoting tools and API-driven automation that can ease operational friction.
Enhanced support resources that improve quality delivery and the customer experience.
These and other capabilities can help reduce complexity and accelerate your ability to propose, design and deploy high-quality, platform-based solutions for customers.
2. Commitment That Reflects Intentional Investment
As the cybersecurity market evolves, so does the definition of partnership. Our newly evolved program introduces clearer expectations and meaningful rewards for partners who invest in specialization and growth. We’re raising the bar on the program’s standards:
Higher bookings and growth targets.
Increased specialization depth across key areas.
New targeted rebates aligned to value creation.
A strengthened global distribution strategy to support scale.
These enhancements will recognize partners who lean into the platform approach and drive meaningful impact for customers.
3. Profitability That Helps Fuel Long-Term Growth
A top priority for our updated program is helping partners build predictable, repeatable and profitable business practices in 2026 and beyond. Here are some of the measures we’re introducing:
Default service quoting (Authorized Support Center and Authorized Professional Services) to help strengthen delivery economics.
Programmatic discounts and improved quoting tools to speed sales cycles.
A new Partner Development Fund (PDF) to help partners build capabilities and pipeline.
Our aim is to create a more consistent, performance-driven model that supports partner strategy today and creates room for expansion tomorrow.
What These Changes Mean for Customers
A more connected and enabledpartner ecosystem doesn’t just benefit our partners. It elevates the entire customer experience.
Customers can expect smoother, simplified engagement with trusted cybersecurity advisers who speak the same language and share the same goals. And with greater consistency across sales, delivery and ongoing support, organizations won’t be saddled by complexity that slows transformation and makes it harder to adopt, build and deploy AI boldly yet safely.
Customers can also move forward with greater confidence in expanding their use of our Palo Alto Networks integrated, AI-driven cybersecurity platform, knowing their partners are equipped with the training, tools and know-how to help guide them every step of the way.
Driving Shared Success Through the Value Exchange
The value exchange in cybersecurity reinforces a principle that has long guided the approach of Palo Alto Networks to partnering: Growth follows value creation. It’s the foundation for how we work with our ecosystem, strengthening connections among partners, customers and our platform.
This is the power of a global ecosystem moving with purpose. When platform innovation, partner expertise and customer needs are aligned, everything moves faster and desired outcomes are more readily achieved. Deployments accelerate, architectures are simplified, and enterprises gain the resilient security postures needed to withstand the pressures of an AI-driven threat landscape.
What’s Next
We encourage you to review a set of short videos in The Learning Center for Partners, which provide more details about the planned changes to Palo Alto Networks NextWave Partner Program.
We believe the year ahead offers one of the most significant opportunities for innovation and growth our ecosystem has ever seen. By reimagining our partner program and value exchange framework, Palo Alto Networks is doubling down on the promise of our shared success, mutual growth and long-term value.
To our partners, thank you, as always, for your commitment, collaboration and belief in what we’re creating together. What’s ahead is more than an evolution of a long-standing and successful partner program. It’s a new era of partnering with precision to build the future of cybersecurity.
Key Takeaways
A reimagined partner program accelerates sustainable growth. Beginning in early February, a single, scalable framework will guide every partner motion and reward meaningful impact.
Partners have more ways to scale and differentiate. Expanded enablement, automation and incentives can help build stronger, more profitable practices.
Customers will benefit from more consistent experiences. A more aligned ecosystem enables simpler engagement, smoother delivery and increased confidence in the platform.
Forward-Looking Statements
This blog contains forward-looking statements that involve risks, uncertainties and assumptions, including, without limitation, statements regarding the benefits, impact, or performance or potential benefits, impact or performance of our products and technologies or future products and technologies. These forward-looking statements are not guarantees of future performance, and there are a significant number of factors that could cause actual results to differ materially from statements made in this blog. We identify certain important risks and uncertainties that could affect our results and performance in our most recent Annual Report on Form 10-K, our most recent Quarterly Report on Form 10-Q, and our other filings with the U.S. Securities and Exchange Commission from time-to-time, each of which are available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. All forward-looking statements in this blog are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.
Not every cybersecurity practitioner thinks it’s worth the effort to figure out exactly who’s pulling the strings behind the malware hitting their company. The typical incident investigation algorithm goes something like this: analyst finds a suspicious file → if the antivirus didn’t catch it, puts it into a sandbox to test → confirms some malicious activity → adds the hash to the blocklist → goes for coffee break. These are the go-to steps for many cybersecurity professionals — especially when they’re swamped with alerts, or don’t quite have the forensic skills to unravel a complex attack thread by thread. However, when dealing with a targeted attack, this approach is a one-way ticket to disaster — and here’s why.
If an attacker is playing for keeps, they rarely stick to a single attack vector. There’s a good chance the malicious file has already played its part in a multi-stage attack and is now all but useless to the attacker. Meanwhile, the adversary has already dug deep into corporate infrastructure and is busy operating with an entirely different set of tools. To clear the threat for good, the security team has to uncover and neutralize the entire attack chain.
But how can this be done quickly and effectively before the attackers manage to do some real damage? One way is to dive deep into the context. By analyzing a single file, an expert can identify exactly who’s attacking his company, quickly find out which other tools and tactics that specific group employs, and then sweep infrastructure for any related threats. There are plenty of threat intelligence tools out there for this, but I’ll show you how it works using our Kaspersky Threat Intelligence Portal.
A practical example of why attribution matters
Let’s say we upload a piece of malware we’ve discovered to a threat intelligence portal, and learn that it’s usually being used by, say, the MysterySnail group. What does that actually tell us? Let’s look at the available intel:
First off, these attackers target government institutions in both Russia and Mongolia. They’re a Chinese-speaking group that typically focuses on espionage. According to their profile, they establish a foothold in infrastructure and lay low until they find something worth stealing. We also know that they typically exploit the vulnerability CVE-2021-40449. What kind of vulnerability is that?
As we can see, it’s a privilege escalation vulnerability — meaning it’s used after hackers have already infiltrated the infrastructure. This vulnerability has a high severity rating and is heavily exploited in the wild. So what software is actually vulnerable?
Got it: Microsoft Windows. Time to double-check if the patch that fixes this hole has actually been installed. Alright, besides the vulnerability, what else do we know about the hackers? It turns out they have a peculiar way of checking network configurations — they connect to the public site 2ip.ru:
So it makes sense to add a correlation rule to SIEM to flag that kind of behavior.
Now’s the time to read up on this group in more detail and gather additional indicators of compromise (IoCs) for SIEM monitoring, as well as ready-to-use YARA rules (structured text descriptions used to identify malware). This will help us track down all the tentacles of this kraken that might have already crept into corporate infrastructure, and ensure we can intercept them quickly if they try to break in again.
Kaspersky Threat Intelligence Portal provides a ton of additional reports on MysterySnail attacks, each complete with a list of IoCs and YARA rules. These YARA rules can be used to scan all endpoints, and those IoCs can be added into SIEM for constant monitoring. While we’re at it, let’s check the reports to see how these attackers handle data exfiltration, and what kind of data they’re usually hunting for. Now we can actually take steps to head off the attack.
And just like that, MysterySnail, the infrastructure is now tuned to find you and respond immediately. No more spying for you!
Malware attribution methods
Before diving into specific methods, we need to make one thing clear: for attribution to actually work, the threat intelligence provided needs a massive knowledge base of the tactics, techniques, and procedures (TTPs) used by threat actors. The scope and quality of these databases can vary wildly among vendors. In our case, before even building our tool, we spent years tracking known groups across various campaigns and logging their TTPs, and we continue to actively update that database today.
With a TTP database in place, the following attribution methods can be implemented:
Dynamic attribution: identifying TTPs through the dynamic analysis of specific files, then cross-referencing that set of TTPs against those of known hacking groups
Technical attribution: finding code overlaps between specific files and code fragments known to be used by specific hacking groups in their malware
Dynamic attribution
Identifying TTPs during dynamic analysis is relatively straightforward to implement; in fact, this functionality has been a staple of every modern sandbox for a long time. Naturally, all of our sandboxes also identify TTPs during the dynamic analysis of a malware sample:
The core of this method lies in categorizing malware activity using the MITRE ATT&CK framework. A sandbox report typically contains a list of detected TTPs. While this is highly useful data, it’s not enough for full-blown attribution to a specific group. Trying to identify the perpetrators of an attack using just this method is a lot like the ancient Indian parable of the blind men and the elephant: blindfolded folks touch different parts of an elephant and try to deduce what’s in front of them from just that. The one touching the trunk thinks it’s a python; the one touching the side is sure it’s a wall, and so on.
Technical attribution
The second attribution method is handled via static code analysis (though keep in mind that this type of attribution is always problematic). The core idea here is to cluster even slightly overlapping malware files based on specific unique characteristics. Before analysis can begin, the malware sample must be disassembled. The problem is that alongside the informative and useful bits, the recovered code contains a lot of noise. If the attribution algorithm takes this non-informative junk into account, any malware sample will end up looking similar to a great number of legitimate files, making quality attribution impossible. On the flip side, trying to only attribute malware based on the useful fragments but using a mathematically primitive method will only cause the false positive rate to go through the roof. Furthermore, any attribution result must be cross-checked for similarities with legitimate files — and the quality of that check usually depends heavily on the vendor’s technical capabilities.
Kaspersky’s approach to attribution
Our products leverage a unique database of malware associated with specific hacking groups, built over more than 25 years. On top of that, we use a patented attribution algorithm based on static analysis of disassembled code. This allows us to determine — with high precision, and even a specific probability percentage — how similar an analyzed file is to known samples from a particular group. This way, we can form a well-grounded verdict attributing the malware to a specific threat actor. The results are then cross-referenced against a database of billions of legitimate files to filter out false positives; if a match is found with any of them, the attribution verdict is adjusted accordingly. This approach is the backbone of the Kaspersky Threat Attribution Engine, which powers the threat attribution service on the Kaspersky Threat Intelligence Portal.
The cyber security industry faces a critical challenge: a growing skills gap that leaves organizations exposed to increasingly sophisticated threats. Businesses need qualified professionals who can secure systems and respond effectively, but finding and training those experts remains a global concern. To address this challenge, Infinity Global Services, which delivers practical learning designed to build real-world cyber security expertise, has partnered with CompTIA, a global leader in IT and cyber security education. This collaboration combines Infinity Global Services’ hands-on training approach with CompTIA’s globally recognized certifications, creating a powerful pathway for professionals to advance their careers and organizations to build […]
The New Frontier of Agentic Development: Accelerating Developer Productivity
The world of software development is undergoing a rapid transformation, driven by the rise of AI agents and autonomous tools. Factory is advancing this shift through agent-native development, a new paradigm where developers focus on high-level design and agents, called Droids, handle the execution. Designed to support work across the software development lifecycle, these agents enable a new mode of development, delivering significant gains in speed and productivity, without sacrificing developer control.
As developer workflows increasingly rely on autonomous development agents, the way software is built evolves. This shift introduces important security considerations, such as prompt injection, sensitive data loss, unsafe URL access and malicious code execution, which, if left unaddressed, can undermine the very benefits these agents offer. Accelerating productivity depends not just on deploying agents, but on deploying them securely. This is where Palo Alto Networks, with its purpose-built AI security platform, Prisma® AIRS™, plays a critical role.
The Productivity Paradox: Where Agents Introduce Risk
Autonomous agents operating across the software development lifecycle accelerate developer productivity, while also introducing a complex, language-driven threat surface that traditional security tools are not equipped to handle. As a result, new risks emerge, such as prompt injection or leaking secrets that extend beyond the visibility and control assumptions of traditional security approaches. Addressing these considerations is essential to preserving the benefits that agentic development provides.
Recognizing this shift, Palo Alto Networks has introduced targeted capabilities to accelerate secure development workflows. These efforts focus on three critical defense areas: preventing prompt injection, blocking sensitive data leaks and enabling robust malicious code detection capabilities, all of which are necessary to secure the full lifecycle of agent-driven systems.
The Solution: Securing Agentic Workflows for Acceleration
The solution is designed to convert security challenges directly into deployment confidence, dramatically accelerating productivity. By natively integrating Prisma AIRS within Factory’s Droid Shield Plus, the platform is able to inspect all large language model (LLM) interactions, including prompts, responses and subsequent tool calls, to enable comprehensive security across each interaction with the agent.
Prisma AIRS is a comprehensive platform designed to provide organizations with the visibility and control needed to safeguard AI agents across any environment. The platform continuously monitors agent behavior in real time to detect and prevent threats unique to agent-driven systems.
Droid Shield Plus, powered by Palo Alto Networks
How Security Drives Speed
Embedding security natively into the Factory platform enables two crucial outcomes. To start, it delivers a secure, agent-native development experience for every developer, fostering immediate trust in the integrity of the generated code and documentation. This assurance removes friction often associated with AI-powered workflows, which can accelerate enterprise adoption and scaling of the Factory platform across the organization.
When developers can trust the agents and the integrity of the generated code and documentation, they can innovate faster and deploy with greater confidence. Instead of waiting for security reviews or dealing with fragmentation, security is woven seamlessly into the development lifecycle.
Factory-Prisma AIRS Integration Flow
The integration follows a clear API Intercept design pattern:
• When a user enters a prompt or initiates work in Factory, Prisma AIRS intercepts the workflow. If a malicious prompt is detected, the platform can add logic to coach or block the user.
• Similarly, after the LLM generates code, Prisma AIRS intercepts the generated content. If secrets are detected, the platform again adds logic to coach or block the result before it reaches Factory or the user.
This real-time inspection of prompts and generated code enables development teams to be protected against threats, such as privilege escalation, prompt injection and malicious code execution, without disrupting developer velocity.
Deploy Bravely
Prisma AIRS 2.0 establishes a unified foundation for scalable and secure AI innovation. By combining Factory’s agent-native development platform with the threat detection capabilities of Palo Alto Networks Prisma AIRS, organizations gain a powerful advantage. Together, this approach helps organizations adopt agentic development with confidence by embedding security directly into the development experience.
For enterprises looking to confidently scale AI automation and realize the immense productivity gains offered by Factory’s Droids, integrating Prisma AIRS is the next step. This combined approach enables teams to "Deploy Bravely." To learn more about this strategic partnership and integration, see our latest integration announcement and review the Droid Shield Plus integration documentation.
Key Takeaways for Secure Agentic Development
When adopting Factory with Prisma AIRS, enterprises realize immediate benefits that accelerate their AI strategy:
Specialized Threat Defense
Enterprises gain real-time, targeted protection against agent-specific threats, specifically prompt injection attacks and data leaks, which legacy tools cannot address.
Native, Seamless Security Moving from a fragmented review process to a continuous, automated defense via API Interception, security enables compliance without slowing down development velocity.
Deployment Confidence The native integration transforms security risks into operational assurance, accelerating the large-scale enterprise adoption and scaling of your Factory agent-native automation initiatives.
Artificial intelligence has shifted to being the primary engine for market leadership. To compete, enterprises are shifting from general-purpose computing to AI factories, specialized infrastructures designed to manage the entire lifecycle of AI. However, this transition requires robust security without sacrificing performance and efficiency.
The integrated solution embeds zero trust security directly into the AI infrastructure, providing comprehensive protection without impacting AI performance. By deploying Palo Alto Networks Prisma® AIRS™ Network Intercept directly onto the NVIDIA BlueField and extending to the cloud, Prisma AIRS establishes an essential zero trust governance fabric for the AI factory, enabling enterprises to accelerate innovation while maintaining control.
This critical architectural shift enables optimal AI performance and infrastructure efficiency by offloading security processing to an isolated domain, while leveraging the DPU's hardware acceleration via NVIDIA DOCA to enforce security policies at line speed. The implementation also leverages real-time workload information captured using DOCA Argus, which is then passed to Cortex XSIAM® where it is used for AI-driven responses using the Cortex XSOAR® orchestration platform.
The AI Factory is the new engine for value creation, and securing it is a board-level imperative. The validation of Palo Alto Networks Prisma AIRS accelerated with NVIDIA BlueField within the NVIDIA Enterprise AI Factory enables a new security architecture for the AI era. We are embedding trust directly into the infrastructure, giving leaders the confidence to safeguard their proprietary intelligence and deploy AI bravely.
Kevin Deierling, senior vice president of Networking at NVIDIA said:
AI is transforming every industry and security must evolve to protect AI factories. To be scalable, security must be distributed and embedded within the AI infrastructure. This is achieved with NVIDIA BlueField running Palo Alto Networks Prisma AIRS to deliver robust, runtime security for the AI factory, with optimal AI performance and efficiency.
Deploy AI Bravely with a Future-Proof Foundation
The Future of Secure AI Factories
In addition to deploying Palo Alto Networks Prisma AIRS on NVIDIA BlueField in a distributed model, it’s essential to maintain a centralized Hyperscale Security Firewall (HSF) cluster at the ingress and egress points of the AI factory to enforce a defense-in-depth strategy. Beyond network segmentation, individual workloads can selectively route traffic through hyperscale clusters to detect advanced application-layer threats and prevent lateral movement. These hyperscale firewall clusters scale elastically with demand, delivering session resiliency and the high availability required for critical AI operations.
This architecture fundamentally improves the Total Cost of Ownership (TCO) for AI infrastructure. By isolating security functions on BlueField, enterprises enable 100% of host computing resources to be dedicated to AI applications. This elimination of resource contention allows the AI Factory to maximize token throughput and capital efficiency.
This validated design is the blueprint for immediate efficiency. It provides a seamless path for enterprises to shift from general-purpose clusters to secure AI factory infrastructure without costly overhauls. More importantly, this collaboration establishes an unparalleled roadmap for future-proofing your investment. By securing operations with the high-performance NVIDIA BlueField-3 today, the architecture is inherently ready for the next generation, NVIDIA BlueField-4. This forward compatibility helps AI factories immediately handle gigascale demands, scaling up to 6X the compute power and doubling the bandwidth when BlueField-4 becomes available.
The inclusion of the Palo Alto Networks Prisma AIRS platform in the NVIDIA Enterprise AI Factory Validated Design bolsters enterprise AI security. By establishing the zero trust governance fabric of Prisma AIRS runtime security on NVIDIA BlueField, organizations gain a comprehensive defense. Proprietary and sensitive data is secured throughout the entire stack, and models are protected from adversarial threats, such as prompt injection attacks. With Prisma AIRS, the world's most comprehensive AI security platform, leaders gain the confidence to innovate and deploy AI bravely. This validated design is the essential blueprint for securely accelerating your market leadership without compromising security.
Join our "How to Secure the AI Factory" breakout session at NVIDIA GTC 2026, March 16-19, in San Jose, CA to hear more about this transformative solution and accelerate your AI innovation securely.
Today’s security teams work in complex, multi-tool environments. Alerts flow from SIEMs, tickets are created in ITSM platforms, actions occur in cloud and network controls, and workflows span countless third-party services. To keep pace, automation must be open, flexible, and seamlessly connected across every system that matters. We’re excited to introduce two powerful new capabilities in Infinity Playblocks that take us one step closer to a truly open automation ecosystem: API Request Step and Webhook Trigger. Together, they unlock a new open garden approach to security automation – where Infinity Playblocks seamlessly integrates with any system, inbound or outbound, without […]
Transforming Real-Time Protection with Cloud-Delivered Security Services
Discover how unified prevention provides IT leaders with real-time protection across every attack surface.
The line between innovation and exposure has never been blurred in today’s hyperconnected digital world. Every new device, application and cloud workload expands the modern attack surface, creating endless opportunities for adversaries who are scaling faster and becoming more sophisticated than ever before. The threat landscape is no longer defined by isolated malware or phishing emails. The modern attack surface has evolved into a dynamic, adaptive ecosystem, driven by automation and artificial intelligence. Traditional security is no longer enough against AI-powered adversaries; protecting the modern attack surface demands a unified, intelligent Cloud-Delivered Security Services (CDSS) platform.
Attackers have learned to weaponize the same innovations that once gave defenders an advantage. Generative AI now enables them to craft convincing phishing messages, generate polymorphic malware that changes with every delivery, and automate reconnaissance at an unprecedented scale. Ransomware groups operate with the speed and agility of modern startups, using AI to identify weaknesses while staying one step ahead of detection. The result is an era where breaches unfold in minutes rather than days. Organizations are left with little room for error, underscoring the urgent need for security that goes beyond traditional approaches.
The Power of a Unified, Cloud-Delivered Security Service Platform
The power of our Cloud-Delivered Security Services lies in our ability to bring together every layer of protection into a single, intelligent, connected system. This unified platform combines Advanced Threat Prevention, Advanced WildFire® (AWF), Advanced DNS Security (ADNS) and Advanced URL Filtering (AURL) into a single AI-powered fabric that operates at the speed of the cloud.
Powered by Precision AI®, this framework delivers real-time contextual awareness across every stage of the attack lifecycle. It continuously analyzes billions of signals across networks, users and applications to transform raw data into actionable insights. This capability enables organizations to move from reactive detection to proactive prevention, stopping threats before they can disrupt operations.
Every day, our CDSS services analyze up to 5.43 billion new events, detect nearly 8.95 million never-before-seen attacks, and block up to 30.9 billion threats inline. This scale of visibility is strengthened by AI that’s trained on shared threat data from more than 70,000 customers, creating a powerful network effect that delivers patient-zero prevention everywhere. This depth of intelligence provides the visibility and context needed to understand and stop even the most sophisticated attacks as they evolve.
CDSS Advanced Core Security Services
Shared telemetry flows naturally across services, helping ensure threats are detected and prevented without operating in silos. A phishing domain identified by Advanced DNS Security can immediately inform Advanced URL Filtering to block the malicious site. When Advanced WildFire uncovers a new zero-day technique or malicious artifact, that intelligence is shared instantly across the CDSS intelligence layer. Inline services, like Advanced Threat Prevention and Advanced URL Filtering, are enabled to strengthen protections in real time without manual intervention.
For IT leaders and security teams, this unified approach delivers comprehensive visibility and protection that keeps pace with their environment. Continuous intelligence adapts as conditions change, reducing complexity and improving operational efficiency. With consistent policy enforcement, faster decision-making and unified management across the enterprise, organizations can shift their focus from maintenance to innovation and growth.
The Moment Traditional Security Stopped Being Enough
There was a time when traditional network security was enough. Perimeter defenses and signature-based tools could reliably detect and block most threats before they caused harm. For years, this layered approach gave organizations a sense of confidence and control. But that moment has passed.
Our Unit 42® team has found that attacks are now faster, more sophisticated and more disruptive than ever, with 86% of major incidents in 2024 resulting in business disruption. This shift underscores how quickly traditional defenses are being outpaced and why yesterday’s security models no longer match today’s threat landscape.
Traditional, siloed architectures simply cannot keep up with modern attackers. What once served as a strong defense is now outmatched by adversaries who use AI to move faster and slip through the cracks of static security controls. Attackers no longer need to rely on predictable patterns or known exploits. They can use machine learning to probe defenses, mimic legitimate activity and disguise malicious activity within normal traffic, allowing them to bypass systems once they are considered unbreakable.
Older security products that depend on static signatures or manual policy updates cannot match this speed and scale. They respond to what has already happened, not to what is happening right now. By the time a new rule is written or a patch is applied, the threat has already evolved. Fragmented visibility and delayed response times give adversaries the upper hand, leaving IT teams defending blind against threats that adapt and shift faster than their defenses ever could.
In an effort to compensate, many organizations continue to add more tools: One for web filtering, one for DNS, one for malware analysis and one for data protection. While each solution provides value, they rarely work together. The result is an overcomplicated ecosystem of disconnected products that create visibility gaps, duplicate alerts, inconsistent policy enforcement and operational overhead. These gaps are exactly where attackers find their opportunity.
The moment traditional security stopped being enough was the moment attackers learned to think and move like machines. The rise of AI-powered threats marked the end of static defense and the beginning of a new kind of warfare, one that demands prevention and is predictive, adaptive and unified.
Today, enterprises don’t need more point products. They need a single, intelligent security fabric so they can see, understand and act across every vector, from DNS to SaaS to endpoint, in one coordinated motion. Attackers increasingly weaponize GenAI to craft more evasive phishing pages, malware and domain infrastructures. So, security teams must rely on defenses that can counter these techniques in real time by effectively battling AI with AI.
Having the proper security tools is only part of the equation. Real protection comes when those tools are fully enabled, integrated and working together to secure the organization. Cloud-delivered security services deliver their greatest value when they are live and continuously analyzing traffic, sharing intelligence and adapting in real time to support the business.
Too often, organizations have the right capabilities in place but leave them underutilized or inactive. Protection begins the moment each service is turned on and working in unison to deliver real-time prevention at scale. Ensuring that these capabilities are fully enabled and actively defending the network is what turns investment into impact.
Prevention is about readiness, not reaction. The most resilient organizations are those that activate early, integrate completely and allow automation to amplify what human oversight cannot. When IT leaders enable Advanced Threat Prevention, AWF, ADNS and AURL, prevention becomes continuous, intelligent and aligned with the pace of modern threats.
The power of our CDSS lies in both their advanced technology and the unity they create. Together, these services form an intelligent defense that connects detection, prevention and response into one seamless operation, all powered by Precision AI.
Precision AI Foundation for Advanced Security Services
Now, with AI reshaping both innovation and risk, CDSS helps organizations stay confidently ahead. IT leaders who enable these capabilities strengthen visibility, simplify operations and elevate their overall security posture.
Fully enable your defenses and have them ready to prevent threats at each stage of the attack lifecycle. Learn more about activating CDSS through Strata™ Cloud manager or speak with your Palo Alto Networks representative to see how unified, AI-powered prevention can strengthen your organization’s security posture.
Key Takeaways
Traditional Security Is Insufficient Against Modern Threats
The rise of AI and automation has created an era in which attackers move faster and are more sophisticated than traditional, siloed security products can handle, leading to an increasing number of major incidents that disrupt business.
Unified Cloud-Delivered Security Services (CDSS) Are Necessary for Proactive Prevention
Protecting the modern attack surface requires a single, intelligent, connected CDSS platform that unifies all layers of protection (e.g., Advanced Threat Prevention, AWF, ADNS, AURL) into an AI-powered fabric, enabling proactive, real-time prevention rather than reactive detection.
Real Protection Depends on Full Activation and Integration Having the right security tools is only part of the solution. The greatest value and protection are realized when your fully enabled CDSS is integrated and working in unison to continuously analyze traffic and share intelligence.
Define Security Success Organizations face an escalating threat landscape and a widening cyber security skills gap. Compliance-driven training alone cannot prepare teams for real-world challenges like incident response, SOC operations, and threat hunting. Without robust, practical training, defenses weaken, and vulnerabilities multiply. Recent data from Cybrary – a leading cyber security training platform – shows how modern approaches are transforming readiness. Cybrary specializes in practical, role-based learning for security professionals. Through its partnership with Check Point’s Infinity Global Services, organizations gain access to structured programs that combine industry-recognized certifications, hands-on labs, and customized learning paths. The Impact of Cyber Security […]
If you’re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.
Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.
This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.
First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.
Finally, we will cover detection strategies to identify and respond to this type of activity.
COM/DCOM technology
What is COM?
COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.
These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).
COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.
Terms
To ensure a better understanding of its structure and functionality, let’s revise COM-related terminology.
COM interface A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
COM class (COM CoClass) A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.
All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class’s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:
InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We’ll describe this in more detail later.
ProgID: A human-readable name assigned to the COM class.
TypeLib: A binary description of the COM class (essentially documentation for the class).
AppID: Used to describe security configuration for the class.
COM server A COM is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry. The COM runtime uses the registry to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation
Client–server model
In-process server In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
In-process COM server
Out-of-process server Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server
What is DCOM?
DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user’s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.
Under the hood, however, COM uses TCP as the RPC transport layer to enable communication across machines.
Distributed COM implementation
Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.
The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:
These registry settings grant remote clients permissions to activate and interact with DCOM objects.
Lateral movement via DCOM
After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.
In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.
ShellWindows
ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.
In Impacket’s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.
Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.
Internally, the script runs this command when connecting:
cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1
This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.
When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:
ShellBrowserWindow
The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
MMC20
The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.
This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.
As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:
OUTPUT_FILENAME = '__' + str(time.time())[:5]
Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.
Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.
Windows Server 2025
Windows Server 2022
Windows 11
Windows 10
ShellWindows
Doesn’t work
Doesn’t work
Works but needs a fix
Works but needs a fix
ShellBrowserWindow
Doesn’t work
Doesn’t work
Doesn’t work
Works but needs a fix
MMC20
Detected by Defender
Works
Works
Works
Enumerating COM/DCOM objects
The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:
Finding objects and filtering specifically for DCOM objects.
Identifying their interfaces.
Inspecting the exposed functions.
Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.
There are several approaches to enumerating COM objects depending on their use cases. Next, we’ll describe the methods I used while conducting this research, taking into account both automated and manual methods.
Automation using PowerShell In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
Shell.Application COM object function list in PowerShell
Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.
Our strategy looks like this:
As you can see in the last box, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.
However, this approach has several limitations:
TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
Automation using C++ As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces.However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.
This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it.The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces.So, the strategy is to:
Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.
This method has several advantages:
No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.
The figure below illustrates this strategy:
This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.
Manual inspection using open-source tools
As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
List all COM classes in the system.
Create instances of those classes.
Check their supported interfaces.
Call specific functions.
Apply various filters for easier analysis.
Perform other inspection tasks.
Open-source tool for inspecting COM interfaces
One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.
This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
Control Panel items as attack surfaces
Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.
Control Panel items
Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.
Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.
Under HKEY_CURRENT_USER:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Under HKEY_LOCAL_MACHINE:
For 64-bit DLLs:
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
For 32-bit DLLs:
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the “Control Panel” subkey and its “Cpls” subkey under HKCU should be created manually, unlike the “Control Panel” and “Cpls” subkeys under HKLM, which are created automatically by the operating system.
Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.
It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.
There are multiple ways to execute Control Panel items:
This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.
However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):
COM Surrogate process loading the CPL file
What happened was that a Control Panel with a COM client used a COM object to load these CPL files. We will talk about this COM object in more detail later.
The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process’s address space. This isolation improves stability for the in-process server model. This hosting behavior can be configured for a COM object in the registry if you want a COM server DLL to run inside a separate process because, by default, it is loaded in the same process.
‘DCOMing’ through Control Panel items
While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.
IOpenControlPanel interface in the OleViewDotNet output
I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.
COpenControlPanel interfaces without TypeLib
Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.
Full type and function definitions are provided in the shobjidl_core.h Windows header file.
Open function exposed by IOpenControlPanel interface
It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.
COpenControlPanel interfaces without names
Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL that basically just spawns a message box which is defined under the DllEntryPoint function. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.
As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.
COM Surrogate loading a custom DLL and hosting the COpenControlPanel object
Based on my testing, I made the following observations:
The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.
Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.
COpenControlPanel object in OleViewDotNet
Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.
Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.
Malicious DLL loading using DCOM
The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.
Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).
As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.
Detection
The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:
Although commonly known best practices and research papers regarding Windows security advise monitoring only the first subkey, for thorough coverage it is important to monitor all of the above.
In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.
Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.
Conclusion
In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:
As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.
If you’re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.
Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.
This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.
First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.
Finally, we will cover detection strategies to identify and respond to this type of activity.
COM/DCOM technology
What is COM?
COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.
These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).
COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.
Terms
To ensure a better understanding of its structure and functionality, let’s revise COM-related terminology.
COM interface A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
COM class (COM CoClass) A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.
All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class’s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:
InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We’ll describe this in more detail later.
ProgID: A human-readable name assigned to the COM class.
TypeLib: A binary description of the COM class (essentially documentation for the class).
AppID: Used to describe security configuration for the class.
COM server A COM is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry. The COM runtime uses the registry to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation
Client–server model
In-process server In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
In-process COM server
Out-of-process server Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server
What is DCOM?
DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user’s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.
Under the hood, however, COM uses TCP as the RPC transport layer to enable communication across machines.
Distributed COM implementation
Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.
The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:
These registry settings grant remote clients permissions to activate and interact with DCOM objects.
Lateral movement via DCOM
After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.
In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.
ShellWindows
ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.
In Impacket’s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.
Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.
Internally, the script runs this command when connecting:
cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1
This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.
When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:
ShellBrowserWindow
The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
MMC20
The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.
This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.
As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:
OUTPUT_FILENAME = '__' + str(time.time())[:5]
Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.
Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.
Windows Server 2025
Windows Server 2022
Windows 11
Windows 10
ShellWindows
Doesn’t work
Doesn’t work
Works but needs a fix
Works but needs a fix
ShellBrowserWindow
Doesn’t work
Doesn’t work
Doesn’t work
Works but needs a fix
MMC20
Detected by Defender
Works
Works
Works
Enumerating COM/DCOM objects
The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:
Finding objects and filtering specifically for DCOM objects.
Identifying their interfaces.
Inspecting the exposed functions.
Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.
There are several approaches to enumerating COM objects depending on their use cases. Next, we’ll describe the methods I used while conducting this research, taking into account both automated and manual methods.
Automation using PowerShell In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
Shell.Application COM object function list in PowerShell
Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.
Our strategy looks like this:
As you can see in the last box, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.
However, this approach has several limitations:
TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
Automation using C++ As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces.However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.
This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it.The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces.So, the strategy is to:
Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.
This method has several advantages:
No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.
The figure below illustrates this strategy:
This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.
Manual inspection using open-source tools
As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
List all COM classes in the system.
Create instances of those classes.
Check their supported interfaces.
Call specific functions.
Apply various filters for easier analysis.
Perform other inspection tasks.
Open-source tool for inspecting COM interfaces
One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.
This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
Control Panel items as attack surfaces
Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.
Control Panel items
Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.
Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.
Under HKEY_CURRENT_USER:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Under HKEY_LOCAL_MACHINE:
For 64-bit DLLs:
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
For 32-bit DLLs:
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the “Control Panel” subkey and its “Cpls” subkey under HKCU should be created manually, unlike the “Control Panel” and “Cpls” subkeys under HKLM, which are created automatically by the operating system.
Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.
It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.
There are multiple ways to execute Control Panel items:
This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.
However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):
COM Surrogate process loading the CPL file
What happened was that a Control Panel with a COM client used a COM object to load these CPL files. We will talk about this COM object in more detail later.
The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process’s address space. This isolation improves stability for the in-process server model. This hosting behavior can be configured for a COM object in the registry if you want a COM server DLL to run inside a separate process because, by default, it is loaded in the same process.
‘DCOMing’ through Control Panel items
While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.
IOpenControlPanel interface in the OleViewDotNet output
I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.
COpenControlPanel interfaces without TypeLib
Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.
Full type and function definitions are provided in the shobjidl_core.h Windows header file.
Open function exposed by IOpenControlPanel interface
It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.
COpenControlPanel interfaces without names
Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL that basically just spawns a message box which is defined under the DllEntryPoint function. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.
As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.
COM Surrogate loading a custom DLL and hosting the COpenControlPanel object
Based on my testing, I made the following observations:
The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.
Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.
COpenControlPanel object in OleViewDotNet
Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.
Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.
Malicious DLL loading using DCOM
The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.
Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).
As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.
Detection
The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:
Although commonly known best practices and research papers regarding Windows security advise monitoring only the first subkey, for thorough coverage it is important to monitor all of the above.
In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.
Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.
Conclusion
In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:
As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.
The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.
Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurityandcybersecurity for AI.
I was honored to cover these key principals in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services” convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity. We assessed how existing laws and frameworks apply in the age of AI.
The Defense Advantage Is AI-Powered Security Operations
Attacks have become faster, with the time from compromise to data exfiltration now 100 times faster than four years ago. The financial sector bears disproportionate risk, given the value of its data and interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.
Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42® have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.
To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.
Security operations centers (SOC) have long been overwhelmed by traditional alerts and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.
The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.
AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of this necessity, consider our own security operations. Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that down to a single actionable incident.
Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:
One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.
These improvements are critical to stopping threat actors. But none of this would be possible without AI.
Securing the New AI Attack Surface
As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.
Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.
Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.
A Secure AI by Design
Even with an understanding of the risks, many organizations struggle with the lack of clarity on what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed a Secure AI by Design policy roadmap that provides organizations with a comprehensive roadmap that integrates security throughout the entire AI lifecycle.
A proactive stance ensures security is a feature, not an afterthought, crucial for building trust, maintaining compliance and mitigating risks. The approach addresses four imperatives organizations most pressingly face in AI adoption:
1. Secure the use of external AI tools.
2. Secure the underlying AI infrastructure and data.
3. Safely build and deploy AI applications.
4. Monitor and control AI agents.
The Path Forward
For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.
Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.
Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.
Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.
Artificial Intelligence is transforming every industry, unlocking new opportunities while introducing new risks. That is why Infinity Global Services (IGS) is proud to announce the launch of our first dedicated AI security training courses. This is the first release in a growing IGS AI services portfolio, with upcoming offerings focused on AI red teaming, AI governance and AI implantation consulting services. The new courses are part of Infinity Global Services’ mission to empower organizations with the knowledge and tools to defend against emerging AI-driven threats and implementing AI securely in their operations and product development. Through hands-on training and expert-led […]
The attack surface for today’s enterprises is incredibly heterogeneous and dynamic. Applications and data are in constant motion, spanning public clouds, private data centers and edge locations. Users connect from anywhere.
For security leaders, this environment has led to an explosion in not only operational complexity, but in many cases, uncertainty. Together, Nutanix and Palo Alto Networks enable security to finally match the speed and scale of these dynamic hybrid cloud environments.
The security ecosystem has become vast and complex. Point solutions accumulate to address specific gaps, yet each adds another interface, another policy language and another integration to manage. However well intentioned, this sprawl can lead directly to fractured visibility, overlapping tools and operational fatigue.
Elevate Perimeter Protection to Defense-in-Depth
Enterprises today face unprecedented security complexity as hybrid and multicloud environments become the new normal. Currently, 94% of enterprises use some form of cloud service, while 89% report having a multicloud strategy in place. This distributed reality means security is paramount: while managing cloud spending is the number one operational challenge (82% overall), security remains a major concern, affecting 79% of all organizations.
Hybrid cloud adoption offers agility, but it also introduces distinct security challenges that strain traditional approaches. Adversaries have taken notice. Hybrid and multicloud environments are prime targets because they connect sensitive data, privileged accounts and critical systems across public, and on-premises infrastructure. Perimeter-based security models, built for static networks and centralized data centers, cannot keep pace in a world where apps and data continuously move between platforms.
Defense-in-depth has become essential for addressing the inherent dynamism of today’s environments. Network visibility is required to monitor and contain east-west traffic and lateral movement of threats inside cloud environments. Identity controls must verify every user, device and interaction across a distributed workforce. Data protection must follow sensitive information as it traverses multiple clouds, data centers and edge locations.
Yet managing these protections as distinct layers is no longer viable. Each cloud provider introduces its own native security controls. Each additional tool adds another interface and another policy set to maintain. Defense-in-depth only achieves its purpose when its layers are fully unified, providing consistent control enforcement from the edge to the core, comprehensive visibility across traffic, and essential data protections for all workloads, wherever they reside.
Freedom of Choice Without Fragmentation
Hybrid environments span public clouds, private infrastructure, SaaS ecosystems and legacy on-premises systems. No single vendor can realistically cover that entire landscape, and forcing security into a single closed ecosystem risks creating gaps where those environments meet.
The answer lies in an open ecosystem approach that allows organizations to assemble best-of-breed capabilities rather than being locked into a single provider’s stack.
This flexibility empowers security teams to adapt to the unique requirements of each environment while still operating through a unified security model. Policies can be applied consistently, intelligence can be shared across layers, and protections can move in step with workloads, regardless of platform. In short, this model can effectively support freedom of choice while relieving the operational burden of managing hybrid and multicloud security.
A Unified Security Layer Across Every Environment
Open ecosystems solve the problem of choice. What remains is the challenge of bringing those best-of-breed capabilities together into a solution that is coherent and scalable.
Consistent policy enforcementacross public cloud, private data centers and edge locations through a centralized management plane:
A single set of policies should be authored once and pushed everywhere, assuring a consistent security posture across all clouds and environments.
Abstraction of security intent from network coordinates through tag-driven automation, an approach that allows security policies to be expressed in terms of workload attributes (rather than IPs or locations):
These protections follow workloads automatically as they move. Through integration with orchestration pipelines, this approach aligns controls with rapid application rollouts in CI/CD workflows, all without manual reconfiguration.
With these core capabilities, security can finally catch up to the fluidity promised by hybrid cloud operating models.
Explore howPalo Alto Networks and Nutanix, work together to make this unified vision a reality, including joint offerings, like Palo Alto Networks secured Nutanix clusters with VM-Series Firewalls for AWS® and Microsoft® Azure.
While social engineering attacks such as phishing are a great way to gain a foothold in a target environment, direct attacks against externally exploitable services are continuing to make headlines. […]
Kent Ickler & Jordan Drysdale // BHIS Webcast and Podcast This post accompanies BHIS’s webcast recorded on August 7, 2018, Active Directory Best Practices to Frustrate Attackers, which you can view below. […]
Jordan Drysdale// Full disclosure and tl;dr: The NCC Group has developed an amazing toolkit for analyzing your AWS infrastructure against Amazon’s best practices guidelines. Start here: https://github.com/nccgroup/Scout2 Then, access your […]