
How does cyberthreat attribution help in practice?

Not every cybersecurity practitioner thinks it’s worth the effort to figure out exactly who’s pulling the strings behind the malware hitting their company. The typical incident investigation algorithm goes something like this: the analyst finds a suspicious file → if the antivirus didn’t catch it, puts it into a sandbox to test → confirms some malicious activity → adds the hash to the blocklist → goes for a coffee break. These are the go-to steps for many cybersecurity professionals — especially when they’re swamped with alerts, or don’t quite have the forensic skills to unravel a complex attack thread by thread. However, when dealing with a targeted attack, this approach is a one-way ticket to disaster — and here’s why.

If an attacker is playing for keeps, they rarely stick to a single attack vector. There’s a good chance the malicious file has already played its part in a multi-stage attack and is now all but useless to the attacker. Meanwhile, the adversary has already dug deep into corporate infrastructure and is busy operating with an entirely different set of tools. To clear the threat for good, the security team has to uncover and neutralize the entire attack chain.

But how can this be done quickly and effectively before the attackers manage to do some real damage? One way is to dive deep into the context. By analyzing a single file, an expert can identify exactly who’s attacking their company, quickly find out which other tools and tactics that specific group employs, and then sweep the infrastructure for any related threats. There are plenty of threat intelligence tools out there for this, but I’ll show you how it works using our Kaspersky Threat Intelligence Portal.

A practical example of why attribution matters

Let’s say we upload a piece of malware we’ve discovered to a threat intelligence portal, and learn that it’s typically used by, say, the MysterySnail group. What does that actually tell us? Let’s look at the available intel:

MysterySnail group information

First off, these attackers target government institutions in both Russia and Mongolia. They’re a Chinese-speaking group that typically focuses on espionage. According to their profile, they establish a foothold in infrastructure and lay low until they find something worth stealing. We also know that they typically exploit the vulnerability CVE-2021-40449. What kind of vulnerability is that?

CVE-2021-40449 vulnerability details

As we can see, it’s a privilege escalation vulnerability — meaning it’s used after hackers have already infiltrated the infrastructure. This vulnerability has a high severity rating and is heavily exploited in the wild. So what software is actually vulnerable?

Vulnerable software

Got it: Microsoft Windows. Time to double-check if the patch that fixes this hole has actually been installed. Alright, besides the vulnerability, what else do we know about the hackers? It turns out they have a peculiar way of checking network configurations — they connect to the public site 2ip.ru:

Technique details

So it makes sense to add a correlation rule to SIEM to flag that kind of behavior.
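
As a rough illustration, the logic of such a rule can be sketched in Python. The log field names (`src`, `dest_host`) are hypothetical; in practice, the check would be expressed in your SIEM's own correlation-rule language rather than in code:

```python
# Hypothetical log events: the field names are invented for illustration.
SUSPICIOUS_DOMAINS = {"2ip.ru"}  # public IP-checking site abused by MysterySnail

def flag_suspicious_events(events):
    """Return events whose destination is a watched domain or a subdomain of one."""
    suffixes = tuple("." + d for d in SUSPICIOUS_DOMAINS)
    hits = []
    for event in events:
        dest = event.get("dest_host", "").lower().rstrip(".")
        if dest in SUSPICIOUS_DOMAINS or dest.endswith(suffixes):
            hits.append(event)
    return hits
```

Run over proxy or DNS logs, any internal host reaching out to 2ip.ru gets flagged for a closer look.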

Now’s the time to read up on this group in more detail and gather additional indicators of compromise (IoCs) for SIEM monitoring, as well as ready-to-use YARA rules (structured text descriptions used to identify malware). This will help us track down all the tentacles of this kraken that might have already crept into corporate infrastructure, and ensure we can intercept them quickly if they try to break in again.

Additional MysterySnail reports

Kaspersky Threat Intelligence Portal provides a ton of additional reports on MysterySnail attacks, each complete with a list of IoCs and YARA rules. These YARA rules can be used to scan all endpoints, and those IoCs can be added into SIEM for constant monitoring. While we’re at it, let’s check the reports to see how these attackers handle data exfiltration, and what kind of data they’re usually hunting for. Now we can actually take steps to head off the attack.
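
A minimal sketch of what such an endpoint sweep looks like when the IoCs are file hashes. The directory layout and hash list here are placeholders, not real MysterySnail indicators:

```python
# Hash every file under a directory and compare against known-bad SHA-256
# hashes pulled from a threat intelligence report. Illustrative only.
import hashlib
from pathlib import Path

def sha256_of(path):
    """SHA-256 of a file, read in chunks to handle large binaries."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def sweep(root, ioc_hashes):
    """Yield paths whose SHA-256 matches a known indicator of compromise."""
    for path in Path(root).rglob("*"):
        if path.is_file() and sha256_of(path) in ioc_hashes:
            yield path
```

In a real environment, the same matching is done by EDR agents or YARA scans rather than a script walking the filesystem, but the principle is identical.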

And just like that, MysterySnail, the infrastructure is now tuned to find you and respond immediately. No more spying for you!

Malware attribution methods

Before diving into specific methods, we need to make one thing clear: for attribution to actually work, the threat intelligence provider needs a massive knowledge base of the tactics, techniques, and procedures (TTPs) used by threat actors. The scope and quality of these databases can vary wildly among vendors. In our case, before even building our tool, we spent years tracking known groups across various campaigns and logging their TTPs, and we continue to actively update that database today.

With a TTP database in place, the following attribution methods can be implemented:

  1. Dynamic attribution: identifying TTPs through the dynamic analysis of specific files, then cross-referencing that set of TTPs against those of known hacking groups
  2. Technical attribution: finding code overlaps between specific files and code fragments known to be used by specific hacking groups in their malware

Dynamic attribution

Identifying TTPs during dynamic analysis is relatively straightforward to implement; in fact, this functionality has been a staple of every modern sandbox for a long time. Naturally, all of our sandboxes also identify TTPs during the dynamic analysis of a malware sample:

TTPs of a malware sample

The core of this method lies in categorizing malware activity using the MITRE ATT&CK framework. A sandbox report typically contains a list of detected TTPs. While this is highly useful data, it’s not enough for full-blown attribution to a specific group. Trying to identify the perpetrators of an attack using just this method is a lot like the ancient Indian parable of the blind men and the elephant: each man touches a different part of the elephant and tries to deduce what’s in front of him from just that. The one touching the trunk thinks it’s a python; the one touching the side is sure it’s a wall, and so on.

Blind men and an elephant
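
Limitations aside, the TTP cross-referencing step itself boils down to a set comparison. Assuming the sandbox emits MITRE ATT&CK technique IDs, a naive version ranks known groups by Jaccard similarity of TTP sets. The group profiles below are invented for illustration; a real engine also weights techniques by how rare they are:

```python
# Toy TTP profiles: these sets are made up, not real group intelligence.
GROUP_TTPS = {
    "MysterySnail": {"T1068", "T1016", "T1071", "T1005"},
    "SomeOtherGroup": {"T1566", "T1059", "T1071"},
}

def jaccard(a, b):
    """Jaccard similarity of two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if a | b else 0.0

def attribute_by_ttps(observed_ttps):
    """Rank known groups by TTP overlap with the observed sample."""
    scores = {g: jaccard(observed_ttps, ttps) for g, ttps in GROUP_TTPS.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```

A sample exhibiting three of MysterySnail's four profiled techniques would score 0.75 against that group, and far lower against the rest, which is exactly the "partial view" the parable warns about: the ranking is suggestive, not conclusive.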

Technical attribution

The second attribution method is handled via static code analysis (though keep in mind that this type of attribution is always problematic). The core idea here is to cluster even slightly overlapping malware files based on specific unique characteristics. Before analysis can begin, the malware sample must be disassembled. The problem is that alongside the informative and useful bits, the recovered code contains a lot of noise. If the attribution algorithm takes this non-informative junk into account, any malware sample will end up looking similar to a great number of legitimate files, making quality attribution impossible. On the flip side, attributing malware based only on the useful fragments with a mathematically primitive method will send the false positive rate through the roof. Furthermore, any attribution result must be cross-checked for similarities with legitimate files — and the quality of that check usually depends heavily on the vendor’s technical capabilities.
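
A toy version of this idea, using byte n-grams in place of features of disassembled code: fragments that also appear in a "legitimate" corpus are discarded first, so only informative fragments drive the score. This is purely illustrative; a production engine filters noise far more carefully:

```python
def ngrams(data, n=4):
    """Distinct byte n-grams of a blob; a crude stand-in for code features."""
    return {data[i:i + n] for i in range(len(data) - n + 1)}

def attribute_by_code(sample, group_samples, legit_samples, threshold=0.3):
    """Score the sample against each group, ignoring fragments found in clean files."""
    legit_grams = set().union(*(ngrams(s) for s in legit_samples)) if legit_samples else set()
    informative = ngrams(sample) - legit_grams  # drop non-informative "noise"
    results = []
    for group, known in group_samples.items():
        score = len(informative & ngrams(known)) / max(len(informative), 1)
        if score >= threshold:
            results.append((group, round(score, 2)))
    return sorted(results, key=lambda kv: kv[1], reverse=True)
```

Without the `legit_grams` subtraction, shared runtime or library code would inflate every score — which is precisely the noise problem described above.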

Kaspersky’s approach to attribution

Our products leverage a unique database of malware associated with specific hacking groups, built over more than 25 years. On top of that, we use a patented attribution algorithm based on static analysis of disassembled code. This allows us to determine — with high precision, and even a specific probability percentage — how similar an analyzed file is to known samples from a particular group. This way, we can form a well-grounded verdict attributing the malware to a specific threat actor. The results are then cross-referenced against a database of billions of legitimate files to filter out false positives; if a match is found with any of them, the attribution verdict is adjusted accordingly. This approach is the backbone of the Kaspersky Threat Attribution Engine, which powers the threat attribution service on the Kaspersky Threat Intelligence Portal.

  •  

Closing the Cyber Security Skills Gap: Check Point Partners with CompTIA

The cyber security industry faces a critical challenge: a growing skills gap that leaves organizations exposed to increasingly sophisticated threats. Businesses need qualified professionals who can secure systems and respond effectively, but finding and training those experts remains a global concern. To address this challenge, Infinity Global Services, which delivers practical learning designed to build real-world cyber security expertise, has partnered with CompTIA, a global leader in IT and cyber security education. This collaboration combines Infinity Global Services’ hands-on training approach with CompTIA’s globally recognized certifications, creating a powerful pathway for professionals to advance their careers and organizations to build […]

The post Closing the Cyber Security Skills Gap: Check Point Partners with CompTIA appeared first on Check Point Blog.

  •  

Prisma AIRS Secures the Power of Factory’s Software Development Agents

The New Frontier of Agentic Development: Accelerating Developer Productivity

The world of software development is undergoing a rapid transformation, driven by the rise of AI agents and autonomous tools. Factory is advancing this shift through agent-native development, a new paradigm where developers focus on high-level design and agents, called Droids, handle the execution. Designed to support work across the software development lifecycle, these agents enable a new mode of development, delivering significant gains in speed and productivity, without sacrificing developer control.

As developer workflows increasingly rely on autonomous development agents, the way software is built evolves. This shift introduces important security considerations, such as prompt injection, sensitive data loss, unsafe URL access and malicious code execution, which, if left unaddressed, can undermine the very benefits these agents offer. Accelerating productivity depends not just on deploying agents, but on deploying them securely. This is where Palo Alto Networks, with its purpose-built AI security platform, Prisma® AIRS™, plays a critical role.

The Productivity Paradox: Where Agents Introduce Risk

Autonomous agents operating across the software development lifecycle accelerate developer productivity, while also introducing a complex, language-driven threat surface that traditional security tools are not equipped to handle. As a result, new risks emerge, such as prompt injection and the leaking of secrets, that extend beyond the visibility and control assumptions of traditional security approaches. Addressing these considerations is essential to preserving the benefits that agentic development provides.

Recognizing this shift, Palo Alto Networks has introduced targeted capabilities to accelerate secure development workflows. These efforts focus on three critical defense areas: preventing prompt injection, blocking sensitive data leaks and enabling robust malicious code detection capabilities, all of which are necessary to secure the full lifecycle of agent-driven systems.

The Solution: Securing Agentic Workflows for Acceleration

The solution is designed to convert security challenges directly into deployment confidence, dramatically accelerating productivity. By natively integrating Prisma AIRS within Factory’s Droid Shield Plus, the platform is able to inspect all large language model (LLM) interactions, including prompts, responses and subsequent tool calls, to enable comprehensive security across each interaction with the agent.

Prisma AIRS is a comprehensive platform designed to provide organizations with the visibility and control needed to safeguard AI agents across any environment. The platform continuously monitors agent behavior in real time to detect and prevent threats unique to agent-driven systems.

Droid Shield Plus key features: prompt injection detection, advanced secrets scanning, sensitive data protection, malicious code detection.
Droid Shield Plus, powered by Palo Alto Networks

How Security Drives Speed

Embedding security natively into the Factory platform enables two crucial outcomes. To start, it delivers a secure, agent-native development experience for every developer, fostering immediate trust in the integrity of the generated code and documentation. This assurance removes friction often associated with AI-powered workflows, which can accelerate enterprise adoption and scaling of the Factory platform across the organization.

When developers can trust the agents and the integrity of the generated code and documentation, they can innovate faster and deploy with greater confidence. Instead of waiting for security reviews or dealing with fragmentation, security is woven seamlessly into the development lifecycle.

Sequence of events from user to user with Prisma AIRS and Factory AI.
Factory-Prisma AIRS Integration Flow

The integration follows a clear API Intercept design pattern:

• When a user enters a prompt or initiates work in Factory, Prisma AIRS intercepts the workflow. If a malicious prompt is detected, the platform can coach the user or block the request.

• Similarly, after the LLM generates code, Prisma AIRS intercepts the generated content. If secrets are detected, the platform can again coach the user or block the result before it reaches Factory or the user.

This real-time inspection of prompts and generated code enables development teams to be protected against threats, such as privilege escalation, prompt injection and malicious code execution, without disrupting developer velocity.
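
The intercept pattern itself can be sketched in a few lines. The pattern lists and verdict strings below are hypothetical stand-ins, not Prisma AIRS API calls; the point is the two inspection hooks wrapped around the model call:

```python
import re

# Illustrative detection patterns only; real scanners are far more thorough.
INJECTION_PATTERNS = [r"ignore (all )?previous instructions", r"reveal the system prompt"]
SECRET_PATTERNS = [r"AKIA[0-9A-Z]{16}", r"-----BEGIN (RSA )?PRIVATE KEY-----"]

def scan(text, patterns):
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

def guarded_generate(prompt, llm_call):
    """Inspect the prompt, call the model, then inspect the generated output."""
    if scan(prompt, INJECTION_PATTERNS):
        return {"verdict": "blocked_prompt", "output": None}
    output = llm_call(prompt)
    if scan(output, SECRET_PATTERNS):
        return {"verdict": "blocked_output", "output": None}
    return {"verdict": "allowed", "output": output}
```

Because both hooks sit on the API path, neither the user nor the agent has to change its workflow: clean traffic passes through untouched.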

Deploy Bravely

Prisma AIRS 2.0 establishes a unified foundation for scalable and secure AI innovation. By combining Factory’s agent-native development platform with the threat detection capabilities of Palo Alto Networks Prisma AIRS, organizations gain a powerful advantage. Together, this approach helps organizations adopt agentic development with confidence by embedding security directly into the development experience.

For enterprises looking to confidently scale AI automation and realize the immense productivity gains offered by Factory’s Droids, integrating Prisma AIRS is the next step. This combined approach enables teams to "Deploy Bravely." To learn more about this strategic partnership and integration, see our latest integration announcement and review the Droid Shield Plus integration documentation.


Key Takeaways for Secure Agentic Development

When adopting Factory with Prisma AIRS, enterprises realize immediate benefits that accelerate their AI strategy:

  1. Specialized Threat Defense
    Enterprises gain real-time, targeted protection against agent-specific threats, specifically prompt injection attacks and data leaks, which legacy tools cannot address.
  2. Native, Seamless Security
    By moving from a fragmented review process to continuous, automated defense via API interception, security enables compliance without slowing development velocity.
  3. Deployment Confidence
    The native integration transforms security risks into operational assurance, accelerating the large-scale enterprise adoption and scaling of your Factory agent-native automation initiatives.

The post Prisma AIRS Secures the Power of Factory’s Software Development Agents appeared first on Palo Alto Networks Blog.

  •  

Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory

Artificial intelligence has shifted to being the primary engine for market leadership. To compete, enterprises are moving from general-purpose computing to AI factories: specialized infrastructures designed to manage the entire lifecycle of AI. However, this transition requires robust security without sacrificing performance and efficiency.

We are proud to announce that Palo Alto Networks Prisma® AIRS™, accelerated on the NVIDIA BlueField data processing unit (DPU), is now part of the NVIDIA Enterprise AI Factory validated design.

The integrated solution embeds zero trust security directly into the AI infrastructure, providing comprehensive protection without impacting AI performance. By deploying Palo Alto Networks Prisma® AIRS™ Network Intercept directly onto the NVIDIA BlueField and extending to the cloud, Prisma AIRS establishes an essential zero trust governance fabric for the AI factory, enabling enterprises to accelerate innovation while maintaining control.

This critical architectural shift enables optimal AI performance and infrastructure efficiency by offloading security processing to an isolated domain, while leveraging the DPU's hardware acceleration via NVIDIA DOCA to enforce security policies at line speed. The implementation also leverages real-time workload information captured using DOCA Argus, which is then passed to Cortex XSIAM® where it is used for AI-driven responses using the Cortex XSOAR® orchestration platform.

Rich Campagna, SVP Product Management, Palo Alto Networks said:

The AI Factory is the new engine for value creation, and securing it is a board-level imperative. The validation of Palo Alto Networks Prisma AIRS accelerated with NVIDIA BlueField within the NVIDIA Enterprise AI Factory enables a new security architecture for the AI era. We are embedding trust directly into the infrastructure, giving leaders the confidence to safeguard their proprietary intelligence and deploy AI bravely.

Kevin Deierling, senior vice president of Networking at NVIDIA said:

AI is transforming every industry and security must evolve to protect AI factories. To be scalable, security must be distributed and embedded within the AI infrastructure. This is achieved with NVIDIA BlueField running Palo Alto Networks Prisma AIRS to deliver robust, runtime security for the AI factory, with optimal AI performance and efficiency.

Deploy AI Bravely with a Future-Proof Foundation

The Future of Secure AI Factories

NVIDIA AI Factory with Prisma AIRS and Strata.

In addition to deploying Palo Alto Networks Prisma AIRS on NVIDIA BlueField in a distributed model, it’s essential to maintain a centralized Hyperscale Security Firewall (HSF) cluster at the ingress and egress points of the AI factory to enforce a defense-in-depth strategy. Beyond network segmentation, individual workloads can selectively route traffic through hyperscale clusters to detect advanced application-layer threats and prevent lateral movement. These hyperscale firewall clusters scale elastically with demand, delivering session resiliency and the high availability required for critical AI operations.

This architecture fundamentally improves the Total Cost of Ownership (TCO) for AI infrastructure. By isolating security functions on BlueField, enterprises enable 100% of host computing resources to be dedicated to AI applications. This elimination of resource contention allows the AI Factory to maximize token throughput and capital efficiency.

This validated design is the blueprint for immediate efficiency. It provides a seamless path for enterprises to shift from general-purpose clusters to secure AI factory infrastructure without costly overhauls. More importantly, this collaboration establishes an unparalleled roadmap for future-proofing your investment. By securing operations with the high-performance NVIDIA BlueField-3 today, the architecture is inherently ready for the next generation, NVIDIA BlueField-4. This forward compatibility helps AI factories immediately handle gigascale demands, scaling up to 6X the compute power and doubling the bandwidth when BlueField-4 becomes available.

The inclusion of the Palo Alto Networks Prisma AIRS platform in the NVIDIA Enterprise AI Factory Validated Design bolsters enterprise AI security. By establishing the zero trust governance fabric of Prisma AIRS runtime security on NVIDIA BlueField, organizations gain a comprehensive defense. Proprietary and sensitive data is secured throughout the entire stack, and models are protected from adversarial threats, such as prompt injection attacks. With Prisma AIRS, the world's most comprehensive AI security platform, leaders gain the confidence to innovate and deploy AI bravely. This validated design is the essential blueprint for securely accelerating your market leadership without compromising security.

Join our "How to Secure the AI Factory" breakout session at NVIDIA GTC 2026, March 16-19, in San Jose, CA to hear more about this transformative solution and accelerate your AI innovation securely.

The post Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory appeared first on Palo Alto Networks Blog.

  •  

Opening the Automation Garden: API Request & Webhook Trigger in Infinity Playblocks

Today’s security teams work in complex, multi-tool environments. Alerts flow from SIEMs, tickets are created in ITSM platforms, actions occur in cloud and network controls, and workflows span countless third-party services. To keep pace, automation must be open, flexible, and seamlessly connected across every system that matters. We’re excited to introduce two powerful new capabilities in Infinity Playblocks that take us one step closer to a truly open automation ecosystem: API Request Step and Webhook Trigger. Together, they unlock a new open garden approach to security automation – where Infinity Playblocks seamlessly integrates with any system, inbound or outbound, without […]

The post Opening the Automation Garden: API Request & Webhook Trigger in Infinity Playblocks appeared first on Check Point Blog.

  •  

The Power of Unity

Transforming Real-Time Protection with Cloud-Delivered Security Services

Discover how unified prevention provides IT leaders with real-time protection across every attack surface.

The line between innovation and exposure has never been blurrier than in today’s hyperconnected digital world. Every new device, application and cloud workload expands the modern attack surface, creating endless opportunities for adversaries who are scaling faster and becoming more sophisticated than ever before. The threat landscape is no longer defined by isolated malware or phishing emails. The modern attack surface has evolved into a dynamic, adaptive ecosystem, driven by automation and artificial intelligence. Traditional security is no longer enough against AI-powered adversaries; protecting the modern attack surface demands a unified, intelligent Cloud-Delivered Security Services (CDSS) platform.

Attackers have learned to weaponize the same innovations that once gave defenders an advantage. Generative AI now enables them to craft convincing phishing messages, generate polymorphic malware that changes with every delivery, and automate reconnaissance at an unprecedented scale. Ransomware groups operate with the speed and agility of modern startups, using AI to identify weaknesses while staying one step ahead of detection. The result is an era where breaches unfold in minutes rather than days. Organizations are left with little room for error, underscoring the urgent need for security that goes beyond traditional approaches.

The Power of a Unified, Cloud-Delivered Security Service Platform

The power of our Cloud-Delivered Security Services lies in our ability to bring together every layer of protection into a single, intelligent, connected system. This unified platform combines Advanced Threat Prevention, Advanced WildFire® (AWF), Advanced DNS Security (ADNS) and Advanced URL Filtering (AURL) into a single AI-powered fabric that operates at the speed of the cloud.

Powered by Precision AI®, this framework delivers real-time contextual awareness across every stage of the attack lifecycle. It continuously analyzes billions of signals across networks, users and applications to transform raw data into actionable insights. This capability enables organizations to move from reactive detection to proactive prevention, stopping threats before they can disrupt operations.

Every day, our CDSS services analyze up to 5.43 billion new events, detect nearly 8.95 million never-before-seen attacks, and block up to 30.9 billion threats inline. This scale of visibility is strengthened by AI that’s trained on shared threat data from more than 70,000 customers, creating a powerful network effect that delivers patient-zero prevention everywhere. This depth of intelligence provides the visibility and context needed to understand and stop even the most sophisticated attacks as they evolve.

CDSS services prevent zero day injection, evasive malware, phishing, DNS hijacking attacks.
CDSS Advanced Core Security Services

Shared telemetry flows naturally across services, helping ensure threats are detected and prevented without operating in silos. A phishing domain identified by Advanced DNS Security can immediately inform Advanced URL Filtering to block the malicious site. When Advanced WildFire uncovers a new zero-day technique or malicious artifact, that intelligence is shared instantly across the CDSS intelligence layer. Inline services, like Advanced Threat Prevention and Advanced URL Filtering, are enabled to strengthen protections in real time without manual intervention.
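
Conceptually, this cross-service sharing is a publish-subscribe pattern: one service's finding updates every subscriber's blocklist. The sketch below borrows the article's service names, but the mechanics are invented for illustration:

```python
class IntelBus:
    """Shared intelligence layer: fans each new indicator out to all services."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, service):
        self.subscribers.append(service)

    def publish(self, indicator):
        for service in self.subscribers:
            service.ingest(indicator)

class SecurityService:
    """A protection service that keeps its own blocklist of indicators."""
    def __init__(self, name):
        self.name = name
        self.blocklist = set()

    def ingest(self, indicator):
        self.blocklist.add(indicator)

    def is_blocked(self, indicator):
        return indicator in self.blocklist
```

With both services subscribed, a domain flagged by the DNS service is immediately blocked by the URL-filtering service as well, without either knowing about the other.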

For IT leaders and security teams, this unified approach delivers comprehensive visibility and protection that keeps pace with their environment. Continuous intelligence adapts as conditions change, reducing complexity and improving operational efficiency. With consistent policy enforcement, faster decision-making and unified management across the enterprise, organizations can shift their focus from maintenance to innovation and growth.

The Moment Traditional Security Stopped Being Enough

There was a time when traditional network security was enough. Perimeter defenses and signature-based tools could reliably detect and block most threats before they caused harm. For years, this layered approach gave organizations a sense of confidence and control. But that moment has passed.

Our Unit 42® team has found that attacks are now faster, more sophisticated and more disruptive than ever, with 86% of major incidents in 2024 resulting in business disruption. This shift underscores how quickly traditional defenses are being outpaced and why yesterday’s security models no longer match today’s threat landscape.

Traditional, siloed architectures simply cannot keep up with modern attackers. What once served as a strong defense is now outmatched by adversaries who use AI to move faster and slip through the cracks of static security controls. Attackers no longer need to rely on predictable patterns or known exploits. They can use machine learning to probe defenses, mimic legitimate behavior and disguise malicious activity within normal traffic, allowing them to bypass systems once considered unbreakable.

Older security products that depend on static signatures or manual policy updates cannot match this speed and scale. They respond to what has already happened, not to what is happening right now. By the time a new rule is written or a patch is applied, the threat has already evolved. Fragmented visibility and delayed response times give adversaries the upper hand, leaving IT teams defending blind against threats that adapt and shift faster than their defenses ever could.

In an effort to compensate, many organizations continue to add more tools: one for web filtering, one for DNS, one for malware analysis and one for data protection. While each solution provides value, they rarely work together. The result is an overcomplicated ecosystem of disconnected products that create visibility gaps, duplicate alerts, inconsistent policy enforcement and operational overhead. These gaps are exactly where attackers find their opportunity.

The moment traditional security stopped being enough was the moment attackers learned to think and move like machines. The rise of AI-powered threats marked the end of static defense and the beginning of a new kind of warfare, one that demands prevention that is predictive, adaptive and unified.

Today, enterprises don’t need more point products. They need a single, intelligent security fabric so they can see, understand and act across every vector, from DNS to SaaS to endpoint, in one coordinated motion. Attackers increasingly weaponize GenAI to craft more evasive phishing pages, malware and domain infrastructures. So, security teams must rely on defenses that can counter these techniques in real time by effectively battling AI with AI.

That is where cloud-delivered security services (CDSS) redefine the game, bringing AI-driven prevention to every corner of the network.

Your Defense Is Only as Strong as What’s Enabled

Having the proper security tools is only part of the equation. Real protection comes when those tools are fully enabled, integrated and working together to secure the organization. Cloud-delivered security services deliver their greatest value when they are live and continuously analyzing traffic, sharing intelligence and adapting in real time to support the business.

Too often, organizations have the right capabilities in place but leave them underutilized or inactive. Protection begins the moment each service is turned on and working in unison to deliver real-time prevention at scale. Ensuring that these capabilities are fully enabled and actively defending the network is what turns investment into impact.

Prevention is about readiness, not reaction. The most resilient organizations are those that activate early, integrate completely and allow automation to amplify what human oversight cannot. When IT leaders enable Advanced Threat Prevention, AWF, ADNS and AURL, prevention becomes continuous, intelligent and aligned with the pace of modern threats.

The power of our CDSS lies in both the services’ advanced technology and the unity they create. Together, these services form an intelligent defense that connects detection, prevention and response into one seamless operation, all powered by Precision AI.

Powering all core security services with Precision AI.
Precision AI Foundation for Advanced Security Services

Now, with AI reshaping both innovation and risk, CDSS helps organizations stay confidently ahead. IT leaders who enable these capabilities strengthen visibility, simplify operations and elevate their overall security posture.

Fully enable your defenses and have them ready to prevent threats at each stage of the attack lifecycle. Learn more about activating CDSS through Strata™ Cloud Manager or speak with your Palo Alto Networks representative to see how unified, AI-powered prevention can strengthen your organization’s security posture.


Key Takeaways

  1. Traditional Security Is Insufficient Against Modern Threats
    The rise of AI and automation has created an era in which attackers move faster and are more sophisticated than traditional, siloed security products can handle, leading to an increasing number of major incidents that disrupt business.
  2. Unified Cloud-Delivered Security Services (CDSS) Are Necessary for Proactive Prevention
    Protecting the modern attack surface requires a single, intelligent, connected CDSS platform that unifies all layers of protection (e.g., Advanced Threat Prevention, AWF, ADNS, AURL) into an AI-powered fabric, enabling proactive, real-time prevention rather than reactive detection.
  3. Real Protection Depends on Full Activation and Integration
    Having the right security tools is only part of the solution. The greatest value and protection are realized when your fully enabled CDSS is integrated and working in unison to continuously analyze traffic and share intelligence.

The post The Power of Unity appeared first on Palo Alto Networks Blog.

  •  

Cyber Resilience Starts with Training: Why Skills Define Security Success

Organizations face an escalating threat landscape and a widening cyber security skills gap. Compliance-driven training alone cannot prepare teams for real-world challenges like incident response, SOC operations, and threat hunting. Without robust, practical training, defenses weaken, and vulnerabilities multiply. Recent data from Cybrary – a leading cyber security training platform – shows how modern approaches are transforming readiness. Cybrary specializes in practical, role-based learning for security professionals. Through its partnership with Check Point’s Infinity Global Services, organizations gain access to structured programs that combine industry-recognized certifications, hands-on labs, and customized learning paths. The Impact of Cyber Security […]

The post Cyber Resilience Starts with Training: Why Skills Define Security Success appeared first on Check Point Blog.

Yet another DCOM object for lateral movement

Introduction

If you’re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.

Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.

This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.

First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.

Finally, we will cover detection strategies to identify and respond to this type of activity.

COM/DCOM technology

What is COM?

COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.

These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).

COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.

Terms

To ensure a better understanding of its structure and functionality, let’s review the COM-related terminology.

  1. COM interface
    A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
  2. COM class (COM CoClass)
    A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.

    All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class’s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:

    • InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We’ll describe this in more detail later.
    • ProgID: A human-readable name assigned to the COM class.
    • TypeLib: A binary description of the COM class (essentially documentation for the class).
    • AppID: Used to describe security configuration for the class.
  3. COM server
A COM server is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry. The COM runtime uses the registry to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation

Client–server model

  1. In-process server
    In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
    In-process COM server

  2. Out-of-process server
    Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server

What is DCOM?

DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user’s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.

Under the hood, however, DCOM uses TCP as the RPC transport layer to enable communication across machines.

Distributed COM implementation

Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.

The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:

  • AccessPermission: controls access permissions.
  • LaunchPermission: controls activation permissions.

These registry settings grant remote clients permissions to activate and interact with DCOM objects.

Lateral movement via DCOM

After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.

In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.

  1. ShellWindows
    ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.

    In Impacket’s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.

    Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.

    Internally, the script runs this command when connecting:

    cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1

    This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.
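The staging logic described above can be sketched in Python. Note that `build_exec_command` is an illustrative helper for this write-up, not Impacket’s actual API:

```python
# Sketch of how a dcomexec-style semi-interactive shell stages its output:
# the command is wrapped so stdout/stderr land in a file on the ADMIN$ share,
# which the script then fetches over SMB.

def build_exec_command(command: str, output_filename: str,
                       target_ip: str = "127.0.0.1") -> str:
    """Wrap a command so its output lands in a file on the ADMIN$ share."""
    unc_path = f"\\\\{target_ip}\\ADMIN$\\{output_filename}"
    return f"cmd.exe /Q /c {command} 1> {unc_path} 2>&1"

print(build_exec_command("cd \\", "__17602"))
# cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1
```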

    When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:

    OUTPUT_FILENAME = 'Temp\\__' + str(time.time())[:5]

  2. ShellBrowserWindow
    The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
  3. MMC20
    The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.

    This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.

    As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:

    OUTPUT_FILENAME = '__' + str(time.time())[:5]

    Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.

    Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.

    Object               Windows Server 2025    Windows Server 2022    Windows 11               Windows 10
    ShellWindows         Doesn’t work           Doesn’t work           Works but needs a fix    Works but needs a fix
    ShellBrowserWindow   Doesn’t work           Doesn’t work           Doesn’t work             Works but needs a fix
    MMC20                Detected by Defender   Works                  Works                    Works
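Both filename workarounds mentioned above, dropping the `__` prefix and staging the file under Temp, can be sketched as follows. `make_output_filename` is a hypothetical helper, not part of Impacket:

```python
import time

def make_output_filename(avoid_defender_prefix: bool = True,
                         use_temp: bool = False) -> str:
    """Build a dcomexec-style output filename with the workarounds applied.

    avoid_defender_prefix: drop the '__' prefix that Defender flags under ADMIN$.
    use_temp: stage the file in the Temp folder (still reachable as ADMIN$\\Temp).
    """
    stamp = str(time.time())[:5]          # Impacket uses '__' + this stamp
    name = stamp if avoid_defender_prefix else "__" + stamp
    return "Temp\\" + name if use_temp else name

print(make_output_filename())               # e.g. '17123'
print(make_output_filename(use_temp=True))  # e.g. 'Temp\17123'
```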

Enumerating COM/DCOM objects

The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:

  • Finding objects and filtering specifically for DCOM objects.
  • Identifying their interfaces.
  • Inspecting the exposed functions.
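As a simple illustration of the first step, filtering for DCOM candidates, one can look for classes that expose an AppID. Real enumeration walks HKEY_CLASSES_ROOT\CLSID with `winreg` on Windows; the sketch below operates on an exported snapshot (a plain dict) instead, and the snapshot contents are made up for illustration:

```python
# Keep only classes that qualify as DCOM candidates: an AppID subkey is
# present, which is the prerequisite for remote activation discussed later.

def find_dcom_candidates(clsid_snapshot: dict) -> dict:
    """Map CLSID -> subkey info for classes exposing an AppID."""
    return {
        clsid: info
        for clsid, info in clsid_snapshot.items()
        if "AppID" in info.get("subkeys", [])
    }

snapshot = {
    "{06622D85-6856-4460-8DE1-A81921B41C4B}": {   # COpenControlPanel
        "subkeys": ["InprocServer32", "AppID"],
    },
    "{DUMMY-CLSID}": {"subkeys": ["InprocServer32"]},  # plain in-proc class
}
print(list(find_dcom_candidates(snapshot)))
```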

Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.

There are several approaches to enumerating COM objects depending on their use cases. Next, we’ll describe the methods I used while conducting this research, taking into account both automated and manual methods.

  1. Automation using PowerShell
    In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
    Shell.Application COM object function list in PowerShell

    Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.

    Our strategy looks like this:

    As you can see in the final step, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.

    However, this approach has several limitations:

    • TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
    • IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
    • Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
  2. Automation using C++
    As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces. However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.

    This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it. The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces. So, the strategy is to:

    • Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
    • Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.

    This method has several advantages:

    • No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
    • Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
    • No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.

    The figure below illustrates this strategy:

    This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.

  3. Manual inspection using open-source tools

    As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
    • List all COM classes in the system.
    • Create instances of those classes.
    • Check their supported interfaces.
    • Call specific functions.
    • Apply various filters for easier analysis.
    • Perform other inspection tasks.
    Open-source tool for inspecting COM interfaces

    One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.

    This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.

Control Panel items as attack surfaces

Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.

Control Panel items

Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.

Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.

  • Under HKEY_CURRENT_USER:
    HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • Under HKEY_LOCAL_MACHINE:
    • For 64-bit DLLs:
      HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
    • For 32-bit DLLs:
      HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
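The choice between these locations depends on scope (current user vs. all users) and DLL bitness. A small helper, written purely for illustration, makes the selection logic explicit:

```python
# Return the Cpls registry key used to register a Control Panel DLL,
# depending on whether it should apply to the current user or all users,
# and on the DLL's bitness (32-bit DLLs live under WOW6432Node on x64).

CPLS = r"Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
CPLS_WOW64 = r"Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"

def cpls_key(all_users: bool, dll_is_32bit: bool = False) -> str:
    """Pick the Cpls key for registering a Control Panel DLL."""
    if not all_users:
        # HKCU: the 'Control Panel' and 'Cpls' subkeys must be created manually
        return "HKCU\\" + CPLS
    # HKLM: subkeys are created by the OS; 32-bit DLLs go under WOW6432Node
    return "HKLM\\" + (CPLS_WOW64 if dll_is_32bit else CPLS)

print(cpls_key(all_users=True, dll_is_32bit=True))
```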

These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the “Control Panel” subkey and its “Cpls” subkey under HKCU should be created manually, unlike the “Control Panel” and “Cpls” subkeys under HKLM, which are created automatically by the operating system.

Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.

It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.

There are multiple ways to execute Control Panel items:

  • From cmd: control.exe [filename].cpl
  • By double-clicking the .cpl file.

Both methods use rundll32.exe under the hood:

rundll32.exe shell32.dll,Control_RunDLL [filename].cpl

This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.

However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):

COM Surrogate process loading the CPL file

What happens here is that the Control Panel, acting as a COM client, uses a COM object to load these CPL files. We will talk about this COM object in more detail later.

The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process’s address space. This isolation improves stability for the in-process server model. The hosting behavior can be configured in the registry for any COM object whose server DLL should run in a separate process; by default, an in-process server DLL is loaded into the client process itself.

‘DCOMing’ through Control Panel items

While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.

IOpenControlPanel interface in the OleViewDotNet output

I immediately thought of its potential for abusing Control Panel items, so I wanted to check which functions this interface exposes. Unfortunately, neither the interface nor the COM class has a type library.

COpenControlPanel interfaces without TypeLib

Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.

Full type and function definitions are provided in the shobjidl_core.h Windows header file.

Open function exposed by IOpenControlPanel interface

It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.

COpenControlPanel interfaces without names

Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL that simply spawns a message box from its DllEntryPoint function. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.

As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.

COM Surrogate loading a custom DLL and hosting the COpenControlPanel object

Based on my testing, I made the following observations:

  1. The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
  2. The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.

Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.

COpenControlPanel object in OleViewDotNet

Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.

Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.

Malicious DLL loading using DCOM

The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.

Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).

As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.

Detection

The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:

  • HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

Although common best practices and Windows security research typically advise monitoring only the first key, thorough coverage requires monitoring all three locations.
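A detection sweep over these locations can be sketched as follows. The registry reader is injectable so the logic runs anywhere; on a live Windows host it would wrap `winreg.OpenKey`/`winreg.EnumValue`, and the simulated registry state below is invented for illustration:

```python
# Flag any DLLs registered under the three Cpls locations that adversaries
# can abuse for persistence.

CPLS_PATHS = [
    r"HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
    r"HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
    r"HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls",
]

def registered_cpls(read_values) -> dict:
    """Return {key_path: [dll paths]} for every non-empty Cpls key."""
    findings = {}
    for path in CPLS_PATHS:
        values = read_values(path)   # list of registered DLL paths, or []
        if values:
            findings[path] = values
    return findings

# Simulated registry state: one suspicious per-user registration.
fake_registry = {CPLS_PATHS[0]: [r"C:\Users\Public\evil.cpl"]}
print(registered_cpls(lambda p: fake_registry.get(p, [])))
```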

In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.

Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.

Conclusion

In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:

  • As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
  • Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defensive products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
  • Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.


    This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.

    As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:

    OUTPUT_FILENAME = '__' + str(time.time())[:5]

    Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.

    Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.

                          Windows Server 2025    Windows Server 2022    Windows 11             Windows 10
    ShellWindows          Doesn’t work           Doesn’t work           Works but needs a fix  Works but needs a fix
    ShellBrowserWindow    Doesn’t work           Doesn’t work           Doesn’t work           Works but needs a fix
    MMC20                 Detected by Defender   Works                  Works                  Works
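Both workarounds boil down to how the output filename is built. The sketch below mirrors the dcomexec.py filename logic under the two fixes discussed above; the function and flag names are illustrative, not part of Impacket:

```python
import time

def output_path(use_temp_fix: bool = False, strip_prefix: bool = False) -> str:
    """Build an output filename the way dcomexec.py does, with the two
    workarounds discussed above (names and flags here are illustrative)."""
    # Defender flags files under ADMIN$ whose names start with __
    prefix = '' if strip_prefix else '__'
    name = prefix + str(time.time())[:5]
    if use_temp_fix:
        # The DCOM object can still write to Temp, reachable under ADMIN$\Temp
        name = 'Temp\\' + name
    return name

print(output_path())                    # original behavior: e.g. __17602 under ADMIN$
print(output_path(use_temp_fix=True))   # ShellWindows fix on Windows 10/11
print(output_path(strip_prefix=True))   # MMC20 fix on Windows Server 2025
```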

Enumerating COM/DCOM objects

The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:

  • Finding objects and filtering specifically for DCOM objects.
  • Identifying their interfaces.
  • Inspecting the exposed functions.
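The first of these steps — keeping only classes that carry an AppID and can therefore act as DCOM objects — can be sketched platform-independently. On a live system the input would come from HKEY_CLASSES_ROOT\CLSID (e.g., via Python's winreg module); here the registry is modeled as a plain dictionary and the CLSIDs are made up:

```python
def filter_dcom_candidates(clsid_entries: dict) -> list:
    """Keep only COM classes whose registration carries an AppID subkey,
    i.e., classes that can be activated as DCOM objects."""
    return [clsid for clsid, values in clsid_entries.items() if 'AppID' in values]

# Toy snapshot of HKCR\CLSID (GUIDs are placeholders)
registry = {
    '{11111111-0000-0000-0000-000000000001}': {'InprocServer32': 'example.dll'},
    '{11111111-0000-0000-0000-000000000002}': {'LocalServer32': 'example.exe',
                                               'AppID': '{AAAAAAAA-0000-0000-0000-000000000002}'},
}
print(filter_dcom_candidates(registry))  # only the class with an AppID remains
```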

Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.

There are several approaches to enumerating COM objects, depending on the use case. Below, I describe the methods used in this research, covering both automated and manual techniques.

  1. Automation using PowerShell
    In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
    Shell.Application COM object function list in PowerShell

    Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.

    Our strategy looks like this:

    The final step of this strategy is manual inspection: we look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.

    However, this approach has several limitations:

    • TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
    • IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
    • Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
  2. Automation using C++
    As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces. However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces, since many interfaces are known to contain functions that can be abused for command execution or other purposes.

    This strategy relies primarily on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it. The IUnknown interface exposes three main functions, the most important of which is QueryInterface(), used to ask a COM object for a pointer to one of its interfaces. So, the strategy is to:

    • Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
    • Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.

    This method has several advantages:

    • No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
    • Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
    • No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.

    The figure below illustrates this strategy:

    This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.

  3. Manual inspection using open-source tools

    As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
    • List all COM classes in the system.
    • Create instances of those classes.
    • Check their supported interfaces.
    • Call specific functions.
    • Apply various filters for easier analysis.
    • Perform other inspection tasks.
    Open-source tool for inspecting COM interfaces

    One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.

    This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
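The QueryInterface probing strategy from the C++ approach can be modeled in a platform-neutral way. In the sketch below, each "class" simply advertises the IIDs it answers to, and the probe returns the known-valuable interfaces it supports; in real C++ code this corresponds to calling IUnknown::QueryInterface for each IID and checking for S_OK. The IOpenControlPanel IID is real; everything else is illustrative:

```python
# IIDs of interfaces considered "valuable" for lateral movement
# (the second entry is a made-up example)
VALUABLE_IIDS = {
    '{D11AD862-66DE-4DF4-BF6C-1F5621996AF1}': 'IOpenControlPanel',
    '{00000000-0000-0000-0000-00000000AAAA}': 'IFakeExec',
}

class MockComClass:
    """Stand-in for a COM class: query_interface succeeds only for supported IIDs."""
    def __init__(self, name, supported_iids):
        self.name = name
        self._iids = set(supported_iids)

    def query_interface(self, iid):
        # In real code: hr = obj->QueryInterface(iid, &ptr); return hr == S_OK
        return iid in self._iids

def probe(com_class):
    """Return the names of known-valuable interfaces the class supports."""
    return [name for iid, name in VALUABLE_IIDS.items()
            if com_class.query_interface(iid)]

panel = MockComClass('COpenControlPanel',
                     ['{D11AD862-66DE-4DF4-BF6C-1F5621996AF1}'])
print(probe(panel))  # the class answers QueryInterface for IOpenControlPanel
```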

Control Panel items as attack surfaces

Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.

Control Panel items

Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.

Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.

  • Under HKEY_CURRENT_USER:
    HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • Under HKEY_LOCAL_MACHINE:
    • For 64-bit DLLs:
      HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
    • For 32-bit DLLs:
      HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. Note that the “Control Panel” and “Cpls” subkeys under HKCU must be created manually, whereas the corresponding subkeys under HKLM are created automatically by the operating system.

Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.

It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.

There are multiple ways to execute Control Panel items:

  • From cmd: control.exe [filename].cpl
  • By double-clicking the .cpl file.

Both methods use rundll32.exe under the hood:

rundll32.exe shell32.dll,Control_RunDLL [filename].cpl

This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.

However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):

COM Surrogate process loading the CPL file

What happens here is that the Control Panel, acting as a COM client, uses a COM object to load these CPL files. We will talk about this COM object in more detail later.

The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process’s address space. This isolation improves stability for the in-process server model. By default, an in-process server DLL is loaded into the client process itself; the surrogate hosting behavior can be enabled for a COM object through its registry configuration.

‘DCOMing’ through Control Panel items

While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.

IOpenControlPanel interface in the OleViewDotNet output

I immediately suspected its potential for abusing Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.

COpenControlPanel interfaces without TypeLib

Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.

Full type and function definitions are provided in the shobjidl_core.h Windows header file.

Open function exposed by IOpenControlPanel interface

It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.

COpenControlPanel interfaces without names

Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL whose DllEntryPoint simply spawns a message box, registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls, and then created a simple C++ COM client to call the Open function on this interface.

As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.

COM Surrogate loading a custom DLL and hosting the COpenControlPanel object

Based on my testing, I made the following observations:

  1. The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
  2. The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.

Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.

COpenControlPanel object in OleViewDotNet

Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.

Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.

Malicious DLL loading using DCOM

Malicious DLL loading using DCOM

The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.

Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).

As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.

Detection

The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:

  • HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
  • HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls

Although common best practices and Windows security research advise monitoring only the first subkey, thorough coverage requires monitoring all three.

In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.

Finally, it is always recommended to monitor Remote Registry usage, because it is commonly abused in many types of attacks, not just in this scenario.
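The registry check can be automated against an exported hive. The sketch below scans a textual registry export (e.g., the output of `reg export`) for values registered under any of the three Cpls paths; the helper name and sample data are illustrative:

```python
CPLS_PATHS = (
    r'HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls',
    r'HKEY_LOCAL_MACHINE\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls',
    r'HKEY_LOCAL_MACHINE\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls',
)

def find_cpl_registrations(reg_export: str) -> list:
    """Return (key, value-line) pairs registered under any of the Cpls keys."""
    hits, current_key = [], None
    for line in reg_export.splitlines():
        line = line.strip()
        if line.startswith('[') and line.endswith(']'):
            # Section header: track it only if it is one of the Cpls keys
            key = line[1:-1]
            current_key = key if any(key.lower() == p.lower()
                                     for p in CPLS_PATHS) else None
        elif current_key and '=' in line:
            hits.append((current_key, line))
    return hits

# Toy export with one suspicious registration
sample = r'''
[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls]
"evil"="C:\\Users\\Public\\evil.cpl"
'''
for key, value in find_cpl_registrations(sample):
    print(key, '->', value)
```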

Conclusion

In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:

  • As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
  • Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
  • Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.


From the Hill: The AI-Cybersecurity Imperative in Financial Services

The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.

Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurity and cybersecurity for AI.

I was honored to cover these key principles in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services”, convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity, and assessed how existing laws and frameworks apply in the age of AI.

The Defense Advantage Is AI-Powered Security Operations

Attacks have become faster: the time from compromise to data exfiltration is now 100 times shorter than it was four years ago. The financial sector bears disproportionate risk, given the value of its data and its interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.

Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42® have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.

To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.

Security operations centers (SOCs) have long been overwhelmed by traditional alerts and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.

The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.

AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of this necessity, consider our own security operations. Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that down to a single actionable incident.

Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:

  • One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
  • Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
  • A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.

These improvements are critical to stopping threat actors. But none of this would be possible without AI.

Securing the New AI Attack Surface

As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.

Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.

Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.

A Secure AI by Design

Even with an understanding of the risks, many organizations struggle with a lack of clarity about what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed a Secure AI by Design policy roadmap that integrates security throughout the entire AI lifecycle.

A proactive stance ensures security is a feature, not an afterthought, crucial for building trust, maintaining compliance and mitigating risks. The approach addresses four imperatives organizations most pressingly face in AI adoption:

1. Secure the use of external AI tools.

2. Secure the underlying AI infrastructure and data.

3. Safely build and deploy AI applications.

4. Monitor and control AI agents.

The Path Forward

For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.

Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.

Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.

Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.

The post From the Hill: The AI-Cybersecurity Imperative in Financial Services appeared first on Palo Alto Networks Blog.


Check Point Infinity Global Services Launches First AI Security Training Courses

Artificial Intelligence is transforming every industry, unlocking new opportunities while introducing new risks. That is why Infinity Global Services (IGS) is proud to announce the launch of our first dedicated AI security training courses. This is the first release in a growing IGS AI services portfolio, with upcoming offerings focused on AI red teaming, AI governance and AI implementation consulting services. The new courses are part of Infinity Global Services’ mission to empower organizations with the knowledge and tools to defend against emerging AI-driven threats and to implement AI securely in their operations and product development. Through hands-on training and expert-led […]

The post Check Point Infinity Global Services Launches First AI Security Training Courses appeared first on Check Point Blog.


Untangling Hybrid Cloud Security

From Fragmented Fences to Cohesive Control

The attack surface for today’s enterprises is incredibly heterogeneous and dynamic. Applications and data are in constant motion, spanning public clouds, private data centers and edge locations. Users connect from anywhere.

For security leaders, this environment has led to an explosion not only in operational complexity but, in many cases, in uncertainty. Together, Nutanix and Palo Alto Networks enable security to finally match the speed and scale of these dynamic hybrid cloud environments.

The security ecosystem has become vast and complex. Point solutions accumulate to address specific gaps, yet each adds another interface, another policy language and another integration to manage. However well intentioned, this sprawl can lead directly to fractured visibility, overlapping tools and operational fatigue.

Elevate Perimeter Protection to Defense-in-Depth

Enterprises today face unprecedented security complexity as hybrid and multicloud environments become the new normal. Currently, 94% of enterprises use some form of cloud service, while 89% report having a multicloud strategy in place. This distributed reality means security is paramount: while managing cloud spending is the number one operational challenge (82% overall), security remains a major concern, affecting 79% of all organizations.

Hybrid cloud adoption offers agility, but it also introduces distinct security challenges that strain traditional approaches. Adversaries have taken notice. Hybrid and multicloud environments are prime targets because they connect sensitive data, privileged accounts and critical systems across public cloud and on-premises infrastructure. Perimeter-based security models, built for static networks and centralized data centers, cannot keep pace in a world where apps and data continuously move between platforms.

Defense-in-depth has become essential for addressing the inherent dynamism of today’s environments. Network visibility is required to monitor and contain east-west traffic and lateral movement of threats inside cloud environments. Identity controls must verify every user, device and interaction across a distributed workforce. Data protection must follow sensitive information as it traverses multiple clouds, data centers and edge locations.

Yet managing these protections as distinct layers is no longer viable. Each cloud provider introduces its own native security controls. Each additional tool adds another interface and another policy set to maintain. Defense-in-depth only achieves its purpose when its layers are fully unified, providing consistent control enforcement from the edge to the core, comprehensive visibility across traffic, and essential data protections for all workloads, wherever they reside.

Freedom of Choice Without Fragmentation

Hybrid environments span public clouds, private infrastructure, SaaS ecosystems and legacy on-premises systems. No single vendor can realistically cover that entire landscape, and forcing security into a single closed ecosystem risks creating gaps where those environments meet.

The answer lies in an open ecosystem approach that allows organizations to assemble best-of-breed capabilities rather than being locked into a single provider’s stack.

This flexibility empowers security teams to adapt to the unique requirements of each environment while still operating through a unified security model. Policies can be applied consistently, intelligence can be shared across layers, and protections can move in step with workloads, regardless of platform. In short, this model can effectively support freedom of choice while relieving the operational burden of managing hybrid and multicloud security.

A Unified Security Layer Across Every Environment

Open ecosystems solve the problem of choice. What remains is the challenge of bringing those best-of-breed capabilities together into a solution that is coherent and scalable.

To transform defense-in-depth from a conceptual framework into a practical system aligned to the realities of hybrid and multicloud deployments, this unified layer should be built on core capabilities:

  • Inline visibility for east-west traffic within virtualized and cloud environments, enabled by deploying next-generation firewalls directly inside virtual private networks:
    This approach inspects workload-to-workload traffic, identifies anomalous behavior and stops lateral movement before it spreads.
  • Consistent policy enforcement across public cloud, private data centers and edge locations through a centralized management plane:
    A single set of policies should be authored once and pushed everywhere, assuring a consistent security posture across all clouds and environments.
  • Abstraction of security intent from network coordinates through tag-driven automation, an approach that allows security policies to be expressed in terms of workload attributes (rather than IPs or locations):
    These protections follow workloads automatically as they move. Through integration with orchestration pipelines, this approach aligns controls with rapid application rollouts in CI/CD workflows, all without manual reconfiguration.

With these core capabilities, security can finally catch up to the fluidity promised by hybrid cloud operating models.

Explore how Palo Alto Networks and Nutanix work together to make this unified vision a reality, including joint offerings like Nutanix clusters secured with Palo Alto Networks VM-Series Firewalls for AWS® and Microsoft® Azure.

The post Untangling Hybrid Cloud Security appeared first on Palo Alto Networks Blog.


In Through the Front Door – Protecting Your Perimeter  

While social engineering attacks such as phishing are a great way to gain a foothold in a target environment, direct attacks against externally exploitable services are continuing to make headlines. […]

The post In Through the Front Door – Protecting Your Perimeter   appeared first on Black Hills Information Security, Inc..


Scout2 Usage: AWS Infrastructure Security Best Practices

Jordan Drysdale// Full disclosure and tl;dr: The NCC Group has developed an amazing toolkit for analyzing your AWS infrastructure against Amazon’s best practices guidelines. Start here: https://github.com/nccgroup/Scout2 Then, access your […]

The post Scout2 Usage: AWS Infrastructure Security Best Practices appeared first on Black Hills Information Security, Inc..
