A newly discovered vulnerability named WhisperPair can turn Bluetooth headphones and headsets from many well-known brands into personal tracking beacons — regardless of whether the accessories are currently connected to an iPhone, Android smartphone, or even a laptop. Even though the technology behind this flaw was originally developed by Google for Android devices, the tracking risks are actually much higher for those using vulnerable headsets with other operating systems — like iOS, macOS, Windows, or Linux. For iPhone owners, this is especially concerning.
Connecting Bluetooth headphones to Android smartphones became a whole lot faster when Google rolled out Fast Pair, a technology now used by dozens of accessory manufacturers. To pair a new headset, you just turn it on and hold it near your phone. If your device is relatively modern (produced after 2019), a pop-up appears inviting you to connect and download the accompanying app, if it exists. One tap, and you’re good to go.
Unfortunately, it seems quite a few manufacturers didn’t pay attention to the particulars of this tech when implementing it, and now their accessories can be hijacked by a stranger’s smartphone in seconds — even if the headset isn’t actually in pairing mode. This is the core of the WhisperPair vulnerability, recently discovered by researchers at KU Leuven and recorded as CVE-2025-36911.
The attacking device — which can be a standard smartphone, tablet, or laptop — broadcasts Google Fast Pair requests to any Bluetooth devices within a 14-meter radius. As it turns out, a long list of headphones from Sony, JBL, Redmi, Anker, Marshall, Jabra, OnePlus, and even Google itself (the Pixel Buds 2) will respond to these pings even when they aren’t looking to pair. On average, the attack takes just 10 seconds.
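The scanning half of such an attack uses nothing exotic: Fast Pair accessories advertise BLE service data under the 16-bit service UUID 0xFE2C, which carries a three-byte model ID identifying the product. As a minimal sketch, here's how that model ID could be pulled out of a raw advertisement payload; the payload bytes and model ID below are invented for illustration, not captured from a real device:

```python
def parse_ad_structures(payload: bytes):
    """Split a raw BLE advertisement payload into (ad_type, data) pairs."""
    i, out = 0, []
    while i < len(payload):
        length = payload[i]
        if length == 0:
            break
        ad_type = payload[i + 1]
        out.append((ad_type, payload[i + 2 : i + 1 + length]))
        i += 1 + length
    return out

FAST_PAIR_UUID = 0xFE2C  # 16-bit service UUID used by Google Fast Pair

def extract_fast_pair_model_id(payload: bytes):
    """Return the 3-byte Fast Pair model ID as hex, or None if absent."""
    for ad_type, data in parse_ad_structures(payload):
        # 0x16 = "Service Data - 16-bit UUID"; the UUID is little-endian
        if ad_type == 0x16 and len(data) >= 5:
            if int.from_bytes(data[:2], "little") == FAST_PAIR_UUID:
                return data[2:5].hex()
    return None

# Illustrative payload: a Flags structure plus Fast Pair service data
adv = bytes([0x02, 0x01, 0x06,        # Flags
             0x06, 0x16, 0x2C, 0xFE,  # Service data, UUID 0xFE2C (LE)
             0x12, 0x34, 0x56])       # made-up 3-byte model ID
print(extract_fast_pair_model_id(adv))  # -> 123456
```

An attacker's scanner would feed real advertisement payloads from the Bluetooth stack into a parser like this to spot responsive accessories nearby; the dangerous part of WhisperPair is what the vulnerable firmware then accepts after this discovery step.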
Once the headphones are paired, the attacker can do pretty much anything the owner can: listen in through the microphone, blast music, or — in some cases — locate the headset on a map if it supports Google Find Hub. That latter feature, designed strictly for finding lost headphones, creates a perfect opening for stealthy remote tracking. And here’s the twist: it’s actually most dangerous for Apple users and anyone else rocking non-Android hardware.
Remote tracking and the risks for iPhones
When headphones or a headset first shake hands with an Android device via the Fast Pair protocol, an owner key tied to that smartphone’s Google account is tucked away in the accessory’s memory. This info allows the headphones to be found later by leveraging data collected from millions of Android devices. If any random smartphone spots the target device nearby via Bluetooth, it reports its location to the Google servers. This feature — Google Find Hub — is essentially the Android version of Apple’s Find My, and it introduces the same unauthorized tracking risks as a rogue AirTag.
When an attacker hijacks the pairing, their key can be saved as the headset owner’s key — but only if the headset targeted via WhisperPair hasn’t previously been linked to an Android device and has only been used with an iPhone, or other hardware like a laptop with a different OS. Once the headphones are paired, the attacker can stalk their location on a map at their leisure — crucially, anywhere at all (not just within the 14-meter range).
Android users who’ve already used Fast Pair to link their vulnerable headsets are safe from this specific move, since they’re already logged in as the official owners. Everyone else, however, should probably double-check their manufacturer’s documentation to see if they’re in the clear — thankfully, not every device vulnerable to the exploit actually supports Google Find Hub.
How to neutralize the WhisperPair threat
The only truly effective way to fix this bug is to update your headphones’ firmware, provided an update is actually available. You can typically check for and install updates through the headset’s official companion app. The researchers have compiled a list of vulnerable devices on their site, but it’s almost certainly not exhaustive.
After updating the firmware, you absolutely must perform a factory reset to wipe the list of paired devices — including any unwanted guests.
If no firmware update is available and you’re using your headset with iOS, macOS, Windows, or Linux, your only remaining option is to track down an Android smartphone (or find a trusted friend who has one) and use it to reserve the role of the original owner. This will prevent anyone else from adding your headphones to Google Find Hub behind your back.
The update from Google
In January 2026, Google pushed an Android update to patch the vulnerability on the OS side. Unfortunately, the specifics haven’t been made public, so we’re left guessing exactly what they tweaked under the hood. Most likely, updated smartphones will no longer report the location of accessories hijacked via WhisperPair to the Google Find Hub network. But given that not everyone is exactly speedy when it comes to installing Android updates, it’s a safe bet that this type of headset tracking will remain viable for at least another couple of years.
Brand, website, and corporate mailout impersonation is becoming an increasingly common technique used by cybercriminals. The World Intellectual Property Organization (WIPO) reported a spike in such incidents in 2025. While tech companies and consumer brands are the most frequent targets, virtually every industry in every country is at risk. The only thing that changes is how the imposters exploit the fakes. In practice, we typically see the following attack scenarios:
Luring clients and customers to a fake website to harvest login credentials for the real online store, or to steal payment details for direct theft.
Luring employees and business partners to a fake corporate login portal to acquire legitimate credentials for infiltrating the corporate network.
Prompting clients and customers to contact the scammers under various pretexts: getting tech support, processing a refund, entering a prize giveaway, or claiming compensation for public events involving the brand. The goal is to then swindle the victims out of as much money as possible.
Luring business partners and employees to specially crafted pages that mimic internal company systems, to get them to approve a payment or redirect a legitimate payment to the scammers.
Prompting clients, business partners, and employees to download malware — most often an infostealer — disguised as corporate software from a fake company website.
The words “luring” and “prompting” here imply a whole toolbox of tactics: email, messages in chat apps, social media posts that look like official ads, lookalike websites promoted through SEO tools, and even paid ads.
These schemes all share two common features. First, the attackers exploit the organization’s brand, striving to mimic its official website, domain name, and the corporate style of its emails, ads, and social media posts. The forgery doesn’t have to be flawless: just convincing enough to fool at least some business partners and customers. Second, while the organization and its online resources aren’t targeted directly, the impact on them is still significant.
Business damage from brand impersonation
When fakes are crafted to target employees, an attack can lead to direct financial loss. An employee might be persuaded to transfer company funds, or their credentials could be used to steal confidential information or launch a ransomware attack.
Attacks on customers don’t typically imply direct damage to the company’s coffers, but they cause substantial indirect harm in the following areas:
Strain on customer support. Customers who “bought” a product on a fake site will likely bring their issues to the real customer support team. Convincing them that they never actually placed an order is tough, making each case a major time waster for multiple support agents.
Reputational damage. Defrauded customers often blame the brand for failing to protect them from the scam, and also expect compensation. According to a European survey, around half of affected buyers expect payouts and may stop using the company’s services — often sharing their negative experience on social media. This is especially damaging if the victims include public figures or anyone with a large following.
Unplanned response costs. Depending on the specifics and scale of an attack, an affected company might need digital forensics and incident response (DFIR) services, as well as consultants specializing in consumer law, intellectual property, cybersecurity, and crisis PR.
Increased insurance premiums. Companies that insure businesses against cyber-incidents factor in fallout from brand impersonation. An increased risk profile may be reflected in a higher premium for a business.
Degraded website performance and rising ad costs. If criminals run paid ads using a brand’s name, they siphon traffic away from its official site. Furthermore, if a company pays to advertise its site, the cost per click rises due to the increased competition. This is a particularly acute problem for IT companies selling online services, but it’s also relevant for retail brands.
Long-term metric decline. This includes drops in sales volume, market share, and market capitalization. These are all consequences of lost trust from customers and business partners following major incidents.
Does insurance cover the damage?
Popular cyber-risk insurance policies typically only cover costs directly tied to incidents explicitly defined in the policy — think data loss, business interruption, IT system compromise, and the like. Fake domains and web pages don’t directly damage a company’s IT systems, so they’re usually not covered by standard insurance. Reputational losses and the act of impersonation itself are separate insurance risks, requiring expanded coverage for this scenario specifically.
Of the indirect losses we’ve listed above, standard insurance might cover DFIR expenses and, in some cases, extra customer support costs (if the situation is recognized as an insured event). Voluntary customer reimbursements, lost sales, and reputational damage are almost certainly not covered.
What to do if your company is attacked by clones
If you find out someone is using your brand’s name for fraud, it makes sense to do the following:
Send clear, straightforward notifications to your customers explaining what happened, what measures are being taken, and how to verify the authenticity of official websites, emails, and other communications.
Create a simple “trust center” page listing your official domains, social media accounts, app store links, and support contacts. Make it easy to find and keep it updated.
Monitor new registrations of social media pages and domain names that contain your brand names to spot the clones before an attack kicks off.
Follow a takedown procedure. This involves gathering evidence, filing complaints with domain registrars, hosting providers, and social media administrators, then tracking the status until the fakes are fully removed. For a complete and accurate record of violations, preserve URLs, screenshots, metadata, and the date and time of discovery. Ideally, also examine the source code of fake pages, as it might contain clues pointing to other components of the criminal operation.
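The evidence-preservation step above can be as simple as producing a timestamped, hash-sealed record per sighting, so the state of the fake page at discovery time can be demonstrated later. A rough sketch, where the field names and the sample URL are made up for illustration:

```python
import hashlib
import json
from datetime import datetime, timezone

def evidence_record(url: str, screenshot: bytes, notes: str = "") -> dict:
    """Build a timestamped record of an observed fake page. Hashing the
    screenshot lets you later show it hasn't been altered."""
    return {
        "url": url,
        "discovered_at": datetime.now(timezone.utc).isoformat(),
        "screenshot_sha256": hashlib.sha256(screenshot).hexdigest(),
        "notes": notes,
    }

# Hypothetical sighting of a lookalike shop
rec = evidence_record("https://examp1e-brand.shop", b"<png bytes>")
print(json.dumps(rec, indent=2))
```

In practice you'd also capture the page's HTML, HTTP headers, and WHOIS data, and store records append-only so the chain of evidence survives scrutiny.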
Add a simple customer reporting form for suspicious sites or messages to your official website and/or branded app. This helps you learn about problems early.
Coordinate activities between your legal, cybersecurity, and marketing teams. This ensures a consistent, unified, and effective response.
How to defend against brand impersonation attacks
While the open nature of the internet and the specifics of these attacks make preventing them outright impossible, a business can stay on top of new fakes and have the tools ready to fight back.
Continuously monitor for suspicious public activity using specialized monitoring services. The most obvious indicator is the registration of domains similar to your brand name, but there are others — like someone buying databases related to your organization on the dark web. Comprehensive monitoring of all platforms is best outsourced to a specialized service provider, such as Kaspersky Digital Footprint Intelligence (DFI).
The quickest and simplest way to take down a fake website or social media profile is to file a trademark infringement complaint. Make sure your portfolio of registered trademarks is robust enough to file complaints under UDRP procedures before you need it.
When you discover fakes, deploy UDRP procedures promptly to have the fake domains transferred or removed. For social media, follow the platform’s specific infringement procedure — easily found by searching for “[social media name] trademark infringement” (for example, “LinkedIn trademark infringement”). Transferring the domain to the legitimate owner is preferred over deletion, as it prevents scammers from simply re-registering it. Many continuous monitoring services, such as Kaspersky Digital Footprint Intelligence, also offer a rapid takedown service, filing complaints on the protected brand’s behalf.
Act quickly to block fake domains on your corporate systems. This won’t protect partners or customers, but it’ll throw a wrench into attacks targeting your own employees.
Consider proactively registering your company’s website name and common variations (for example, with and without hyphens) in all major top-level domains, such as .com, and local extensions. This helps protect partners and customers from common typos and simple copycat sites.
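As a rough illustration of that variant-enumeration idea, the hyphen and TLD permutations of a brand name can be generated mechanically; the brand name and TLD list below are placeholders, and a real exercise would add misspellings and homoglyph swaps:

```python
from itertools import product

def domain_variants(brand: str, tlds=("com", "net", "org", "shop")):
    """Enumerate simple lookalike candidates for a multi-word brand:
    joined with and without hyphens, across several TLDs."""
    words = brand.lower().split()
    bases = {"".join(words), "-".join(words)}  # e.g. examplebrand, example-brand
    return sorted(f"{b}.{tld}" for b, tld in product(bases, tlds))

print(domain_variants("example brand"))
```

A list like this is useful both for defensive registration and as a watchlist to feed into domain-monitoring services.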
Insider Threats: Turning 2025 Intelligence into a 2026 Defense Strategy
In this post, we break down the 91,321 instances of insider activity observed by Flashpoint in 2025, examine the top five cases that defined the year, and provide the technical and behavioral red flags your team needs to monitor in 2026.
Every organization houses sensitive assets that threat actors actively seek. Whether it is proprietary trade secrets, intellectual property, or the personally identifiable information (PII) of employees and customers, these datasets are the lifeblood of the modern enterprise—and highly lucrative commodities within the illicit underground.
In 2025, Flashpoint observed 91,321 instances of insider recruiting, advertising, and threat actor discussions involving insider-related illicit activity. This underscores a critical reality—it is far more efficient for threat actors to recruit an “insider” to circumvent multi-million dollar security stacks than it is to develop a complex exploit from the outside.
An insider threat, any individual with authorized access, possesses the unique ability to bypass traditional security gates. Whether driven by financial gain, ideological grievances, or simple human error, insiders can potentially compromise a system with a single keystroke. To protect our customers from this internal risk, Flashpoint monitors the illicit forums and marketplaces where these threats are being solicited.
In this post, we unpack the evolving insider threat landscape and what it means for your security strategy in 2026. By analyzing the volume of recruitment activity and the specific industries being targeted, organizations can move from a reactive posture to a proactive defense.
By the Numbers: Mapping the 2025 Insider Threat Landscape
Last year, Flashpoint collected and researched:
91,321 posts of insider solicitation and service advertising
On average, 1,162 insider-related posts were published per month, with Telegram continuing to be one of the most prominent mediums for insiders and threat actors to identify and collaborate with each other. Analysts also identified instances of extortionist groups targeting employees at organizations to financially motivate them to become insiders.
Insider Threat Landscape by Industry
The telecommunications industry observed the most insider-related activity in 2025. This is due to the industry’s central role in identity verification and its status as the primary target for SIM swapping: a fraudulent technique where threat actors convince employees of a mobile carrier to link a victim’s phone number to a SIM card controlled by the attacker. The threat actor then receives all the victim’s calls and texts, and can bypass SMS-based two-factor authentication.
Insider Threat data from January 1, 2025 to November 24, 2025
Flashpoint analysts identified 12,783 notable posts where the level of detail or the specific target was particularly concerning.
Top Industries for Insiders Advertising Services (Supply):
Telecom
Financial
Retail
Technology
Top Industries for Threat Actors Soliciting Access (Demand):
Technology
Financial
Telecom
Retail
6 Notable Insider Threat Cases of 2025
The following cases highlight the variety of ways insiders impacted enterprise systems this year, ranging from intentional fraud to massive technical oversights.
Malicious: Approximately nine employees accessed the personal information of over 94,000 individuals, making illegal purchases using changed food stamp cards.
Nonmalicious: An unprotected database belonging to a Chinese IoT firm leaked 2.7 billion records, exposing 1.17 TB of sensitive data and plaintext passwords.
Malicious: An insider at a well-known cybersecurity organization was terminated after sharing screenshots of internal dashboards with the Scattered Lapsus$ Hunters threat actor group.
Malicious: An employee working for a foreign military contractor was bribed to pass confidential information to threat actors.
Malicious: A third-party contractor for a cryptocurrency firm sold customer data to threat actors and recruited colleagues into the scheme, leading to the termination of 300 employees and the compromise of 69,000 customers.
Malicious: Two contractors accessed and deleted sensitive documents and dozens of databases belonging to the Internal Revenue Service and US General Services Administration.
Catching the Warning Signs Early
Potential insiders often display technical and nontechnical behavior before initiating illicit activity. Although these actions may not directly implicate an employee, they can be monitored, which may lead to inquiries or additional investigations to better understand whether the employee poses an elevated risk to the organization.
Flashpoint has identified the following nontechnical warning signs associated with insiders:
Behavioral indicators: Observable actions that deviate from a known baseline of behaviors. These can be observed by coworkers or management or through technical indicators. Behavioral indicators can include increasingly impulsive or erratic behavior, noncompliance with rules and policies, social withdrawal, and communications with competitors.
Financial changes: Significant and overlapping changes in financial standing—such as significant debt, financial troubles, or sudden unexplained financial gain—could indicate a potential insider threat. In the case of financial distress, an employee can sell their services to other threat actors via forums or chat services, thus creating additional funding streams while seeming benign within their organization.
Abnormal access behavior: Resistance to oversight, unjustified requests for sensitive information beyond the employee’s role, or the employee being overprotective of their access privileges might indicate malicious intent.
Separation on bad terms: Employees who leave an organization under unfavorable circumstances pose an increased insider threat risk, as they might want to seek revenge by exploiting whatever access they had or might still possess after leaving.
Odd working hours: Actors may leverage atypical after-hours work to pursue insider threat activity, as there is less monitoring. By sticking to an atypical schedule, threat actors maintain a cover of standard work activity while pursuing illicit activity simultaneously.
Unusual overseas travel: Unusual and undocumented overseas travel may indicate an employee’s potential recruitment by a foreign state or state-sponsored actor. Travel might be initiated to establish contact and pass sensitive information while avoiding raising suspicions in the recruit’s home country.
The following are technical warning signs:
Unauthorized devices: Employees using unauthorized devices for work pose an insider threat, whether they have malicious intent or are simply putting themselves at higher risk of human error. Devices that are not controlled and monitored by the organization fall outside of its scope of operational security, while still carrying all of the sensitive data and configuration of the organization.
Abnormal network traffic: An unusual increase in network traffic or unexplained traffic patterns associated with the employee’s device that differ from their normal network activity could indicate malicious intent. This includes network traffic employing unusual protocols, using uncommon ports, or an overall increase in after-hours network activity.
Irregular access pattern: Employees accessing data outside the scope of their job function may be testing and mapping the limits of their access privileges to restricted areas of information as they evaluate their exfiltration capabilities for their planned illicit actions.
Irregular or mass data download: Unexpected changes in an employee’s data handling practices, such as irregular large-scale downloads, unusual data encryption, or uncharacteristic or unauthorized data destinations, are significant indicators of an insider threat.
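Several of the technical indicators above reduce to comparing current activity against a per-employee baseline. The toy sketch below flags an outsized daily download with a simple z-score test and measures the share of after-hours events; the thresholds, working hours, and numbers are illustrative assumptions, not a production detector:

```python
from statistics import mean, stdev

def flag_download_anomaly(daily_mb, today_mb, z_threshold=3.0):
    """Flag a download volume far outside the employee's historical
    daily baseline (simple z-score test)."""
    if len(daily_mb) < 2:
        return False          # not enough history to form a baseline
    mu, sigma = mean(daily_mb), stdev(daily_mb)
    if sigma == 0:
        return today_mb > mu
    return (today_mb - mu) / sigma > z_threshold

def flag_after_hours(event_hours, workday=(8, 19)):
    """Return the fraction of events (hour-of-day ints) outside the workday."""
    if not event_hours:
        return 0.0
    off = [h for h in event_hours if not (workday[0] <= h < workday[1])]
    return len(off) / len(event_hours)

baseline = [120, 95, 140, 110, 130]           # MB/day, illustrative history
print(flag_download_anomaly(baseline, 5000))  # a massive spike -> True
print(flag_after_hours([9, 10, 23, 2, 3]))    # three of five events off-hours
```

Real UEBA tooling works the same way in spirit: establish a baseline per user, then surface statistically unusual deviations for human review rather than automatic accusation.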
Insider Threats: What to Expect in 2026
As 2026 unfolds, insider threat actors will continue to be a major threat to organizations. Ransomware groups and initial access threat actors will continue recruiting interested insiders and exploiting human vulnerabilities through social engineering tactics. Following Telegram’s recent bans on many illicit groups and channels, Flashpoint assesses that threat actors are likely to migrate to different platforms, such as Signal, where encrypted chats make their activity harder to monitor.
As AI technologies continue to advance, organizations will be better equipped to identify and mitigate insider risks. At the same time, threat actors will likely increasingly abuse AI and other tools to access sensitive information. Is your organization equipped to spot the warning signs? Request a demo to learn more and to mitigate potential risk from within your organization.
In 2025, cybersecurity researchers discovered several open databases belonging to various AI image-generation tools. This fact alone makes you wonder just how much AI startups care about the privacy and security of their users’ data. But the nature of the content in these databases is far more alarming.
A large number of generated pictures in these databases were images of women in lingerie or fully nude. Some were clearly created from children’s photos, or intended to make adult women appear younger (and undressed). Finally, the most disturbing part: some pornographic images were generated from completely innocent photos of real people — likely taken from social media.
In this post, we’re talking about what sextortion is, and why AI tools mean anyone can become a victim. We detail the contents of these open databases, and give you advice on how to avoid becoming a victim of AI-era sextortion.
What is sextortion?
Online sexual extortion has become so common it’s earned its own global name: sextortion (a portmanteau of sex and extortion). We’ve already detailed its various types in our post, Fifty shades of sextortion. To recap, this form of blackmail involves threatening to publish intimate images or videos to coerce the victim into taking certain actions, or to extort money from them.
Previously, victims of sextortion were typically adult industry workers, or individuals who’d shared intimate content with an untrustworthy person.
However, the rapid advancement of artificial intelligence, particularly text-to-image technology, has fundamentally changed the game. Now, literally anyone who’s posted their most innocent photos publicly can become a victim of sextortion. This is because generative AI makes it possible to quickly, easily, and convincingly undress people in any digital image, or add a generated nude body to someone’s head in a matter of seconds.
Of course, this kind of fakery was possible before AI, but it required long hours of meticulous Photoshop work. Now, all you need is to describe the desired result in words.
To make matters worse, many generative AI services don’t bother much with protecting the content they’ve been used to create. As mentioned earlier, last year saw researchers discover at least three publicly accessible databases belonging to these services. This means the generated nudes within them were available not just to the user who’d created them, but to anyone on the internet.
How the AI image database leak was discovered
In October 2025, cybersecurity researcher Jeremiah Fowler uncovered an open database containing over a million AI-generated images and videos. According to the researcher, the overwhelming majority of this content was pornographic in nature. The database wasn’t encrypted or password-protected — meaning any internet user could access it.
The database’s name and watermarks on some images led Fowler to believe its source was the U.S.-based company SocialBook, which offers influencer and digital-marketing services. The company’s website also provides access to tools for generating images and content using AI.
However, further analysis revealed that SocialBook itself wasn’t directly generating this content. Links within the service’s interface led to third-party products — the AI services MagicEdit and DreamPal — which were the tools used to create the images. These tools allowed users to generate pictures from text descriptions, edit uploaded photos, and perform various visual manipulations, including creating explicit content and face-swapping.
The leak was linked to these specific tools, and the database contained the product of their work, including AI-generated and AI-edited images. A portion of the images led the researcher to suspect they’d been uploaded to the AI as references for creating provocative imagery.
Fowler states that roughly 10,000 photos were being added to the database every single day. SocialBook denies any connection to the database. After the researcher informed the company of the leak, several pages on the SocialBook website that had previously mentioned MagicEdit and DreamPal became inaccessible and began returning errors.
Which services were the source of the leak?
Both services — MagicEdit and DreamPal — were initially marketed as tools for interactive, user-driven visual experimentation with images and art characters. Unfortunately, a significant portion of these capabilities was directly linked to creating sexualized content.
For example, MagicEdit offered a tool for AI-powered virtual clothing changes, as well as a set of styles that made images of women more revealing after processing — such as replacing everyday clothes with swimwear or lingerie. Its promotional materials promised to turn an ordinary look into a sexy one in seconds.
DreamPal, for its part, was initially positioned as an AI-powered role-playing chat, and was even more explicit about its adult-oriented positioning. The site offered to create an ideal AI girlfriend, with certain pages directly referencing erotic content. The FAQ also noted that filters for explicit content in chats were disabled so as not to limit users’ most intimate fantasies.
Both services have suspended operations. At the time of writing, the DreamPal website returns an error, while MagicEdit appears to be available again. Their apps have been removed from both the App Store and Google Play.
Jeremiah Fowler says that earlier in 2025 he discovered two more open databases containing AI-generated images. One belonged to the South Korean site GenNomis and contained 95,000 entries, a substantial portion of which were images of “undressed” people. Among other things, the database included images of child versions of celebrities: American singers Ariana Grande and Beyoncé, and reality TV star Kim Kardashian.
How to avoid becoming a victim
In light of incidents like these, it’s clear that the risks associated with sextortion are no longer confined to private messaging or the exchange of intimate content. In the era of generative AI, even ordinary photos, when posted publicly, can be used to create compromising content.
This problem is especially relevant for women, but men shouldn’t get too comfortable either: the popular blackmail scheme of “I hacked your computer and used the webcam to make videos of you browsing adult sites” could reach a whole new level of persuasion thanks to AI tools for generating photos and videos.
Therefore, protecting your privacy on social media and controlling what data about you is publicly available become key measures for safeguarding both your reputation and peace of mind. To prevent your photos from being used to create questionable AI-generated content, we recommend making all your social media profiles as private as possible — after all, they could be the source of images for AI-generated nudes.
Additionally, we have a dedicated service, Privacy Checker — perfect for anyone who wants a quick but systematic approach to privacy settings everywhere possible. It compiles step-by-step guides for securing accounts on social media and online services across all major platforms.
And to ensure the safety and privacy of your child’s data, Kaspersky Safe Kids can help: it allows parents to monitor which social media their child spends time on. From there, you can help them adjust privacy settings on their accounts so their posted photos aren’t used to create inappropriate content. Explore our guide to children’s online safety together, and if your child dreams of becoming a popular blogger, discuss our step-by-step cybersecurity guide for wannabe bloggers with them.
Thanks to the convenience of NFC and smartphone payments, many people no longer carry wallets or remember their bank card PINs. All their cards reside in a payment app, and using that is quicker than fumbling for a physical card. Mobile payments are also secure — the technology was developed relatively recently and includes numerous anti-fraud protections. Still, criminals have invented several ways to abuse NFC and steal your money. Fortunately, protecting your funds is straightforward: just know about these tricks and avoid risky NFC usage scenarios.
What are NFC relay and NFCGate?
NFC relay is a technique where data wirelessly transmitted between a source (like a bank card) and a receiver (like a payment terminal) is intercepted by one intermediate device, and relayed in real time to another. Imagine you have two smartphones connected via the internet, each with a relay app installed. If you tap a physical bank card against the first smartphone and hold the second smartphone near a terminal or ATM, the relay app on the first smartphone will read the card’s signal using NFC, and relay it in real time to the second smartphone, which will then transmit this signal to the terminal. From the terminal’s perspective, it all looks like a real card is tapped on it — even though the card itself might physically be in another city or country.
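Conceptually, the relay is nothing more than forwarding opaque command/response bytes (APDUs) in both directions. The toy in-process simulation below stands in for the two phones with queues and a thread; the card's response is invented (a bare ISO 7816 success status word), and a real attack of course involves actual NFC hardware and host card emulation rather than Python queues:

```python
import queue
import threading

def fake_card(apdu: bytes) -> bytes:
    """Stand-in for the victim's contactless card: any command gets
    the ISO 7816 'success' status word (0x9000) back."""
    return b"\x90\x00"

def reader_phone(to_card: queue.Queue, to_terminal: queue.Queue) -> None:
    """The phone held near the card: replays incoming APDUs against
    the card and ships the responses back over the 'network'."""
    while True:
        apdu = to_card.get()
        if apdu is None:          # shutdown signal
            break
        to_terminal.put(fake_card(apdu))

def relay(apdu: bytes) -> bytes:
    """Emulator-phone side: forward one terminal APDU through the
    relay and return the card's answer to the terminal."""
    to_card, to_terminal = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=reader_phone, args=(to_card, to_terminal))
    worker.start()
    to_card.put(apdu)             # terminal -> emulator -> reader -> card
    response = to_terminal.get()  # card -> reader -> emulator -> terminal
    to_card.put(None)
    worker.join()
    return response

# A SELECT-style command; the simulated card simply answers "OK"
print(relay(b"\x00\xa4\x04\x00").hex())
```

The key property this illustrates is that the relay never needs to understand or decrypt the traffic: the terminal and card complete their normal cryptographic exchange, just over a longer, attacker-controlled path.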
This technology wasn’t originally created for crime. The NFCGate app appeared in 2015 as a research tool after it was developed by students at the Technical University of Darmstadt in Germany. It was intended for analyzing and debugging NFC traffic, as well as for education purposes and experiments with contactless technology. NFCGate was distributed as an open-source solution and used in academic and enthusiast circles.
Five years later, cybercriminals caught on to the potential of NFC relay and began modifying NFCGate, adding features that allowed it to run through a malicious server, disguise itself as legitimate software, and support social engineering scenarios.
What began as a research project morphed into the foundation for an entire class of attacks aimed at draining bank accounts without physical access to bank cards.
A history of misuse
The first documented attacks using a modified NFCGate occurred in late 2023 in the Czech Republic. By early 2025, the problem had grown large-scale and highly visible: cybersecurity analysts uncovered more than 80 unique malware samples built on the NFCGate framework. The attacks evolved rapidly, with NFC relay capabilities being integrated into other malware components.
By February 2025, malware bundles combining CraxsRAT and NFCGate emerged, allowing attackers to install and configure the relay with minimal victim interaction. A new scheme, a so-called “reverse” version of NFCGate, appeared in spring 2025, fundamentally changing the attack’s execution.
Particularly noteworthy is the RatOn Trojan, first detected in the Czech Republic. It combines remote smartphone control with NFC relay capabilities, letting attackers target victims’ banking apps and cards through various technique combinations. Features like screen capture, clipboard data manipulation, SMS sending, and stealing info from crypto wallets and banking apps give criminals an extensive arsenal.
Cybercriminals have also packaged NFC relay technology into malware-as-a-service (MaaS) offerings, reselling it to other threat actors on a subscription basis. In early 2025, analysts uncovered a new and sophisticated Android malware campaign in Italy, dubbed SuperCard X. Attempts to deploy SuperCard X were recorded in Russia in May 2025, and in Brazil in August of the same year.
The direct NFCGate attack
The direct attack is the original criminal scheme exploiting NFCGate. In this scenario, the victim’s smartphone plays the role of the reader, while the attacker’s phone acts as the card emulator.
First, the fraudsters trick the user into installing a malicious app disguised as a banking service, a system update, an “account security” app, or even a popular app like TikTok. Once installed, the app gains access to both NFC and the internet — often without requesting dangerous permissions or root access. Some versions also ask for access to Android accessibility features.
Then, under the guise of identity verification, the victim is prompted to tap their bank card to their phone. When they do, the malware reads the card data via NFC and immediately sends it to the criminals’ server. From there, the information is relayed to a second smartphone held by a money mule, who helps extract the money. This phone then emulates the victim’s card to make payments at a terminal or withdraw cash from an ATM.
The fake app on the victim’s smartphone also asks for the card PIN — just like at a payment terminal or ATM — and sends it to the attackers.
In early versions of the attack, criminals would simply stand ready at an ATM with a phone to use the duped user’s card in real time. Later, the malware was refined so the stolen data could be used for in-store purchases in a delayed, offline mode, rather than in a live relay.
For the victim, the theft is hard to notice: the card never left their possession, they didn’t have to manually enter or recite its details, and the bank alerts about the withdrawals can be delayed or even intercepted by the malicious app itself.
Among the red flags that should make you suspect a direct NFC attack are:
prompts to install apps not from official stores;
requests to tap your bank card on your phone.
The reverse NFCGate attack
The reverse attack is a newer, more sophisticated scheme. The victim’s smartphone no longer reads their card — it emulates the attacker’s card. To the victim, everything appears completely safe: there’s no need to recite card details, share codes, or tap a card to the phone.
Just like with the direct scheme, it all starts with social engineering. The user gets a call or message convincing them to install an app for “contactless payments”, “card security”, or even “using central bank digital currency”. Once installed, the new app asks to be set as the default contactless payment method — and this step is critically important. Thanks to this, the malware requires no root access — just user consent.
The malicious app then silently connects to the attackers’ server in the background, and the NFC data from a card belonging to one of the criminals is transmitted to the victim’s device. This step is completely invisible to the victim.
Next, the victim is directed to an ATM. Under the pretext of “transferring money to a secure account” or “sending money to themselves”, they are instructed to tap their phone on the ATM’s NFC reader. At this moment, the ATM is actually interacting with the attacker’s card. The PIN is dictated to the victim beforehand — presented as “new” or “temporary”.
The result is that all the money deposited or transferred by the victim ends up in the criminals’ account.
The hallmarks of this attack are:
requests to change your default NFC payment method;
a “new” PIN;
any scenario where you’re told to go to an ATM and perform actions there under someone else’s instructions.
How to protect yourself from NFC relay attacks
NFC relay attacks rely not so much on technical vulnerabilities as on user trust. Defending against them comes down to some simple precautions.
Make sure you keep your trusted contactless payment method (like Google Pay or Samsung Pay) as the default.
Never tap your bank card on your phone at someone else’s request, or because an app tells you to. Legitimate apps might use your camera to scan a card number, but they’ll never ask you to use the NFC reader for your own card.
Never follow instructions from strangers at an ATM — no matter who they claim to be.
Avoid installing apps from unofficial sources. This includes links sent via messaging apps, social media, SMS, or recommended during a phone call — even if they come from someone claiming to be customer support or the police.
Stick to official app stores only. When downloading from a store, check the app’s reviews, number of downloads, publication date, and rating.
When using an ATM, rely on your physical card instead of your smartphone for the transaction.
Make it a habit to regularly check the “Payment default” setting in your phone’s NFC menu. If you see any suspicious apps listed, remove them immediately and run a full security scan on your device.
Review the list of apps with accessibility permissions — this is a feature commonly abused by malware. Either revoke these permissions for any suspicious apps, or uninstall the apps completely.
Save the official customer service numbers for your banks in your phone’s contacts. At the slightest hint of foul play, call your bank’s hotline directly without delay.
If you suspect your card details may have been compromised, block the card immediately.
In Q3 2025, the percentage of ICS computers on which malicious objects were blocked decreased from the previous quarter by 0.4 pp to 20.1%. This is the lowest level for the observed period.
Percentage of ICS computers on which malicious objects were blocked, Q3 2022–Q3 2025
Regionally, the percentage of ICS computers on which malicious objects were blocked ranged from 9.2% in Northern Europe to 27.4% in Africa.
Regions ranked by percentage of ICS computers on which malicious objects were blocked
In Q3 2025, the percentage increased in five regions. The most notable increase occurred in East Asia, triggered by the local spread of malicious scripts in the OT infrastructure of engineering organizations and ICS integrators.
Changes in the percentage of ICS computers on which malicious objects were blocked, Q3 2025
Selected industries
The biometrics sector traditionally led the rankings of the industries and OT infrastructures surveyed in this report in terms of the percentage of ICS computers on which malicious objects were blocked.
Rankings of industries and OT infrastructures by percentage of ICS computers on which malicious objects were blocked
In Q3 2025, the percentage of ICS computers on which malicious objects were blocked increased in four of the seven surveyed industries. The most notable increases were in engineering and ICS integrators, and manufacturing.
Percentage of ICS computers on which malicious objects were blocked in selected industries
Diversity of detected malicious objects
In Q3 2025, Kaspersky protection solutions blocked malware from 11,356 different malware families of various categories on industrial automation systems.
Percentage of ICS computers on which the activity of malicious objects of various categories was blocked
In Q3 2025, there was a decrease in the percentage of ICS computers on which denylisted internet resources and miners of both categories were blocked. These were the only categories that exhibited a decrease.
Main threat sources
Depending on the threat detection and blocking scenario, it is not always possible to reliably identify the source. In such cases, the blocked threat’s type (category) can serve as circumstantial evidence pointing to a specific source.
The internet (visiting malicious or compromised internet resources; malicious content distributed via messengers; cloud data storage and processing services and CDNs), email clients (phishing emails), and removable storage devices remain the primary sources of threats to computers in an organization’s technology infrastructure.
In Q3 2025, the percentage of ICS computers on which malicious objects from various sources were blocked decreased.
Percentage of ICS computers on which malicious objects from various sources were blocked
The same computer can be attacked by several categories of malware from the same source during a quarter. That computer is counted when calculating the percentage of attacked computers for each threat category, but is only counted once for the threat source (we count unique attacked computers). In addition, it is not always possible to accurately determine the initial infection attempt. Therefore, the total percentage of ICS computers on which various categories of threats from a certain source were blocked can exceed the percentage of threats from the source itself.
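The counting rule above can be illustrated with a toy example (the numbers are made up purely for illustration): a computer hit by two threat categories from one source is counted once per category, but only once for the source itself, so the per-category percentages can sum to more than the source's own percentage.

```python
# Toy model: three computers, detections from a single source ("internet").
# A computer may be hit by several threat categories in the same quarter.
computers = {
    'pc1': {'scripts', 'spyware'},  # two categories from the same source
    'pc2': {'scripts'},
    'pc3': set(),                   # no detections
}
total = len(computers)

# Per-category counts: pc1 contributes to BOTH 'scripts' and 'spyware'.
per_category = {}
for cats in computers.values():
    for c in cats:
        per_category[c] = per_category.get(c, 0) + 1

# Source count: unique attacked computers, so pc1 is counted only once.
source_hit = sum(1 for cats in computers.values() if cats)

category_sum_pct = sum(per_category.values()) / total * 100  # 100.0
source_pct = source_hit / total * 100                        # ~66.7
```

Summing the category percentages (100%) exceeds the source percentage (about 66.7%) exactly because pc1 is double-counted across categories but deduplicated for the source.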
The main categories of threats from the internet blocked on ICS computers in Q3 2025 were malicious scripts and phishing pages, and denylisted internet resources. The percentage ranged from 4.57% in Northern Europe to 10.31% in Africa.
The main categories of threats from email clients blocked on ICS computers were malicious scripts and phishing pages, spyware, and malicious documents. Most of the spyware detected in phishing emails was delivered as a password-protected archive or a multi-layered script embedded in an office document. The percentage of ICS computers on which threats from email clients were blocked ranged from 0.78% in Russia to 6.85% in Southern Europe.
The main categories of threats that were blocked when removable media was connected to ICS computers were worms, viruses, and spyware. The percentage of ICS computers on which threats from this source were blocked ranged from 0.05% in Australia and New Zealand to 1.43% in Africa.
The main categories of threats that spread through network folders were viruses, AutoCAD malware, worms, and spyware. The percentages of ICS computers where threats from this source were blocked ranged from 0.006% in Northern Europe to 0.20% in East Asia.
Threat categories
Typical attacks blocked within an OT network are multi-step sequences of malicious activity, in which each subsequent step aims to escalate privileges and/or gain access to other systems by exploiting the security weaknesses of industrial enterprises, including their technological infrastructures.
Malicious objects used for initial infection
In Q3 2025, the percentage of ICS computers on which denylisted internet resources were blocked decreased to 4.01%. This is the lowest quarterly figure since the beginning of 2022.
Percentage of ICS computers on which denylisted internet resources were blocked, Q3 2022–Q3 2025
Regionally, the percentage of ICS computers on which denylisted internet resources were blocked ranged from 2.35% in Australia and New Zealand to 4.96% in Africa. Southeast Asia and South Asia were also among the top three regions for this indicator.
The percentage of ICS computers on which malicious documents were blocked has grown for three consecutive quarters, following a decline at the end of 2024. In Q3 2025, it reached 1.98%.
Percentage of ICS computers on which malicious documents were blocked, Q3 2022–Q3 2025
The indicator increased in four regions: South America, East Asia, Southeast Asia, and Australia and New Zealand. South America saw the largest increase as a result of a large-scale phishing campaign in which attackers used new exploits for an old vulnerability (CVE-2017-11882) in Microsoft Office Equation Editor to deliver various spyware to victims’ computers. It is noteworthy that the attackers in this phishing campaign used localized Spanish-language emails disguised as business correspondence.
In Q3 2025, the percentage of ICS computers on which malicious scripts and phishing pages were blocked increased to 6.79%. This category led the rankings of threat categories in terms of the percentage of ICS computers on which they were blocked.
Percentage of ICS computers on which malicious scripts and phishing pages were blocked, Q3 2022–Q3 2025
Regionally, the percentage of ICS computers on which malicious scripts and phishing pages were blocked ranged from 2.57% in Northern Europe to 9.41% in Africa. The top three regions for this indicator were Africa, East Asia, and South America. The indicator increased the most in East Asia (by a dramatic 5.23 pp) as a result of the local spread of malicious spyware scripts loaded into the memory of popular torrent clients including MediaGet.
Next-stage malware
Malicious objects used to initially infect computers deliver next-stage malware — spyware, ransomware, and miners — to victims’ computers. As a rule, the higher the percentage of ICS computers on which the initial infection malware is blocked, the higher the percentage for next-stage malware.
In Q3 2025, the percentage of ICS computers on which spyware and ransomware were blocked increased. The rates were:
spyware: 4.04% (up 0.20 pp);
ransomware: 0.17% (up 0.03 pp).
The percentage of ICS computers on which miners of both categories were blocked decreased. The rates were:
miners in the form of executable files for Windows: 0.57% (down 0.06 pp), the lowest level since Q3 2022;
web miners: 0.25% (down 0.05 pp). This is the lowest level since Q3 2022.
Self-propagating malware
Self-propagating malware (worms and viruses) is a category unto itself. Worms and virus-infected files were originally used for initial infection, but as botnet functionality evolved, they took on next-stage characteristics.
To spread across ICS networks, viruses and worms rely on removable media and network folders in the form of infected files, such as archives with backups, office documents, pirated games and hacked applications. In rarer and more dangerous cases, web pages with network equipment settings, as well as files stored in internal document management systems, product lifecycle management (PLM) systems, enterprise resource planning (ERP) systems and other web services are infected.
In Q3 2025, the percentage of ICS computers on which worms and viruses were blocked increased to 1.26% (by 0.04 pp) and 1.40% (by 0.11 pp), respectively.
AutoCAD malware
This category of malware can spread in a variety of ways, so it does not belong to a specific group.
In Q3 2025, the percentage of ICS computers on which AutoCAD malware was blocked slightly increased to 0.30% (by 0.01 pp).
If you’re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.
Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.
This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.
First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.
Finally, we will cover detection strategies to identify and respond to this type of activity.
COM/DCOM technology
What is COM?
COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.
These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).
COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.
Terms
To ensure a better understanding of its structure and functionality, let’s review COM-related terminology.
COM interface
A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
COM class (COM CoClass)
A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.
All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class’s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:
InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We’ll describe this in more detail later.
ProgID: A human-readable name assigned to the COM class.
TypeLib: A binary description of the COM class (essentially documentation for the class).
AppID: Used to describe security configuration for the class.
COM server
A COM server is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry, which the COM runtime uses to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
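As a concrete illustration, a class registration might look like this in .reg form. The GUID, DLL path, and names below are placeholders for illustration only, not a real component:

```ini
; Hypothetical COM class registration (illustrative values only)
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}]
@="Example COM Class"
; AppID value pointing at the security configuration for this class
"AppID"="{66666666-7777-8888-9999-000000000000}"

; In-process server: the DLL implementing the class
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\InprocServer32]
@="C:\\Windows\\System32\\example.dll"

; Human-readable name for the class
[HKEY_CLASSES_ROOT\CLSID\{11111111-2222-3333-4444-555555555555}\ProgID]
@="Example.Application"
```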
Component Object Model implementation
Client–server model
In-process server
In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
In-process COM server
Out-of-process server
Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server
What is DCOM?
DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user’s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.
Under the hood, however, DCOM uses TCP as the RPC transport layer to enable communication across machines.
Distributed COM implementation
Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.
The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are LaunchPermission and AccessPermission: these registry settings grant remote clients permissions to activate and interact with DCOM objects.
Lateral movement via DCOM
After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.
In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.
ShellWindows
ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.
In Impacket’s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.
Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.
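The loop can be sketched roughly like this. All helpers here are stand-ins for illustration, not Impacket APIs: the in-memory dict plays the role of the ADMIN$ share, and the hard-coded filename mirrors the `__17602` example below.

```python
# Sketch of dcomexec.py's semi-interactive shell loop (stand-in helpers only).
pending_output = {}

def execute_remote(command: str) -> None:
    # Stand-in for the method exposed by the ShellWindows COM object:
    # run cmd.exe remotely and redirect stdout/stderr to a file on ADMIN$.
    pending_output['__17602'] = f'<output of: {command}>'

def fetch_via_smb(filename: str) -> str:
    # Stand-in for retrieving the output file over SMB and deleting it.
    return pending_output.pop(filename)

# One iteration: run a command, pull its output back, display it as a shell.
execute_remote('cd \\')
banner = fetch_via_smb('__17602')
```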
Internally, the script runs this command when connecting:
cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1
This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.
When running dcomexec.py against Windows 10 and 11 with the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the root of the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder, which can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue.
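A minimal sketch of that change is shown below. This is not Impacket's exact code: the variable names and the local helper are assumptions modeled on dcomexec.py, and only the path handling matters. The output file is written under C:\Windows\Temp, which the DCOM object can still write to, and is later fetched over SMB as ADMIN$\Temp\&lt;filename&gt;.

```python
import time

# Hypothetical sketch of the ShellWindows fix (names assumed, not Impacket's
# exact code): write the output file into a Temp subdirectory of ADMIN$
# instead of the share root, which the DCOM object can no longer write to.
OUTPUT_PATH = 'Temp\\'                         # subdirectory of the ADMIN$ share
OUTPUT_FILENAME = '__' + str(time.time())[:5]  # same naming scheme as dcomexec.py

def build_command(cmd='cd \\'):
    # Redirect stdout and stderr into the Temp folder exposed through ADMIN$.
    return ('cmd.exe /Q /c %s 1> \\\\127.0.0.1\\ADMIN$\\%s%s 2>&1'
            % (cmd, OUTPUT_PATH, OUTPUT_FILENAME))
```

The script's SMB retrieval step then reads ADMIN$\Temp\&lt;filename&gt; instead of ADMIN$\&lt;filename&gt;.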
ShellBrowserWindow
The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
MMC20
The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.
This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.
As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:
OUTPUT_FILENAME = '__' + str(time.time())[:5]
Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.
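The filename change is a one-liner. The variable name below is taken from dcomexec.py; the replacement prefix is an arbitrary illustrative choice:

```python
import time

# Sketch of the MMC20 workaround: drop the '__' prefix that Defender appears
# to key on when scanning files written to ADMIN$. The 'out' prefix is an
# arbitrary neutral replacement, not a value from Impacket.
OUTPUT_FILENAME = 'out' + str(time.time())[:5]
```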
Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.
Object             | Windows Server 2025  | Windows Server 2022 | Windows 11            | Windows 10
-------------------|----------------------|---------------------|-----------------------|----------------------
ShellWindows       | Doesn't work         | Doesn't work        | Works but needs a fix | Works but needs a fix
ShellBrowserWindow | Doesn't work         | Doesn't work        | Doesn't work          | Works but needs a fix
MMC20              | Detected by Defender | Works               | Works                 | Works
Enumerating COM/DCOM objects
The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:
Finding objects and filtering specifically for DCOM objects.
Identifying their interfaces.
Inspecting the exposed functions.
Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.
There are several approaches to enumerating COM objects depending on their use cases. Next, we’ll describe the methods I used while conducting this research, taking into account both automated and manual methods.
Automation using PowerShell
In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
Shell.Application COM object function list in PowerShell
Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.
Our strategy looks like this:
As you can see in the last box, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.
However, this approach has several limitations:
TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
Automation using C++
As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces. However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.
This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it. The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces. So, the strategy is to:
Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.
This method has several advantages:
No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.
The figure below illustrates this strategy:
This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.
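The CLSID walk in the first step can be sketched in a platform-independent way. In the sketch below the registry is stubbed with a dict; on Windows, the winreg module reading HKEY_CLASSES_ROOT\CLSID would supply the same data. The second CLSID and all values are illustrative placeholders.

```python
# Stub of HKEY_CLASSES_ROOT\CLSID: each CLSID maps to the subkeys found
# under it. On Windows, winreg would enumerate these keys for real.
FAKE_CLSID_HIVE = {
    # COpenControlPanel (CLSID from the article); has an AppID subkey.
    '{06622D85-6856-4460-8DE1-A81921B41C4B}':
        {'InprocServer32': r'C:\Windows\System32\shell32.dll', 'AppID': '{...}'},
    # Illustrative in-process-only class with no AppID (not a DCOM object).
    '{AAAAAAAA-0000-0000-0000-000000000001}':
        {'InprocServer32': r'C:\example\local.dll'},
}

def find_dcom_candidates(clsid_hive):
    # Keep only classes with an AppID subkey, since the AppID is what
    # extends a COM object into a DCOM object.
    return [clsid for clsid, subkeys in clsid_hive.items() if 'AppID' in subkeys]
```

Each surviving CLSID would then be activated and probed with QueryInterface() for the known high-value interfaces.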
Manual inspection using open-source tools
As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
List all COM classes in the system.
Create instances of those classes.
Check their supported interfaces.
Call specific functions.
Apply various filters for easier analysis.
Perform other inspection tasks.
Open-source tool for inspecting COM interfaces
One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.
This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
Control Panel items as attack surfaces
Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.
Control Panel items
Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.
Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.
Under HKEY_CURRENT_USER:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Under HKEY_LOCAL_MACHINE:
For 64-bit DLLs:
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
For 32-bit DLLs:
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
These locations are important when Control Panel DLLs need to be available to the current logged-in user or to all users on the machine. However, the “Control Panel” subkey and its “Cpls” subkey under HKCU should be created manually, unlike the “Control Panel” and “Cpls” subkeys under HKLM, which are created automatically by the operating system.
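The registration paths above can be captured in a small helper, written purely for illustration (the helper itself is not part of any existing tool; the path strings are the ones listed above):

```python
def cpl_registry_path(scope='machine', bits=64):
    # Registry key holding registered Control Panel item DLLs. Note that the
    # HKCU 'Control Panel' and 'Cpls' subkeys must be created manually, while
    # the HKLM ones are created by the operating system.
    suffix = r'\Microsoft\Windows\CurrentVersion\Control Panel\Cpls'
    if scope == 'user':
        return r'HKCU\Software' + suffix
    if bits == 32:
        return r'HKLM\Software\WOW6432Node' + suffix  # 32-bit DLLs on x64
    return r'HKLM\Software' + suffix
```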
Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.
It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.
There are multiple ways to execute Control Panel items. The classic one is launching the CPL file via rundll32.exe:
rundll32.exe shell32.dll,Control_RunDLL <path-to-cpl-file>
This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.
However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):
COM Surrogate process loading the CPL file
What happened here is that the Control Panel acted as a COM client and used a COM object to load these CPL files. We will talk about this COM object in more detail later.
The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process's address space. This isolation improves stability for the in-process server model. By default, an in-process server DLL is loaded into the client's own process; if you want it to run in a separate process instead, this surrogate hosting behavior can be configured for the COM object in the registry.
‘DCOMing’ through Control Panel items
While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.
IOpenControlPanel interface in the OleViewDotNet output
I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.
COpenControlPanel interfaces without TypeLib
Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.
Full type and function definitions are provided in the shobjidl_core.h Windows header file.
Open function exposed by IOpenControlPanel interface
It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.
COpenControlPanel interfaces without names
Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL that basically just spawns a message box which is defined under the DllEntryPoint function. I registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls and then created a simple C++ COM client to call the Open function on this interface.
As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.
COM Surrogate loading a custom DLL and hosting the COpenControlPanel object
Based on my testing, I made the following observations:
The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.
Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.
COpenControlPanel object in OleViewDotNet
Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.
Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.
Malicious DLL loading using DCOM
The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.
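As a rough illustration (not the full working trigger), the activation part could look like the sketch below. The CLSID and IID are the ones identified earlier; the Impacket usage is an assumption modeled on dcomexec.py, and actually invoking Open() additionally requires defining the interface's DCOM call structures, which the sketch omits.

```python
# Hedged sketch of a DCOM trigger for COpenControlPanel. Impacket API usage
# is an assumption based on dcomexec.py; see the full linked example for the
# complete, working trigger.
CLSID_COpenControlPanel = '06622D85-6856-4460-8DE1-A81921B41C4B'
IID_IOpenControlPanel = 'D11AD862-66DE-4DF4-BF6C-1F5621996AF1'

def trigger(target, username, password, domain=''):
    # Imported lazily so the constants above are usable without Impacket installed.
    from impacket.dcerpc.v5.dcomrt import DCOMConnection
    from impacket.uuid import string_to_bin, uuidtup_to_bin

    dcom = DCOMConnection(target, username, password, domain, oxidResolver=True)
    try:
        # Remote activation of COpenControlPanel, requesting IOpenControlPanel.
        iid = uuidtup_to_bin((IID_IOpenControlPanel, '0.0'))
        iface = dcom.CoCreateInstanceEx(string_to_bin(CLSID_COpenControlPanel), iid)
        # Calling IOpenControlPanel::Open() from here requires defining the
        # interface's request/response (DCOMCALL/DCOMANSWER) structures -- omitted.
    finally:
        dcom.disconnect()
```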
Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).
As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.
Detection
The first step in detecting such activity is to check whether any Control Panel items have been registered under the following registry paths:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Although commonly known best practices and research papers regarding Windows security advise monitoring only the first subkey, for thorough coverage it is important to monitor all of the above.
In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.
Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.
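As a toy illustration of the first detection step, a monitor could diff the values registered under the Cpls keys against a known-good baseline. Registry access is stubbed with plain lists here, and every path below is illustrative:

```python
# Known-good Control Panel DLLs for this host (illustrative allowlist).
BASELINE = {r'C:\Windows\System32\legit.cpl'}

def suspicious_cpls(registered_values):
    # Flag any Cpls registry value that is not present in the baseline.
    return sorted(set(registered_values) - BASELINE)

# Simulated contents of the three Cpls keys gathered from the registry:
observed = [r'C:\Windows\System32\legit.cpl',
            r'C:\Users\Public\evil.dll']   # hypothetical attacker DLL
```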
Conclusion
In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:
As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
Changing registry values to register malicious CPLs is not good practice from a red teaming ethics perspective. Defender products tend to monitor common persistence paths, but Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.
If you’re a penetration tester, you know that lateral movement is becoming increasingly difficult, especially in well-defended environments. One common technique for remote command execution has been the use of DCOM objects.
Over the years, many different DCOM objects have been discovered. Some rely on native Windows components, others depend on third-party software such as Microsoft Office, and some are undocumented objects found through reverse engineering. While certain objects still work, others no longer function in newer versions of Windows.
This research presents a previously undescribed DCOM object that can be used for both command execution and potential persistence. This new technique abuses older initial access and persistence methods through Control Panel items.
First, we will discuss COM technology. After that, we will review the current state of the Impacket dcomexec script, focusing on objects that still function, and discuss potential fixes and improvements, then move on to techniques for enumerating objects on the system. Next, we will examine Control Panel items, how adversaries have used them for initial access and persistence, and how these items can be leveraged through a DCOM object to achieve command execution.
Finally, we will cover detection strategies to identify and respond to this type of activity.
COM/DCOM technology
What is COM?
COM stands for Component Object Model, a Microsoft technology that defines a binary standard for interoperability. It enables the creation of reusable software components that can interact at runtime without the need to compile COM libraries directly into an application.
These software components operate in a client–server model. A COM object exposes its functionality through one or more interfaces. An interface is essentially a collection of related member functions (methods).
COM also enables communication between processes running on the same machine by using local RPC (Remote Procedure Call) to handle cross-process communication.
Terms
To ensure a better understanding of its structure and functionality, let's review the COM-related terminology.
COM interface
A COM interface defines the functionality that a COM object exposes. Each COM interface is identified by a unique GUID known as the IID (Interface ID). All COM interfaces can be found in the Windows Registry under HKEY_CLASSES_ROOT\Interface, where they are organized by GUID.
COM class (COM CoClass)
A COM class is the actual implementation of one or more COM interfaces. Like COM interfaces, classes are identified by unique GUIDs, but in this case the GUID is called the CLSID (Class ID). This GUID is used to locate the COM server and activate the corresponding COM class.
All COM classes must be registered in the registry under HKEY_CLASSES_ROOT\CLSID, where each class’s GUID is stored. Under each GUID, you may find multiple subkeys that serve different purposes, such as:
InprocServer32/LocalServer32: Specifies the system path of the COM server where the class is defined. InprocServer32 is used for in-process servers (DLLs), while LocalServer32 is used for out-of-process servers (EXEs). We’ll describe this in more detail later.
ProgID: A human-readable name assigned to the COM class.
TypeLib: A binary description of the COM class (essentially documentation for the class).
AppID: Used to describe security configuration for the class.
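Putting these subkeys together, a typical registration looks roughly like this in the registry. The GUIDs, ProgID, and paths are placeholders, not values from a real class:

```
HKEY_CLASSES_ROOT\CLSID\{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}
    ProgID              = Example.Component.1
    InprocServer32
        (Default)       = C:\Example\server.dll      ; in-process (DLL) server
    TypeLib             = {yyyyyyyy-....}             ; binary class description
    AppID               = {zzzzzzzz-....}             ; security configuration
```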
COM server
A COM server is the module where a COM class is defined. The server can be implemented as an EXE, in which case it is called an out-of-process server, or as a DLL, in which case it is called an in-process server. Each COM server has a unique file path or location in the system. Information about COM servers is stored in the Windows Registry, which the COM runtime uses to locate the server and perform further actions. Registry entries for COM servers are located under the HKEY_CLASSES_ROOT root key for both 32- and 64-bit servers.
Component Object Model implementation
Client–server model
In-process server
In the case of an in-process server, the server is implemented as a DLL. The client loads this DLL into its own address space and directly executes functions exposed by the COM object. This approach is efficient since both client and server run within the same process.
In-process COM server
Out-of-process server
Here, the server is implemented and compiled as an executable (EXE). Since the client cannot load an EXE into its address space, the server runs in its own process, separate from the client. Communication between the two processes is handled via ALPC (Advanced Local Procedure Call) ports, which serve as the RPC transport layer for COM.
Out-of-process COM server
What is DCOM?
DCOM is an extension of COM where the D stands for Distributed. It enables the client and server to reside on different machines. From the user’s perspective, there is no difference: DCOM provides an abstraction layer that makes both the client and the server appear as if they are on the same machine.
Under the hood, however, DCOM uses TCP as the RPC transport layer to enable communication across machines.
Distributed COM implementation
Certain requirements must be met to extend a COM object into a DCOM object. The most important one for our research is the presence of the AppID subkey in the registry, located under the COM CLSID entry.
The AppID value contains a GUID that maps to a corresponding key under HKEY_CLASSES_ROOT\AppID. Several subkeys may exist under this GUID. Two critical ones are:
These registry settings grant remote clients permissions to activate and interact with DCOM objects.
Lateral movement via DCOM
After attackers compromise a host, their next objective is often to compromise additional machines. This is what we call lateral movement. One common lateral movement technique is to achieve remote command execution on a target machine. There are many ways to do this, one of which involves abusing DCOM objects.
In recent years, many DCOM objects have been discovered. This research focuses on the objects exposed by the Impacket script dcomexec.py that can be used for command execution. More specifically, three exposed objects are used: ShellWindows, ShellBrowserWindow and MMC20.
ShellWindows
ShellWindows was one of the first DCOM objects to be identified. It represents a collection of open shell windows and is hosted by explorer.exe, meaning any COM client communicates with that process.
In Impacket’s dcomexec.py, once an instance of this COM object is created on a remote machine, the script provides a semi-interactive shell.
Each time a user enters a command, the function exposed by the COM object is called. The command output is redirected to a file, which the script retrieves via SMB and displays back to simulate a regular shell.
Internally, the script runs this command when connecting:
cmd.exe /Q /c cd \ 1> \\127.0.0.1\ADMIN$\__17602 2>&1
This sets the working directory to C:\ and redirects the output to the ADMIN$ share under the filename __17602. After that, the script checks whether the file exists; if it does, execution is considered successful and the output appears as if in a shell.
When running dcomexec.py against Windows 10 and 11 using the ShellWindows object, the script hangs after confirming SMB connection initialization and printing the SMB banner. As I mentioned in my personal blog post, it appears that this DCOM object no longer has permission to write to the ADMIN$ share. A simple fix is to redirect the output to a directory the DCOM object can write to, such as the Temp folder. The Temp folder can then be accessed under the same ADMIN$ share. A small change in the code resolves the issue. For example:
ShellBrowserWindow
The ShellBrowserWindow object behaves almost identically to ShellWindows and exhibits the same behavior on Windows 10. The same workaround that we used for ShellWindows applies in this case. However, on Windows 11, this object no longer works for command execution.
MMC20
The MMC20.Application COM object is the automation interface for Microsoft Management Console (MMC). It exposes methods and properties that allow MMC snap-ins to be automated.
This object has historically worked across all Windows versions. Starting with Windows Server 2025, however, attempting to use it triggers a Defender alert, and execution is blocked.
As shown in earlier examples, the dcomexec.py script writes the command output to a file under ADMIN$, with a filename that begins with __:
OUTPUT_FILENAME = '__' + str(time.time())[:5]
Defender appears to check for files written under ADMIN$ that start with __, and when it detects one, it blocks the process and alerts the user. A quick fix is to simply remove the double underscores from the output filename.
Another way to bypass this issue is to use the same workaround used for ShellWindows – redirecting the output to the Temp folder. The table below outlines the status of these objects across different Windows versions.
Windows Server 2025
Windows Server 2022
Windows 11
Windows 10
ShellWindows
Doesn’t work
Doesn’t work
Works but needs a fix
Works but needs a fix
ShellBrowserWindow
Doesn’t work
Doesn’t work
Doesn’t work
Works but needs a fix
MMC20
Detected by Defender
Works
Works
Works
Enumerating COM/DCOM objects
The first step to identifying which DCOM objects could be used for lateral movement is to enumerate them. By enumerating, I don’t just mean listing the objects. Enumeration involves:
Finding objects and filtering specifically for DCOM objects.
Identifying their interfaces.
Inspecting the exposed functions.
Automating enumeration is difficult because most COM objects lack a type library (TypeLib). A TypeLib acts as documentation for an object: which interfaces it supports, which functions are exposed, and the definitions of those functions. Even when TypeLibs are available, manual inspection is often still required, as we will explain later.
There are several approaches to enumerating COM objects depending on their use cases. Next, we’ll describe the methods I used while conducting this research, taking into account both automated and manual methods.
Automation using PowerShell In PowerShell, you can use .NET to create and interact with DCOM objects. Objects can be created using either their ProgID or CLSID, after which you can call their functions (as shown in the figure below).
Shell.Application COM object function list in PowerShell
Under the hood, PowerShell checks whether the COM object has a TypeLib and implements the IDispatch interface. IDispatch enables late binding, which allows runtime dynamic object creation and function invocation. With these two conditions met, PowerShell can dynamically interact with COM objects at runtime.
Our strategy looks like this:
As you can see in the last box, we perform manual inspection to look for functions with names that could be of interest, such as Execute, Exec, Shell, etc. These names often indicate potential command execution capabilities.
However, this approach has several limitations:
TypeLib requirement: Not all COM objects have a TypeLib, so many objects cannot be enumerated this way.
IDispatch requirement: Not all COM objects implement the IDispatch interface, which is required for PowerShell interaction.
Interface control: When you instantiate an object in PowerShell, you cannot choose which interface the instance will be tied to. If a COM class implements multiple interfaces, PowerShell will automatically select the one marked as [default] in the TypeLib. This means that other non-default interfaces, which may contain additional relevant functionality, such as command execution, could be overlooked.
Automation using C++ As you might expect, C++ is one of the languages that natively supports COM clients. Using C++, you can create instances of COM objects and call their functions via header files that define the interfaces.However, with this approach, we are not necessarily interested in calling functions directly. Instead, the goal is to check whether a specific COM object supports certain interfaces. The reasoning is that many interfaces have been found to contain functions that can be abused for command execution or other purposes.
This strategy primarily relies on an interface called IUnknown. All COM interfaces should inherit from this interface, and all COM classes should implement it.The IUnknown interface exposes three main functions. The most important is QueryInterface(), which is used to ask a COM object for a pointer to one of its interfaces.So, the strategy is to:
Enumerate COM classes in the system by reading CLSIDs under the HKEY_CLASSES_ROOT\CLSID key.
Check whether they support any known valuable interfaces. If they do, those classes may be leveraged for command execution or other useful functionality.
This method has several advantages:
No TypeLib dependency: Unlike PowerShell, this approach does not require the COM object to have a TypeLib.
Use of IUnknown: In C++, you can use the QueryInterface function from the base IUnknown interface to check if a particular interface is supported by a COM class.
No need for interface definitions: Even without knowing the exact interface structure, you can obtain a pointer to its virtual function table (vtable), typically cast as a void*. This is enough to confirm the existence of the interface and potentially inspect it further.
The figure below illustrates this strategy:
This approach is good in terms of automation because it eliminates the need for manual inspection. However, we are still only checking well-known interfaces commonly used for lateral movement, while potentially missing others.
Manual inspection using open-source tools
As you can see, automation can be difficult since it requires several prerequisites and, in many cases, still ends with a manual inspection. An alternative approach is manual inspection using a tool called OleViewDotNet, developed by James Forshaw. This tool allows you to:
List all COM classes in the system.
Create instances of those classes.
Check their supported interfaces.
Call specific functions.
Apply various filters for easier analysis.
Perform other inspection tasks.
Open-source tool for inspecting COM interfaces
One of the most valuable features of this tool is its naming visibility. OleViewDotNet extracts the names of interfaces and classes (when available) from the Windows Registry and displays them, along with any associated type libraries.
This makes manual inspection easier, since you can analyze the names of classes, interfaces, or type libraries and correlate them with potentially interesting functionality, for example, functions that could lead to command execution or persistence techniques.
Control Panel items as attack surfaces
Control Panel items allow users to view and adjust their computer settings. These items are implemented as DLLs that export the CPlApplet function and typically have the .cpl extension. Control Panel items can also be executables, but our research will focus on DLLs only.
Control Panel items
Attackers can abuse CPL files for initial access. When a user executes a malicious .cpl file (e.g., delivered via phishing), the system may be compromised – a technique mapped to MITRE ATT&CK T1218.002.
Adversaries may also modify the extensions of malicious DLLs to .cpl and register them in the corresponding locations in the registry.
Under HKEY_CURRENT_USER:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Under HKEY_LOCAL_MACHINE:
For 64-bit DLLs:
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
For 32-bit DLLs:
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
These locations matter depending on whether the Control Panel DLL needs to be available only to the currently logged-in user (HKCU) or to all users on the machine (HKLM). Note that the “Control Panel” subkey and its “Cpls” subkey under HKCU must be created manually, unlike the corresponding subkeys under HKLM, which the operating system creates automatically.
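To keep the locations above straight, here is a minimal helper sketch; the function name is our own, and the paths are the ones listed above:

```python
# Sketch: compute the registry path where a Control Panel DLL would
# need to be registered, per the locations described above.
def cpls_registry_path(scope: str, bitness: int = 64) -> str:
    """Return the Cpls registry path for a given scope and DLL bitness.

    scope   -- "user" (HKCU, current user only) or "machine" (HKLM, all users)
    bitness -- 64 or 32; only relevant under HKLM, where 32-bit DLLs
               live under the WOW6432Node redirection key
    """
    if scope == "user":
        # The "Control Panel\Cpls" subkeys under HKCU must be created
        # manually; they do not exist by default.
        return r"HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
    if scope == "machine":
        if bitness == 32:
            return (r"HKLM\Software\WOW6432Node\Microsoft\Windows"
                    r"\CurrentVersion\Control Panel\Cpls")
        return r"HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls"
    raise ValueError("scope must be 'user' or 'machine'")
```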
Once registered, the DLL (CPL file) will load every time the Control Panel is opened, enabling persistence on the victim’s system.
It’s worth noting that even DLLs that do not comply with the CPL specification, do not export CPlApplet, or do not have the .cpl extension can still be executed via their DllEntryPoint function if they are registered under the registry keys listed above.
There are multiple ways to execute Control Panel items. The most common is via rundll32.exe:
rundll32.exe shell32.dll,Control_RunDLL file.cpl
This calls the Control_RunDLL function from shell32.dll, passing the CPL file as an argument. Everything inside the CPlApplet function will then be executed.
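For illustration, that invocation can be built programmatically; a minimal sketch (the helper name is an assumption of ours, and the filename is just a placeholder):

```python
# Sketch: build the rundll32 command line that executes a CPL file
# through shell32.dll's Control_RunDLL export.
def control_rundll_cmdline(cpl_path: str) -> str:
    # rundll32 parses "<dll>,<export>" as its first argument and hands
    # the remainder of the command line to that export.
    return f'rundll32.exe shell32.dll,Control_RunDLL "{cpl_path}"'
```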
However, if the CPL file has been registered in the registry as shown earlier, then every time the Control Panel is opened, the file is loaded into memory through the COM Surrogate process (dllhost.exe):
COM Surrogate process loading the CPL file
What happened here is that the Control Panel, acting as a COM client, used a COM object to load these CPL files. We will talk about this COM object in more detail later.
The COM Surrogate process was designed to host COM server DLLs in a separate process rather than loading them directly into the client process’s address space. This isolation improves stability: a crashing server DLL no longer takes the client down with it. By default, an in-process COM server DLL is loaded into the client process itself; surrogate hosting can be enabled for a COM object through its registry configuration.
‘DCOMing’ through Control Panel items
While following the manual approach of enumerating COM/DCOM objects that could be useful for lateral movement, I came across a COM object called COpenControlPanel, which is exposed through shell32.dll and has the CLSID {06622D85-6856-4460-8DE1-A81921B41C4B}. This object exposes multiple interfaces, one of which is IOpenControlPanel with IID {D11AD862-66DE-4DF4-BF6C-1F5621996AF1}.
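For reference, the identifiers above map directly to the registry keys a researcher would inspect; a minimal sketch (the helper names and the GUID format check are our own, not a Windows API):

```python
import re

# Identifiers quoted in the text above.
CLSID_COPENCONTROLPANEL = "{06622D85-6856-4460-8DE1-A81921B41C4B}"
IID_IOPENCONTROLPANEL = "{D11AD862-66DE-4DF4-BF6C-1F5621996AF1}"

# Registry-style GUID: {8-4-4-4-12 hex digits}, braces included.
GUID_RE = re.compile(r"^\{[0-9A-F]{8}(-[0-9A-F]{4}){3}-[0-9A-F]{12}\}$")

def clsid_key(clsid: str) -> str:
    """Registry key describing a COM class (server DLL, AppID, etc.)."""
    if not GUID_RE.match(clsid.upper()):
        raise ValueError("not a GUID in registry format")
    return r"HKCR\CLSID" + "\\" + clsid

def interface_key(iid: str) -> str:
    """Registry key describing a COM interface (name, proxy/stub, typelib)."""
    if not GUID_RE.match(iid.upper()):
        raise ValueError("not a GUID in registry format")
    return r"HKCR\Interface" + "\\" + iid
```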
IOpenControlPanel interface in the OleViewDotNet output
I immediately thought of its potential to compromise Control Panel items, so I wanted to check which functions were exposed by this interface. Unfortunately, neither the interface nor the COM class has a type library.
COpenControlPanel interfaces without TypeLib
Normally, checking the interface definition would require reverse engineering, so at first, it looked like we needed to take a different research path. However, it turned out that the IOpenControlPanel interface is documented on MSDN, and according to the documentation, it exposes several functions. One of them, called Open, allows a specified Control Panel item to be opened using its name as the first argument.
Full type and function definitions are provided in the shobjidl_core.h Windows header file.
Open function exposed by IOpenControlPanel interface
It’s worth noting that in newer versions of Windows (e.g., Windows Server 2025 and Windows 11), Microsoft has removed interface names from the registry, which means they can no longer be identified through OleViewDotNet.
COpenControlPanel interfaces without names
Returning to the COpenControlPanel COM object, I found that the Open function can trigger a DLL to be loaded into memory if it has been registered in the registry. For the purposes of this research, I created a DLL whose DllEntryPoint simply spawns a message box, registered it under HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls, and then created a simple C++ COM client to call the Open function on this interface.
As expected, the DLL was loaded into memory. It was hosted in the same way that it would be if the Control Panel itself was opened: through the COM Surrogate process (dllhost.exe). Using Process Explorer, it was clear that dllhost.exe loaded my DLL while simultaneously hosting the COpenControlPanel object along with other COM objects.
COM Surrogate loading a custom DLL and hosting the COpenControlPanel object
Based on my testing, I made the following observations:
The DLL that needs to be registered does not necessarily have to be a .cpl file; any DLL with a valid entry point will be loaded.
The Open() function accepts the name of a Control Panel item as its first argument. However, it appears that even if a random string is supplied, it still causes all DLLs registered in the relevant registry location to be loaded into memory.
Now, what if we could trigger this COM object remotely? In other words, what if it is not just a COM object but also a DCOM object? To verify this, we checked the AppID of the COpenControlPanel object using OleViewDotNet.
COpenControlPanel object in OleViewDotNet
Both the launch and access permissions are empty, which means the object will follow the system’s default DCOM security policy. By default, members of the Administrators group are allowed to launch and access the DCOM object.
Based on this, we can build a remote strategy. First, upload the “malicious” DLL, then use the Remote Registry service to register it in the appropriate registry location. Finally, use a trigger acting as a DCOM client to remotely invoke the Open() function, causing our DLL to be loaded. The diagram below illustrates the flow of this approach.
Malicious DLL loading using DCOM
The trigger can be written in either C++ or Python, for example, using Impacket. I chose Python because of its flexibility. The trigger itself is straightforward: we define the DCOM class, the interface, and the function to call. The full code example can be found here.
Once the trigger runs, the behavior will be the same as when executing the COM client locally: our DLL will be loaded through the COM Surrogate process (dllhost.exe).
As you can see, this technique not only achieves command execution but also provides persistence. It can be triggered in two ways: when a user opens the Control Panel or remotely at any time via DCOM.
Detection
The first step in detecting such activity is to check whether any Control Panel items have been registered under the registry paths listed earlier:
HKCU\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
HKLM\Software\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
HKLM\Software\WOW6432Node\Microsoft\Windows\CurrentVersion\Control Panel\Cpls
Although common best practices and Windows security research advise monitoring only the first (HKCU) subkey, thorough coverage requires monitoring all of these locations.
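As a sketch of such a check, assuming the values under a Cpls key have already been collected into a name-to-path mapping, the heuristics below are illustrative assumptions, not a complete detection rule:

```python
# Sketch of the registry check described above: given the values found
# under a Cpls key (value name -> DLL path), flag entries that look
# suspicious. The extension and directory heuristics are assumptions.
def flag_cpls_entries(values: dict) -> list:
    """Return value names whose registered path looks suspicious."""
    flagged = []
    for name, path in values.items():
        p = path.lower()
        # Non-.cpl DLLs still get loaded via DllEntryPoint (see above),
        # so a registered entry without the .cpl extension is notable.
        wrong_ext = not p.endswith(".cpl")
        # CPLs registered from user-writable locations are unusual.
        user_writable = any(
            d in p
            for d in ("\\appdata\\", "\\temp\\", "\\downloads\\", "\\public\\")
        )
        if wrong_ext or user_writable:
            flagged.append(name)
    return flagged
```

A real detection would also resolve the DLL on disk, check its signature, and correlate with dllhost.exe activity as described below.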
In addition, monitoring dllhost.exe (COM Surrogate) for unusual COM objects such as COpenControlPanel can provide indicators of malicious activity.
Finally, it is always recommended to monitor Remote Registry usage because it is commonly abused in many types of attacks, not just in this scenario.
Conclusion
In conclusion, I hope this research has clarified yet another attack vector and emphasized the importance of implementing hardening practices. Below are a few closing points for security researchers to take into account:
As shown, DCOM represents a large attack surface. Windows exposes many DCOM classes, a significant number of which lack type libraries – meaning reverse engineering can reveal additional classes that may be abused for lateral movement.
Changing registry values to register malicious CPLs is questionable practice from a red-teaming ethics standpoint, and defender products tend to monitor common persistence paths. However, Control Panel applets can be registered in multiple registry locations, so there is always a gap that can be exploited.
Bitness also matters. On x64 systems, loading a 32-bit DLL will spawn a 32-bit COM Surrogate process (dllhost.exe *32). This is unusual on 64-bit hosts and therefore serves as a useful detection signal for defenders and an interesting red flag for red teamers to consider.
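The bitness signal above can be sketched as a simple filter over a process listing; the input format and function name are our own assumptions, standing in for what a tool like Process Explorer would show:

```python
# Sketch of the bitness signal: on an x64 host, a 32-bit COM Surrogate
# (dllhost.exe *32) is unusual and worth flagging. Input is a list of
# (process_name, is_32bit) tuples -- an assumed, simplified format.
def find_wow64_dllhost(processes: list, host_is_x64: bool = True) -> list:
    """Return indices of 32-bit dllhost.exe processes on a 64-bit host."""
    if not host_is_x64:
        return []  # on a 32-bit host every process is 32-bit; no signal
    return [
        i
        for i, (name, is32) in enumerate(processes)
        if name.lower() == "dllhost.exe" and is32
    ]
```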
In November 2025, Kaspersky experts uncovered a new stealer named Stealka, which targets Windows users’ data. Attackers are using Stealka to hijack accounts, steal cryptocurrency, and install a crypto miner on their victims’ devices. Most frequently, this infostealer disguises itself as game cracks, cheats and mods.
Here’s how the attackers are spreading the stealer, and how you can protect yourself.
How Stealka spreads
A stealer is a type of malware that collects confidential information stored on the victim’s device and sends it to the attackers’ server. Stealka is primarily distributed via popular platforms like GitHub, SourceForge, Softpedia, sites.google.com, and others, disguised as cracks for popular software, or cheats and mods for games. For the malware to be activated, the user must run the file manually.
Here’s an example: a malicious Roblox mod published on SourceForge.
Attackers exploited SourceForge, a legitimate website, to upload a mod containing Stealka
And here’s one on GitHub posing as a crack for Microsoft Visio.
A pirated version of Microsoft Visio containing the stealer, hosted on GitHub
Sometimes, however, attackers go a step further (and possibly use AI tools) to create entire fake websites that look quite professional. Without the help of a robust antivirus, the average user is unlikely to realize anything is amiss.
A fake website pretending to offer Roblox scripts
Admittedly, the cracks and software advertised on these fake sites can sometimes look a bit off. For example, here the attackers are offering a download for Half-Life 3, while at the same time claiming it’s not actually a game but some kind of “professional software solution designed for Windows”.
Malware disguised as Half-Life 3, which is also somehow “a professional software solution designed for Windows”. A lot of professionals clearly spent their best years on this software…
The truth is that both the page title and the filename are just bait. The attackers simply use popular search terms to lure users into downloading the malware. The actual file content has nothing to do with what’s advertised — inside, it’s always the same infostealer.
The site also claims that all hosted files are scanned for viruses. When the user decides to download, say, a pirated game, the site displays a banner saying the file is being checked by various antivirus engines. Of course, no such scanning actually takes place; the attackers are merely creating an illusion of trustworthiness.
The pirated file pretends to be scanned by a dozen antivirus tools
What makes Stealka dangerous
Stealka has a fairly extensive arsenal of capabilities, but its prime target is data from browsers built on the Chromium and Gecko engines. This puts over a hundred different browsers at risk, including popular ones like Chrome, Firefox, Opera, Yandex Browser, Edge, Brave, as well as many, many others.
Browsers store a huge amount of sensitive information, which attackers use to hijack accounts and continue their attacks. The main targets are autofill data, such as sign-in credentials, addresses, and payment card details. We’ve warned repeatedly that saving passwords in your browser is risky — attackers can extract them in seconds. Cookies and session tokens are perhaps even more valuable to hackers, as they can allow criminals to bypass two-factor authentication and hijack accounts without entering the password.
The story doesn’t end with the account hack. Attackers use these compromised accounts to spread the malware further. For example, we discovered the stealer in a GTAV mod posted on a dedicated site by an account that had previously been compromised.
Beyond stealing browser data, Stealka also targets the settings and databases of 115 browser extensions for crypto wallets, password managers, and 2FA services. Here are some of the most popular extensions now at risk:
Finally, the stealer also downloads local settings, account data, and service files from a wide variety of applications:
Crypto wallets. Wallet configurations may contain encrypted private keys, seed-phrase data, wallet file paths, and encryption parameters. That’s enough to at least make an attempt at stealing your cryptocurrency. At risk are 80 wallet applications, including Binance, Bitcoin, BitcoinABC, Dogecoin, Ethereum, Exodus, Mincoin, MyCrypto, MyMonero, Monero, Nexus, Novacoin, Solar, and many others.
Messaging apps. Messaging app service files store account data, device identifiers, authentication tokens, and the encryption parameters for your conversations. In theory, a malicious actor could gain access to your account and read your chats. At risk are Discord, Telegram, Unigram, Pidgin, Tox, and others.
Password managers. Even if the passwords themselves are encrypted, the configuration files often contain information that makes cracking the vault significantly easier: encryption parameters, synchronization tokens, and details about the vault version and structure. At risk are 1Password, Authy, Bitwarden, KeePass, LastPass, and NordPass.
Email clients. These are where your account credentials, mail server connection settings, authentication tokens, and local copies of your emails can be found. With access to your email, an attacker will almost certainly attempt to reset passwords for your other services. At risk are Gmail Notifier Pro, Claws, Mailbird, Outlook, Postbox, The Bat!, Thunderbird, and TrulyMail.
Note-taking apps. Instead of shopping lists or late-night poetry, some users store information in their notes that has no business being there, like seed phrases or passwords. At risk are NoteFly, Notezilla, SimpleStickyNotes, and Microsoft StickyNotes.
Gaming services and clients. The local files of gaming platforms and launchers store account data, linked service information, and authentication tokens. At risk are Steam, Roblox, Intent Launcher, Lunar Client, TLauncher, Feather Client, Meteor Client, Impact Client, Badlion Client, and WinAuth for battle.net.
VPN clients. By gaining access to configuration files, attackers can hijack the victim’s VPN account to mask their own malicious activities. At risk are AzireVPN, OpenVPN, ProtonVPN, Surfshark, and WindscribeVPN.
That’s an extensive list — and we haven’t even named all of them! In addition to local files, this infostealer also harvests general system data: a list of installed programs, the OS version and language, username, computer hardware information, and miscellaneous settings. And as if that weren’t enough, the malware also takes screenshots.
How to protect yourself from Stealka and other infostealers
Secure your device with reliable antivirus software. Even downloading files from legitimate websites is no guarantee of safety — attackers leverage trusted platforms to distribute stealers all the time. Kaspersky Premium detects malware on your computer in time and alerts you to the threat.
Don’t store sensitive information in browsers. It’s handy — no one can argue with that. But unfortunately browsers aren’t the most secure environment for your data. Sign-in credentials, bank card details, secret notes, and other confidential information are better kept in a securely encrypted format in Kaspersky Password Manager, which is immune to the exploits used by Stealka.
Enable two-factor authentication or use backup codes wherever possible. Two-factor authentication (2FA) makes life much harder for attackers, while backup codes help you regain access to your critical accounts if compromised. Just be sure not to store backup codes in text documents, notes, or your browser. For all your backup codes and 2FA tokens, use a reliable password manager.
Curious what other stealers are out there, and what they’re capable of? Read more in our other posts:
Admit it: you’ve been meaning to jump on the latest NFT reincarnation — Telegram Gifts — but just haven’t gotten around to it. It’s the hottest trend right now. Developers are churning out collectible images in partnership with celebs like Snoop Dogg. All your friends’ profiles are already decked out with these modish pictures, and you’re dying to hop on this hype train — but pay as little as possible for it.
And then it happens — a stranger messages you privately with a generous offer: a chance to snag a couple of these digital gifts — with no investment required. A bot that looks completely legit is running an airdrop. In the world of NFTs, an airdrop is a promotional stunt where a small number of new crypto assets are given away for free. The buzzword has been adopted on Telegram, thanks to the crypto nature of these gifts and the NFT mechanics running under the hood.
Limited time offer: a marketer’s favorite trick… and a scammer’s tool
They’re offering you these gift images for free — or so they say. You could later attach them to your profile or sell them for Telegram’s native currency, Toncoin. You don’t even have to tap an external link. Just hit a button in the message, launch a Mini App right inside Telegram itself, and enter your login credentials. And then… your account immediately gets hijacked. You won’t get any gifts, and overall, you’ll be left with anything but a celebratory feeling.
This is the first of the screens where, by filling in the fields, you supposedly receive a gift, but actually lose access to your Telegram account
Today, we break down a phishing scheme that exploits Telegram’s built-in Mini Apps, and share tips to help you avoid falling for these attacks.
How the new phishing scheme works
The principle of classic phishing is straightforward: the user gets a link to a fake website that mimics a legitimate sign-in form. When the victim enters their credentials, this data goes straight to the scammer. However, phishing tactics are constantly evolving, and this new attack method is far more insidious.
The bad actors create phishing Mini Apps directly inside Telegram. These appear as standard web pages but are embedded within the messaging app’s interface instead of opening in an external browser. To the user, these apps look completely legitimate. After all, they run within the official Telegram app itself.
To make it even more convincing, scammers often add a plausible-sounding limit on gifts per user
This leads the victim to think, “If this app runs inside Telegram, there must be some kind of vetting process for these apps. Surely they wouldn’t let an obvious scam through?” In practice, it turns out that’s not the case at all.
How is this scheme even a thing?
A core security issue with Telegram Mini Apps is that the platform does almost no vetting before an app goes live. This is a world apart from the strict review processes used by Google Play and the App Store — although even there, obvious malware occasionally slips through.
On Telegram, it’s far easier for bad actors. Essentially, anyone who wishes to create and launch a Mini App can do so: Telegram reviews neither the code, nor the functionality, nor the developer’s intent. This turns a security flaw in a messaging service with nearly a billion users into a global-scale problem. To make matters worse, moderation of Mini Apps on Telegram is entirely reactive, meaning action is taken only after users start complaining or law enforcement gets involved.
This is a global operation, with phishing lures being distributed simultaneously in both Russian and English. However, the Russian version gives away a tell-tale sign of the scammers’ haste and lack of polish. They forgot to remove a clarification question from the AI that generated the text: “Do you need bolder, more official, or humorous options?”
In this case, the bait was “gifts” from UFC fighters: a giveaway of “papakhas” — digital gift images of the traditional Dagestani hat released by Telegram in partnership with Khabib Nurmagomedov. An auction for these items did take place, with Pavel Durov even posting about it on his X and Telegram (Khabib reposted these announcements but later deleted them after the auction ended). However, there were only 29 000 of these “papakhas” released, which wasn’t enough to satisfy all the eager fans. Scammers seized on the opportunity, assuring fans they could get the exclusive items for free. The phishing campaign was a targeted one — focusing on users who’d been active on the athlete’s channel.
How the scammers lull their victims
The criminals leveraged the name of the popular Portals platform — a legitimate service for games, apps, and entertainment within Telegram. They created a series of Mini Apps that were visually almost indistinguishable from the real ones, and promoted them as free giveaways — airdrops.
To add a veneer of authenticity, the scammers even listed the official Telegram channel for Portals in the phishing Mini App’s profile. However, the legitimate Portals Market bot has a different username: @portals
That said, the scam campaigns themselves show signs of being rushed and cutting design and copywriting costs — with obvious signs of AI involvement. Some of the messages contain leftover text fragments clearly generated by a neural network, which the scammers either forgot or couldn’t be bothered to edit.
How to protect your Telegram account from being hacked
The golden security rules are simple: stay vigilant, and learn the key hallmarks of these attacks:
Verify the source. If you receive a link promising a giveaway from a celebrity or even Telegram itself but sent from an unfamiliar account or a dubious group, don’t click. Cross-check through the celebrity or company’s official channel to see if they’re actually running a promo like that.
Inspect the account verification badge. Ascertain that the blue checkmark is real and not just an emoji status or part of the profile name. You can verify this by simply tapping that checkmark icon in the profile. If it’s a Premium emoji status, Telegram will explicitly tell you so. If a checkmark emoji is simply added to the profile name, tapping it doesn’t do anything. But if the account is genuinely verified, tapping the blue checkmark will bring up an official confirmation message from Telegram.
Don’t be in a rush to authenticate in Mini Apps. Legitimate Telegram apps typically don’t require you to sign in again through a form inside the Mini App. If you’re prompted to enter your phone number or a verification code, it’s likely a phishing attempt.
Look for signs of AI-generated text or design. Weird grammar, unnatural phrasing, or leftover neural network prompts within a message are a red flag. Scammers frequently use AI-powered generation to churn out text quickly and cheaply.
Turn on two-step verification (your Telegram password). Do this right now in Settings → Privacy and Security → Two-Step Verification. Even if a scammer manages to get your phone number and SMS code, they won’t be able to access your account without this password. Obviously, never share your password with anyone — it’s meant only for you to sign in to your Telegram account.
Use a passkey to secure your account. A recent Telegram update added the ability to securely sign in with a passkey. We’ve covered using passkeys with popular services and the associated caveats in detail. A passkey makes it nearly impossible for a malicious actor to steal your account. You can set one up in Settings → Privacy and Security → Passkeys.
Store your password and passkey in a password manager. If you’ve secured your account with both a password and a passkey, remember that a weak, reused, or compromised password can still be the proverbial “spare key under the mat” for attackers — even if the “front door” is locked with a passkey. Therefore, we recommend creating a strong, unique password for Telegram and storing it — along with your passkey — in Kaspersky Password Manager. This keeps your credentials and keys available across all your devices.
What to do if your Telegram account was already stolen
The key is keeping calm and acting swiftly. You have just 24 hours to reclaim your account, or you risk losing it permanently. Follow the step-by-step guide to restoring access in our post What to do if your Telegram account is hacked.
Finally, a reminder that has become our classic mantra: if an offer looks too good to be true, it almost certainly is. Always verify information through official channels, and never enter your passwords or passkeys into unofficial apps or forms — even if they look legit. Stay vigilant and stay safe.
Want more tips on securing your messenger accounts and chats? Check out our related posts: