Criminals are using AI website builders to clone major brands

Vercel’s AI tool v0 was abused by cybercriminals to create a Malwarebytes lookalike website.

Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam.

One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites.

Registering a plausible-looking domain is cheap and fast, especially through registrars and resellers that do little or no upfront vetting. Once attackers have a name that looks close enough to the real thing, they can use AI-powered tools to copy layouts, colors, and branding elements, and generate product pages, sign-up flows, and FAQs that look “on brand.”
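To illustrate how little a lookalike name needs to differ, here is a minimal Python sketch of the kind of heuristic defenders use to flag such domains. It is not from the article; the function name and threshold are assumptions, and real brand-protection pipelines also normalize homoglyphs and check every domain label:

```python
from difflib import SequenceMatcher

def looks_like_brand(domain: str, brand: str, threshold: float = 0.75) -> bool:
    """Heuristic: does the leftmost label of a domain embed or resemble a brand?"""
    label = domain.lower().split(".")[0]   # "installmalwarebytes.org" -> "installmalwarebytes"
    brand = brand.lower()
    if brand in label:                     # brand name embedded outright
        return True
    # fall back to a similarity ratio to catch typosquats like "malwerbytes"
    return SequenceMatcher(None, label, brand).ratio() >= threshold
```

Here `looks_like_brand("installmalwarebytes.org", "malwarebytes")` returns `True` via the substring check, while an unrelated domain falls below the similarity threshold.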

A flood of fake “official” sites

Data from recent holiday seasons shows just how routine large-scale domain abuse has become.

Over a three‑month period leading into the 2025 shopping season, researchers observed more than 18,000 holiday‑themed domains with lures like “Christmas,” “Black Friday,” and “Flash Sale,” with at least 750 confirmed as malicious and many more still under investigation. In the same window, about 19,000 additional domains were registered explicitly to impersonate major retail brands, nearly 3,000 of which were already hosting phishing pages or fraudulent storefronts.

These sites are used for everything from credential harvesting and payment fraud to malware delivery disguised as “order trackers” or “security updates.”

Attackers then boost visibility using SEO poisoning, ad abuse, and comment spam, nudging their lookalike sites into search results and promoting them in social feeds right next to the legitimate ones. From a user’s perspective, especially on mobile where there is no way to hover over a link to preview it, that fake site can be only a typo or a tap away.

When the impersonation hits home

A recent example shows how low the barrier to entry has become.

We were alerted to a site at installmalwarebytes[.]org that masqueraded from logo to layout as a genuine Malwarebytes site.

Close inspection revealed that the HTML carried a meta tag value pointing to v0 by Vercel, an AI-assisted app and website builder.

Built by v0

The tool lets users paste an existing URL into a prompt to automatically recreate its layout, styling, and structure—producing a near‑perfect clone of a site in very little time.
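Defenders can spot builder output the same way, by reading a page’s generator meta tag. Below is a minimal Python sketch using the standard library’s HTML parser; the tag value `"v0.dev"` is illustrative, not quoted from the cloned site:

```python
from html.parser import HTMLParser

class GeneratorSniffer(HTMLParser):
    """Collects <meta name="generator" content="..."> values from a page."""
    def __init__(self):
        super().__init__()
        self.generators = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            d = dict(attrs)
            if d.get("name", "").lower() == "generator":
                self.generators.append(d.get("content", ""))

# Illustrative markup; the real tag value on the fake site may differ.
page = '<html><head><meta name="generator" content="v0.dev"></head></html>'
sniffer = GeneratorSniffer()
sniffer.feed(page)
# sniffer.generators now holds the builder fingerprint: ["v0.dev"]
```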

The history of the imposter domain shows an incremental evolution into abuse.

Registered in 2019, the site did not initially contain any Malwarebytes branding. In 2022, the operator began layering in Malwarebytes branding while publishing Indonesian‑language security content. This likely helped with search reputation while normalizing the brand look to visitors. Later, the site went blank, with no public archive records for 2025, only to resurface as a full-on clone backed by AI‑assisted tooling.

Traffic did not arrive by accident. Links to the site appeared in comment spam and injected links on unrelated websites, giving users the impression of organic references and driving them toward the fake download pages.

Payment flows were equally opaque. The fake site used PayPal for payments, but the integration hid the merchant’s name and logo from the user-facing confirmation screens, leaving only the buyer’s own details visible. That allowed the criminals to accept money while revealing as little about themselves as possible.

PayPal module

Behind the scenes, historical registration data pointed to an origin in India and to a hosting IP (209.99.40[.]222) associated with domain parking and other dubious uses rather than normal production hosting.

Combined with the AI‑powered cloning and the evasive payment configuration, it painted a picture of low‑effort, high‑confidence fraud.

AI website builders as force multipliers

The installmalwarebytes[.]org case is not an isolated misuse of AI‑assisted builders. It fits into a broader pattern of attackers using generative tools to create and host phishing sites at scale.

Threat intelligence teams have documented abuse of Vercel’s v0 platform to generate fully functional phishing pages that impersonate sign‑in portals for a variety of brands, including identity providers and cloud services, all from simple text prompts. Once the AI produces a clone, criminals can tweak a few links to point to their own credential‑stealing backends and go live in minutes.

Research into AI’s role in modern phishing shows that attackers are leaning heavily on website generators, writing assistants, and chatbots to streamline the entire kill chain—from crafting persuasive copy in multiple languages to spinning up responsive pages that render cleanly across devices. One analysis of AI‑assisted phishing campaigns found that roughly 40% of observed abuse involved website generation services, 30% involved AI writing tools, and about 11% leveraged chatbots, often in combination. This stack lets even low‑skilled actors produce professional-looking scams that used to require specialized skills or paid kits.

Growth first, guardrails later

The core problem is not that AI can build websites. It’s that the incentives around AI platform development are skewed. Vendors are under intense pressure to ship new capabilities, grow user bases, and capture market share, and that pressure often runs ahead of serious investment in abuse prevention.

As Malwarebytes General Manager Mark Beare put it:

“AI-powered website builders like Lovable and Vercel have dramatically lowered the barrier for launching polished sites in minutes. While these platforms include baseline security controls, their core focus is speed, ease of use, and growth—not preventing brand impersonation at scale. That imbalance creates an opportunity for bad actors to move faster than defenses, spinning up convincing fake brands before victims or companies can react.”

Site generators allow cloned branding of well‑known companies with no verification, publishing flows skip identity checks, and moderation either fails quietly or only reacts after an abuse report. Some builders let anyone spin up and publish a site without even confirming an email address, making it easy to burn through accounts as soon as one is flagged or taken down.

To be fair, there are signs that some providers are starting to respond by blocking specific phishing campaigns after disclosure or by adding limited brand-protection controls. But these are often reactive fixes applied after the damage is done.

Meanwhile, attackers can move to open‑source clones or lightly modified forks of the same tools hosted elsewhere, where there may be no meaningful content moderation at all.

In practice, the net effect is that AI companies benefit from the growth and experimentation that comes with permissive tooling, while the consequences are left to victims and defenders.

We have blocked the domain in our web protection module and requested a domain and vendor takedown.

How to stay safe

End users cannot fix misaligned AI incentives, but they can make life harder for brand impersonators. Even when a cloned website looks convincing, there are red flags to watch for:

  • Before completing any payment, always review the “Pay to” details or transaction summary. If no merchant is named, back out and treat the site as suspicious.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.
  • Do not follow links posted in comments, on social media, or in unsolicited emails to buy a product. Always use a verified and trusted method to reach the vendor.

If you come across a fake Malwarebytes website, please let us know.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your and your family’s personal information by using identity protection.

Rewiring Democracy Ebook is on Sale

I just noticed that the ebook version of Rewiring Democracy is on sale for $5 on Amazon, Apple Books, Barnes & Noble, Books A Million, Google Play, Kobo, and presumably everywhere else in the US. I have no idea how long this will last.

Also, Amazon has a coupon that brings the hardcover price down to $20. You’ll see the discount at checkout.

February 2026 Patch Tuesday includes six actively exploited zero-days

Microsoft releases important security updates on the second Tuesday of every month, known as “Patch Tuesday.” This month’s updates fix 59 Microsoft CVEs, including six zero-days.

Let’s have a quick look at these six actively exploited zero-days.

Windows Shell Security Feature Bypass Vulnerability

CVE-2026-21510 (CVSS score 8.8 out of 10) is a security feature bypass in the Windows Shell. A protection mechanism failure allows an attacker to circumvent Windows SmartScreen and similar prompts once they convince a user to open a malicious link or shortcut file.

The vulnerability is exploited over the network but still requires user interaction. The victim must be socially engineered into launching the booby‑trapped shortcut or link for the bypass to trigger. Successful exploitation lets the attacker suppress or evade the usual “are you sure?” security dialogs for untrusted content, making it easier to deliver and execute further payloads without raising user suspicion.

MSHTML Framework Security Feature Bypass Vulnerability

CVE-2026-21513 (CVSS score 8.8 out of 10) affects the MSHTML Framework, the Trident rendering engine used by Internet Explorer and by applications that embed web rendering. It is classified as a protection mechanism failure that results in a security feature bypass over the network.

A successful attack requires the victim to open a malicious HTML file or a crafted shortcut (.lnk) that leverages MSHTML for rendering. When opened, the flaw allows an attacker to bypass certain security checks in MSHTML, potentially removing or weakening normal browser or Office sandbox or warning protections and enabling follow‑on code execution or phishing activity.

Microsoft Word Security Feature Bypass Vulnerability

CVE-2026-21514 (CVSS score 5.5 out of 10) affects Microsoft Word. It relies on untrusted inputs in a security decision, leading to a local security feature bypass.  

An attacker must persuade a user to open a malicious Word document to exploit this vulnerability. If exploited, the untrusted input is processed incorrectly, potentially bypassing Word’s defenses for embedded or active content—leading to execution of attacker‑controlled content that would normally be blocked.

Desktop Window Manager Elevation of Privilege Vulnerability

CVE-2026-21519 (CVSS score 7.8 out of 10) is a local elevation‑of‑privilege vulnerability in Windows Desktop Window Manager caused by type confusion (a flaw where the system treats one type of data as another, leading to unintended behavior).

A locally authenticated attacker with low privileges and no required user interaction can exploit the issue to gain higher privileges. Exploitation must be done locally, for example via a crafted program or exploit chain stage running on the target system. An attacker who successfully exploited this vulnerability could gain SYSTEM privileges.
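A 7.8 local elevation-of-privilege score typically corresponds to the CVSS 3.1 vector AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H. The advisory’s exact vector is not quoted above, so treat that as an assumed, representative vector; the sketch below implements the CVSS 3.1 base-score formula for scope-unchanged vectors to show how the number falls out:

```python
# CVSS 3.1 metric weights (Scope: Unchanged) for an ASSUMED, representative
# local-EoP vector: AV:L/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H
WEIGHTS = {
    "AV:L": 0.55, "AC:L": 0.77, "PR:L": 0.62, "UI:N": 0.85,
    "C:H": 0.56, "I:H": 0.56, "A:H": 0.56,
}

def roundup(x: float) -> float:
    """CVSS 3.1 Roundup: smallest value to one decimal place >= x."""
    i = int(round(x * 100000))
    return i / 100000 if i % 10000 == 0 else (i // 10000 + 1) / 10

def base_score_scope_unchanged(av, ac, pr, ui, c, i, a):
    iss = 1 - (1 - c) * (1 - i) * (1 - a)          # impact sub-score
    impact = 6.42 * iss
    exploitability = 8.22 * av * ac * pr * ui
    if impact <= 0:
        return 0.0
    return roundup(min(impact + exploitability, 10))

score = base_score_scope_unchanged(
    WEIGHTS["AV:L"], WEIGHTS["AC:L"], WEIGHTS["PR:L"], WEIGHTS["UI:N"],
    WEIGHTS["C:H"], WEIGHTS["I:H"], WEIGHTS["A:H"],
)
# score == 7.8 for this vector, matching the rating above
```

Note how the local attack vector (0.55) and low-privilege requirement (0.62) pull the exploitability term down, while full confidentiality, integrity, and availability impact keeps the total at 7.8.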

Windows Remote Access Connection Manager Denial of Service Vulnerability

CVE-2026-21525 (CVSS score 6.2 out of 10) is a denial‑of‑service vulnerability in the Windows Remote Access Connection Manager service (RasMan).

An unauthenticated local attacker can trigger the flaw with low attack complexity, leading to a high impact on availability but no direct impact on confidentiality or integrity. This means they could crash the service or potentially the system, but not elevate privileges or execute malicious code.

Windows Remote Desktop Services Elevation of Privilege Vulnerability

CVE-2026-21533 (CVSS score 7.8 out of 10) is an elevation‑of‑privilege vulnerability in Windows Remote Desktop Services, caused by improper privilege management.

A local authenticated attacker with low privileges, and no required user interaction, can exploit the flaw to escalate privileges to SYSTEM and fully compromise confidentiality, integrity, and availability on the affected system. Successful exploitation typically involves running attacker‑controlled code on a system with Remote Desktop Services present and abusing the vulnerable privilege management path.

Azure vulnerabilities

Azure users are also advised to take note of two critical vulnerabilities, each with a CVSS rating of 9.8.

How to apply fixes and check you’re protected

These updates fix security problems and keep your Windows PC protected. Here’s how to make sure you’re up to date:

1. Open Settings

  • Click the Start button (the Windows logo at the bottom left of your screen).
  • Click on Settings (it looks like a little gear).

2. Go to Windows Update

  • In the Settings window, select Windows Update (usually at the bottom of the menu on the left).

3. Check for updates

  • Click the button that says Check for updates.
  • Windows will search for the latest Patch Tuesday updates.
  • If you selected automatic updates earlier, you may see the recent updates listed under Update history.
  • Or you may see a Restart required message, which means all you have to do is restart your system and you’re done updating.
  • If not, continue with the steps below.

4. Download and Install

  • If updates are found, they’ll start downloading right away. Once complete, you’ll see a button that says Install or Restart now.
  • Click Install if needed and follow any prompts. Your computer will usually need a restart to finish the update. If it does, click Restart now.

5. Double-check you’re up to date

  • After restarting, go back to Windows Update and check again. If it says You’re up to date, you’re all set!

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


Discord will limit profiles to teen-appropriate mode until you verify your age

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. Changing a profile to “full access” requires verification by Discord’s age inference model—a new system that runs in the background to help determine whether an account belongs to an adult, without always requiring users to verify their age.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • No speaking on server stages.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.

Note: All comments are moderated. Those including links and inappropriate language will be deleted. The rest must be approved by a moderator.


We don’t just report on threats—we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

  •  

Discord will limit profiles to teen-appropriate mode until you verify your age

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. Changing a profile to “full access” will require verification by Discord’s age inference model, a new system that runs in the background to help determine whether an account belongs to an adult without always requiring explicit age verification.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • No speaking on server stages.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.

Note: All comments are moderated. Those including links and inappropriate language will be deleted. The rest must be approved by a moderator.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

  •  

Man tricked hundreds of women into handing over Snapchat security codes

Fresh off a breathless Super Bowl Sunday, we’re less thrilled to bring you this week’s Weirdo Wednesday. Two stories caught our eye, both involving men who crossed clear lines and invaded women’s privacy online.

Last week, 27-year-old Kyle Svara of Oswego, Illinois admitted to hacking women’s Snapchat accounts across the US. Between May 2020 and February 2021, Svara harvested account security codes from 571 victims, leading to confirmed unauthorized access to at least 59 accounts.

Rather than attempting to break Snapchat’s robust encryption protocols, Svara targeted the account owners themselves with social engineering.

After gathering phone numbers and email addresses, he triggered Snapchat’s legitimate login process, which sent six-digit security codes directly to victims’ devices. Posing as Snapchat support, he then sent more than 4,500 anonymous messages via a VoIP texting service, claiming the codes were needed to “verify” or “secure” the account.

Svara showed particular interest in Snapchat’s My Eyes Only feature—a secondary four-digit PIN meant to protect a user’s most sensitive content. By persuading victims to share both codes, he bypassed two layers of security without touching a single line of code. He walked away with private material, including nude images.

Svara didn’t do this solely for his own kicks. He marketed himself as a hacker-for-hire, advertising on platforms like Reddit and offering access to specific accounts in exchange for money or trades.

Selling his services to others was how he got found out. Although Svara stopped hacking in early 2021, his legal day of reckoning followed the 2024 sentencing of one of his customers: Steve Waithe, a former track and field coach who worked at several high-profile universities including Northeastern. Waithe paid Svara to target student athletes he was supposed to mentor.

Svara also went after women in his home area of Plainfield, Illinois, and as far away as Colby College in Maine.

He now faces charges including identity theft, wire fraud, computer fraud, and making false statements to law enforcement about child sex abuse material. Sentencing is scheduled for May 18.

How to protect your Snapchat account

Never send someone your login details or secret codes, even if you think you know them.

This is also a good time to talk about passkeys.

Passkeys let you sign in without a password. Unlike one-time codes, they are cryptographically tied to your device and to the site you registered them with, so they can’t be phished or forwarded. Snapchat supports them, and they offer stronger protection than traditional multi-factor authentication, which is increasingly susceptible to sophisticated phishing attacks.
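
The origin binding is what does the heavy lifting. Here is a conceptual Python sketch of the idea: real passkeys use public-key signatures under WebAuthn, so the HMAC key below merely stands in for the device-bound private key, and the origins are made up. The point is that an assertion captured by a lookalike site is signed over the wrong origin and is useless against the real one.

```python
import hashlib
import hmac

# Conceptual sketch only: real WebAuthn uses public-key cryptography.
DEVICE_KEY = b"device-bound-private-key"  # never leaves the device

def sign_assertion(origin: str, challenge: bytes) -> bytes:
    # The browser, not the web page, supplies the origin that gets signed.
    return hmac.new(DEVICE_KEY, origin.encode() + challenge, hashlib.sha256).digest()

def verify(origin: str, challenge: bytes, signature: bytes) -> bool:
    return hmac.compare_digest(sign_assertion(origin, challenge), signature)

# A signature "phished" via a lookalike site is bound to the wrong origin,
# so the legitimate service rejects it:
phished = sign_assertion("https://evil.example", b"server-challenge")
assert not verify("https://accounts.snapchat.com", b"server-challenge", phished)
```

Contrast this with a six-digit SMS code, which carries no information about who asked for it: whoever holds the code can use it, which is exactly what Svara exploited.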

Bad guys with smart glasses

Unfortunately, hacking women’s social media accounts to steal private content isn’t new. But predators will always find a way to use smart tech in nefarious ways. Such is the case with new generations of ‘smart glasses’ powered by AI.

This week, CNN published stories from women who believed they were having private, flirtatious interactions with strangers—only to later discover the men were recording them using camera-equipped smart glasses and posting the footage online.

These clips are often packaged as “rizz” videos—short for “charisma”—where so-called manfluencers film themselves chatting up women in public, without consent, to build followings and sell “coaching” services.

The glasses, sold by companies like Meta, are supposed to be used for recording only with consent, and often display a light to show that they’re recording. In practice, that indicator is easy to hide.

When combined with AI-powered services to identify people, as researchers did in 2024, the possibilities become even more chilling. We’re unaware of any related cases coming to court, but suspect it’s only a matter of time.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

  •  


AI chat app leak exposes 300 million messages tied to 25 million users

An independent security researcher uncovered a major data breach affecting Chat & Ask AI, one of the most popular AI chat apps on Google Play and Apple App Store, with more than 50 million users.

The researcher claims to have accessed 300 million messages from over 25 million users due to an exposed database. These messages reportedly included, among other things, discussions of illegal activities and requests for suicide assistance.

Behind the scenes, Chat & Ask AI is a “wrapper” app that plugs into various large language models (LLMs) from other companies, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Users can choose which model they want to interact with.

The exposed data included user files containing their entire chat history, the models used, and other settings. But it also revealed data belonging to users of other apps developed by Codeway—the developer of Chat & Ask AI.

The vulnerability behind this data breach is a well-known and documented Firebase misconfiguration. Firebase is a cloud-based backend-as-a-service (BaaS) platform provided by Google that helps developers build, manage, and scale mobile and web applications.

Security researchers use the term to describe a set of preventable errors in how developers set up Google Firebase services, errors that leave backend data, databases, and storage buckets publicly accessible without authentication.

One of the most common Firebase misconfigurations is leaving Security Rules set to public. This allows anyone with the project URL to read, modify, or delete data without authentication.
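
As a rough illustration, the exposure check boils down to whether an unauthenticated read succeeds. The sketch below uses a hypothetical project ID and simplified status handling (real deployments may also live on regional hosts such as firebasedatabase.app), and should only ever be run against projects you own or are explicitly authorized to test.

```python
import urllib.request
from urllib.error import HTTPError

def classify(status: int) -> str:
    """Interpret the HTTP status of an unauthenticated read attempt."""
    if status == 200:
        return "exposed"    # data readable with no credentials at all
    if status in (401, 403):
        return "protected"  # Security Rules rejected the request
    return "unknown"

def probe(project_id: str) -> str:
    # A Realtime Database answers an unauthenticated GET on /.json
    # with data when its rules are public, hypothetical project ID here.
    url = f"https://{project_id}.firebaseio.com/.json"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return classify(resp.status)
    except HTTPError as err:
        return classify(err.code)
```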

This prompted the researcher, known as Harry, to build a tool that automatically scans apps on Google Play and the Apple App Store for this vulnerability, with astonishing results: 103 of the 200 iOS apps scanned reportedly had the issue, collectively exposing tens of millions of stored files.

To draw attention to the issue, Harry set up a website where users can see the apps affected by the issue. Codeway’s apps are no longer listed there, as Harry removes entries once developers confirm they have fixed the problem. Codeway reportedly resolved the issue across all of its apps within hours of responsible disclosure.

How to stay safe

Besides checking if any apps you use appear in Harry’s Firehound registry, there are a few ways to better protect your privacy when using AI chatbots.

  • Use private chatbots that don’t use your data to train the model.
  • Don’t rely on chatbots for important life decisions. They have no experience or empathy.
  • Don’t use your real identity when discussing sensitive subjects.
  • Keep shared information impersonal. Don’t use real names and don’t upload personal documents.
  • Don’t share your conversations unless you absolutely have to. In some cases, it makes them searchable.
  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you’re not logged in to that social media platform. Your conversations could be linked to your social media account, which might contain a lot of personal information.

Always remember that AI development is moving faster than security and privacy can be baked into the technology. And that even the best AIs still hallucinate.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

  •  


I Am in the Epstein Files

Once. Someone named “Vincenzo lozzo” wrote to Epstein in email, in 2016: “I wouldn’t pay too much attention to this, Schneier has a long tradition of dramatizing and misunderstanding things.” The topic of the email is DDoS attacks, and it is unclear what I am dramatizing and misunderstanding.

Rabbi Schneier is also mentioned, also incidentally, also once. As far as either of us know, we are not related.

EDITED TO ADD (2/7): There is more context on the Justice.gov website version.

  •  

Apple Pay phish uses fake support calls to steal payment details

It started with an email that looked boringly familiar: Apple logo, a clean layout, and a subject line designed to make the target’s stomach drop.

The message claimed Apple had stopped a high-value Apple Pay charge at an Apple Store, complete with a case ID, timestamp, and a warning that the account could be at risk if the target didn’t respond.

In some cases, there was even an “appointment” booked on their behalf to “review fraudulent activity,” plus a phone number to call immediately if the time didn’t work. Nothing in the email screams amateur: the display name appears to be Apple, the formatting closely matches real receipts, and the language hits all the right anxiety buttons.

This is how most users are lured in by a recent Apple Pay phishing campaign.

The call that feels like real support

The email warns recipients not to use Apple Pay until they’ve spoken to “Apple Billing & Fraud Prevention,” and it provides a phone number to call.

partial example of the phish

After dialing the number, an agent introduces himself as part of Apple’s fraud department and asks for details such as Apple ID verification codes or payment information.

The conversation is carefully scripted to establish trust. The agent explains that criminals attempted to use Apple Pay in a physical Apple Store and that the system “partially blocked” the transaction. To “fully secure” the account, he says, some details need to be verified.

The call starts with harmless‑sounding checks: your name, the last four digits of your phone number, what Apple devices you own, and so on.

Next comes a request to confirm the Apple ID email address. While the victim is looking it up, a real-looking Apple ID verification code arrives by text message.

The agent asks for this code, claiming it’s needed to confirm they’re speaking to the rightful account owner. In reality, the scammer is logging into the account in real time and using the code to bypass two-factor authentication.

Once the account is “confirmed,” the agent walks the victim through checking their bank and Apple Pay cards. They ask questions about bank accounts and suggest “temporarily securing” payment methods so criminals can’t exploit them while the “Apple team” investigates.

The entire support process is designed to steal login codes and payment data. At scale, campaigns like this work because Apple’s brand carries enormous trust, Apple Pay involves real money, and users have been trained to treat fraud alerts as urgent and to cooperate with “support” when they’re scared.

One example submitted to Malwarebytes Scam Guard showed an email claiming an Apple Gift Card purchase for $279.99 and urging the recipient to call a support number (1-812-955-6285).

Another user submitted a screenshot showing a fake “Invoice Receipt – Paid” styled to look like an Apple Store receipt for a 2025 MacBook Air 13-inch laptop with M4 chip priced at $1,157.07 and a phone number (1-805-476-8382) to call about this “unauthorized transaction.”

What you should know

Apple doesn’t set up fraud appointments through email. The company also doesn’t ask users to fix billing problems by calling numbers in unsolicited messages.

Closely inspect the sender’s address. In these cases, the email doesn’t come from an official Apple domain, even if the display name makes it seem legitimate.
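
If you want to automate that inspection, parsing the From: header with Python’s standard library makes the display-name trick obvious. The allowlist below is illustrative, not a definitive list of Apple’s sending domains, and passing this check alone does not prove a mail is genuine (that is what SPF, DKIM, and DMARC verification are for), but failing it is a strong red flag.

```python
from email.utils import parseaddr

# Illustrative allowlist, not an authoritative list of Apple domains.
LEGIT_DOMAINS = {"apple.com", "email.apple.com"}

def sender_domain(from_header: str) -> str:
    # parseaddr splits '"Apple Support" <addr>' into (name, addr);
    # the display name is attacker-controlled and proves nothing.
    _name, addr = parseaddr(from_header)
    return addr.rpartition("@")[2].lower()

def looks_official(from_header: str) -> bool:
    domain = sender_domain(from_header)
    return domain in LEGIT_DOMAINS or domain.endswith(".apple.com")

# A convincing display name with a lookalike domain fails the check:
assert not looks_official('"Apple Support" <billing@apple-secure-pay.com>')
```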

Never share two-factor authentication (2FA) codes, SMS codes, or passwords with anyone, even if they claim to be from Apple.

Ignore unsolicited messages urging you to take immediate action. Always think and verify before you engage. Talk to someone you trust if you’re not sure.

Malwarebytes Scam Guard helped several users identify this type of scam. For those without a subscription, you can use Scam Guard in ChatGPT.

If you’ve already engaged with these Apple Pay scammers, it is important to:

  • Change the Apple ID password immediately from Settings or appleid.apple.com, not from any link provided by email or SMS.
  • Check active sessions, sign out of all devices, then sign back in only on devices you recognize and control.
  • Rotate your Apple ID password again if you see any new login alerts, and confirm 2FA is still enabled. If not, turn it on.
  • In Wallet, check every card for unfamiliar Apple Pay transactions and recent in-store or online charges. Monitor bank and credit card statements closely for the next few weeks and dispute any unknown transactions immediately.
  • Check that the primary email address tied to your Apple ID is still under your control, since access to that mailbox can be used to take over accounts.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

  •  


A Victorian schoolteacher was applying for ‘heaps of rentals’ online – then someone accessed his bank account

Michael suspects personal information he submitted to rent application platforms was leaked online. And analysis shows millions of documents may also be at risk

Michael* has spent the past two months trying to get his digital identity back.

The 47-year-old Victorian schoolteacher was in the process of moving to a new town and applying for rental properties online. Around this time – and unbeknown to him – his mobile phone number was transferred to someone else.

Continue reading...

© Composite: Getty Images


  •  

Open the wrong “PDF” and attackers gain remote access to your PC

Cybercriminals behind a campaign dubbed DEAD#VAX are taking phishing one step further by delivering malware inside virtual hard disks that pretend to be ordinary PDF documents. Open the wrong “invoice” or “purchase order” and you won’t see a document at all. Instead, Windows mounts a virtual drive that quietly installs AsyncRAT, a backdoor Trojan that allows attackers to remotely monitor and control your computer.

AsyncRAT is a remote access trojan, which means attackers gain hands-on-keyboard control while traditional file-based defenses see almost nothing suspicious on disk.

From a high-level view, the infection chain is long, but every step looks just legitimate enough on its own to slip past casual checks.

Victims receive phishing emails that look like routine business messages, often referencing purchase orders or invoices and sometimes impersonating real companies. The email doesn’t attach a document directly. Instead, it links to a file hosted on IPFS (InterPlanetary File System), a decentralized storage network increasingly abused in phishing campaigns because content is harder to take down and can be accessed through normal web gateways.

The linked file is named as a PDF and has the PDF icon, but is actually a virtual hard disk (VHD) file. When the user double‑clicks it, Windows mounts it as a new drive (for example, drive E:) instead of opening a document viewer. Mounting VHDs is perfectly legitimate Windows behavior, which makes this step less likely to ring alarm bells.
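Because the name and icon lie, the file's actual bytes are a better guide. A real PDF begins with the `%PDF-` marker, while VHD containers carry a `conectix` cookie (at the start of dynamic VHDs, and in the 512‑byte footer of fixed VHDs). A minimal sniffing sketch, assuming the file is small enough to read whole:

```python
# Minimal sketch: classify a downloaded file by signature bytes instead of
# trusting its filename or icon.
#   PDF  -> starts with b"%PDF-"
#   VHD  -> b"conectix" cookie at offset 0 (dynamic) or in the last
#           512-byte footer (fixed)
from pathlib import Path

def sniff_file(path: str) -> str:
    data = Path(path).read_bytes()
    if data.startswith(b"%PDF-"):
        return "pdf"
    if data.startswith(b"conectix") or b"conectix" in data[-512:]:
        return "vhd"
    return "unknown"
```

A file delivered as "invoice.pdf" that sniffs as `vhd` is exactly the mismatch this campaign relies on.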

Inside the mounted drive is what appears to be the expected document, but it’s actually a Windows Script File (WSF). When the user opens it, Windows executes the code in the file instead of displaying a PDF.
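To see why "opening" a WSF runs code rather than displaying anything, it helps to know that a WSF is just XML wrapping script that Windows Script Host executes on double‑click. A harmless illustration (not the attacker's file, which is obfuscated) looks like this:

```xml
<!-- Benign example of the WSF format: double-clicking this on a default
     Windows install runs the script via wscript.exe, it does not "open"
     as a document. -->
<job id="demo">
  <script language="JScript">
    WScript.Echo("This ran as code, not a document.");
  </script>
</job>
```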

After some checks to evade analysis and detection, the script injects the payload—AsyncRAT shellcode—into trusted, Microsoft‑signed processes such as RuntimeBroker.exe, OneDrive.exe, taskhostw.exe, or sihost.exe. The malware never writes an actual executable file to disk: it lives and runs entirely in memory inside these legitimate processes, making both detection and later forensic analysis much harder. It also avoids sudden spikes in activity or memory usage that could draw attention.

For an individual user, falling for this phishing email can result in:

  • Theft of saved and typed passwords, including for email, banking, and social media.
  • Exposure of confidential documents, photos, or other sensitive files taken straight from the system.
  • Surveillance via periodic screenshots or, where configured, webcam capture.
  • Use of the machine as a foothold to attack other devices on the same home or office network.

How to stay safe

Because this campaign is hard to detect after the fact, a few upfront checks matter:

  • Don’t open email attachments or linked files until you have verified, through a trusted channel, that they are legitimate.
  • Make sure you can see full file extensions. Windows hides known extensions by default, so a file really named invoice.pdf.vhd is displayed as just invoice.pdf. See below for how to change this.
  • Use an up-to-date, real-time anti-malware solution that can detect malware hiding in memory.
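The double‑extension trick above is also easy to screen for programmatically. A minimal sketch (the extension lists are illustrative assumptions, not exhaustive) that flags a document‑style extension immediately followed by a container or script extension:

```python
# Hedged sketch: flag filenames like "invoice.pdf.vhd" where a document-style
# extension hides a riskier real extension. Lists are illustrative only.
DOC_EXTS = {".pdf", ".doc", ".docx", ".xls", ".xlsx"}
RISKY_EXTS = {".vhd", ".vhdx", ".iso", ".img", ".wsf", ".js", ".lnk", ".exe"}

def looks_deceptive(filename: str) -> bool:
    parts = filename.lower().rsplit(".", 2)
    if len(parts) < 3:  # no double extension at all
        return False
    inner, outer = "." + parts[-2], "." + parts[-1]
    return inner in DOC_EXTS and outer in RISKY_EXTS
```

Note this only catches the naming trick; a file named plainly `payload.vhd` would still need the signature and extension-visibility checks above.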

Showing file extensions on Windows 10 and 11

To show file extensions in Windows 10 and 11:

  • Open File Explorer (Windows key + E).
  • In Windows 10, select View and check the box for File name extensions.
  • In Windows 11, this is found under View > Show > File name extensions.

Alternatively, search for File Explorer Options to uncheck Hide extensions for known file types.
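For administrators rolling this out at scale, the same checkbox corresponds to the well‑known HideFileExt registry value under the current user's Explorer settings. A .reg fragment that sets it to show extensions (apply at your own risk; sign out and back in, or restart Explorer, for it to take effect):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced]
"HideFileExt"=dword:00000000
```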

For older versions of Windows, refer to this article.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
