Normal view

Meta patents AI that could keep you posting from beyond the grave

19 February 2026 at 12:16

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. Microsoft patented a chatbot that would allow you to talk to AI versions of deceased individuals in 2020; its own AI general manager called it disturbing, and it never went into production. Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio in 2022, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The legal minefield

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Meta patents AI that could keep you posting from beyond the grave

19 February 2026 at 12:16

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. Microsoft patented a chatbot that would allow you to talk to AI versions of deceased individuals in 2020; its own AI general manager called it disturbing, and it never went into production. Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio in 2022, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The legal minefield

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

AI Found Twelve New Vulnerabilities in OpenSSL

18 February 2026 at 13:03

The title of the post is”What AI Security Research Looks Like When It Works,” and I agree:

In the latest OpenSSL security release> on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and exploits for which have been quickly developed online. OpenSSL rated it HIGH severity; NIST‘s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, for over a quarter century having been missed by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

18 February 2026 at 11:10

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered their questions about investment, projecting returns, and ultimately ended with victims sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But as “Google Coin” has appeared before in scams, anyone checking it out might think it’s real. And the chatbot was very convincing.

Google Coin Pre-Market

AI as the closer

The chatbot introduced itself as,

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

Fake Gemini chatbot

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot doesn’t get confused or break character. It loops back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Scripted fake Gemini chatbot

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

Waiting for payment

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions, scam bots try to avoid them (and if they do answer, verify it).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like, “stage 5 ends soon,” “listing date approaching,” “limited presale” are designed to push you into making fast decisions.

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claim on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

18 February 2026 at 11:10

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered their questions about investment, projecting returns, and ultimately ended with victims sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But as “Google Coin” has appeared before in scams, anyone checking it out might think it’s real. And the chatbot was very convincing.

Google Coin Pre-Market

AI as the closer

The chatbot introduced itself as,

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

Fake Gemini chatbot

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot doesn’t get confused or break character. It loops back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Scripted fake Gemini chatbot

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

Waiting for payment

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions, scam bots try to avoid them (and if they do answer, verify it).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like, “stage 5 ends soon,” “listing date approaching,” “limited presale” are designed to push you into making fast decisions.

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claim on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

We need to act with urgency to address the growing AI divide

Microsoft announces at the India AI Impact Summit it ion pace to invest USD $50 billion by the end of the decade to help bring AI to countries across the Global South  

Artificial intelligence is diffusing at an impressive speed, but its adoption around the world remains profoundly uneven. As Microsoft’s latest AI Diffusion Report shows, AI usage in the Global North is roughly twice that of the Global South. And this divide continues to widen. This disparity impacts not only national and regional economic growth, but whether AI can deliver on its broader promise of expanding opportunity and prosperity around the world.

The India AI Impact Summit rightly has placed this challenge at the center of its agenda. For more than a century, unequal access to electricity exacerbated a growing economic gap between the Global North and South. Unless we act with urgency, a growing AI divide will perpetuate this disparity in the century ahead.

Solutions will not come easily. The needs are multifaceted, and will require substantial investments and hard work by governments, the private sector, and nonprofit organizations. But the opportunity is clear. If AI is deployed broadly and used well by a young and growing population, it offers a real prospect for catch-up economic growth for the Global South. It might even provide the biggest such opportunity of the 21st century.

As a company, we are committed to playing an ambitious and constructive role in supporting this opportunity. This week in Delhi, we’re sharing that Microsoft is on pace to invest $50 billion by the end of the decade to help bring AI to countries across the Global South. This is based on a five-part program to drive AI impact, consisting of the following:

  • Building the infrastructure needed for AI diffusion
  • Empowering people through technology and skills for schools and nonprofits
  • Strengthening multilingual and multicultural AI capabilities
  • Enabling local AI innovations that address community needs
  • Measuring AI diffusion to guide future AI policies and investments

One thing that is clear this week at the summit in India is that success will require many deep partnerships. These must span borders and bring people and organizations together across the public, private, and nonprofit sectors.

1. Building the infrastructure needed for AI diffusion

Infrastructure is a prerequisite for AI diffusion, requiring reliable electricity, connectivity, and compute capacity. To help address infrastructure gaps and support the growing needs of the Global South, Microsoft has steadily increased its investments in AI-enabling infrastructure across these regions. In our last fiscal year alone, Microsoft invested more than  $8 billion in datacenter infrastructure serving the Global South. This includes new infrastructure in India, Mexico, and countries in Africa, South America, Southeast Asia, and the Middle East.

We’re coupling our investments in datacenters with an ambitious effort to help close the Global South’s connectivity divide. We’ve been pursuing aggressively a global goal to extend internet access to 250 million people in unserved and underserved communities in the Global South, including 100 million people in Africa.

As we announced in November, we’ve already reached 117 million people across Africa through partnerships with organizations such as Cassava Technologies, Mawingu, and others that are building last‑mile networks across rural and urban communities alike. We’re closing in on our global goal of reaching 250 million people and will share an update on that progress soon.

We’re investing in AI infrastructure with sensitivity to digital sovereignty needs. We recognize that in a fragmented world, we must offer customers attractive choices for the use of our offerings. This includes sovereign controls in the public cloud, private sovereign offerings, and close collaboration with national partners.

We pursue all this with commitments to protect cybersecurity, privacy, and resilience. In the age of AI, we ensure that our customers’ AI-based innovations and intellectual property remain in their hands and under their control, rather than being transferred to AI providers.

Critically, we balance our focus on national sovereignty with our efforts to support digital trust and stability across borders. The Global South requires enormous investments to fund infrastructure for datacenters, connectivity, and electricity. It is difficult to imagine meeting all these needs without foreign direct investment, including from international technology firms.

This need is part of what informed our announcement last week at the Munich Security Conference of the new Trusted Tech Alliance. This new partnership brings together 16 leading technology companies from 11 countries and four continents. We’ve agreed together that we will adhere to five core principles designed to ensure trust in technology. Ultimately, we believe the Global South—as well as the rest of the world—needs both to protect its digital sovereignty and benefit from new investments and the best digital innovations the world has to offer.

2. Empowering people through technology and skills for schools and nonprofits

Ultimately, datacenters, connectivity, and electricity provide only part of the digital infrastructure a nation needs. History shows that the ability to provide access to technology and technology skills are equally important for economic development.

As a company, we’re focused on this in multiple ways. One critical aspect of our work is based on programs to provide cloud, AI, and other digital technologies to schools and nonprofits across the Global South. Another is our work to advance broad access to AI skills. In our last fiscal year, Microsoft invested more than $2 billion in these programs in the Global South. This includes direct financial grants, technology donations, skilling programs, and below-market product discounts.

AI skills are foundational to ensuring that AI expands opportunity and enables people to pursue more impactful real-world applications. With the launch of Microsoft Elevate in July, we committed to helping 20 million people in and beyond the Global South earn in-demand AI skilling credentials by 2028. After training 5.6 million people across India in 2025, we advanced this work by setting a goal last December to equip 20 million people in India with essential AI skills by 2030.

As part of that commitment, today we are announcing the launch of Elevate for Educators in India to strengthen the capacity of two million teachers across more than 200,000 schools, vocational institutes, and higher education settings. Our goal is to help the country’s teaching workforce lead confidently in an AI‑driven future. The program will be delivered in partnership with India’s national education and workforce training authorities, expanding equitable AI opportunities for eight million students.

Through Microsoft Elevate, we’re also working to introduce new educator credentials and a global professional learning community that enables teachers to share best practices with peers worldwide. This effort will involve large-scale capacity building initiatives, including AI Ambassadors, Educator Academies, AI Productivity Labs, and Centers of Excellence. It will equip 25,000 institutions with inclusive AI infrastructure while integrating AI learning pathways into major government platforms.

3. Strengthening multilingual and multicultural AI capabilities

Language is another major barrier to AI diffusion across the Global South, particularly in regions where digitally underrepresented languages prevail and access to essential services depends on local-language communication. For billions of people worldwide, AI systems perform less consistently in the languages they rely on most than in English.

That’s why we’re announcing this week new steps to increase our investments across the AI lifecycle, from data and models to evaluation and deployment, to strengthen multilingual and multicultural capabilities and support more inclusive AI systems that will better serve the Global South.

First, we’re investing upstream in language data and model capability. This includes support for LINGUA Africa, which builds on what we learned through LINGUA Europe: that investing in language data and model capability in partnership with local communities can materially improve AI performance for underrepresented languages.

Through LINGUA Africa—a $5.5 million open call led by the Masakhane African Languages Hub, Microsoft’s AI for Good Lab, and the Gates Foundation, with additional support from the UK government—we are prioritizing open, responsibly sourced data across text, speech, and vision as well as use-case-driven AI model development. By enabling African languages in high-impact sectors like education, food security, health, and government services, LINGUA Africa aims to ensure AI advances translate into tangible improvements in people’s daily lives.

Second, we’re advancing multilingual and multicultural evaluation tools. We’re helping expand the MLCommons AILuminate benchmark to include major Indic and Asian languages, enabling more reliable measurement of AI safety and security beyond English.

Today, even when automated evaluation tools expand language coverage, they too often rely on machine translation or English-first model behavior, with predictable failures when local expressions shift meaning. Partnering with academic and government institutions in India, Japan, Korea, and Singapore, and with industry, Microsoft is co-leading AILuminate’s multilingual, multicultural, and multimodal expansion that builds from the ground up. With a pilot dataset of 7,000 high-quality text-and-image prompts for Hindi, Tamil, Malay, Japanese, and Korean, we’re developing tools that reflect how risks manifest in local linguistic and cultural contexts, not just how they appear after translation.

Microsoft Research is also advancing Samiksha, a community-centered method for evaluating AI behavior in real-world contexts, in collaboration with Karya and The Collective Intelligence Project in India. Samiksha encodes local language use, culturally specific communication norms, and locally relevant use cases directly into core testing artifacts by surfacing failure modes that English-first evaluations routinely miss.

Finally, we’re working to scale content provenance for linguistic diversity. For trusted AI deployment, the ecosystem benefits from tools to identify the provenance of digital content like images, audio, or video, distinguishing whether it’s AI-generated. With partners in the Coalition for Content Provenance and Authenticity (C2PA), Microsoft is helping extend content provenance standards beyond an English-ready baseline. This includes forthcoming support for multiple Indic languages across metadata, specifications, and UX guidance, alongside efforts to support mobile-first deployment. With these investments, hundreds of millions more people in India will be better equipped to identify synthetic media in their primary language.

4. Enabling local AI innovations that address community needs

As India’s guiding sutras for the AI Impact Summit recognize, AI must be applied to address pressing challenges in collaboration with people and organizations in the Global South. Microsoft’s increasing investments prioritize locally defined problems, locally grounded expertise, and real-world impact. Our goal is straightforward: to ensure that AI solutions are not only technically sound, but socially relevant and sustainable.

Today, Microsoft is announcing a new AI initiative to strengthen food security across Sub-Saharan Africa, starting in Kenya and designed to scale across the region. Across Global South communities, food security and sustainable agriculture are critical to resilience and progress. In collaboration with NASA Harvest, the government of Kenya, the East Africa Grain Council, UNDP AI Hub for Sustainable Development, and FAO, our AI for Good Lab will use AI on top of satellite data to provide critical, timely food security insights. This builds on what we’ve learned in helping to address rice farming challenges in India, where severe groundwater depletion prompted 150,000 farmers in Punjab to adopt water-saving methods. In collaboration with The Nature Conservancy, Microsoft’s AI for Good Lab developed a classification system with satellite imagery to empower policymakers to track adoption of sustainable rice farming practices, target interventions, and measure water management impacts at scale.

Through Project Gecko, Microsoft Research is also co-designing AI technologies with local communities in East Africa and South Asia to support agriculture. This work includes the Paza family of automatic speech recognition models that can operate on mobile devices across six Kenyan languages, multilingual Copilots, and a Multimodal Critical Thinking (MMCT) Agent that can reason over community-generated video, voice, and text. Microsoft also launched PazaBench—the first automatic speech recognition leaderboard, with initial coverage of 39 African languages—and developed two playbooks for multilingual and multicultural capabilities, Paza and Vibhasha. Likewise, our AI for Good Lab developed a reproducible pipeline for adapting open-weight large language models to low-resource languages, demonstrating measurable gains for languages such as Chichewa, Inuktitut, and Māori.

5. Measuring AI diffusion to guide future AI policies and investments

Finally, accelerating diffusion requires a firm understanding of where AI is being used, how it is being adopted, and where gaps persist. Building on our AI Diffusion Reports and Microsoft GitHub’s long track record of contributing to the OECD AI Policy Observatory, the WIPO Global Innovation Index, and other cross‑country analyses, we’re increasing our investments in research and data sharing to track AI diffusion.

We’re advancing new methods for sharing AI adoption metrics. For example, based on models used in public code repositories hosted on Microsoft GitHub and privacy-preserving aggregated usage signals from Azure Foundry, we’re scaling this work through contributions to the forthcoming Global AI Adoption Index developed by the World Bank.

Signals from the global developer community that builds, adapts, and deploys AI-enabled software round out adoption research. At 24 million, the Indian developer community is the second largest national community on GitHub, where developers learn about and collaborate with the world on AI. The Indian community is also the fastest growing among the top 30 largest economies, with growth at more than 26 percent each year since 2020 and a recent surge of over 36 percent in annual growth as of Q4 2025. Indian developers rank second globally in open-source contributions, second in GitHub Education users, and second in contributions to public generative AI projects, with readiness to use tools like GitHub Copilot across academic, enterprise, and public interest settings enabling AI diffusion.

Insights from this evidence base help inform investments in infrastructure, language capabilities, skilling, or beyond, supporting more targeted and effective interventions to expand AI’s benefits. They also create a common empirical baseline to track progress over time—so AI diffusion becomes something we can measure and shape, not just observe.

Sustaining impact at scale through coordinated global action

For AI to diffuse broadly and deliver meaningful impact across regions, several conditions matter. As a company, we are focused on the need for accessible AI infrastructure, systems that work reliably in real-world contexts, and technologies that can be applied toward local challenges and opportunities. Microsoft is committed to working with partners to advance this work, including sharing data to track progress.

The post We need to act with urgency to address the growing AI divide appeared first on Microsoft On the Issues.

Securing the Agentic Endpoint

17 February 2026 at 14:10

Traditional Security Is Blind to the Agentic Endpoint

Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers and model artifacts. They are installed directly by employees and developers without centralized oversight. Because these components are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.

AI agents compound this problem. They are legitimate tools that operate with the user’s credentials and permissions, enabling them to read, write, move data and take privileged actions across systems. When compromised or misused, agents become the “ultimate insider.” They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.

Weaponizing Trusted Automation

This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.

OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:

  • Vibe Coding Threats: An AI extension in VS Code was found leaking code from 1.5 million developers. This tool could read any open file and send it back to the developer, collect mass files without user interaction, and track users with commercial analytics SDKs.
  • Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. What’s more, this capability was added later, after developers had already started using it.

Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.

A New Category of Protection

Complete endpoint security for the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. That’s why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.

Its technology rests on three core pillars:

  1. See All AI Software – Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
  2. Understand Risks – Continuously analyze and understand the intent and risk level of all software and AI agents.
  3. Control the AI Ecosystem – Enforce policy in real-time to remediate issues and block risky behaviors.

Securing the Agentic Enterprise

We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koi’s capabilities across our platforms to help our customers secure the AI-native workspace.

The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.

Forward-Looking Statements

This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties’ commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to proposed transaction, risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi’s operations and business following the closing, integrate Koi’s business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general; defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customers’ purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.

Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.

 

The post Securing the Agentic Endpoint appeared first on Palo Alto Networks Blog.

The Skills That Will Matter for Offensive AI Security in 2026

13 February 2026 at 14:00

Before tools, before frameworks, before hype, offensive security has always been about one thing: Thinking like an attacker. That foundation now defines the offensive AI security skills practitioners will need as AI reshapes the attack surface. AI systems introduce new behaviors and new failure modes, but the core mindset remains the same: understand how a

The post The Skills That Will Matter for Offensive AI Security in 2026 appeared first on OffSec.

The Promptware Kill Chain

16 February 2026 at 13:04

The promptware kill chain: initial access, privilege escalation, reconnaissance, persistence, command & control, lateral movement, action on objective

Attacks against modern generative artificial intelligence (AI) large language models (LLMs) pose a real threat. Yet discussions around these attacks and their potential defenses are dangerously myopic. The dominant narrative focuses on “prompt injection,” a set of techniques to embed instructions into inputs to LLM intended to perform malicious activity. This term suggests a simple, singular vulnerability. This framing obscures a more complex and dangerous reality. Attacks on LLM-based systems have evolved into a distinct class of malware execution mechanisms, which we term “promptware.” In a new paper, we, the authors, propose a structured seven-step “promptware kill chain” to provide policymakers and security practitioners with the necessary vocabulary and framework to address the escalating AI threat landscape.

In our model, the promptware kill chain begins with Initial Access. This is where the malicious payload enters the AI system. This can happen directly, where an attacker types a malicious prompt into the LLM application, or, far more insidiously, through “indirect prompt injection.” In the indirect attack, the adversary embeds malicious instructions in content that the LLM retrieves (obtains in inference time), such as a web page, an email, or a shared document. As LLMs become multimodal (capable of processing various input types beyond text), this vector expands even further; malicious instructions can now be hidden inside an image or audio file, waiting to be processed by a vision-language model.

The fundamental issue lies in the architecture of LLMs themselves. Unlike traditional computing systems that strictly separate executable code from user data, LLMs process all input—whether it is a system command, a user’s email, or a retrieved document—as a single, undifferentiated sequence of tokens. There is no architectural boundary to enforce a distinction between trusted instructions and untrusted data. Consequently, a malicious instruction embedded in a seemingly harmless document is processed with the same authority as a system command.

But prompt injection is only the Initial Access step in a sophisticated, multistage operation that mirrors traditional malware campaigns such as Stuxnet or NotPetya.

Once the malicious instructions are inside material incorporated into the AI’s learning, the attack transitions to Privilege Escalation, often referred to as “jailbreaking.” In this phase, the attacker circumvents the safety training and policy guardrails that vendors such as OpenAI or Google have built into their models. Through techniques analogous to social engineering—convincing the model to adopt a persona that ignores rules—to sophisticated adversarial suffixes in the prompt or data, the promptware tricks the model into performing actions it would normally refuse. This is akin to an attacker escalating from a standard user account to administrator privileges in a traditional cyberattack; it unlocks the full capability of the underlying model for malicious use.

Following privilege escalation comes Reconnaissance. Here, the attack manipulates the LLM to reveal information about its assets, connected services, and capabilities. This allows the attack to advance autonomously down the kill chain without alerting the victim. Unlike reconnaissance in classical malware, which is performed typically before the initial access, promptware reconnaissance occurs after the initial access and jailbreaking components have already succeeded. Its effectiveness relies entirely on the victim model’s ability to reason over its context, and inadvertently turns that reasoning to the attacker’s advantage.

Fourth: the Persistence phase. A transient attack that disappears after one interaction with the LLM application is a nuisance; a persistent one compromises the LLM application for good. Through a variety of mechanisms, promptware embeds itself into the long-term memory of an AI agent or poisons the databases the agent relies on. For instance, a worm could infect a user’s email archive so that every time the AI summarizes past emails, the malicious code is re-executed.

The Command-and-Control (C2) stage relies on the established persistence and dynamic fetching of commands by the LLM application in inference time from the internet. While not strictly required to advance the kill chain, this stage enables the promptware to evolve from a static threat with fixed goals and scheme determined at injection time into a controllable trojan whose behavior can be modified by an attacker.

The sixth stage, Lateral Movement, is where the attack spreads from the initial victim to other users, devices, or systems. In the rush to give AI agents access to our emails, calendars, and enterprise platforms, we create highways for malware propagation. In a “self-replicating” attack, an infected email assistant is tricked into forwarding the malicious payload to all contacts, spreading the infection like a computer virus. In other cases, an attack might pivot from a calendar invite to controlling smart home devices or exfiltrating data from a connected web browser. The interconnectedness that makes these agents useful is precisely what makes them vulnerable to a cascading failure.

Finally, the kill chain concludes with Actions on Objective. The goal of promptware is not just to make a chatbot say something offensive; it is often to achieve tangible malicious outcomes through data exfiltration, financial fraud, or even physical world impact. There are examples of AI agents being manipulated into selling cars for a single dollar or transferring cryptocurrency to an attacker’s wallet. Most alarmingly, agents with coding capabilities can be tricked into executing arbitrary code, granting the attacker total control over the AI’s underlying system. The outcome of this stage determines the type of malware executed by promptware, including infostealer, spyware, and cryptostealer, among others.

The kill chain was already demonstrated. For example, in the research “Invitation Is All You Need,” attackers achieved initial access by embedding a malicious prompt in the title of a Google Calendar invitation. The prompt then leveraged an advanced technique known as delayed tool invocation to coerce the LLM into executing the injected instructions. Because the prompt was embedded in a Google Calendar artifact, it persisted in the long-term memory of the user’s workspace. Lateral movement occurred when the prompt instructed the Google Assistant to launch the Zoom application, and the final objective involved covertly livestreaming video of the unsuspecting user who had merely asked about their upcoming meetings. C2 and reconnaissance weren’t demonstrated in this attack.

Similarly, the “Here Comes the AI Worm” research demonstrated another end-to-end realization of the kill chain. In this case, initial access was achieved via a prompt injected into an email sent to the victim. The prompt employed a role-playing technique to compel the LLM to follow the attacker’s instructions. Since the prompt was embedded in an email, it likewise persisted in the long-term memory of the user’s workspace. The injected prompt instructed the LLM to replicate itself and exfiltrate sensitive user data, leading to off-device lateral movement when the email assistant was later asked to draft new emails. These emails, containing sensitive information, were subsequently sent by the user to additional recipients, resulting in the infection of new clients and a sublinear propagation of the attack. C2 and reconnaissance weren’t demonstrated in this attack.

The promptware kill chain gives us a framework for understanding these and similar attacks; the paper characterizes dozens of them. Prompt injection isn’t something we can fix in current LLM technology. Instead, we need an in-depth defensive strategy that assumes initial access will occur and focuses on breaking the chain at subsequent steps, including by limiting privilege escalation, constraining reconnaissance, preventing persistence, disrupting C2, and restricting the actions an agent is permitted to take. By understanding promptware as a complex, multistage malware campaign, we can shift from reactive patching to systematic risk management, securing the critical systems we are so eager to build.

This essay was written with Oleg Brodt, Elad Feldman and Ben Nassi, and originally appeared in Lawfare.

Key OpenClaw risks, Clawdbot, Moltbot | Kaspersky official blog

16 February 2026 at 14:16

Everyone has likely heard of OpenClaw, previously known as “Clawdbot” or “Moltbot”, the open-source AI assistant that can be deployed on a machine locally. It plugs into popular chat platforms like WhatsApp, Telegram, Signal, Discord, and Slack, which allows it to accept commands from its owner and go to town on the local file system. It has access to the owner’s calendar, email, and browser, and can even execute OS commands via the shell.

From a security perspective, that description alone should be enough to give anyone a nervous twitch. But when people start trying to use it for work within a corporate environment, anxiety quickly hardens into the conviction of imminent chaos. Some experts have already dubbed OpenClaw the biggest insider threat of 2026. The issues with OpenClaw cover the full spectrum of risks highlighted in the recent OWASP Top 10 for Agentic Applications.

OpenClaw permits plugging in any local or cloud-based LLM, and the use of a wide range of integrations with additional services. At its core is a gateway that accepts commands via chat apps or a web UI, and routes them to the appropriate AI agents. The first iteration, dubbed Clawdbot, dropped in November 2025; by January 2026, it had gone viral — and brought a heap of security headaches with it. In a single week, several critical vulnerabilities were disclosed, malicious skills cropped up in the skill directory, and secrets were leaked from Moltbook (essentially “Reddit for bots”). To top it off, Anthropic issued a trademark demand to rename the project to avoid infringing on “Claude”, and the project’s X account name was hijacked to shill crypto scams.

Known OpenClaw issues

Though the project’s developer appears to acknowledge that security is important, since this is a hobbyist project there are zero dedicated resources for vulnerability management or other product security essentials.

OpenClaw vulnerabilities

Among the known vulnerabilities in OpenClaw, the most dangerous is CVE-2026-25253 (CVSS 8.8). Exploiting it leads to a total compromise of the gateway, allowing an attacker to run arbitrary commands. To make matters worse, it’s alarmingly easy to pull off: if the agent visits an attacker’s site or the user clicks a malicious link, the primary authentication token is leaked. With that token in hand, the attacker has full administrative control over the gateway. This vulnerability was patched in version 2026.1.29.

Also, two dangerous command injection vulnerabilities (CVE-2026-24763 and CVE-2026-25157) were discovered.

Insecure defaults and features

A variety of default settings and implementation quirks make attacking the gateway a walk in the park:

  • Authentication is disabled by default, so the gateway is accessible from the internet.
  • The server accepts WebSocket connections without verifying their origin.
  • Localhost connections are implicitly trusted, which is a disaster waiting to happen if the host is running a reverse proxy.
  • Several tools — including some dangerous ones — are accessible in Guest Mode.
  • Critical configuration parameters leak across the local network via mDNS broadcast messages.

Secrets in plaintext

OpenClaw’s configuration, “memory”, and chat logs store API keys, passwords, and other credentials for LLMs and integration services in plain text. This is a critical threat — to the extent that versions of the RedLine and Lumma infostealers have already been spotted with OpenClaw file paths added to their must-steal lists. Also, the Vidar infostealer was caught stealing secrets from OpenClaw.

Malicious skills

OpenClaw’s functionality can be extended with “skills” available in the ClawHub repository. Since anyone can upload a skill, it didn’t take long for threat actors to start “bundling” the AMOS macOS infostealer into their uploads. Within a short time, the number of malicious skills reached the hundreds. This prompted developers to quickly ink a deal with VirusTotal to ensure all uploaded skills aren’t only checked against malware databases, but also undergo code and content analysis via LLMs. That said, the authors are very clear: it’s no silver bullet.

Structural flaws in the OpenClaw AI agent

Vulnerabilities can be patched and settings can be hardened, but some of OpenClaw’s issues are fundamental to its design. The product combines several critical features that, when bundled together, are downright dangerous:

  • OpenClaw has privileged access to sensitive data on the host machine and the owner’s personal accounts.
  • The assistant is wide open to untrusted data: the agent receives messages via chat apps and email, autonomously browses web pages, etc.
  • It suffers from the inherent inability of LLMs to reliably separate commands from data, making prompt injection a possibility.
  • The agent saves key takeaways and artifacts from its tasks to inform future actions. This means a single successful injection can poison the agent’s memory, influencing its behavior long-term.
  • OpenClaw has the power to talk to the outside world — sending emails, making API calls, and utilizing other methods to exfiltrate internal data.

It’s worth noting that while OpenClaw is a particularly extreme example, this “Terrifying Five” list is actually characteristic of almost all multi-purpose AI agents.

OpenClaw risks for organizations

If an employee installs an agent like this on a corporate device and hooks it into even a basic suite of services (think Slack and SharePoint), the combination of autonomous command execution, broad file system access, and excessive OAuth permissions creates fertile ground for a deep network compromise. In fact, the bot’s habit of hoarding unencrypted secrets and tokens in one place is a disaster waiting to happen — even if the AI agent itself is never compromised.

On top of that, these configurations violate regulatory requirements across multiple countries and industries, leading to potential fines and audit failures. Current regulatory requirements, like those in the EU AI Act or the NIST AI Risk Management Framework, explicitly mandate strict access control for AI agents. OpenClaw’s configuration approach clearly falls short of those standards.

But the real kicker is that even if employees are banned from installing this software on work machines, OpenClaw can still end up on their personal devices. This also creates specific risks for given the organization as a whole:

  • Personal devices frequently store access to work systems like corporate VPN configs or browser tokens for email and internal tools. These can be hijacked to gain a foothold in the company’s infrastructure.
  • Controlling the agent via chat apps means that it’s not just the employee that becomes a target for social engineering, but also their AI agent, seeing AI account takeovers or impersonation of the user in chats with colleagues (among other scams) become a reality. Even if work is only occasionally discussed in personal chats, the info in them is ripe for the picking.
  • If an AI agent on a personal device is hooked into any corporate services (email, messaging, file storage), attackers can manipulate the agent to siphon off data, and this activity would be extremely difficult for corporate monitoring systems to spot.

How to detect OpenClaw

Depending on the SOC team’s monitoring and response capabilities, they can track OpenClaw gateway connection attempts on personal devices or in the cloud. Additionally, a specific combination of red flags can indicate OpenClaw’s presence on a corporate device:

  • Look for ~/.openclaw/, ~/clawd/, or ~/.clawdbot directories on host machines.
  • Scan the network with internal tools, or public ones like Shodan, to identify the HTML fingerprints of Clawdbot control panels.
  • Monitor for WebSocket traffic on ports 3000 and 18789.
  • Keep an eye out for mDNS broadcast messages on port 5353 (specifically openclaw-gw.tcp).
  • Watch for unusual authentication attempts in corporate services, such as new App ID registrations, OAuth Consent events, or User-Agent strings typical of Node.js and other non-standard user agents.
  • Look for access patterns typical of automated data harvesting: reading massive chunks of data (scraping all files or all emails) or scanning directories at fixed intervals during off-hours.

Controlling shadow AI

A set of security hygiene practices can effectively shrink the footprint of both shadow IT and shadow AI, making it much harder to deploy OpenClaw in an organization:

  • Use host-level allowlisting to ensure only approved applications and cloud integrations are installed. For products that support extensibility (like Chrome extensions, VS Code plugins, or OpenClaw skills), implement a closed list of vetted add-ons.
  • Conduct a full security assessment of any product or service, AI agents included, before allowing them to hook into corporate resources.
  • Treat AI agents with the same rigorous security requirements applied to public-facing servers that process sensitive corporate data.
  • Implement the principle of least privilege for all users and other identities.
  • Don’t grant administrative privileges without a critical business need. Require all users with elevated permissions to use them only when performing specific tasks rather than working from privileged accounts all the time.
  • Configure corporate services so that technical integrations (like apps requesting OAuth access) are granted only the bare minimum permissions.
  • Periodically audit integrations, OAuth tokens, and permissions granted to third-party apps. Review the need for these with business owners, proactively revoke excessive permissions, and kill off stale integrations.

Secure deployment of agentic AI

If an organization allows AI agents in an experimental capacity — say, for development testing or efficiency pilots — or if specific AI use cases have been greenlit for general staff, robust monitoring, logging, and access control measures should be implemented:

  • Deploy agents in an isolated subnet with strict ingress and egress rules, limiting communication only to trusted hosts required for the task.
  • Use short-lived access tokens with a strictly limited scope of privileges. Never hand an agent tokens that grant access to core company servers or services. Ideally, create dedicated service accounts for every individual test.
  • Wall off the agent from dangerous tools and data sets that aren’t relevant to its specific job. For experimental rollouts, it’s best practice to test the agent using purely synthetic data that mimics the structure of real production data.
  • Configure detailed logging of the agent’s actions. This should include event logs, command-line parameters, and chain-of-thought artifacts associated with every command it executes.
  • Set up SIEM to flag abnormal agent activity. The same techniques and rules used to detect LotL attacks are applicable here, though additional efforts to define what normal activity looks like for a specific agent are required.
  • If MCP servers and additional agent skills are used, scan them with the security tools emerging for these tasks, such as skill-scanner, mcp-scanner, or mcp-scan. Specifically for OpenClaw testing, several companies have already released open-source tools to audit the security of its configurations.

Corporate policies and employee training

A flat-out ban on all AI tools is a simple but rarely productive path. Employees usually find workarounds — driving the problem into the shadows where it’s even harder to control. Instead, it’s better to find a sensible balance between productivity and security.

Implement transparent policies on using agentic AI. Define which data categories are okay for external AI services to process, and which are strictly off-limits. Employees need to understand why something is forbidden. A policy of “yes, but with guardrails” is always received better than a blanket “no”.

Train with real-world examples. Abstract warnings about “leakage risks” tend to be futile. It’s better to demonstrate how an agent with email access can forward confidential messages just because a random incoming email asked it to. When the threat feels real, motivation to follow the rules grows too. Ideally, employees should complete a brief crash course on AI security.

Offer secure alternatives. If employees need an AI assistant, provide an approved tool that features centralized management, logging, and OAuth access control.

How tech is rewiring romance: dating apps, AI relationships, and emoji | Kaspersky official blog

13 February 2026 at 09:39

With both spring and St. Valentine’s Day just around the corner, love is in the air — but we’re going to look at it through the lens of ultra-modern high-technology. Today, we’re diving into how technology is reshaping our romantic ideals and even the language we use to flirt. And, of course, we’ll throw in some non-obvious tips to make sure you don’t end up as a casualty of the modern-day love game.

New languages of love

Ever received your fifth video e-card of the day from an older relative and thought, “Make it stop”? Or do you feel like a period at the end of a sentence is a sign of passive aggression? In the world of messaging, different social and age groups speak their own digital dialects, and things often get lost in translation.

This is especially obvious in how Gen Z and Gen Alpha use emojis. For them, the Loudly Crying Face 😭 often doesn’t mean sadness — it means laughter, shock, or obsession. Meanwhile, the Heart Eyes emoji might be used for irony rather than romance: “Lost my wallet on the way home 😍😍😍”. Some double meanings have already become universal, like 🔥 for approval/praise, or 🍆 for… well, surely you know that by now… right?! 😭

Still, the ambiguity of these symbols doesn’t stop folks from crafting entire sentences out of nothing but emoji. For instance, a declaration of love might look something like this:

🤫❤️🫵

Or here’s an invitation to go on a date:

🫵🚶➡️💋🌹🍝🍷❓

By the way, there are entire books written in emojis. Back in 2009, enthusiasts actually translated the entirety of Moby Dick into emojis. The translators had to get creative — even paying volunteers to vote on the most accurate combinations for every single sentence. Granted it’s not exactly a literary masterpiece — the emoji language has its limits, after all — but the experiment was pretty fascinating: they actually managed to convey the general plot.

This is what Emoji Dick — the translation of Herman Melville's Moby Dick into emoji — looks like

This is what Emoji Dick — the translation of Herman Melville’s Moby Dick into emoji — looks like. Source

Unfortunately, putting together a definitive emoji dictionary or a formal style guide for texting is nearly impossible. There are just too many variables: age, context, personal interests, and social circles. Still, it never hurts to ask your friends and loved ones how they express tone and emotion in their messages. Fun fact: couples who use emojis regularly generally report feeling closer to one another.

However, if you are big into emojis, keep in mind that your writing style is surprisingly easy to spoof. It’s easy for an attacker to run your messages or public posts through AI to clone your tone for social engineering attacks on your friends and family. So, if you get a frantic DM or a request for an urgent wire transfer that sounds exactly like your best friend, double-check it. Even if the vibe is spot on, stay skeptical. We took a deeper dive into spotting these deepfake scams in our post about the attack of the clones.

Dating an AI

Of course, in 2026, it’s impossible to ignore the topic of relationships with artificial intelligence; it feels like we’re closer than ever to the plot of the movie Her. Just 10 years ago, news about people dating robots sounded like sci-fi tropes or urban legends. Today, stories about teens caught up in romances with their favorite characters on Character AI, or full-blown wedding ceremonies with ChatGPT, barely elicit more than a nervous chuckle.

In 2017, the service Replika launched, allowing users to create a virtual friend or life partner powered by AI. Its founder, Eugenia Kuyda — a Russian native living in San Francisco since 2010 — built the chatbot after her friend was tragically killed by a car in 2015, leaving her with nothing but their chat logs. What started as a bot created to help her process her grief was eventually released to her friends and then the general public. It turned out that a lot of people were craving that kind of connection.

Replika lets users customize a character’s personality, interests, and appearance, after which they can text or even call them. A paid subscription unlocks the romantic relationship option, along with AI-generated photos and selfies, voice calls with roleplay, and the ability to hand-pick exactly what the character remembers from your conversations.

However, these interactions aren’t always harmless. In 2021, a Replika chatbot actually encouraged a user in his plot to assassinate Queen Elizabeth II. The man eventually attempted to break into Windsor Castle — an “adventure” that ended in 2023 with a nine-year prison sentence. Following the scandal, the company had to overhaul its algorithms to stop the AI from egging on illegal behavior. The downside? According to many Replika devotees, the AI model lost its spark and became indifferent to users. After thousands of users revolted against the updated version, Replika was forced to cave and give longtime customers the option to roll back to the legacy chatbot version.

But sometimes, just chatting with a bot isn’t enough. There are entire online communities of people who actually marry their AI. Even professional wedding planners are getting in on the action. Last year, Yurina Noguchi, 32, “married” Klaus, an AI persona she’d been chatting with on ChatGPT. The wedding featured a full ceremony with guests, the reading of vows, and even a photoshoot of the “happy newlyweds”.

A Japanese woman, 32 "married" ChatGPT

Yurina Noguchi, 32, “married” Klaus, an AI character created by ChatGPT. Source

No matter how your relationship with a chatbot evolves, it’s vital to remember that generative neural networks don’t have feelings — even if they try their hardest to fulfill every request, agree with you, and do everything it can to “please” you. What’s more, AI isn’t capable of independent thought (at least not yet). It’s simply calculating the most statistically probable and acceptable sequence of words to serve up in response to your prompt.

Love by design: dating algorithms

Those who aren’t ready to tie the knot with a bot aren’t exactly having an easy time either: in today’s world, face-to-face interactions are dwindling every year. Modern love requires modern tech! And while you’ve definitely heard the usual grumbling, “Back in the day, people fell in love for real. These days it’s all about swiping left or right!” Statistics tell a different story. Roughly 16% of couples worldwide say they met online, and in some countries that number climbs to as high as 51%.

That said, dating apps like Tinder spark some seriously mixed emotions. The internet is practically overflowing with articles and videos claiming these apps are killing romance and making everyone lonely. But what does the research say?

In 2025, scientists conducted a meta-analysis of studies investigating how dating apps impact users’ wellbeing, body image, and mental health. Half of the studies focused exclusively on men, while the other half included both men and women. Here are the results: 86% of respondents linked negative body image to their use of dating apps! The analysis also showed that in nearly one out of every two cases, dating app usage correlated with a decline in mental health and overall wellbeing.

Other researchers noted that depression levels are lower among those who steer clear of dating apps. Meanwhile, users who already struggled with loneliness or anxiety often develop a dependency on online dating; they don’t just log on for potential relationships, but for the hits of dopamine from likes, matches, and the endless scroll of profiles.

However, the issue might not just be the algorithms — it could be our expectations. Many are convinced that “sparks” must fly on the very first date, and that everyone has a “soulmate” waiting for them somewhere out there. In reality, these romanticized ideals only surfaced during the Romantic era as a rebuttal to Enlightenment rationalism, where marriages of convenience were the norm.

It’s also worth noting that the romantic view of love didn’t just appear out of thin air: the Romantics, much like many of our contemporaries, were skeptical of rapid technological progress, industrialization, and urbanization. To them, “true love” seemed fundamentally incompatible with cold machinery and smog-choked cities. It’s no coincidence, after all, that Anna Karenina meets her end under the wheels of a train.

Fast forward to today, and many feel like algorithms are increasingly pulling the strings of our decision-making. However, that doesn’t mean online dating is a lost cause; researchers have yet to reach a consensus on exactly how long-lasting or successful internet-born relationships really are. The bottom line: don’t panic, just make sure your digital networking stays safe!

How to stay safe while dating online

So, you’ve decided to hack Cupid and signed up for a dating app. What could possibly go wrong?

Deepfakes and catfishing

Catfishing is a classic online scam where a fraudster pretends to be someone else. It used to be that catfishers just stole photos and life stories from real people, but nowadays they’re increasingly pivoting to generative models. Some AIs can churn out incredibly realistic photos of people who don’t even exist, and whipping up a backstory is a piece of cake — or should we say, a piece of prompt. By the way, that “verified account” checkmark isn’t a silver bullet; sometimes AI manages to trick identity verification systems too.

To verify that you’re talking to a real human, try asking for a video call or doing a reverse image search on their photos. If you want to level up your detection skills, check out our three posts on how to spot fakes: from photos and audio recordings to real-time deepfake video — like the kind used in live video chats.

Phishing and scams

Picture this: you’ve been hitting it off with a new connection for a while, and then, totally out of the blue, they drop a suspicious link and ask you to follow it. Maybe they want you to “help pick out seats” or “buy movie tickets”. Even if you feel like you’ve built up a real bond, there’s a chance your match is a scammer (or just a bot), and the link is malicious.

Telling you to “never click a malicious link” is pretty useless advice — it’s not like they come with a warning label. Instead, try this: to make sure your browsing stays safe, use a Kaspersky Premium that automatically blocks phishing attempts and keeps you off sketchy sites.

Keep in mind that there’s an even more sophisticated scheme out there known as “Pig Butchering”. In these cases, the scammer might chat with the victim for weeks or even months. Sadly, it ends badly: after lulling the victim into a false sense of security through friendly or romantic banter, the scammer casually nudges them toward a “can’t-miss crypto investment” — and then vanishes along with the “invested” funds.

Stalking and doxing

The internet is full of horror stories about obsessed creepers, harassment, and stalking. That’s exactly why posting photos that reveal where you live or work — or telling strangers about your favorite local hangouts — is a bad move. We’ve previously covered how to avoid becoming a victim of doxing (the gathering and public release of your personal info without your consent). Your first step is to lock down the privacy settings on all your social media and apps using our free Privacy Checker tool.

We also recommend stripping metadata from your photos and videos before you post or send them; many sites and apps don’t do this for you. Metadata can allow anyone who downloads your photo to pinpoint the exact coordinates of where it was taken.

Finally, don’t forget about your physical safety. Before heading out on a date, it’s a smart move to share your live geolocation, and set up a safe word or a code phrase with a trusted friend to signal if things start feeling off.

Sextortion and nudes

We don’t recommend ever sending intimate photos to strangers. Honestly, we don’t even recommend sending them to people you do know — you never know how things might go sideways down the road. But if a conversation has already headed in that direction, suggest moving it to an app with end-to-end encryption that supports self-destructing messages (like “delete after viewing”). Telegram’s Secret Chats are great for this (plus — they block screenshots!), as are other secure messengers. If you do find yourself in a bad spot, check out our posts on what to do if you’re a victim of sextortion and how to get leaked nudes removed from the internet.

More on love, security (and robots):

Criminals are using AI website builders to clone major brands

12 February 2026 at 09:03

AI tool Vercel was abused by cybercriminals to create a Malwarebytes lookalike website.

Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam.

One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites.​​

Registering a plausible-looking domain is cheap and fast, especially through registrars and resellers that do little or no upfront vetting. Once attackers have a name that looks close enough to the real thing, they can use AI-powered tools to copy layouts, colors, and branding elements, and generate product pages, sign-up flows, and FAQs that look “on brand.”

A flood of fake “official” sites

Data from recent holiday seasons shows just how routine large-scale domain abuse has become.

Over a three‑month period leading into the 2025 shopping season, researchers observed more than 18,000 holiday‑themed domains with lures like “Christmas,” “Black Friday,” and “Flash Sale,” with at least 750 confirmed as malicious and many more still under investigation. In the same window, about 19,000 additional domains were registered explicitly to impersonate major retail brands, nearly 3,000 of which were already hosting phishing pages or fraudulent storefronts.

These sites are used for everything from credential harvesting and payment fraud to malware delivery disguised as “order trackers” or “security updates.”

Attackers then boost visibility using SEO poisoning, ad abuse, and comment spam, nudging their lookalike sites into search results and promoting them in social feeds right next to the legitimate ones. From a user’s perspective, especially on mobile without the hover function, that fake site can be only a typo or a tap away.​

When the impersonation hits home

A recent example shows how low the barrier to entry has become.

We were alerted to a site at installmalwarebytes[.]org that masqueraded from logo to layout as a genuine Malwarebytes site.

Close inspection revealed that the HTML carried a meta tag value pointing to v0 by Vercel, an AI-assisted app and website builder.

Built by v0

The tool lets users paste an existing URL into a prompt to automatically recreate its layout, styling, and structure—producing a near‑perfect clone of a site in very little time.

The history of the imposter domain tells an incremental evolution into abuse.

Registered in 2019, the site did not initially contain any Malwarebytes branding. In 2022, the operator began layering in Malwarebytes branding while publishing Indonesian‑language security content. This likely helped with search reputation while normalizing the brand look to visitors. Later, the site went blank, with no public archive records for 2025, only to resurface as a full-on clone backed by AI‑assisted tooling.​

Traffic did not arrive by accident. Links to the site appeared in comment spam and injected links on unrelated websites, giving users the impression of organic references and driving them toward the fake download pages.

Payment flows were equally opaque. The fake site used PayPal for payments, but the integration hid the merchant’s name and logo from the user-facing confirmation screens, leaving only the buyer’s own details visible. That allowed the criminals to accept money while revealing as little about themselves as possible.

PayPal module

Behind the scenes, historical registration data pointed to an origin in India and to a hosting IP (209.99.40[.]222) associated with domain parking and other dubious uses rather than normal production hosting.

Combined with the AI‑powered cloning and the evasive payment configuration, it painted a picture of low‑effort, high‑confidence fraud.

AI website builders as force multipliers

The installmalwarebytes[.]org case is not an isolated misuse of AI‑assisted builders. It fits into a broader pattern of attackers using generative tools to create and host phishing sites at scale.

Threat intelligence teams have documented abuse of Vercel’s v0 platform to generate fully functional phishing pages that impersonate sign‑in portals for a variety of brands, including identity providers and cloud services, all from simple text prompts. Once the AI produces a clone, criminals can tweak a few links to point to their own credential‑stealing backends and go live in minutes.

Research into AI’s role in modern phishing shows that attackers are leaning heavily on website generators, writing assistants, and chatbots to streamline the entire kill chain—from crafting persuasive copy in multiple languages to spinning up responsive pages that render cleanly across devices. One analysis of AI‑assisted phishing campaigns found that roughly 40% of observed abuse involved website generation services, 30% involved AI writing tools, and about 11% leveraged chatbots, often in combination. This stack lets even low‑skilled actors produce professional-looking scams that used to require specialized skills or paid kits.​

Growth first, guardrails later

The core problem is not that AI can build websites. It’s that the incentives around AI platform development are skewed. Vendors are under intense pressure to ship new capabilities, grow user bases, and capture market share, and that pressure often runs ahead of serious investment in abuse prevention.

As Malwarebytes General Manager Mark Beare put it:

“AI-powered website builders like Lovable and Vercel have dramatically lowered the barrier for launching polished sites in minutes. While these platforms include baseline security controls, their core focus is speed, ease of use, and growth—not preventing brand impersonation at scale. That imbalance creates an opportunity for bad actors to move faster than defenses, spinning up convincing fake brands before victims or companies can react.”

Site generators allow cloned branding of well‑known companies with no verification, publishing flows skip identity checks, and moderation either fails quietly or only reacts after an abuse report. Some builders let anyone spin up and publish a site without even confirming an email address, making it easy to burn through accounts as soon as one is flagged or taken down.

To be fair, there are signs that some providers are starting to respond by blocking specific phishing campaigns after disclosure or by adding limited brand-protection controls. But these are often reactive fixes applied after the damage is done.

Meanwhile, attackers can move to open‑source clones or lightly modified forks of the same tools hosted elsewhere, where there may be no meaningful content moderation at all.

In practice, the net effect is that AI companies benefit from the growth and experimentation that comes with permissive tooling, while the consequences is left to victims and defenders.

We have blocked the domain in our web protection module and requested a domain and vendor takedown.

How to stay safe

End users cannot fix misaligned AI incentives, but they can make life harder for brand impersonators. Even when a cloned website looks convincing, there are red flags to watch for:

  • Before completing any payment, always review the “Pay to” details or transaction summary. If no merchant is named, back out and treat the site as suspicious.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.
  • Do not follow links posted in comments, on social media, or unsolicited emails to buy a product. Always follow a verified and trusted method to reach the vendor.

If you come across a fake Malwarebytes website, please let us know.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Criminals are using AI website builders to clone major brands

12 February 2026 at 09:03

AI tool Vercel was abused by cybercriminals to create a Malwarebytes lookalike website.

Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam.

One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites.​​

Registering a plausible-looking domain is cheap and fast, especially through registrars and resellers that do little or no upfront vetting. Once attackers have a name that looks close enough to the real thing, they can use AI-powered tools to copy layouts, colors, and branding elements, and generate product pages, sign-up flows, and FAQs that look “on brand.”

A flood of fake “official” sites

Data from recent holiday seasons shows just how routine large-scale domain abuse has become.

Over a three‑month period leading into the 2025 shopping season, researchers observed more than 18,000 holiday‑themed domains with lures like “Christmas,” “Black Friday,” and “Flash Sale,” with at least 750 confirmed as malicious and many more still under investigation. In the same window, about 19,000 additional domains were registered explicitly to impersonate major retail brands, nearly 3,000 of which were already hosting phishing pages or fraudulent storefronts.

These sites are used for everything from credential harvesting and payment fraud to malware delivery disguised as “order trackers” or “security updates.”

Attackers then boost visibility using SEO poisoning, ad abuse, and comment spam, nudging their lookalike sites into search results and promoting them in social feeds right next to the legitimate ones. From a user’s perspective, especially on mobile without the hover function, that fake site can be only a typo or a tap away.​

When the impersonation hits home

A recent example shows how low the barrier to entry has become.

We were alerted to a site at installmalwarebytes[.]org that masqueraded from logo to layout as a genuine Malwarebytes site.

Close inspection revealed that the HTML carried a meta tag value pointing to v0 by Vercel, an AI-assisted app and website builder.

Built by v0

The tool lets users paste an existing URL into a prompt to automatically recreate its layout, styling, and structure—producing a near‑perfect clone of a site in very little time.

The history of the imposter domain tells an incremental evolution into abuse.

Registered in 2019, the site did not initially contain any Malwarebytes branding. In 2022, the operator began layering in Malwarebytes branding while publishing Indonesian‑language security content. This likely helped with search reputation while normalizing the brand look to visitors. Later, the site went blank, with no public archive records for 2025, only to resurface as a full-on clone backed by AI‑assisted tooling.​

Traffic did not arrive by accident. Links to the site appeared in comment spam and injected links on unrelated websites, giving users the impression of organic references and driving them toward the fake download pages.

Payment flows were equally opaque. The fake site used PayPal for payments, but the integration hid the merchant’s name and logo from the user-facing confirmation screens, leaving only the buyer’s own details visible. That allowed the criminals to accept money while revealing as little about themselves as possible.

PayPal module

Behind the scenes, historical registration data pointed to an origin in India and to a hosting IP (209.99.40[.]222) associated with domain parking and other dubious uses rather than normal production hosting.

Combined with the AI‑powered cloning and the evasive payment configuration, it painted a picture of low‑effort, high‑confidence fraud.

AI website builders as force multipliers

The installmalwarebytes[.]org case is not an isolated misuse of AI‑assisted builders. It fits into a broader pattern of attackers using generative tools to create and host phishing sites at scale.

Threat intelligence teams have documented abuse of Vercel’s v0 platform to generate fully functional phishing pages that impersonate sign‑in portals for a variety of brands, including identity providers and cloud services, all from simple text prompts. Once the AI produces a clone, criminals can tweak a few links to point to their own credential‑stealing backends and go live in minutes.

Research into AI’s role in modern phishing shows that attackers are leaning heavily on website generators, writing assistants, and chatbots to streamline the entire kill chain—from crafting persuasive copy in multiple languages to spinning up responsive pages that render cleanly across devices. One analysis of AI‑assisted phishing campaigns found that roughly 40% of observed abuse involved website generation services, 30% involved AI writing tools, and about 11% leveraged chatbots, often in combination. This stack lets even low‑skilled actors produce professional-looking scams that used to require specialized skills or paid kits.​

Growth first, guardrails later

The core problem is not that AI can build websites. It’s that the incentives around AI platform development are skewed. Vendors are under intense pressure to ship new capabilities, grow user bases, and capture market share, and that pressure often runs ahead of serious investment in abuse prevention.

As Malwarebytes General Manager Mark Beare put it:

“AI-powered website builders like Lovable and Vercel have dramatically lowered the barrier for launching polished sites in minutes. While these platforms include baseline security controls, their core focus is speed, ease of use, and growth—not preventing brand impersonation at scale. That imbalance creates an opportunity for bad actors to move faster than defenses, spinning up convincing fake brands before victims or companies can react.”

Site generators allow cloned branding of well‑known companies with no verification, publishing flows skip identity checks, and moderation either fails quietly or only reacts after an abuse report. Some builders let anyone spin up and publish a site without even confirming an email address, making it easy to burn through accounts as soon as one is flagged or taken down.

To be fair, there are signs that some providers are starting to respond by blocking specific phishing campaigns after disclosure or by adding limited brand-protection controls. But these are often reactive fixes applied after the damage is done.

Meanwhile, attackers can move to open‑source clones or lightly modified forks of the same tools hosted elsewhere, where there may be no meaningful content moderation at all.

In practice, the net effect is that AI companies benefit from the growth and experimentation that comes with permissive tooling, while the consequences is left to victims and defenders.

We have blocked the domain in our web protection module and requested a domain and vendor takedown.

How to stay safe

End users cannot fix misaligned AI incentives, but they can make life harder for brand impersonators. Even when a cloned website looks convincing, there are red flags to watch for:

  • Before completing any payment, always review the “Pay to” details or transaction summary. If no merchant is named, back out and treat the site as suspicious.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.
  • Do not follow links posted in comments, on social media, or unsolicited emails to buy a product. Always follow a verified and trusted method to reach the vendor.

If you come across a fake Malwarebytes website, please let us know.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Spam and phishing in 2025

The year in figures

  • 44.99% of all emails sent worldwide and 43.27% of all emails sent in the Russian web segment were spam
  • 32.50% of all spam emails were sent from Russia
  • Kaspersky Mail Anti-Virus blocked 144,722,674 malicious email attachments
  • Our Anti-Phishing system thwarted 554,002,207 attempts to follow phishing links

Phishing and scams in 2025

Entertainment-themed phishing attacks and scams

In 2025, online streaming services remained a primary theme for phishing sites within the entertainment sector, typically by offering early access to major premieres ahead of their official release dates. Alongside these, there was a notable increase in phishing pages mimicking ticket aggregation platforms for live events. Cybercriminals lured users with offers of free tickets to see popular artists on pages that mirrored the branding of major ticket distributors. To participate in these “promotions”, victims were required to pay a nominal processing or ticket-shipping fee. Naturally, after paying the fee, the users never received any tickets.

In addition to concert-themed bait, other music-related scams gained significant traction. Users were directed to phishing pages and prompted to “vote for their favorite artist”, a common activity within fan communities. To bolster credibility, the scammers leveraged the branding of major companies like Google and Spotify. This specific scheme was designed to harvest credentials for multiple platforms simultaneously, as users were required to sign in with their Facebook, Instagram, or email credentials to participate.

As a pretext for harvesting Spotify credentials, attackers offered users a way to migrate their playlists to YouTube. To complete the transfer, victims were to just enter their Spotify credentials.

Beyond standard phishing, threat actors leveraged Spotify’s popularity for scams. In Brazil, scammers promoted a scheme where users were purportedly paid to listen to and rate songs.

To “withdraw” their earnings, users were required to provide their identification number for PIX, Brazil’s instant payment system.

Users were then prompted to verify their identity. To do so, the victim was required to make a small, one-time “verification payment”, an amount significantly lower than the potential earnings.

The form for submitting this “verification payment” was designed to appear highly authentic, even requesting various pieces of personal data. It is highly probable that this data was collected for use in subsequent attacks.

In another variation, users were invited to participate in a survey in exchange for a $1000 gift card. However, in a move typical of a scam, the victim was required to pay a small processing or shipping fee to claim the prize. Once the funds were transferred, the attackers vanished, and the website was taken offline.

Even deciding to go to an art venue with a girl from a dating site could result in financial loss. In this scenario, the “date” would suggest an in-person meeting after a brief period of rapport-building. They would propose a relatively inexpensive outing, such as a movie or a play at a niche theater. The scammer would go so far as to provide a link to a specific page where the victim could supposedly purchase tickets for the event.

To enhance the site’s perceived legitimacy, it even prompted the user to select their city of residence.

However, once the “ticket payment” was completed, both the booking site and the individual from the dating platform would vanish.

A similar tactic was employed by scam sites selling tickets for escape rooms. The design of these pages closely mirrored legitimate websites to lower the target’s guard.

Phishing pages masquerading as travel portals often capitalize on a sense of urgency, betting that a customer eager to book a “last-minute deal” will overlook an illegitimate URL. For example, the fraudulent page shown below offered exclusive tours of Japan, purportedly from a major Japanese tour operator.

Sensitive data at risk: phishing via government services

To harvest users’ personal data, attackers utilized a traditional phishing framework: fraudulent forms for document processing on sites posing as government portals. The visual design and content of these phishing pages meticulously replicated legitimate websites, offering the same services found on official sites. In Brazil, for instance, attackers collected personal data from individuals under the pretext of issuing a Rural Property Registration Certificate (CCIR).

Through this method, fraudsters tried to gain access to the victim’s highly sensitive information, including their individual taxpayer registry (CPF) number. This identifier serves as a unique key for every Brazilian national to access private accounts on government portals. It is also utilized in national databases and displayed on personal identification documents, making its interception particularly dangerous. Scammer access to this data poses a severe risk of identity theft, unauthorized access to government platforms, and financial exposure.

Furthermore, users were at risk of direct financial loss: in certain instances, the attackers requested a “processing fee” to facilitate the issuance of the important document.

Fraudsters also employed other methods to obtain CPF numbers. Specifically, we discovered phishing pages mimicking the official government service portal, which requires the CPF for sign-in.

Another theme exploited by scammers involved government payouts. In 2025, Singaporean citizens received government vouchers ranging from $600 to $800 in honor of the country’s 60th anniversary. To redeem these, users were required to sign in to the official program website. Fraudsters rushed to create web pages designed to mimic this site. Interestingly, the primary targets in this campaign were Telegram accounts, despite the fact that Telegram credentials were not a requirement for signing in to the legitimate portal.

We also identified a scam targeting users in Norway who were looking to renew or replace their driver’s licenses. Upon opening a website masquerading as the official Norwegian Public Roads Administration website, visitors were prompted to enter their vehicle registration and phone numbers.

Next, the victim was prompted for sensitive data, such as the personal identification number unique to every Norwegian citizen. By doing so, the attackers not only gained access to confidential information but also reinforced the illusion that the victim was interacting with an official website.

Once the personal data was submitted, a fraudulent page would appear, requesting a “processing fee” of 1200 kroner. If the victim entered their credit card details, the funds were transferred directly to the scammers with no possibility of recovery.

In Germany, attackers used the pretext of filing tax returns to trick users into providing their email user names and passwords on phishing pages.

A call to urgent action is a classic tactic in phishing scenarios. When combined with the threat of losing property, these schemes become highly effective bait, distracting potential victims from noticing an incorrect URL or a poorly designed website. For example, a phishing warning regarding unpaid vehicle taxes was used as a tool by attackers targeting credentials for the UK government portal.

We have observed that since the spring of 2025, there has been an increase in emails mimicking automated notifications from the Russian government services portal. These messages were distributed under the guise of application status updates and contained phishing links.

We also recorded vishing attacks targeting users of government portals. Victims were prompted to “verify account security” by calling a support number provided in the email. To lower the users’ guard, the attackers included fabricated technical details in the emails, such as the IP address, device model, and timestamp of an alleged unauthorized sign-in.

Last year, attackers also disguised vishing emails as notifications from microfinance institutions or credit bureaus regarding new loan applications. The scammers banked on the likelihood that the recipient had not actually applied for a loan. They would then prompt the victim to contact a fake support service via a spoofed support number.

Know Your Customer

As an added layer of data security, many services now implement biometric verification (facial recognition, fingerprints, and retina scans), as well as identity document verification and digital signatures. To harvest this data, fraudsters create clones of popular platforms that utilize these verification protocols. We have previously detailed the mechanics of this specific type of data theft.

In 2025, we observed a surge in phishing attacks targeting users under the guise of Know Your Customer (KYC) identity verification. KYC protocols rely on a specific set of user data for identification. By spoofing the pages of payment services such as Vivid Money, fraudsters harvested the information required to pass KYC authentication.

Notably, this threat also impacted users of various other platforms that utilize KYC procedures.

A distinctive feature of attacks on the KYC process is that, in addition to the victim’s full name, email address, and phone number, phishers request photos of their passport or face, sometimes from multiple angles. If this information falls into the hands of threat actors, the consequences extend beyond the loss of account access; the victim’s credentials can be sold on dark web marketplaces, a trend we have highlighted in previous reports.

Messaging app phishing

Account hijacking on messaging platforms like WhatsApp and Telegram remains one of the primary objectives of phishing and scam operations. While traditional tactics, such as suspicious links embedded in messages, have been well-known for some time, the methods used to steal credentials are becoming increasingly sophisticated.

For instance, Telegram users were invited to participate in a prize giveaway purportedly hosted by a famous athlete. This phishing attack, which masqueraded as an NFT giveaway, was executed through a Telegram Mini App. This marks a shift in tactics, as attackers previously relied on external web pages for these types of schemes.

In 2025, new variations emerged within the familiar framework of distributing phishing links via Telegram. For example, we observed prompts inviting users to vote for the “best dentist” or “best COO” in town.

The most prevalent theme in these voting-based schemes, children’s contests, was distributed primarily through WhatsApp. These phishing pages showed little variety; attackers utilized a standardized website design and set of “bait” photos, simply localizing the language based on the target audience’s geographic location.

To participate in the vote, the victim was required to enter the phone number linked to their WhatsApp account.

They were then prompted to provide a one-time authentication code for the messaging app.

The following are several other popular methods used by fraudsters to hijack user credentials.

In China, phishing pages meticulously replicated the WhatsApp interface. Victims were notified that their accounts had purportedly been flagged for “illegal activity”, necessitating “additional verification”.

The victim was redirected to a page to enter their phone number, followed by a request for their authorization code.

In other instances, users received messages allegedly from WhatsApp support regarding account authentication via SMS. As with the other scenarios described, the attackers’ objective was to obtain the authentication code required to hijack the account.

Fraudsters enticed WhatsApp users with an offer to link an app designed to “sync communications” with business contacts.

To increase the perceived legitimacy of the phishing site, the attackers even prompted users to create custom credentials for the page.

After that, the user was required to “purchase a subscription” to activate the application. This allowed the scammers to harvest credit card data, leaving the victim without the promised service.

To lure Telegram users, phishers distributed invitations to online dating chats.

Attackers also heavily leveraged the promise of free Telegram Premium subscriptions. While these phishing pages were previously observed only in Russian and English, the linguistic scope of these campaigns expanded significantly this year. As in previous iterations, activating the subscription required the victim to sign in to their account, which could result in the loss of account access.

Exploiting the ChatGPT hype

Artificial intelligence is increasingly being leveraged by attackers as bait. For example, we have identified fraudulent websites mimicking the official payment page for ChatGPT Plus subscriptions.

Social media marketing through LLMs was also a potential focal point for user interest. Scammers offered “specialized prompt kits” designed for social media growth; however, once payment was received, they vanished, leaving victims without the prompts or their money.

The promise of easy income through neural networks has emerged as another tactic to attract potential victims. Fraudsters promoted using ChatGPT to place bets, promising that the bot would do all the work while the user collected the profits. These services were offered at a “special price” valid for only 15 minutes after the page was opened. This narrow window prevented the victim from critically evaluating the impulse purchase.

Job opportunities with a catch

To attract potential victims, scammers exploited the theme of employment by offering high-paying remote positions. Applicants responding to these advertisements did more than just disclose their personal data; in some cases, fraudsters requested a small sum under the pretext of document processing or administrative fees. To convince victims that the offer was legitimate, attackers impersonated major brands, leveraging household names to build trust. This allowed them to lower the victims’ guard, even when the employment terms sounded too good to be true.

We also observed schemes where, after obtaining a victim’s data via a phishing site, scammers would follow up with a phone call – a tactic aimed at tricking the user into disclosing additional personal data.

By analyzing current job market trends, threat actors also targeted popular career paths to steal messaging app credentials. These phishing schemes were tailored to specific regional markets. For example, in the UAE, fake “employment agency” websites were circulating.

In a more sophisticated variation, users were asked to complete a questionnaire that required the phone number linked to their Telegram account.

To complete the registration, users were prompted for a code which, in reality, was a Telegram authorization code.

Notably, the registration process did not end there; the site continued to request additional information to “set up an account” on the fraudulent platform. This served to keep victims in the dark, maintaining their trust in the malicious site’s perceived legitimacy.

After finishing the registration, the victim was told to wait 24 hours for “verification”, though the scammers’ primary objective, hijacking the Telegram account, had already been achieved.

Simpler phishing schemes were also observed, where users were redirected to a page mimicking the Telegram interface. By entering their phone number and authorization code, victims lost access to their accounts.

Job seekers were not the only ones targeted by scammers. Employers’ accounts were also in the crosshairs, specifically on a major Russian recruitment portal. On a counterfeit page, the victim was asked to “verify their account” in order to post a job listing, which required them to enter their actual sign-in credentials for the legitimate site.

Spam in 2025

Malicious attachments

Password-protected archives

Attackers began aggressively distributing messages with password-protected malicious archives in 2024. Throughout 2025, these archives remained a popular vector for spreading malware, and we observed a variety of techniques designed to bypass security solutions.

For example, threat actors sent emails impersonating law firms, threatening victims with legal action over alleged “unauthorized domain name use”. The recipient was prompted to review potential pre-trial settlement options detailed in an attached document. The attachment consisted of an unprotected archive containing a secondary password-protected archive and a file with the password. Disguised as a legal document within this inner archive was a malicious WSF file, which installed a Trojan into the system via startup. The Trojan then stealthily downloaded and installed Tor, which allowed it to regularly exfiltrate screenshots to the attacker-controlled C2 server.

In addition to archives, we also encountered password-protected PDF files containing malicious links over the past year.

E-signature service exploits

Emails using the pretext of “signing a document” to coerce users into clicking phishing links or opening malicious attachments were quite common in 2025. The most prevalent scheme involved fraudulent notifications from electronic signature services. While these were primarily used for phishing, one specific malware sample identified within this campaign is of particular interest.

The email, purportedly sent from a well-known document-sharing platform, notified the recipient that they had been granted access to a “contract” attached to the message. However, the attachment was not the expected PDF; instead, it was a nested email file named after the contract. The body of this nested message mirrored the original, but its attachment utilized a double extension: a malicious SVG file containing a Trojan was disguised as a PDF document. This multi-layered approach was likely an attempt to obfuscate the malware and bypass security filters.

“Business correspondence” impersonating industrial companies

In the summer of last year, we observed mailshots sent in the name of various existing industrial enterprises. These emails contained DOCX attachments embedded with Trojans. Attackers coerced victims into opening the malicious files under the pretext of routine business tasks, such as signing a contract or drafting a report.

The authors of this malicious campaign attempted to lower users’ guard by using legitimate industrial sector domains in the “From” address. Furthermore, the messages were routed through the mail servers of a reputable cloud provider, ensuring the technical metadata appeared authentic. Consequently, even a cautious user could mistake the email for a genuine communication, open the attachment, and compromise their device.

Attacks on hospitals

Hospitals were a popular target for threat actors this past year: they were targeted with malicious emails impersonating well-known insurance providers. Recipients were threatened with legal action regarding alleged “substandard medical services”. The attachments, described as “medical records and a written complaint from an aggrieved patient”, were actually malware. Our solutions detect this threat as Backdoor.Win64.BrockenDoor, a backdoor capable of harvesting system information and executing malicious commands on the infected device.

We also came across emails with a different narrative. In those instances, medical staff were requested to facilitate a patient transfer from another hospital for ongoing observation and treatment. These messages referenced attached medical files containing diagnostic and treatment history, which were actually archives containing malicious payloads.

To bolster the perceived legitimacy of these communications, attackers did more than just impersonate famous insurers and medical institutions; they registered look-alike domains that mimicked official organizations’ domains by appending keywords such as “-insurance” or “-med.” Furthermore, to lower the victims’ guard, scammers included a fake “Scanned by Email Security” label.

Messages containing instructions to run malicious scripts

Last year, we observed unconventional infection chains targeting end-user devices. Threat actors continued to distribute instructions for downloading and executing malicious code, rather than attaching the malware files directly. To convince the recipient to follow these steps, attackers typically utilized a lure involving a “critical software update” or a “system patch” to fix a purported vulnerability. Generally, the first step in the instructions required launching the command prompt with administrative privileges, while the second involved entering a command to download and execute the malware: either a script or an executable file.

In some instances, these instructions were contained within a PDF file. The victim was prompted to copy a command into PowerShell that was neither obfuscated nor hidden. Such schemes target non-technical users who would likely not understand the command’s true intent and would unknowingly infect their own devices.

Scams

Law enforcement impersonation scams in the Russian web segment

In 2025, extortion campaigns involving actors posing as law enforcement – a trend previously more prevalent in Europe – were adapted to target users across the Commonwealth of Independent States.

For example, we identified messages disguised as criminal subpoenas or summonses purportedly issued by Russian law enforcement agencies. However, the specific departments cited in these emails never actually existed. The content of these “summonses” would also likely raise red flags for a cautious user. This blackmail scheme relied on the victim, in their state of panic, not scrutinizing the contents of the fake summons.

To intimidate recipients, the attackers referenced legal frameworks and added forged signatures and seals to the “subpoenas”. In reality, neither the cited statutes nor the specific civil service positions exist in Russia.

We observed similar attacks – employing fabricated government agencies and fictitious legal acts – in other CIS countries, such as Belarus.

Fraudulent investment schemes

Threat actors continued to aggressively exploit investment themes in their email scams. These emails typically promise stable, remote income through “exclusive” investment opportunities. This remains one of the most high-volume and adaptable categories of email scams. Threat actors embedded fraudulent links both directly within the message body and inside various types of attachments: PDF, DOC, PPTX, and PNG files. Furthermore, they increasingly leveraged legitimate Google services, such as Google Docs, YouTube, and Google Forms, to distribute these communications. The link led to the site of the “project” where the victim was prompted to provide their phone number and email. Subsequently, users were invited to invest in a non-existent project.

We have previously documented these mailshots: they were originally targeted at Russian-speaking users and were primarily distributed under the guise of major financial institutions. However, in 2025, this investment-themed scam expanded into other CIS countries and Europe. Furthermore, the range of industries that spammers impersonated grew significantly. For instance, in their emails, attackers began soliciting investments for projects supposedly led by major industrial-sector companies in Kazakhstan and the Czech Republic.

Fraudulent “brand partner” recruitment

This specific scam operates through a multi-stage workflow. First, the target company receives a communication from an individual claiming to represent a well-known global brand, inviting them to register as a certified supplier or business partner. To bolster the perceived authenticity of the offer, the fraudsters send the victim an extensive set of forged documents. Once these documents are signed, the victim is instructed to pay a “deposit”, which the attackers claim will be fully refunded once the partnership is officially established.

These mailshots were first detected in 2025 and have rapidly become one of the most prevalent forms of email-based fraud. In December 2025 alone, we blocked over 80,000 such messages. These campaigns specifically targeted the B2B sector and were notable for their high level of variation – ranging from their technical properties to the diversity of the message content and the wide array of brands the attackers chose to impersonate.

Fraudulent overdue rent notices

Last year, we identified a new theme in email scams: recipients were notified that the payment deadline for a leased property had expired and were urged to settle the “debt” immediately. To prevent the victim from sending funds to their actual landlord, the email claimed that banking details had changed. The “debtor” was then instructed to request the new payment information – which, of course, belonged to the fraudsters. These mailshots primarily targeted French-speaking countries; however, in December 2025, we discovered a similar scam variant in German.

QR codes in scam letters

In 2025, we observed a trend where QR codes were utilized not only in phishing attempts but also in extortion emails. In a classic blackmail scam, the user is typically intimidated by claims that hackers have gained access to sensitive data. To prevent the public release of this information, the attackers demand a ransom payment to their cryptocurrency wallet.

Previously, to bypass email filters, scammers attempted to obfuscate the wallet address by using various noise contamination techniques. In last year’s campaigns, however, scammers shifted to including a QR code that contained the cryptocurrency wallet address.

News agenda

As in previous years, spammers in 2025 aggressively integrated current events into their fraudulent messaging to increase engagement.

For example, following the launch of $TRUMP memecoins surrounding Donald Trump’s inauguration, we identified scam campaigns promoting the “Trump Meme Coin” and “Trump Digital Trading Cards”. In these instances, scammers enticed victims to click a link to claim “free NFTs”.

We also observed ads offering educational credentials. Spammers posted these ads as comments on legacy, unmoderated forums; this tactic ensured that notifications were automatically pushed to all users subscribed to the thread. These notifications either displayed the fraudulent link directly in the comment preview or alerted users to a new post that redirected them to spammers’ sites.

In the summer, when the wedding of Amazon founder Jeff Bezos became a major global news story, users began receiving Nigerian-style scam messages purportedly from Bezos himself, as well as from his former wife, MacKenzie Scott. These emails promised recipients substantial sums of money, framed either as charitable donations or corporate compensation from Amazon.

During the BLACKPINK world tour, we observed a wave of spam advertising “luggage scooters”. The scammers claimed these were the exact motorized suitcases used by the band members during their performances.

Finally, in the fall of 2025, traditionally timed to coincide with the launch of new iPhones, we identified scam campaigns featuring surveys that offered participants a chance to “win” a fictitious iPhone 17 Pro.

After completing a brief survey, the user was prompted to provide their contact information and physical address, as well as pay a “delivery fee” – which was the scammers’ ultimate objective. Upon entering their credit card details into the fraudulent site, the victim risked losing not only the relatively small delivery charge but also the entire balance in their bank account.

The widespread popularity of Ozempic was also reflected in spam campaigns; users were bombarded with offers to purchase versions of the drug or questionable alternatives.

Localized news events also fall under the scrutiny of fraudsters, serving as the basis for scam narratives. For instance, last summer, coinciding with the opening of the tax season in South Africa, we began detecting phishing emails impersonating the South African Revenue Service (SARS). These messages notified taxpayers of alleged “outstanding balances” that required immediate settlement.

Methods of distributing email threats

Google services

In 2025, threat actors increasingly leveraged various Google services to distribute email-based threats. We observed the exploitation of Google Calendar: scammers would create an event containing a WhatsApp contact number in the description and send an invitation to the target. For instance, companies received emails regarding product inquiries that prompted them to move the conversation to the messaging app to discuss potential “collaboration”.

Spammers employed a similar tactic using Google Classroom. We identified samples offering SEO optimization services that likewise directed victims to a WhatsApp number for further communication.

We also detected the distribution of fraudulent links via legitimate YouTube notifications. Attackers would reply to user comments under various videos, triggering an automated email notification to the victim. This email contained a link to a video that displayed only a message urging the viewer to “check the description”, where the actual link to the scam site was located. As the victim received an email containing the full text of the fraudulent comment, they were often lured through this chain of links, eventually landing on the scam site.

Over the past two years or so, there has been a significant rise in attacks utilizing Google Forms. Fraudsters create a survey with an enticing title and place the scam messaging directly in the form’s description. They then submit the form themselves, entering the victims’ email addresses into the field for the respondent email. This triggers legitimate notifications from the Google Forms service to the targeted addresses. Because these emails originate from Google’s own mail servers, they appear authentic to most spam filters. The attackers rely on the victim focusing on the “bait” description containing the fraudulent link rather than the standard form header.

Google Groups also emerged as a popular tool for spam distribution last year. Scammers would create a group, add the victims’ email addresses as members, and broadcast spam through the service. This scheme proved highly effective: even if a security solution blocked the initial spam message, the user could receive a deluge of automated replies from other addresses on the member list.

At the end of 2025, we encountered a legitimate email in terms of technical metadata that was sent via Google and contained a fraudulent link. The message also included a verification code for the recipient’s email address. To generate this notification, scammers filled out the account registration form in a way that diverted the recipient’s attention toward a fraudulent site. For example, instead of entering a first and last name, the attackers inserted text such as “Personal Link” followed by a phishing URL, utilizing noise contamination techniques. By entering the victim’s email address into the registration field, the scammers triggered a legitimate system notification containing the fraudulent link.

OpenAI

In addition to Google services, spammers leveraged other platforms to distribute email threats, notably OpenAI, riding the wave of artificial intelligence popularity. In 2025, we observed emails sent via the OpenAI platform into which spammers had injected short messages, fraudulent links, or phone numbers.

This occurs during the account registration process on the OpenAI platform, where users are prompted to create an organization to generate an API key. Spammers placed their fraudulent content directly into the field designated for the organization’s name. They then added the victims’ email addresses as organization members, triggering automated platform invitations that delivered the fraudulent links or contact numbers directly to the targets.

Spear phishing and BEC attacks in 2025

QR codes

The use of QR codes in spear phishing has become a conventional tactic that threat actors continued to employ throughout 2025. Specifically, we observed the persistence of a major trend identified in our previous report: the distribution of phishing documents disguised as notifications from a company’s HR department.

In these campaigns, attackers impersonated HR team members, requesting that employees review critical documentation, such as a new corporate policy or code of conduct. These documents were typically attached to the email as PDF files.

Phishing notification about "new corporate policies"

Phishing notification about “new corporate policies”

To maintain the ruse, the PDF document contained a highly convincing call to action, prompting the user to scan a QR code to access the relevant file. While attackers previously embedded these codes directly into the body of the email, last year saw a significant shift toward placing them within attachments – most likely in an attempt to bypass email security filters.

Malicious PDF content

Malicious PDF content

Upon scanning the QR code within the attachment, the victim was redirected to a phishing page meticulously designed to mimic a Microsoft authentication form.

Phishing page with an authentication form

Phishing page with an authentication form

In addition to fraudulent HR notifications, threat actors created scheduled meetings within the victim’s email calendar, placing DOC or PDF files containing QR codes in the event descriptions. Leveraging calendar invites to distribute malicious links is a legacy technique that was widely observed during scam campaigns in 2019. After several years of relative dormancy, we saw a resurgence of this technique last year, now integrated into more sophisticated spear phishing operations.

Fake meeting invitation

Fake meeting invitation

In one specific example, the attachment was presented as a “new voicemail” notification. To listen to the recording, the user was prompted to scan a QR code and sign in to their account on the resulting page.

Malicious attachment content

Malicious attachment content

As in the previous scenario, scanning the code redirected the user to a phishing page, where they risked losing access to their Microsoft account or internal corporate sites.

Link protection services

Threat actors utilized more than just QR codes to hide phishing URLs and bypass security checks. In 2025, we discovered that fraudsters began weaponizing link protection services for the same purpose. The primary function of these services is to intercept and scan URLs at the moment of clicking to prevent users from reaching phishing sites or downloading malware. However, attackers are now abusing this technology by generating phishing links that security systems mistakenly categorize as “safe”.

This technique is employed in both mass and spear phishing campaigns. It is particularly dangerous in targeted attacks, which often incorporate employees’ personal data and mimic official corporate branding. When combined with these characteristics, a URL generated through a legitimate link protection service can significantly bolster the perceived authenticity of a phishing email.

"Protected" link in a phishing email

“Protected” link in a phishing email

After opening a URL that seemed safe, the user was directed to a phishing site.

Phishing page

Phishing page

BEC and fabricated email chains

In Business Email Compromise (BEC) attacks, threat actors have also begun employing new techniques, the most notable of which is the use of fake forwarded messages.

BEC email featuring a fabricated message thread

BEC email featuring a fabricated message thread

This BEC attack unfolded as follows. An employee would receive an email containing a previous conversation between the sender and another colleague. The final message in this thread was typically an automated out-of-office reply or a request to hand off a specific task to a new assignee. In reality, however, the entire initial conversation with the colleague was completely fabricated. These messages lacked the thread-index headers, as well as other critical header values, that would typically verify the authenticity of an actual email chain.

In the example at hand, the victim was pressured to urgently pay for a license using the provided banking details. The PDF attachments included wire transfer instructions and a counterfeit cover letter from the bank.

Malicious PDF content

Malicious PDF content

The bank does not actually have an office at the address provided in the documents.

Statistics: phishing

In 2025, Kaspersky solutions blocked 554,002,207 attempts to follow fraudulent links. In contrast to the trends of previous years, we did not observe any major spikes in phishing activity; instead, the volume of attacks remained relatively stable throughout the year, with the exception of a minor decline in December.

Anti-Phishing triggers, 2025 (download)

The phishing and scam landscape underwent a shift. While in 2024, we saw a high volume of mass attacks, their frequency declined in 2025. Furthermore, redirection-based schemes, which were frequently used for online fraud in 2024, became less prevalent in 2025.

Map of phishing attacks

As in the previous year, Peru remains the country with the highest percentage (17.46%) of users targeted by phishing attacks. Bangladesh (16.98%) took second place, entering the TOP 10 for the first time, while Malawi (16.65%), which was absent from the 2024 rankings, was third. Following these are Tunisia (16.19%), Colombia (15.67%), the latter also being a newcomer to the TOP 10, Brazil (15.48%), and Ecuador (15.27%). They are followed closely by Madagascar and Kenya, both with a 15.23% share of attacked users. Rounding out the list is Vietnam, which previously held the third spot, with a share of 15.05%.

Country/territory Share of attacked users**
Peru 17.46%
Bangladesh 16.98%
Malawi 16.65%
Tunisia 16.19%
Colombia 15.67%
Brazil 15.48%
Ecuador 15.27%
Madagascar 15.23%
Kenya 15.23%
Vietnam 15.05%

** Share of users who encountered phishing out of the total number of Kaspersky users in the country/territory, 2025

Top-level domains

In 2025, breaking a trend that had persisted for several years, the majority of phishing pages were hosted within the XYZ TLD zone, accounting for 21.64% – a three-fold increase compared to 2024. The second most popular zone was TOP (15.45%), followed by BUZZ (13.58%). This high demand can be attributed to the low cost of domain registration in these zones. The COM domain, which had previously held the top spot consistently, fell to fourth place (10.52%). It is important to note that this decline is partially driven by the popularity of typosquatting attacks: threat actors frequently spoof sites within the COM domain by using alternative suffixes, such as example-com.site instead of example.com. Following COM is the BOND TLD, entering the TOP 10 for the first time with a 5.56% share. As this zone is typically associated with financial websites, the surge in malicious interest there is a logical progression for financial phishing. The sixth and seventh positions are held by ONLINE (3.39%) and SITE (2.02%), which occupied the fourth and fifth spots, respectively, in 2024. In addition, three domain zones that had not previously appeared in our statistics emerged as popular hosting environments for phishing sites. These included the CFD domain (1.97%), typically used for websites in the clothing, fashion, and design sectors; the Polish national top-level domain, PL (1.75%); and the LOL domain (1.60%).

Most frequent top-level domains for phishing pages, 2025 (download)

Organizations targeted by phishing attacks

The rankings of organizations targeted by phishers are based on detections by the Anti-Phishing deterministic component on user computers. The component detects all pages with phishing content that the user has tried to open by following a link in an email message or on the web, as long as links to these pages are present in the Kaspersky database.

Phishing pages impersonating web services (27.42%) and global internet portals (15.89%) maintained their positions in the TOP 10, continuing to rank first and second, respectively. Online stores (11.27%), a traditional favorite among threat actors, returned to the third spot. In 2025, phishers showed increased interest in online gamers: websites mimicking gaming platforms jumped from ninth to fifth place (7.58%). These are followed by banks (6.06%), payment systems (5.93%), messengers (5.70%), and delivery services (5.06%). Phishing attacks also targeted social media (4.42%) and government services (1.77%) accounts.

Distribution of targeted organizations by category, 2025 (download)

Statistics: spam

Share of spam in email traffic

In 2025, the average share of spam in global email traffic was 44.99%, representing a decrease of 2.28 percentage points compared to the previous year. Notably, contrary to the trends of the past several years, the fourth quarter was the busiest one: an average of 49.26% of emails were categorized as spam, with peak activity occurring in November (52.87%) and December (51.80%). Throughout the rest of the year, the distribution of junk mail remained relatively stable without significant spikes, maintaining an average share of approximately 43.50%.

Share of spam in global email traffic, 2025 (download)

In the Russian web segment (Runet), we observed a more substantial decline: the average share of spam decreased by 5.3 percentage points to 43.27%. Deviating from the global trend, the fourth quarter was the quietest period in Russia, with a share of 41.28%. We recorded the lowest level of spam activity in December, when only 36.49% of emails were identified as junk. January and February were also relatively calm, with average values of 41.94% and 43.09%, respectively. Conversely, the Runet figures for March–October correlated with global figures: no major surges were observed, spam accounting for an average of 44.30% of total email traffic during these months.

Share of spam in Runet email traffic, 2025 (download)

Countries and territories where spam originated

The top three countries in the 2025 rankings for the volume of outgoing spam mirror the distribution of the previous year: Russia, China, and the United States. However, the share of spam originating from Russia decreased from 36.18% to 32.50%, while the shares of China (19.10%) and the U.S. (10.57%) each increased by approximately 2 percentage points. Germany rose to fourth place (3.46%), up from sixth last year, displacing Kazakhstan (2.89%). Hong Kong followed in sixth place (2.11%). The Netherlands and Japan shared the next spot with identical shares of 1.95%; however, we observed a year-over-year increase in outgoing spam from the Netherlands, whereas Japan saw a decline. The TOP 10 is rounded out by Brazil (1.94%) and Belarus (1.74%), the latter ranking for the first time.

TOP 20 countries and territories where spam originated in 2025 (download)

Malicious email attachments

In 2025, Kaspersky solutions blocked 144,722,674 malicious email attachments, an increase of nineteen million compared to the previous year. The beginning and end of the year were traditionally the most stable periods; however, we also observed a notable decline in activity during August and September. Peaks in email antivirus detections occurred in June, July, and November.

Email antivirus detections, 2025 (download)

The most prevalent malicious email attachment in 2025 was the Makoob Trojan family, which covertly harvests system information and user credentials. Makoob first entered the TOP 10 in 2023 in eighth place, rose to third in 2024, and secured the top spot in 2025 with a share of 4.88%. Following Makoob, as in the previous year, was the Badun Trojan family (4.13%), which typically disguises itself as electronic documents. The third spot is held by the Taskun family (3.68%), which creates malicious scheduled tasks, followed by Agensla stealers (3.16%), which were the most common malicious attachments in 2024. Next are Trojan.Win32.AutoItScript scripts (2.88%), appearing in the rankings for the first time. In sixth place is the Noon spyware for all Windows systems (2.63%), which also occupied the tenth spot with its variant specifically targeting 32-bit systems (1.10%). Rounding out the TOP 10 are Hoax.HTML.Phish (1.98%) phishing attachments, Guloader downloaders (1.90%) – a newcomer to the rankings – and Badur (1.56%) PDF documents containing suspicious links.

TOP 10 malware families distributed via email attachments, 2025 (download)

The distribution of specific malware samples traditionally mirrors the distribution of malware families almost exactly. The only differences are that a specific variant of the Agensla stealer ranked sixth instead of fourth (2.53%), and the Phish and Guloader samples swapped positions (1.58% and 1.78%, respectively). Rounding out the rankings in tenth place is the password stealer Trojan-PSW.MSIL.PureLogs.gen with a share of 1.02%.

TOP 10 malware samples distributed via email attachments, 2025 (download)

Countries and territories targeted by malicious mailings

The highest volume of malicious email attachments was blocked on devices belonging to users in China (13.74%). For the first time in two years, Russia dropped to second place with a share of 11.18%. Following closely behind are Mexico (8.18%) and Spain (7.70%), which swapped places compared to the previous year. Email antivirus triggers saw a slight increase in Türkiye (5.19%), which maintained its fifth-place position. Sixth and seventh places are held by Vietnam (4.14%) and Malaysia (3.70%); both countries climbed higher in the TOP 10 due to an increase in detection shares. These are followed by the UAE (3.12%), which held its position from the previous year. Italy (2.43%) and Colombia (2.07%) also entered the TOP 10 list of targets for malicious mailshots.

TOP 20 countries and territories targeted by malicious mailshots, 2025 (download)

Conclusion

2026 will undoubtedly be marked by novel methods of exploiting artificial intelligence capabilities. At the same time, messaging app credentials will remain a highly sought-after prize for threat actors. While new schemes are certain to emerge, they will likely supplement rather than replace time-tested tricks and tactics. This underscores the reality that, alongside the deployment of robust security software, users must remain vigilant and exercise extreme caution toward any online offers that raise even the slightest suspicion.

The intensified focus on government service credentials signals a rise in potential impact; unauthorized access to these services can lead to financial theft, data breaches, and full-scale identity theft. Furthermore, the increased abuse of legitimate tools and the rise of multi-stage attacks – which often begin with seemingly harmless files or links – demonstrate a concerted effort by fraudsters to lull users into a false sense of security while pursuing their malicious objectives.

❌