Meta patents AI that could keep you posting from beyond the grave

19 February 2026 at 12:16

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.
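
Strip away the patent language and the pipeline it sketches is ordinary supervised fine-tuning: collect what a user wrote, pair each message with the context it answered, and train a language model to imitate the replies. Here's a minimal, hypothetical Python sketch of the data-preparation step (the structure below is our assumption; the patent publishes no implementation):

    # Hypothetical sketch: turning a user's social media history into
    # context/response training pairs for a language model. All field
    # names and structures are illustrative assumptions, not Meta's.
    from dataclasses import dataclass

    @dataclass
    class Example:
        context: str   # the post or message the user was replying to
        response: str  # what the user actually wrote

    def build_corpus(posts, comments, chats):
        """Flatten a user's activity into supervised training pairs."""
        corpus = [Example(context="", response=p) for p in posts]
        corpus += [Example(context=parent, response=reply)
                   for parent, reply in comments]
        corpus += [Example(context=prev, response=msg)
                   for prev, msg in chats]
        # A standard fine-tuning loop would then train the model to
        # produce `response` given `context`, imitating the user's voice.
        return corpus

    pairs = build_corpus(
        posts=["Can't believe it's Monday again"],
        comments=[("Nice photo!", "Thanks, took it at the lake")],
        chats=[("You coming tonight?", "Wouldn't miss it")],
    )
    print(len(pairs), "training pairs")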

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. Microsoft patented a chatbot that would allow you to talk to AI versions of deceased individuals in 2020; its own AI general manager called it disturbing, and it never went into production. Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio in 2022, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The legal minefield

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.
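
For a company of Meta's size, the turnover-based figure dominates that cap. A back-of-the-envelope check (we assume the higher of the two amounts applies, as is usual for EU penalty provisions; the turnover below is a placeholder, not Meta's actual figure):

    # EU AI Act Article 50 non-compliance cap: EUR 15 million or 3% of
    # worldwide annual turnover. We assume the higher of the two applies.
    def ai_act_transparency_cap(worldwide_turnover_eur: float) -> float:
        return max(15_000_000, 0.03 * worldwide_turnover_eur)

    # Placeholder turnover of EUR 150 billion, for illustration only.
    print(f"Cap: EUR {ai_act_transparency_cap(150e9):,.0f}")
    # -> Cap: EUR 4,500,000,000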

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

AI Found Twelve New Vulnerabilities in OpenSSL

18 February 2026 at 13:03

The title of the post is "What AI Security Research Looks Like When It Works," and I agree:

In the latest OpenSSL security release on January 27, 2026, twelve new zero-day vulnerabilities (meaning unknown to the maintainers at time of disclosure) were announced. Our AI system is responsible for the original discovery of all twelve, each found and responsibly disclosed to the OpenSSL team during the fall and winter of 2025. Of those, 10 were assigned CVE-2025 identifiers and 2 received CVE-2026 identifiers. Adding the 10 to the three we already found in the Fall 2025 release, AISLE is credited for surfacing 13 of 14 OpenSSL CVEs assigned in 2025, and 15 total across both releases. This is a historically unusual concentration for any single research team, let alone an AI-driven one.

These weren’t trivial findings either. They included CVE-2025-15467, a stack buffer overflow in CMS message parsing that’s potentially remotely exploitable without valid key material, and for which exploits were quickly developed online. OpenSSL rated it HIGH severity; NIST’s CVSS v3 score is 9.8 out of 10 (CRITICAL, an extremely rare severity rating for such projects). Three of the bugs had been present since 1998-2000, missed for over a quarter century by intense machine and human effort alike. One predated OpenSSL itself, inherited from Eric Young’s original SSLeay implementation in the 1990s. All of this in a codebase that has been fuzzed for millions of CPU-hours and audited extensively for over two decades by teams including Google’s.

In five of the twelve cases, our AI system directly proposed the patches that were accepted into the official release.

AI vulnerability finding is changing cybersecurity, faster than expected. This capability will be used by both offense and defense.

More.

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

18 February 2026 at 11:10

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered questions about the investment, projected returns, and ultimately steered victims toward sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But as “Google Coin” has appeared before in scams, anyone checking it out might think it’s real. And the chatbot was very convincing.

Google Coin Pre-Market

AI as the closer

The chatbot introduced itself as,

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”
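
Part of what makes the pitch persuasive is that the bot's numbers are internally consistent; the arithmetic genuinely works out to roughly 7x. It is, of course, arithmetic about a worthless token:

    # Reproducing the scam bot's own projection. The inputs are the
    # chatbot's claims; consistent math says nothing about real value.
    presale_price = 3.95    # claimed presale price per token (USD)
    listing_price = 27.55   # claimed listing price per token (USD)
    tokens = 100

    cost = tokens * presale_price            # 395.0
    claimed_value = tokens * listing_price   # 2755.0
    multiple = claimed_value / cost          # ~6.97, pitched as "approximately 7x"
    print(f"${cost:.0f} -> ${claimed_value:.0f} ({multiple:.2f}x)")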

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

Fake Gemini chatbot

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot doesn’t get confused or break character. It loops back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Scripted fake Gemini chatbot

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.
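
Only the endpoints of that bonus ladder were visible on the site (5% at 100 tokens, 30% at 100,000), so the intermediate tiers in this sketch are illustrative assumptions, but the upsell mechanic reduces to a simple threshold lookup:

    # Tiered "bonus" upsell as seen on the scam site. Only the 100-token
    # (5%) and 100,000-token (30%) tiers were stated; the thresholds in
    # between are illustrative assumptions.
    TIERS = [(100_000, 0.30), (10_000, 0.20), (1_000, 0.10), (100, 0.05)]

    def bonus_rate(tokens: int) -> float:
        for threshold, rate in TIERS:
            if tokens >= threshold:
                return rate
        return 0.0

    for qty in (50, 100, 5_000, 100_000):
        print(qty, "tokens ->", f"{bonus_rate(qty):.0%} bonus")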

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

Waiting for payment

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions; scam bots try to avoid them (and if they do answer, verify it).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like “stage 5 ends soon,” “listing date approaching,” and “limited presale” are designed to push you into making fast decisions.
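
None of these tells needs deep analysis to automate. Even a crude keyword scorer like the hypothetical sketch below (the phrase lists are ours, tuned to this campaign's script) flags this chatbot on three counts:

    # Toy red-flag scorer for investment-scam chat transcripts. The
    # patterns are illustrative, derived from this campaign's script.
    import re

    RED_FLAGS = {
        "brand impersonation": r"\b(gemini|chatgpt|copilot)\b",
        "specific returns": r"(\d+x\b|will be worth|guaranteed)",
        "urgency": r"(ends soon|limited presale|listing date)",
        "deflection": r"(our manager|transparency and security)",
    }

    def score(transcript: str) -> list[str]:
        text = transcript.lower()
        return [name for name, pat in RED_FLAGS.items() if re.search(pat, text)]

    chat = "I'm Gemini. Your $395 will be worth $2,755, approximately 7x. Stage 5 ends soon!"
    print(score(chat))  # ['brand impersonation', 'specific returns', 'urgency']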

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claims on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY
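
The addresses above span several blockchains. Defenders adding them to blocklists can usually infer the chain from the address format alone; here's a rough classifier built on well-known prefix conventions (format checks only, with no checksum validation):

    # Rough chain classifier for the wallet-address IOCs above, based on
    # common address format conventions. This checks shape only; it does
    # not validate checksums.
    import re

    PATTERNS = [
        ("Ethereum", r"^0x[0-9a-fA-F]{40}$"),
        ("Bitcoin",  r"^(bc1[0-9a-z]{20,}|[13][1-9A-HJ-NP-Za-km-z]{25,34})$"),
        ("Tron",     r"^T[1-9A-HJ-NP-Za-km-z]{33}$"),
        ("Dogecoin", r"^D[1-9A-HJ-NP-Za-km-z]{25,34}$"),
        ("XRP",      r"^r[1-9A-HJ-NP-Za-km-z]{24,34}$"),
        ("Solana",   r"^[1-9A-HJ-NP-Za-km-z]{32,44}$"),  # generic base58; check last
    ]

    def classify(address: str) -> str:
        for chain, pattern in PATTERNS:
            if re.match(pattern, address):
                return chain
        return "unknown"

    iocs = [
        "0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA",
        "bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6",
        "TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im",
    ]
    for addr in iocs:
        print(addr[:12] + "...", "->", classify(addr))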


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

We need to act with urgency to address the growing AI divide

Microsoft announces at the India AI Impact Summit that it is on pace to invest US$50 billion by the end of the decade to help bring AI to countries across the Global South

Artificial intelligence is diffusing at an impressive speed, but its adoption around the world remains profoundly uneven. As Microsoft’s latest AI Diffusion Report shows, AI usage in the Global North is roughly twice that of the Global South. And this divide continues to widen. This disparity impacts not only national and regional economic growth, but whether AI can deliver on its broader promise of expanding opportunity and prosperity around the world.

The India AI Impact Summit has rightly placed this challenge at the center of its agenda. For more than a century, unequal access to electricity exacerbated a growing economic gap between the Global North and South. Unless we act with urgency, a growing AI divide will perpetuate this disparity in the century ahead.

Solutions will not come easily. The needs are multifaceted, and will require substantial investments and hard work by governments, the private sector, and nonprofit organizations. But the opportunity is clear. If AI is deployed broadly and used well by a young and growing population, it offers a real prospect for catch-up economic growth for the Global South. It might even provide the biggest such opportunity of the 21st century.

As a company, we are committed to playing an ambitious and constructive role in supporting this opportunity. This week in Delhi, we’re sharing that Microsoft is on pace to invest $50 billion by the end of the decade to help bring AI to countries across the Global South. This is based on a five-part program to drive AI impact, consisting of the following:

  • Building the infrastructure needed for AI diffusion
  • Empowering people through technology and skills for schools and nonprofits
  • Strengthening multilingual and multicultural AI capabilities
  • Enabling local AI innovations that address community needs
  • Measuring AI diffusion to guide future AI policies and investments

One thing that is clear this week at the summit in India is that success will require many deep partnerships. These must span borders and bring people and organizations together across the public, private, and nonprofit sectors.

1. Building the infrastructure needed for AI diffusion

Infrastructure is a prerequisite for AI diffusion, requiring reliable electricity, connectivity, and compute capacity. To help address infrastructure gaps and support the growing needs of the Global South, Microsoft has steadily increased its investments in AI-enabling infrastructure across these regions. In our last fiscal year alone, Microsoft invested more than $8 billion in datacenter infrastructure serving the Global South. This includes new infrastructure in India, Mexico, and countries in Africa, South America, Southeast Asia, and the Middle East.

We’re coupling our investments in datacenters with an ambitious effort to help close the Global South’s connectivity divide. We’ve been aggressively pursuing a global goal to extend internet access to 250 million people in unserved and underserved communities in the Global South, including 100 million people in Africa.

As we announced in November, we’ve already reached 117 million people across Africa through partnerships with organizations such as Cassava Technologies, Mawingu, and others that are building last‑mile networks across rural and urban communities alike. We’re closing in on our global goal of reaching 250 million people and will share an update on that progress soon.

We’re investing in AI infrastructure with sensitivity to digital sovereignty needs. We recognize that in a fragmented world, we must offer customers attractive choices for the use of our offerings. This includes sovereign controls in the public cloud, private sovereign offerings, and close collaboration with national partners.

We pursue all this with commitments to protect cybersecurity, privacy, and resilience. In the age of AI, we ensure that our customers’ AI-based innovations and intellectual property remain in their hands and under their control, rather than being transferred to AI providers.

Critically, we balance our focus on national sovereignty with our efforts to support digital trust and stability across borders. The Global South requires enormous investments to fund infrastructure for datacenters, connectivity, and electricity. It is difficult to imagine meeting all these needs without foreign direct investment, including from international technology firms.

This need is part of what informed our announcement last week at the Munich Security Conference of the new Trusted Tech Alliance. This new partnership brings together 16 leading technology companies from 11 countries and four continents. We’ve agreed together that we will adhere to five core principles designed to ensure trust in technology. Ultimately, we believe the Global South—as well as the rest of the world—needs both to protect its digital sovereignty and benefit from new investments and the best digital innovations the world has to offer.

2. Empowering people through technology and skills for schools and nonprofits

Ultimately, datacenters, connectivity, and electricity provide only part of the digital infrastructure a nation needs. History shows that providing access to technology and technology skills is equally important for economic development.

As a company, we’re focused on this in multiple ways. One critical aspect of our work is based on programs to provide cloud, AI, and other digital technologies to schools and nonprofits across the Global South. Another is our work to advance broad access to AI skills. In our last fiscal year, Microsoft invested more than $2 billion in these programs in the Global South. This includes direct financial grants, technology donations, skilling programs, and below-market product discounts.

AI skills are foundational to ensuring that AI expands opportunity and enables people to pursue more impactful real-world applications. With the launch of Microsoft Elevate in July, we committed to helping 20 million people in and beyond the Global South earn in-demand AI skilling credentials by 2028. After training 5.6 million people across India in 2025, we advanced this work by setting a goal last December to equip 20 million people in India with essential AI skills by 2030.

As part of that commitment, today we are announcing the launch of Elevate for Educators in India to strengthen the capacity of two million teachers across more than 200,000 schools, vocational institutes, and higher education settings. Our goal is to help the country’s teaching workforce lead confidently in an AI‑driven future. The program will be delivered in partnership with India’s national education and workforce training authorities, expanding equitable AI opportunities for eight million students.

Through Microsoft Elevate, we’re also working to introduce new educator credentials and a global professional learning community that enables teachers to share best practices with peers worldwide. This effort will involve large-scale capacity building initiatives, including AI Ambassadors, Educator Academies, AI Productivity Labs, and Centers of Excellence. It will equip 25,000 institutions with inclusive AI infrastructure while integrating AI learning pathways into major government platforms.

3. Strengthening multilingual and multicultural AI capabilities

Language is another major barrier to AI diffusion across the Global South, particularly in regions where digitally underrepresented languages prevail and access to essential services depends on local-language communication. For billions of people worldwide, AI systems perform less consistently in the languages they rely on most than in English.

That’s why we’re announcing this week new steps to increase our investments across the AI lifecycle, from data and models to evaluation and deployment, to strengthen multilingual and multicultural capabilities and support more inclusive AI systems that will better serve the Global South.

First, we’re investing upstream in language data and model capability. This includes support for LINGUA Africa, which builds on what we learned through LINGUA Europe: that investing in language data and model capability in partnership with local communities can materially improve AI performance for underrepresented languages.

Through LINGUA Africa—a $5.5 million open call led by the Masakhane African Languages Hub, Microsoft’s AI for Good Lab, and the Gates Foundation, with additional support from the UK government—we are prioritizing open, responsibly sourced data across text, speech, and vision as well as use-case-driven AI model development. By enabling African languages in high-impact sectors like education, food security, health, and government services, LINGUA Africa aims to ensure AI advances translate into tangible improvements in people’s daily lives.

Second, we’re advancing multilingual and multicultural evaluation tools. We’re helping expand the MLCommons AILuminate benchmark to include major Indic and Asian languages, enabling more reliable measurement of AI safety and security beyond English.

Today, even when automated evaluation tools expand language coverage, they too often rely on machine translation or English-first model behavior, with predictable failures when local expressions shift meaning. Partnering with academic and government institutions in India, Japan, Korea, and Singapore, and with industry, Microsoft is co-leading AILuminate’s multilingual, multicultural, and multimodal expansion that builds from the ground up. With a pilot dataset of 7,000 high-quality text-and-image prompts for Hindi, Tamil, Malay, Japanese, and Korean, we’re developing tools that reflect how risks manifest in local linguistic and cultural contexts, not just how they appear after translation.

Microsoft Research is also advancing Samiksha, a community-centered method for evaluating AI behavior in real-world contexts, in collaboration with Karya and The Collective Intelligence Project in India. Samiksha encodes local language use, culturally specific communication norms, and locally relevant use cases directly into core testing artifacts by surfacing failure modes that English-first evaluations routinely miss.

Finally, we’re working to scale content provenance for linguistic diversity. For trusted AI deployment, the ecosystem benefits from tools to identify the provenance of digital content like images, audio, or video, distinguishing whether it’s AI-generated. With partners in the Coalition for Content Provenance and Authenticity (C2PA), Microsoft is helping extend content provenance standards beyond an English-ready baseline. This includes forthcoming support for multiple Indic languages across metadata, specifications, and UX guidance, alongside efforts to support mobile-first deployment. With these investments, hundreds of millions more people in India will be better equipped to identify synthetic media in their primary language.

4. Enabling local AI innovations that address community needs

As India’s guiding sutras for the AI Impact Summit recognize, AI must be applied to address pressing challenges in collaboration with people and organizations in the Global South. Microsoft’s increasing investments prioritize locally defined problems, locally grounded expertise, and real-world impact. Our goal is straightforward: to ensure that AI solutions are not only technically sound, but socially relevant and sustainable.

Today, Microsoft is announcing a new AI initiative to strengthen food security across Sub-Saharan Africa, starting in Kenya and designed to scale across the region. Across Global South communities, food security and sustainable agriculture are critical to resilience and progress. In collaboration with NASA Harvest, the government of Kenya, the East Africa Grain Council, UNDP AI Hub for Sustainable Development, and FAO, our AI for Good Lab will use AI on top of satellite data to provide critical, timely food security insights. This builds on what we’ve learned in helping to address rice farming challenges in India, where severe groundwater depletion prompted 150,000 farmers in Punjab to adopt water-saving methods. In collaboration with The Nature Conservancy, Microsoft’s AI for Good Lab developed a classification system with satellite imagery to empower policymakers to track adoption of sustainable rice farming practices, target interventions, and measure water management impacts at scale.

Through Project Gecko, Microsoft Research is also co-designing AI technologies with local communities in East Africa and South Asia to support agriculture. This work includes the Paza family of automatic speech recognition models that can operate on mobile devices across six Kenyan languages, multilingual Copilots, and a Multimodal Critical Thinking (MMCT) Agent that can reason over community-generated video, voice, and text. Microsoft also launched PazaBench—the first automatic speech recognition leaderboard, with initial coverage of 39 African languages—and developed two playbooks for multilingual and multicultural capabilities, Paza and Vibhasha. Likewise, our AI for Good Lab developed a reproducible pipeline for adapting open-weight large language models to low-resource languages, demonstrating measurable gains for languages such as Chichewa, Inuktitut, and Māori.

5. Measuring AI diffusion to guide future AI policies and investments

Finally, accelerating diffusion requires a firm understanding of where AI is being used, how it is being adopted, and where gaps persist. Building on our AI Diffusion Reports and Microsoft GitHub’s long track record of contributing to the OECD AI Policy Observatory, the WIPO Global Innovation Index, and other cross‑country analyses, we’re increasing our investments in research and data sharing to track AI diffusion.

We’re advancing new methods for sharing AI adoption metrics. For example, drawing on signals such as the models used in public code repositories hosted on Microsoft GitHub and privacy-preserving aggregated usage data from Azure Foundry, we’re scaling this work through contributions to the forthcoming Global AI Adoption Index developed by the World Bank.

Signals from the global developer community that builds, adapts, and deploys AI-enabled software round out adoption research. At 24 million, the Indian developer community is the second largest national community on GitHub, where developers learn about and collaborate with the world on AI. The Indian community is also the fastest growing among the top 30 largest economies, with growth at more than 26 percent each year since 2020 and a recent surge of over 36 percent in annual growth as of Q4 2025. Indian developers rank second globally in open-source contributions, second in GitHub Education users, and second in contributions to public generative AI projects, with readiness to use tools like GitHub Copilot across academic, enterprise, and public interest settings enabling AI diffusion.

Insights from this evidence base help inform investments in infrastructure, language capabilities, skilling, or beyond, supporting more targeted and effective interventions to expand AI’s benefits. They also create a common empirical baseline to track progress over time—so AI diffusion becomes something we can measure and shape, not just observe.

Sustaining impact at scale through coordinated global action

For AI to diffuse broadly and deliver meaningful impact across regions, several conditions matter. As a company, we are focused on the need for accessible AI infrastructure, systems that work reliably in real-world contexts, and technologies that can be applied toward local challenges and opportunities. Microsoft is committed to working with partners to advance this work, including sharing data to track progress.

The post We need to act with urgency to address the growing AI divide appeared first on Microsoft On the Issues.

Securing the Agentic Endpoint

17 February 2026 at 14:10

Traditional Security Is Blind to the Agentic Endpoint

Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers, and model artifacts. These components are installed directly by employees and developers without centralized oversight. Because they are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.
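
Basic visibility into this layer doesn't require exotic tooling, which makes the blind spot all the more striking. As a minimal sketch, here is what one inventory pass over just two component types might look like on a developer workstation (the paths are common defaults and an assumption on our part; real agentic endpoint tooling covers far more):

    # Minimal sketch: inventory two kinds of non-binary software on a
    # developer endpoint (VS Code extensions and installed Python
    # packages). Paths follow common defaults and are assumptions; a
    # real product would cover many more component types.
    from pathlib import Path
    import importlib.metadata

    def vscode_extensions() -> list[str]:
        ext_dir = Path.home() / ".vscode" / "extensions"
        if not ext_dir.is_dir():
            return []
        return sorted(p.name for p in ext_dir.iterdir() if p.is_dir())

    def python_packages() -> list[str]:
        return sorted(
            f"{d.metadata['Name']}=={d.version}"
            for d in importlib.metadata.distributions()
        )

    if __name__ == "__main__":
        print("VS Code extensions:", len(vscode_extensions()))
        print("Python packages:", len(python_packages()))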

AI agents compound this problem. They are legitimate tools that operate with the user’s credentials and permissions, enabling them to read, write, move data and take privileged actions across systems. When compromised or misused, agents become the “ultimate insider.” They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.

Weaponizing Trusted Automation

This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.

OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:

  • Vibe Coding Threats: An AI extension in VS Code was found leaking code from 1.5 million developers. The extension could read any open file and send it back to its developer, collect files en masse without user interaction, and track users with commercial analytics SDKs.
  • Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. What’s more, this capability was added later, after developers had already started using it.

Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.

A New Category of Protection

Complete endpoint security for the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. That’s why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.

Its technology rests on three core pillars:

  1. See All AI Software – Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
  2. Understand Risks – Continuously analyze and understand the intent and risk level of all software and AI agents.
  3. Control the AI Ecosystem – Enforce policy in real-time to remediate issues and block risky behaviors.

Securing the Agentic Enterprise

We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koi’s capabilities across our platforms to help our customers secure the AI-native workspace.

The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.

Forward-Looking Statements

This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties’ commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to proposed transaction, risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi’s operations and business following the closing, integrate Koi’s business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general; defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customers’ purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.

Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.

 

The post Securing the Agentic Endpoint appeared first on Palo Alto Networks Blog.

The Skills That Will Matter for Offensive AI Security in 2026

13 February 2026 at 14:00

Before tools, before frameworks, before hype, offensive security has always been about one thing: thinking like an attacker. That foundation now defines the offensive AI security skills practitioners will need as AI reshapes the attack surface. AI systems introduce new behaviors and new failure modes, but the core mindset remains the same: understand how a…

The post The Skills That Will Matter for Offensive AI Security in 2026 appeared first on OffSec.
