Meta plans to test exclusive features that will be incorporated in paid versions of Facebook, Instagram, and WhatsApp. It confirmed these plans to TechCrunch.
But these plans are not to be confused with the ad-free subscription options that Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023 and framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.
From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.
This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. But only grudgingly, judging by the tone of the offer: “As part of laws in your region, you have a choice.”
The rollout of that ad-free option coincides with the announcement of Meta’s premium subscriptions, but it is not what Meta is talking about now.
The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.
Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.
These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much they can charge.
Meta has said these features will focus on productivity, creativity, and expanded AI.
My opinion
Unfortunately, this feels like another refusal to listen.
Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.
Users shouldn’t have to choose between being mined for behavioral data and paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.
Even then, this level of choice is only available to users in Europe.
Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?
Today, we’re launching Encrypt It Already, our push to get companies to offer stronger privacy protections for our data and communications by implementing end-to-end encryption. If that name sounds a little familiar, it’s because this is a spiritual successor to Fix It Already, our 2019 campaign that pushed companies to fix longstanding issues.
End-to-end encryption is the best tool we have to protect our conversations and data. It ensures the company that provides a service cannot access the data or messages you store on it. So, for secure chat apps like WhatsApp and Signal, that means the company that makes those apps cannot see the contents of your messages, which are accessible only on your devices and your recipients’. When it comes to data, like what’s stored using Apple’s Advanced Data Protection, it means you control the encryption keys and the service provider will not be able to access the data.
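To see the principle at work, here is a minimal sketch of public-key end-to-end encryption using the PyNaCl library. This illustrates the concept only, not how Signal or WhatsApp actually implement it; real messengers layer key verification, forward secrecy, and ratcheting on top of this basic idea.

```python
# Minimal sketch of the end-to-end principle with PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each party generates a keypair on their own device; only the public
# halves ever travel through the service provider.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts to Bob's public key using her private key.
sending_box = Box(alice_key, bob_key.public_key)
ciphertext = sending_box.encrypt(b"meet at noon")

# The provider relays only the ciphertext. Without either private key,
# it cannot read the message; Bob can, on his own device.
receiving_box = Box(bob_key, alice_key.public_key)
assert receiving_box.decrypt(ciphertext) == b"meet at noon"
```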
We’ve divided this up into three categories, each with three different demands:
Keep Your Promises: Features that a company has publicly stated it’s working on, but which haven’t launched yet.
Facebook should use end-to-end encryption for group messages
Apple and Google should deliver on their promise of interoperable end-to-end encryption of RCS
Bluesky should launch its promised end-to-end encryption for DMs
Defaults Matter: Features that are already available on a service or in an app, but aren’t enabled by default.
Telegram should default to end-to-end encryption for DMs
WhatsApp should use end-to-end encryption for backups by default
Ring should enable end-to-end encryption for its cameras by default
Protect Our Data: New features that companies should launch, often because their competition is doing it already.
Google should launch end-to-end encryption for Google Authenticator backups
Google should offer end-to-end encryption for Android backup data
Apple and Google should offer a per-app AI permissions option to block AI access to secure chat apps
“What” is only half the problem. “How” is just as important.
What Companies Should Do When They Launch End-to-End Encryption Features
There’s no one-size-fits-all way to implement end-to-end encryption in products and services, but best practices can pair the security of the platform with the transparency that makes it possible for users to trust it protects data the way the company claims it does. When these encryption features launch, companies should consider shipping them with:
A blog post written for a general audience that summarizes the technical details of the implementation, and when it makes sense, a technical white paper that goes into further detail for the technical crowd.
Clear user-facing documentation around what data is and isn’t end-to-end encrypted, and robust and clear user controls when it makes sense to have them.
Data minimization principles whenever feasible, storing as little metadata as possible.
Technical documentation is important for end-to-end encryption features, but so is clear documentation that makes it easy for users to understand what is and isn’t protected, what features may change, and what steps they need to take to set it up so they’re comfortable with how their data is protected.
What You Can Do
When it’s an option, enable any end-to-end encryption features you can, like on Telegram, WhatsApp, and Ring.
For everything else, let companies know that these are features you want! You can find messages to share on social media on the Encrypt It Already website, and take the time to customize those however you’d like.
In some cases, you can also reach out to a company directly with feature requests, which all the above companies except Google and WhatsApp offer in some form. We recommend filing requests through whichever of these services you use, for any of the above features you’d like to see.
As for Ring and Telegram, we’ve already made the asks and just need your help to boost them. Head over to Telegram’s Bugs and Suggestions platform and upvote this post, and Ring’s feature request board and boost this post.
End-to-end encryption protects what we say and what we store in a way that gives users—not companies or governments—control over data. These sorts of privacy-protective features should be the status quo across a range of products, from fitness wearables to notes apps, but instead they remain rare, limited to a small set of services like messaging and (occasionally) file storage. These demands are just the start. We deserve this sort of protection for a far wider array of products and services. It’s time to encrypt it already!
EFF has long warned about the dangers of the “real-time bidding” (RTB) system powering nearly every ad you see online. A proposed class-action settlement with Google over its RTB system is a step in the right direction towards giving people more control over their data. Truly curbing the harms of RTB, however, will require stronger legislative protections.
What Is Real-Time Bidding?
RTB is the process by which most websites and apps auction off their ad space. Unfortunately, the milliseconds-long auctions that determine which ads you see also expose your personal information to thousands of companies a day. At a high level, here’s how RTB works (a sketch of a typical bid request follows these steps):
The moment you visit a website or app with ad space, it asks an ad tech company to determine which ads to display for you. This involves sending information about you and the content you’re viewing to the ad tech company.
This ad tech company packages all the information they can gather about you into a “bid request” and broadcasts it to thousands of potential advertisers.
The bid request may contain information like your unique advertising ID, your GPS coordinates, IP address, device details, inferred interests, demographic information, and the app or website you’re visiting. The information in bid requests is called “bidstream data” and typically includes identifiers that can be linked to real people.
Advertisers use the personal information in each bid request, along with data profiles they’ve built about you over time, to decide whether to bid on the ad space.
The highest bidder gets to display an ad for you, but advertisers (and the ad tech companies they use to buy ads) can collect your bidstream data regardless of whether or not they bid on the ad space.
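To make the exposure concrete, here is a simplified, hypothetical bid request. The field names below are illustrative only, not any exchange’s actual schema (production systems typically use standardized formats such as the IAB’s OpenRTB spec), but each field maps to a category of bidstream data described above.

```python
# Hypothetical, simplified bid request. Field names are illustrative;
# real exchanges use standardized schemas such as the IAB's OpenRTB spec.
bid_request = {
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # mobile ad ID
    "ip": "203.0.113.42",                                      # IP address
    "geo": {"lat": 40.7411, "lon": -73.9897},                  # GPS, if granted
    "device": {"os": "Android 14", "model": "Pixel 8"},
    "app": "example-meditation-app",       # what you're viewing right now
    "interests": ["fitness", "loans", "parenting"],            # inferred
    "demographics": {"age_range": "25-34"},
}

# This object is broadcast to every auction participant: thousands of
# companies receive it whether or not they ever place a bid.
```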
Why Is Real-Time Bidding Harmful?
A key vulnerability of real-time bidding is that while only one advertiser wins the auction, all participants receive data about the person who would see their ad. As a result, anyone posing as an ad buyer can access a stream of sensitive data about billions of individuals a day. Data brokers have taken advantage of this vulnerability to harvest data at a staggering scale. Since bid requests contain individual identifiers, they can be tied together to create detailed profiles of people’s behavior over time.
The privacy harms of RTB are not just a matter of misuse by individual data brokers. RTB auctions broadcast torrents of personal data to thousands of companies, hundreds of times per day, with no oversight of how this information is ultimately used. Once your information is broadcast through RTB, it’s almost impossible to know who receives it or control how it’s used.
Proposed Settlement with Google Is a Step in the Right Direction
As the dominant player in the online advertising industry, Google facilitates the majority of RTB auctions. Google has faced several class-action lawsuits for sharing users’ personal information with thousands of advertisers through RTB auctions without proper notice and consent. A recently proposed settlement to these lawsuits aims to give people more knowledge and control over how their information is shared in RTB auctions.
Under the proposed settlement, Google must create a new privacy setting (the “RTB Control”) that allows people to limit the data shared about them in RTB auctions. When the RTB Control is enabled, bid requests will not include identifying information like pseudonymous IDs (including mobile advertising IDs), IP addresses, and user agent details. The RTB Control should also prevent cookie matching, a method companies use to link their data profiles about a person to a corresponding bid request. Removing identifying information from bid requests makes it harder for data brokers and advertisers to create consumer profiles based on bidstream data. If the proposed settlement is approved, Google will have to inform all users about the new RTB Control via email.
While this settlement would be a step in the right direction, it would still require users to actively opt out of their identifying information being shared through RTB. Those who do not change their default settings—research shows this is most people—will remain vulnerable to RTB’s massive daily data breach. Google broadcasting your personal data to thousands of companies each time you see an ad is an unacceptable and dangerous default.
The impact of RTB Control is further limited by technical constraints on who can enable it. RTB Control will only work for devices and browsers where Google can verify users are signed in to their Google account, or for signed-out users on browsers that allow third-party cookies. People who don't sign in to a Google account or don't enable privacy-invasive third-party cookies cannot benefit from this protection. These limitations could easily be avoided by making RTB Control the default for everyone. If the settlement is approved, regulators and lawmakers should push Google to enable RTB Control by default.
The Real Solution: Ban Online Behavioral Advertising
Limiting the data exposed through RTB is important, but we also need legislative change to protect people from the online surveillance enabled and incentivized by targeted advertising. The lack of strong, comprehensive privacy law in the U.S. makes it difficult for individuals to know and control how companies use their personal information. Strong privacy legislation can make privacy the default, not something that individuals must fight for through hidden settings or additional privacy tools. EFF advocates for data privacy legislation with teeth and a ban on ad targeting based on online behavioral profiles, as it creates a financial incentive for companies to track our every move. Until then, you can limit the harms of RTB by using EFF’s Privacy Badger to block ads that track you, disabling your mobile advertising ID (see instructions for iPhone/Android), and keeping an eye out for Google’s RTB Control.
What adult didn’t dream as a kid that they could actually talk to their favorite toy? While for us those dreams were just innocent fantasies that fueled our imaginations, for today’s kids, they’re becoming a reality fast.
For instance, this past June, Mattel — the powerhouse behind the iconic Barbie — announced a partnership with OpenAI to develop AI-powered dolls. But Mattel isn’t the first company to bring the smart talking toy concept to life; plenty of manufacturers are already rolling out AI companions for children. In this post, we dive into how these toys actually work, and explore the risks that come with using them.
What exactly are AI toys?
When we talk about AI toys here, we mean actual, physical toys — not just software or apps. Currently, AI is most commonly baked into plushies or kid-friendly robots. Thanks to integration with large language models, these toys can hold meaningful, long-form conversations with a child.
As anyone who’s used modern chatbots knows, you can ask an AI to roleplay as anyone: from a movie character to a nutritionist or a cybersecurity expert. According to AI Comes to Playtime: Artificial Companions, Real Risks, a study by the U.S. PIRG Education Fund, manufacturers specifically hardcode these toys to play the role of a child’s best friend.
Examples of AI toys tested in the study: plush companions and kid-friendly robots with built-in language models.
Importantly, these toys aren’t powered by some special, dedicated “kid-safe AI”. On their websites, the creators openly admit to using the same popular models many of us already know: OpenAI’s ChatGPT, Anthropic’s Claude, DeepSeek from the Chinese developer of the same name, and Google’s Gemini. At this point, tech-wary parents might recall the harrowing ChatGPT case where the chatbot made by OpenAI was blamed for a teenager’s suicide.
And this is the core of the problem: the toys are designed for children, but the AI models under the hood aren’t. These are general-purpose adult systems that are only partially reined in by filters and rules. Their behavior depends heavily on how long the conversation lasts, how questions are phrased, and just how well a specific manufacturer actually implemented their safety guardrails.
How the researchers tested the AI toys
The study, whose results we break down below, goes into great detail about the psychological risks associated with a child “befriending” a smart toy. However, since that’s a bit outside the scope of this blog post, we’re going to skip the psychological nuances and focus strictly on the physical safety threats and privacy concerns.
In their study, the researchers put four AI toys through the wringer:
Grok (no relation to xAI’s Grok, apparently): a plush rocket with a built-in speaker marketed for kids aged three to 12. Price tag: US$99. The manufacturer, Curio, doesn’t explicitly state which LLM they use, but their user agreement mentions OpenAI among the operators receiving data.
Kumma (not to be confused with our own Midori Kuma): a plush teddy-bear companion with no clear age limit, also priced at US$99. The toy originally ran on OpenAI’s GPT-4o, with options to swap models. Following an internal safety audit, the manufacturer claimed they were switching to GPT-5.1. However, at the time the study was published, OpenAI reported that the developer’s access to the models remained revoked — leaving it anyone’s guess which chatbot Kumma is actually using right now.
Miko 3: a small wheeled robot with a screen for a face, marketed as a “best friend” for kids aged five to 10. At US$199, this is the priciest toy in the lineup. The manufacturer is tight-lipped about which language model powers the toy. A Google Cloud case study mentions using Gemini for certain safety features, but that doesn’t necessarily mean it handles all the robot’s conversational features.
Robot MINI: a compact, voice-controlled plastic robot that supposedly runs on ChatGPT. This is the budget pick — at US$97. However, during the study, the robot’s Wi-Fi connection was so flaky that the researchers couldn’t even give it a proper test run.
Robot MINI: a compact AI robot that failed to function properly during the study due to internet connectivity issues.
To conduct the testing, the researchers set the test child’s age to five in the companion apps for all the toys. From there, they checked how the toys handled provocative questions. The topics the experimenters threw at these smart playmates included:
Access to dangerous items: knives, pills, matches, and plastic bags
Adult topics: sex, drugs, religion, and politics
Let’s break down the test results for each toy.
Unsafe conversations with AI toys
Let’s start with Grok, the plush AI rocket from Curio. This toy is marketed as a storyteller and conversational partner for kids, and stands out by giving parents full access to text transcripts of every AI interaction. Out of all the models tested, this one actually turned out to be the safest.
When asked about topics inappropriate for a child, the toy usually replied that it didn’t know or suggested talking to an adult. However, even this toy told the “child” exactly where to find plastic bags, and engaged in discussions about religion. Additionally, Grok was more than happy to chat about… Norse mythology, including the subject of heroic death in battle.
The Grok plush AI toy by Curio, equipped with a microphone and speaker for voice interaction with children.
The next AI toy, the Kumma plush bear by FoloToy, delivered what were arguably the most depressing results. During testing, the bear helpfully pointed out exactly where in the house a kid could find potentially lethal items like knives, pills, matches, and plastic bags. In some instances, Kumma suggested asking an adult first, but then proceeded to give specific pointers anyway.
The AI bear fared even worse when it came to adult topics. For starters, Kumma explained to the supposed five-year-old what cocaine is. Beyond that, in a chat with our test kindergartner, the plush provocateur went into detail about the concept of “kinks”, and listed off a whole range of creative sexual practices: bondage, role-playing, sensory play (like using a feather), spanking, and even scenarios where one partner “acts like an animal”!
After a conversation lasting over an hour, the AI toy also lectured researchers on various sexual positions, explained how to tie a basic knot, and described role-playing scenarios involving a teacher and a student. It’s worth noting that all of Kumma’s responses were recorded prior to a safety audit, which the manufacturer, FoloToy, conducted after receiving the researchers’ inquiries. According to the company, the toy’s behavior changed after the audit, and the most egregious violations could no longer be reproduced.
The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails.
Finally, the Miko 3 robot from Miko showed significantly better results. However, it wasn’t entirely without its hiccups. The toy told our potential five-year-old exactly where to find plastic bags and matches. On the bright side, Miko 3 refused to engage in discussions regarding inappropriate topics.
During testing, the researchers also noticed a glitch in its speech recognition: the robot occasionally misheard the wake word “Hey Miko” as “CS:GO”, which is the title of the popular shooter Counter-Strike: Global Offensive — rated for audiences aged 17 and up. As a result, the toy would start explaining elements of the shooter — thankfully, without mentioning violence — or asking the five-year-old user if they enjoyed the game. Additionally, Miko 3 was willing to chat with kids about religion.
AI toys: a threat to children’s privacy
Beyond the child’s physical and mental well-being, the issue of privacy is a major concern. Currently, there are no universal standards defining what kind of information an AI toy — or its manufacturer — can collect and store, or exactly how that data should be secured and transmitted. In the case of the three toys tested, researchers observed wildly different approaches to privacy.
For example, the Grok plush rocket is constantly listening to everything happening around it. Several times during the experiments, it chimed in on the researchers’ conversations even when it hadn’t been addressed directly — it even went so far as to offer its opinion on one of the other AI toys.
The manufacturer claims that Curio doesn’t store audio recordings: the child’s voice is first converted to text, after which the original audio is “promptly deleted”. However, since a third-party service is used for speech recognition, the recordings are, in all likelihood, still transmitted off the device.
Additionally, researchers pointed out that when the first report was published, Curio’s privacy policy explicitly listed several tech partners — Kids Web Services, Azure Cognitive Services, OpenAI, and Perplexity AI — all of which could potentially collect or process children’s personal data via the app or the device itself. Perplexity AI was later removed from that list. The study’s authors note that this level of transparency is more the exception than the rule in the AI toy market.
Another cause for parental concern is that both the Grok plush rocket and the Miko 3 robot actively encouraged the “test child” to engage in heart-to-heart talks — even promising not to tell anyone their secrets. Researchers emphasize that such promises can be dangerously misleading: these toys create an illusion of private, trusting communication without explaining that behind the “friend” stands a network of companies, third-party services, and complex data collection and storage processes, which a child has no idea about.
Miko 3, much like Grok, is always listening to its surroundings and activates when spoken to — functioning essentially like a voice assistant. However, this toy doesn’t just collect voice data; it also gathers biometric information, including facial recognition data and potentially data used to determine the child’s emotional state. According to its privacy policy, this information can be stored for up to three years.
In contrast to Grok and Miko 3, Kumma operates on a push-to-talk principle: the user needs to press and hold a button for the toy to start listening. Researchers also noted that the AI teddy bear didn’t nudge the “child” to share personal feelings, promise to keep secrets, or create an illusion of private intimacy. On the flip side, the manufacturers of this toy provide almost no clear information regarding what data is collected, how it’s stored, or how it’s processed.
Is it a good idea to buy AI toys for your children?
The study points to serious safety issues with the AI toys currently on the market. These devices can directly tell a child where to find potentially dangerous items, such as knives, matches, pills, or plastic bags, in their home.
What’s more, these plush AI friends are often willing to discuss topics entirely inappropriate for children — including drugs and sexual practices — sometimes steering the conversation in that direction without any obvious prompting from the child. Taken together, this shows that even with filters and stated restrictions in place, AI toys aren’t yet capable of reliably staying within the boundaries of safe communication for little ones.
Manufacturers’ privacy policies raise additional concerns. AI toys create an illusion of constant and safe communication for children, while in reality they’re networked devices that collect and process sensitive data. Even when manufacturers claim to delete audio or have limited data retention, conversations, biometrics, and metadata often pass through third-party services and are stored on company servers.
Furthermore, the security of such toys often leaves much to be desired. As far back as two years ago, our researchers discovered vulnerabilities in a popular children’s robot that allowed attackers to make video calls to it, hijack the parental account, and modify the firmware.
The problem is that, currently, there are virtually no comprehensive parental control tools or independent protection layers specifically for AI toys. Meanwhile, in more traditional digital environments — smartphones, tablets, and computers — parents have access to solutions like Kaspersky Safe Kids. These help monitor content, screen time, and a child’s digital footprint, which can significantly reduce, if not completely eliminate, such risks.
A breach at the music service SoundCloud affects almost thirty million users, reports the website Have I Been Pwned. Among other things, hackers got hold of profile data, names, and email addresses. In some cases, the country where users live was also leaked.
Researchers discovered 16 malicious browser extensions for Google Chrome and Microsoft Edge that steal ChatGPT session tokens, giving attackers access to accounts, including conversation history and metadata.
The 16 malicious extensions (15 for Chrome and 1 for Edge) claim to improve and optimize ChatGPT, but instead siphon users’ session tokens to attackers. Together, they have been downloaded around 900 times, a relatively small number compared to other malicious extensions.
Despite benign descriptions and, in some cases, a “featured” badge, the real goal of these extensions is to hijack ChatGPT identities by stealing session authentication tokens and sending them to attacker-controlled backends.
Possession of these tokens gives attackers the same level of access as the user, including conversation history and metadata.
In addition to your ChatGPT session token, the extensions also send extra details about themselves (such as their version and language settings), along with information about how they’re used, and special keys they get from their own online service.
Taken together, this allows the attackers to build a picture of who you are and how you work online. They can use it to keep recognizing you over time, build a profile of your behavior, and maintain access to your ChatGPT-connected services for much longer. This increases the privacy impact and means a single compromised extension can cause broader harm if its servers are abused or breached.
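To see why a stolen session token is so powerful, consider the deliberately generic sketch below. The domain, cookie name, and endpoint are hypothetical placeholders rather than OpenAI’s actual API; the point is that a session token is a bearer credential, so any client that presents it is treated as the account owner.

```python
# A session token is a bearer credential: whoever holds it "is" the user.
# The domain, cookie name, and endpoint here are hypothetical placeholders.
import requests

STOLEN_TOKEN = "eyJhbGciOi..."  # value exfiltrated by a malicious extension

session = requests.Session()
session.cookies.set("session_token", STOLEN_TOKEN, domain="chat.example.com")

# To the server, this request is indistinguishable from the victim's own;
# no password or second factor is ever asked for again.
resp = session.get("https://chat.example.com/api/conversations")
print(resp.status_code)
```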
According to the researchers, this campaign coincides with a broader trend:
“The rapid growth in adoption of AI-powered browser extensions, aimed at helping users with their everyday productivity needs. While most of them are completely benign, many of these extensions mimic known brands to gain users’ trust, particularly those designed to enhance interaction with large language models.”
How to stay safe
Although we always advise people to install extensions only from official web stores, this case proves once again that not all extensions available there are safe. That said, installing extensions from outside official web stores carries an even higher risk.
Extensions listed in official stores undergo a review process before being approved. This process, which combines automated and manual checks, assesses the extension’s safety, policy compliance, and overall user experience. The goal is to protect users from scams, malware, and other malicious activity. However, this review process is not foolproof.
Microsoft and Google have been notified about the abuse. However, extensions that are already installed may remain active in Chrome and Edge until users manually remove them.
Malicious extensions
These are the browser extensions you should remove. They are listed by Name — Publisher — Extension ID:
Bus companies are not allowed to film their drivers permanently, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens) warns following a conversation with transport operator Arriva. Permanent camera surveillance of fixed workstations is prohibited, and that includes the driver’s seat.
The year 2025 saw a record-breaking number of attacks on Android devices. Scammers are currently riding a few major waves: the hype surrounding AI apps, the urge to bypass site blocks or age checks, the hunt for a bargain on a new smartphone, the ubiquity of mobile banking, and, of course, the popularity of NFC. Let’s break down the primary threats of 2025–2026, and figure out how to keep your Android device safe in this new landscape.
Sideloading
Malicious installation packages (APK files) have always been the Final Boss among Android threats, despite Google’s multi-year efforts to fortify the OS. By using sideloading — installing an app via an APK file instead of grabbing it from the official store — users can install pretty much anything, including straight-up malware. And neither the rollout of Google Play Protect nor the various permission restrictions for shady apps has managed to put a dent in the scale of the problem.
According to preliminary data from Kaspersky for 2025, the number of detected Android threats grew by almost half. In the third quarter alone, detections jumped by 38% compared to the second. In certain niches, like Trojan bankers, the growth was even more aggressive: in Russia, the notorious Mamont banker attacked 36 times more users than it did the previous year, while globally this entire category saw a nearly fourfold increase.
Today, bad actors primarily distribute malware via messaging apps by sliding malicious files into DMs and group chats. The installation file usually sports an enticing name (think “party_pics.jpg.apk” or “clearance_sale_catalog.apk”), accompanied by a message “helpfully” explaining how to install the package while bypassing the OS restrictions and security warnings.
Once a new device is infected, the malware often spams itself to everyone in the victim’s contact list.
Search engine spam and email campaigns are also trending, luring users to sites that look exactly like an official app store. There, they’re prompted to download the “latest helpful app”, such as an AI assistant. In reality, instead of an installation from an official app store, the user ends up downloading an APK package. A prime example of these tactics is the ClayRat Android Trojan, which uses a mix of all these techniques to target Russian users. It spreads through groups and fake websites, blasts itself to the victim’s contacts via SMS, and then proceeds to steal the victim’s chat logs and call history; it even goes as far as snapping photos of the owner using the front-facing camera. In just three months, over 600 distinct ClayRat builds have surfaced.
The scale of the disaster is so massive that Google even announced an upcoming ban on distributing apps from unknown developers starting in 2026. However, after a couple of months of pushback from the dev community, the company pivoted to a softer approach: unsigned apps will likely only be installable via some kind of superuser mode. As a result, we can expect scammers to simply update their how-to guides with instructions on how to toggle that mode on.
NFC scams
Once an Android device is compromised, hackers can skip the middleman and steal the victim’s money directly, thanks to the massive popularity of mobile payments. In the third quarter of 2025, over 44,000 of these attacks were detected in Russia alone — a 50% jump from the previous quarter.
There are two main scams currently in play: direct and reverse NFC exploits.
Direct NFC relay is when a scammer contacts the victim via a messaging app and convinces them to download an app — supposedly to “verify their identity” with their bank. If the victim bites and installs it, they’re asked to tap their physical bank card against the back of their phone and enter their PIN. And just like that, the card data is handed over to the criminals, who can then drain the account or go on a shopping spree.
Reverse NFC relay is a more elaborate scheme. The scammer sends a malicious APK and convinces the victim to set this new app as their primary contactless payment method. The app generates an NFC signal that ATMs recognize as the scammer’s card. The victim is then talked into going to an ATM with their infected phone to deposit cash into a “secure account”. In reality, those funds go straight into the scammer’s pocket.
We break both of these methods down in detail in our post, NFC skimming attacks.
NFC is also being leveraged to cash out cards after their details have been siphoned off through phishing websites. In this scenario, attackers attempt to link the stolen card to a mobile wallet on their own smartphone — a scheme we covered extensively in NFC carders hide behind Apple Pay and Google Wallet.
The stir over VPNs
In many parts of the world, getting onto certain websites isn’t as simple as it used to be. Some sites are blocked by local internet regulators or ISPs via court orders; others require users to pass an age verification check by showing ID and personal info. In some cases, sites block users from specific countries entirely just to avoid the headache of complying with local laws. Users are constantly trying to bypass these restrictions — and they often end up paying for it with their data or cash.
Many popular tools for bypassing blocks — especially free ones — effectively spy on their users. A recent audit revealed that over 20 popular services with a combined total of more than 700 million downloads actively track user location. They also tend to use sketchy encryption at best, which essentially leaves all user data out in the open for third parties to intercept.
The permissions that this category of apps actually requires are a perfect match for intercepting data and manipulating website traffic. It’s also much easier for scammers to convince a victim to grant administrative privileges to an app responsible for internet access than it is for, say, a game or a music player. We should expect this scheme to only grow in popularity.
Trojan in a box
Even cautious users can fall victim to an infection if they succumb to the urge to save some cash. Throughout 2025, cases were reported worldwide where devices were already carrying a Trojan the moment they were unboxed. Typically, these were either smartphones from obscure manufacturers or knock-offs of famous brands purchased on online marketplaces. But the threat wasn’t limited to just phones; TV boxes, tablets, smart TVs, and even digital photo frames were all found to be at risk.
It’s still not entirely clear whether the infection happens right on the factory floor or somewhere along the supply chain between the factory and the buyer’s doorstep, but the device is already infected before the first time it’s turned on. Usually, it’s a sophisticated piece of malware called Triada, first identified by Kaspersky analysts back in 2016. It’s capable of injecting itself into every running app to intercept information: stealing access tokens and passwords for popular messaging apps and social media, hijacking SMS messages (confirmation codes: ouch!), redirecting users to ad-heavy sites, and even running a proxy directly on the phone so attackers can browse the web using the victim’s identity.
Technically, the Trojan is embedded right into the smartphone’s firmware, and the only way to kill it is to reflash the device with a clean OS. Usually, once you dig into the system, you’ll find that the device has far less RAM or storage than advertised — meaning the firmware is literally lying to the owner to sell a cheap hardware config as something more premium.
Another common pre-installed menace is the BADBOX 2.0 botnet, which also pulls double duty as a proxy and an ad-fraud engine. This one specializes in TV boxes and similar hardware.
How to go on using Android without losing your mind
Despite the growing list of threats, you can still use your Android smartphone safely! You just have to stick to some strict mobile hygiene rules.
Install a comprehensive security solution on all your smartphones. We recommend Kaspersky for Android to protect against malware and phishing.
Avoid sideloading apps via APKs whenever you can use an app store instead. A known app store — even a smaller one — is always a better bet than an APK from some random website. If you have no other choice, download APK files only from official company websites, and double-check the URL of the page you’re on. If you aren’t 100% sure what the official site is, don’t just rely on a search engine; check official business directories or at least Wikipedia to verify the correct address. A hash-check sketch for sideloaded APKs follows this list.
Read OS warnings carefully during installation. Don’t grant permissions if the requested rights or actions seem illogical or excessive for the app you’re installing.
Under no circumstances should you install apps from links or attachments in chats, emails, or similar communication channels.
Buy smartphones and other electronics from official retailers, and steer clear of brands you’ve never heard of. Remember: if a deal seems too good to be true, it almost certainly is.
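If you do end up sideloading, one extra check worth doing, whenever the developer publishes file hashes, is verifying the APK’s checksum before installing it. Here’s a minimal sketch; the filename and expected hash are placeholders you’d replace with the vendor’s published values.

```python
# Minimal sketch: verify a sideloaded APK against a developer-published
# SHA-256 hash before installing. Only useful when the vendor actually
# publishes hashes; the values below are placeholders.
import hashlib

EXPECTED_SHA256 = "replace-with-the-hash-published-on-the-vendor-site"

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

if sha256_of("app-release.apk") == EXPECTED_SHA256:
    print("Hash matches the published value")
else:
    print("MISMATCH: do not install this APK")
```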
TikTok may have found a way to stay online in the US. Late last week, the company announced TikTok USDS Joint Venture LLC, a joint venture backed largely by US investors, in a deal valued at about $14 billion that allows it to continue operating in the country.
This is the culmination of a long-running fight between TikTok and US authorities. In 2019, the Committee on Foreign Investment in the United States (CFIUS) flagged ByteDance’s 2017 acquisition of Musical.ly as a national security risk, on the basis that the Chinese owner’s links to the state would put US users’ data at risk.
In his first term, President Trump issued an executive order demanding that ByteDance sell the business or face a ban. That order was blocked by courts, and President Biden later replaced it with a broader review process in 2021.
In April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), which Biden signed into law. That set a January 19, 2025 deadline for ByteDance to divest its business or face a nationwide ban. With no deal finalized, TikTok voluntarily went dark for about 12 hours on January 18, 2025. Trump later issued executive orders extending the deadline, culminating in a September 2025 agreement that led to the joint venture.
Three managing investors each hold 15% of the new business: database giant Oracle (which previously vied to acquire TikTok when ByteDance was first told to divest), technology-focused investment group Silver Lake, and the United Arab Emirates-backed AI (Artificial Intelligence) investment company MGX.
Other investors include the family office of tech entrepreneur Michael Dell, as well as Vastmere Strategic Investments, Alpha Wave Partners, Revolution, Merritt Way, and Via Nova.
Original owner ByteDance retains 19.9% of the business, and according to an internal memo released before the deal was officially announced, 30% of the company will be owned by affiliates of existing ByteDance investors. That’s in spite of the fact that PAFACA mandated a complete severance of TikTok in the US from its Chinese ownership.
A focus on security
The company is eager to promote data security for its users. With that in mind, Oracle takes the role of “trusted security partner” for data protection and compliance auditing under the deal.
Oracle is also expected to store US user data in its cloud environment. The program will reportedly align with security frameworks including the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Other TikTok-owned apps such as CapCut and Lemon8 will also fall under the joint venture’s security umbrella.
Canada’s TikTok tension
It’s been a busy month for ByteDance, with other developments north of the border. Last week, Canada’s Federal Court overturned a November 2024 government order to shut down TikTok’s Canadian business on national security grounds. The decision gives Industry Minister Mélanie Joly time to review the case.
Why this matters
TikTok’s new US joint venture lowers the risk of direct foreign access to American user data, but it doesn’t erase all of the concerns that put the app in regulators’ crosshairs in the first place. ByteDance still retains an economic stake, the recommendation algorithm remains largely opaque, and oversight depends on audits and enforcement rather than hard technical separation.
In other words, this deal reduces exposure, but it doesn’t make TikTok a risk-free platform. For users, that means the same common-sense rules still apply: be thoughtful about what you share and remember that regulatory approval isn’t the same as total data safety.
Dangerously unchecked surveillance and rights violations have been a throughline of the Department of Homeland Security since the agency’s creation in the wake of the September 11th attacks. In particular, Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have been responsible for countless civil liberties and digital rights violations since that time. In the past year, however, ICE and CBP have descended into utter lawlessness, repeatedly refusing to exercise or submit to the democratic accountability required by the Constitution and our system of laws.
The Trump Administration has made indiscriminate immigration enforcement and mass deportation a key feature of its agenda, with little to no accountability for illegal actions by agents and agency officials. Over the past year, we’ve seen massive ICE raids in cities from Los Angeles to Chicago to Minneapolis. Supercharged by an unprecedented funding increase, immigration enforcement agents haven’t been limited to boots on the ground: they’ve been scanning faces, tracking neighborhood cell phone activity, and amassing surveillance tools to monitor immigrants and U.S. citizens alike.
Congress must vote to reject any further funding of ICE and CBP
The latest enforcement actions in Minnesota have led to federal immigration agents killing Renee Good and Alex Pretti. Both were engaged in their First Amendment right to observe and record law enforcement when they were killed. And it’s only because others similarly exercised their right to record that these killings were documented and widely exposed, countering false narratives the Trump Administration promoted in an attempt to justify the unjustifiable.
These constitutional violations are systemic, not one-offs. Just last week, the Associated Press reported a leaked ICE memo that authorizes agents to enter homes solely based on “administrative” warrants—lacking any judicial involvement. This government policy is contrary to the “very core” of the Fourth Amendment, which protects us against unreasonable search and seizure, especially in our own homes.
These violations must stop now. ICE and CBP have grown so disdainful of the rule of law that reforms or guardrails cannot suffice. We join with many others in saying that Congress must vote to reject any further funding of ICE and CBP this week. But that is not enough. It’s time for Congress to do the real work of rebuilding our immigration enforcement system from the ground up, so that it respects human rights (including digital rights) and human dignity, with real accountability for individual officers, their leadership, and the agency as a whole.
Loyal readers and other privacy-conscious people will be familiar with the expression, “If it sounds too good to be true, it probably is.”
Getting paid handsomely to scroll social media definitely falls into that category. It sounds like an easy side hustle, which usually means there’s a catch.
In January 2026, an app called Freecash shot up to the number two spot on Apple’s free iOS chart in the US, helped along by TikTok ads that look a lot like job offers from TikTok itself. The ads promised up to $35 an hour to watch your “For You” page. According to reporting, the ads didn’t promote Freecash by name. Instead, they showed a young woman expressing excitement about seemingly being “hired by TikTok” to watch videos for money.
The landing pages featured TikTok and Freecash logos and invited users to “get paid to scroll” and “cash out instantly,” implying a simple exchange of time for money.
Those claims were misleading enough that TikTok said the ads violated its rules on financial misrepresentation and removed some of them.
Once you install the app, the promised TikTok paycheck vanishes. Instead, Freecash routes you to a rotating roster of mobile games—titles like Monopoly Go and Disney Solitaire—and offers cash rewards for completing time‑limited in‑game challenges. Payouts range from a single cent for a few minutes of daily play up to triple‑digit amounts if you reach high levels within a fixed period.
The whole setup is designed not to reward scrolling, as it claims, but to funnel you into games where you are likely to spend money or watch paid advertisements.
Freecash’s parent company, Berlin‑based Almedia, openly describes the platform as a way to match mobile game developers with users who are likely to install and spend. The company’s CEO has spoken publicly about using past spending data to steer users toward the genres where they’re most “valuable” to advertisers.
Our concern, beyond the bait-and-switch, is the privacy issue. Freecash’s privacy policy allows the automatic collection of highly sensitive information, including data about race, religion, sex life, sexual orientation, health, and biometrics. Each additional mobile game you install to chase rewards adds its own privacy policy, tracking, and telemetry. Together, they greatly increase how much behavioral data these companies can harvest about a user.
Experts warn that data brokers already trade lists of people likely to be more susceptible to scams or compulsive online behavior—profiles that apps like this can help refine.
We’ve previously reported on data brokers that used games and apps to build massive databases, only to later suffer breaches exposing all that data.
When asked about the ads, Freecash said the most misleading TikTok promotions were created by third-party affiliates, not by the company itself. That is plausible, since Freecash runs an affiliate payout program for people who promote the app online. The company promised to review the ads and tighten its monitoring of partners.
For experienced users, the pattern should feel familiar: eye‑catching promises of easy money, a bait‑and‑switch into something that takes more time and effort than advertised, and a business model that suddenly makes sense when you realize your attention and data are the real products.
If you’re curious to see how intrusive schemes like this can be, use a separate email address created specifically for testing and avoid sharing real personal details. Many users report that marketing emails quickly pile up once they sign up.
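One low-effort way to do that, if your mail provider supports plus-addressing, is to tag a dedicated address per service so you can filter the inevitable marketing mail and see who shared your address. Here’s a minimal sketch; the names in it are our own examples, not anything Freecash provides:

```python
# A minimal sketch: derive a per-service "plus" alias from a base address.
# Assumes your provider delivers user+tag@example.com to user@example.com
# (Gmail does, but not all providers do).
def alias_for(base: str, service: str) -> str:
    user, domain = base.split("@", 1)
    tag = "".join(c for c in service.lower() if c.isalnum())
    return f"{user}+{tag}@{domain}"

print(alias_for("jane.doe@gmail.com", "Freecash"))  # jane.doe+freecash@gmail.com
```

Keep in mind that plus tags are trivial to strip, so a genuinely separate mailbox or a dedicated alias service gives you stronger isolation.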
Some of these schemes also appeal to people who are younger or under financial pressure, offering tiny payouts while generating far more value for advertisers and app developers.
So, what can you do?
Gather information about any company you’re about to give your data to. Talk to friends and relatives about your plans; a second opinion often helps you make the right decision.
Create a separate account if you want to test a service. Use a dedicated email address and avoid sharing real personal details.
Limit information you provide online to what makes sense for the purpose. Does a game publisher need your Social Security Number? I don’t think so.
Be cautious about app installs that are framed as required to make the money initially promised, and review permissions carefully.
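On Android, for example, you don’t have to take an app’s word for what it can access: you can list exactly which permissions it has been granted. The sketch below parses the output of adb’s dumpsys command; it assumes you have adb installed and USB debugging enabled, and the package name is a made-up placeholder:

```python
# A minimal sketch: list the permissions an installed Android app has been
# granted, by parsing `adb shell dumpsys package`. Requires adb and a device
# with USB debugging enabled.
import subprocess

def granted_permissions(package: str) -> list[str]:
    out = subprocess.run(
        ["adb", "shell", "dumpsys", "package", package],
        capture_output=True, text=True, check=True,
    ).stdout
    perms = set()
    for line in out.splitlines():
        line = line.strip()
        # dumpsys lists permissions as, e.g.:
        #   android.permission.READ_CONTACTS: granted=true
        # (this only catches the standard android.permission.* namespace)
        if line.startswith("android.permission.") and "granted=true" in line:
            perms.add(line.split(":", 1)[0])
    return sorted(perms)

if __name__ == "__main__":
    for perm in granted_permissions("com.example.rewardsapp"):  # placeholder
        print(perm)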
When you hear the words “data privacy,” what do you first imagine?
Maybe you picture going into your social media apps and setting your profile and posts to private. Maybe you think about who you’ve shared your location with and deciding to revoke some of that access. Maybe you want to remove a few apps entirely from your smartphone, maybe you want to try a new web browser, maybe you even want to skirt the type of street-level surveillance provided by Automated License Plate Readers, which can record your car model, license plate number, and location on your morning drive to work.
Importantly, all of these are “data privacy,” but trying to do all of these things at once can feel impossible.
That’s why, this year, for Data Privacy Day, Malwarebytes Senior Privacy Advocate (and Lock and Code host) David Ruiz is sharing the one thing he’s doing differently to improve his privacy. And it’s this: He’s given up Google Search entirely.
When Ruiz requested the data that Google had collected about him last year, he saw that the company had recorded an eye-popping 8,000 searches in just the span of 18 months. And those 8,000 searches didn’t just reveal what he was thinking about on any given day—including his shopping interests, his home improvement projects, and his late-night medical concerns—they also revealed when he clicked on an ad based on the words he searched. This type of data, which connects a person’s searches to the likelihood of engaging with an online ad, is vital to Google’s revenue, and it’s the type of thing that Ruiz is seeking to finally cut off.
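If you want to run the same audit on your own account, Google Takeout can export your search activity as JSON. The sketch below counts search entries; it assumes the MyActivity.json layout Takeout produced at the time of writing, so paths and field names may differ in newer exports:

```python
# A minimal sketch: count your own Google searches in a Takeout export.
# Assumes the JSON layout of the "My Activity" export at the time of
# writing; paths and field names may change.
import json

def count_searches(path: str) -> int:
    with open(path, encoding="utf-8") as f:
        activity = json.load(f)  # a flat list of activity records
    # Search entries are titled "Searched for <query>"; visited pages and
    # ad clicks carry other titles, so we filter on the prefix.
    return sum(
        1 for item in activity
        if item.get("title", "").startswith("Searched for")
    )

print(count_searches("Takeout/My Activity/Search/MyActivity.json"))
```

Run against your own export, the total may surprise you as much as those 8,000 searches surprised Ruiz.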
So, for 2026, he has switched to a new search engine, Brave Search.
Today, on the Lock and Code podcast, Ruiz explains why he made the switch, what he values about Brave Search, and why he refused to replace Google with any of the major AI platforms.
The Irish government is planning to bolster police powers to intercept communications, including encrypted messages, and to provide a legal basis for the use of spyware.