Normal view

Regulators around the world are scrutinizing Grok over sexual deepfakes

12 January 2026 at 15:04

Grok’s failure to block sexualized images of minors has turned a single “isolated lapse” into a global regulatory stress test for xAI’s ambitions. The response from lawmakers and regulators suggests this will not be solved with a quick apology and a hotfix.

Last week we reported on Grok’s apology after it generated an image of young girls in “sexualized attire.”

The apology followed the introduction of Grok’s paid “Spicy Mode” in August 2025, which was marketed as edgy and less censored. In practice it enabled users to generate sexual deepfake images, including content that may cross into illegal child sexual abuse material (CSAM) under US and other jurisdictions’ laws.

A report from web-monitoring tool CopyLeaks highlighted “thousands” of incidents of Grok being used to create sexually suggestive images of non-consenting celebrities.

This is starting to backfire. Reportedly, three US senators are asking Google and Apple to remove Elon Musk’s Grok and X apps from their app stores, citing the spread of nonconsensual sexualized AI images of women and minors and arguing it violates the companies’ app store rules.

In their joint letter, the senators state:

“In recent days, X users have used the app’s Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable.”

The UK government also threatens to take possible action against the platform. Government officials have said they would fully support any action taken by Ofcom, the independent media regulator, against X. Even if that meant UK regulators could block the platform.

Indonesia and Malaysia already blocked Grok after its “digital undressing” function flooded the internet with suggestive and obscene manipulated images of women and minors.

As it turns out, a user prompted Grok to generate its own “apology,” which it did. After backlash over sexualized images of women and minors, Grok/X announced limits on image generation and editing for paying subscribers only, effectively paywalling those capabilities on main X surfaces.

For lawmakers already worried about disinformation, election interference, deepfakes, and abuse imagery, Grok is fast becoming the textbook case for why “move fast and break things” doesn’t mix with AI that can sexualize real people on demand.

Hopefully, the next wave of rules, ranging from EU AI enforcement to platform-specific safety obligations, will treat this incident as the baseline risk that all large-scale visual models must withstand, not as an outlier.

Keep your children safe

If you ever wondered why parents post images of their children with a smiley across their face, this is the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Regulators around the world are scrutinizing Grok over sexual deepfakes

12 January 2026 at 15:04

Grok’s failure to block sexualized images of minors has turned a single “isolated lapse” into a global regulatory stress test for xAI’s ambitions. The response from lawmakers and regulators suggests this will not be solved with a quick apology and a hotfix.

Last week we reported on Grok’s apology after it generated an image of young girls in “sexualized attire.”

The apology followed the introduction of Grok’s paid “Spicy Mode” in August 2025, which was marketed as edgy and less censored. In practice it enabled users to generate sexual deepfake images, including content that may cross into illegal child sexual abuse material (CSAM) under US and other jurisdictions’ laws.

A report from web-monitoring tool CopyLeaks highlighted “thousands” of incidents of Grok being used to create sexually suggestive images of non-consenting celebrities.

This is starting to backfire. Reportedly, three US senators are asking Google and Apple to remove Elon Musk’s Grok and X apps from their app stores, citing the spread of nonconsensual sexualized AI images of women and minors and arguing it violates the companies’ app store rules.

In their joint letter, the senators state:

“In recent days, X users have used the app’s Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable.”

The UK government also threatens to take possible action against the platform. Government officials have said they would fully support any action taken by Ofcom, the independent media regulator, against X. Even if that meant UK regulators could block the platform.

Indonesia and Malaysia already blocked Grok after its “digital undressing” function flooded the internet with suggestive and obscene manipulated images of women and minors.

As it turns out, a user prompted Grok to generate its own “apology,” which it did. After backlash over sexualized images of women and minors, Grok/X announced limits on image generation and editing for paying subscribers only, effectively paywalling those capabilities on main X surfaces.

For lawmakers already worried about disinformation, election interference, deepfakes, and abuse imagery, Grok is fast becoming the textbook case for why “move fast and break things” doesn’t mix with AI that can sexualize real people on demand.

Hopefully, the next wave of rules, ranging from EU AI enforcement to platform-specific safety obligations, will treat this incident as the baseline risk that all large-scale visual models must withstand, not as an outlier.

Keep your children safe

If you ever wondered why parents post images of their children with a smiley across their face, this is the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Nederlandse bedrijven vaak doelwit van phishing, maar doen er bijna niets aan

30 September 2025 at 18:23
Uit recent onderzoek van SIDN, uitgevoerd in aanloop naar de cybersecuritymaand die in oktober plaatsvindt, blijkt een zorgwekkende trend in het Nederlandse bedrijfsleven. Veel bedrijf zijn namelijk nog steeds slachtoffer van phishing, maar nemen tegelijkertijd weinig maatregelen.

Datalek bevolkingsonderzoek nog veel groter dan gedacht: zeker 700.000 slachtoffers

29 August 2025 at 10:25
De datalek door de hack op het laboratorium dat onderzoek deed naar baarmoederhalskanker blijkt nog veel groter dan gedacht. Gevoelige persoonlijke informatie van zeker 700.000 vrouwen is gelekt, veel meer dan in eerste instantie werd gemeld.

Dit zijn de beste wachtwoord-apps van dit moment

21 February 2025 at 17:05
Met onze Bright Stuff-koopgids helpen we je graag aan het beste product van het moment. Deze keer: wachtwoord-managers, password-apps, software voor het aanmaken en bewaren van al je wachtwoorden. Handig, veilig en gewoon heel verstandig.

pcTattletale founder pleads guilty as US cracks down on stalkerware

9 January 2026 at 16:41

Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.

This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could could open the door to further cases against people who develop, sell, or promote similar tools.

In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.

In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware is installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.

This is no longer possible. Not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.

However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.

Fleming is expected to be sentenced later this year.

Removing stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.

It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.

Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.

To scan your device:

  1. Open your Malwarebytes dashboard
  2. Start a Scan

The scan may take a few minutes.

 If malware is detected, you can choose one of the following actions:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The file detection will be added to the Allow List, and excluded from future scans. Legitimate files are sometimes detected as malware. We recommend reviewing scan results and adding files to Ignore Always that you know are safe and want to keep.
  • Ignore Once: The detection is ignored for this scan only. It will be detected again during your next scan.

Malwarebytes detects pcTattleTale as PUP.Optional.PCTattletale.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

pcTattletale founder pleads guilty as US cracks down on stalkerware

9 January 2026 at 16:41

Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.

This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could could open the door to further cases against people who develop, sell, or promote similar tools.

In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.

In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware is installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.

This is no longer possible. Not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.

However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.

Fleming is expected to be sentenced later this year.

Removing stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.

It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.

Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.

To scan your device:

  1. Open your Malwarebytes dashboard
  2. Start a Scan

The scan may take a few minutes.

 If malware is detected, you can choose one of the following actions:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The file detection will be added to the Allow List, and excluded from future scans. Legitimate files are sometimes detected as malware. We recommend reviewing scan results and adding files to Ignore Always that you know are safe and want to keep.
  • Ignore Once: The detection is ignored for this scan only. It will be detected again during your next scan.

Malwarebytes detects pcTattleTale as PUP.Optional.PCTattletale.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Are we ready for ChatGPT Health?

9 January 2026 at 13:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

ChatGPT health

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI only uses partners and apps in ChatGPT Health that meet OpenAI’s privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is very sensitive and as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart. OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. I’d encourage them to do the same.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Are we ready for ChatGPT Health?

9 January 2026 at 13:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

ChatGPT health

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI only uses partners and apps in ChatGPT Health that meet OpenAI’s privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is very sensitive and as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart. OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. I’d encourage them to do the same.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Disney fined $10m for mislabeling kids’ YouTube videos and violating privacy law

6 January 2026 at 13:22

Disney will pay a $10m settlement over allegations that it violated kids’ privacy rights, the Federal Trade Commission (FTC) said this week.

The agreement, first proposed in September 2025, resolves a dispute over Disney’s labeling of child-targeted content on YouTube. The thousands of YouTube videos it targets at kids makes it subject to a US law called the Children’s Online Privacy Protection Act (COPPA). Enacted in 1998, COPPA is designed to protect children under the age of 13 from having their data collected and used online.

That protection matters because children are far less able to understand data collection, advertising, or profiling, and cannot understandingfully consent to it. When COPPA safeguards fail, children may be tracked across videos, served targeted ads, or profiled based on viewing habits, all without parental knowledge or approval.

In 2019, YouTube introduced a policy to help creators comply with COPPA by labeling their content as made for kids (MFK) or not made for kids (NMFK). Content labeled MFK is automatically restricted. For example, it can’t autoplay into related content, appear in the miniplayer, or be added to playlists.

This policy came about after the YouTube’s own painful COPPA-related experience in 2019, when it settled for $170m with the FTC after failing to properly label content directed at children. That still ranks as the biggest ever COPPA settlement by far.

Perhaps the two most important restrictions for videos labeled MFK are these: MFK videos should only autoplay into other kid-appropriate content, preventing (at least in theory) kids from seeing inappropriate content. And advertisers are prohibited from collecting personal data from children watching those videos.

A chastened YouTube warned content creators, including Disney, that they could violate COPPA if they failed to label content correctly. They could do this in two ways: Creators could label entire channels (Disney has about 1,250 of these for its different content brands) or individual videos. So, a channel marked NMFK could still host MFK videos, but those individual videos needed to be labeled correctly.

According to the FTC, Disney’s efforts fell short and plenty of child-targeted videos were incorrectly labeled.

The court complaint stated that Disney applied blanket NMFK labels to entire YouTube channels instead of reviewing videos individually. As a result, some child-targeted videos were incorrectly labeled, allowing data collection and ad targeting that COPPA is meant to prevent. For example, the Pixar channel was labeled NMFK, but showed “very similar” videos from the Pixar Cars channel, which was labeled MFK.

The FTC said YouTube warned Disney in June 2020 that it had reclassified more than 300 of its videos as child-directed across channels including Pixar, Disney Movies, and Walt Disney Animation Studios.

This is not Disney’s first privacy rodeo

Disney has a history of tussles with child privacy laws. In 2011, its Playdom subsidiary paid $3 million (at that point the largest COPPA penalty ever) for collecting data from more than 1.2 million children across 20 virtual world websites. In 2021, Disney also settled a lawsuit that accused it and others of collecting and selling kids’ information via child-focused mobile apps.

In the current case, the FTC voted 3-0 to refer this current case to the Department of Justice, with Commissioners Ferguson, Holyoak, and Meador citing what they described as,

“Disney’s abuse of parents’ trust.”

Under the settlement, Disney must do more than pay up. It also has to notify parents before collecting personal information from children under 13 and obtain parents’ consent to use it. Disney must also review whether individual videos should be labeled as made for kids. However, the FTC provides a get-out clause: Disney won’t have to do this if YouTube implements age assurance technologies that determine a viewer’s age (or age category).

Age assurance is clearly something the FTC is pursuing, saying:

“This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.”


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Disney fined $10m for mislabeling kids’ YouTube videos and violating privacy law

6 January 2026 at 13:22

Disney will pay a $10m settlement over allegations that it violated kids’ privacy rights, the Federal Trade Commission (FTC) said this week.

The agreement, first proposed in September 2025, resolves a dispute over Disney’s labeling of child-targeted content on YouTube. The thousands of YouTube videos it targets at kids makes it subject to a US law called the Children’s Online Privacy Protection Act (COPPA). Enacted in 1998, COPPA is designed to protect children under the age of 13 from having their data collected and used online.

That protection matters because children are far less able to understand data collection, advertising, or profiling, and cannot understandingfully consent to it. When COPPA safeguards fail, children may be tracked across videos, served targeted ads, or profiled based on viewing habits, all without parental knowledge or approval.

In 2019, YouTube introduced a policy to help creators comply with COPPA by labeling their content as made for kids (MFK) or not made for kids (NMFK). Content labeled MFK is automatically restricted. For example, it can’t autoplay into related content, appear in the miniplayer, or be added to playlists.

This policy came about after the YouTube’s own painful COPPA-related experience in 2019, when it settled for $170m with the FTC after failing to properly label content directed at children. That still ranks as the biggest ever COPPA settlement by far.

Perhaps the two most important restrictions for videos labeled MFK are these: MFK videos should only autoplay into other kid-appropriate content, preventing (at least in theory) kids from seeing inappropriate content. And advertisers are prohibited from collecting personal data from children watching those videos.

A chastened YouTube warned content creators, including Disney, that they could violate COPPA if they failed to label content correctly. They could do this in two ways: Creators could label entire channels (Disney has about 1,250 of these for its different content brands) or individual videos. So, a channel marked NMFK could still host MFK videos, but those individual videos needed to be labeled correctly.

According to the FTC, Disney’s efforts fell short and plenty of child-targeted videos were incorrectly labeled.

The court complaint stated that Disney applied blanket NMFK labels to entire YouTube channels instead of reviewing videos individually. As a result, some child-targeted videos were incorrectly labeled, allowing data collection and ad targeting that COPPA is meant to prevent. For example, the Pixar channel was labeled NMFK, but showed “very similar” videos from the Pixar Cars channel, which was labeled MFK.

The FTC said YouTube warned Disney in June 2020 that it had reclassified more than 300 of its videos as child-directed across channels including Pixar, Disney Movies, and Walt Disney Animation Studios.

This is not Disney’s first privacy rodeo

Disney has a history of tussles with child privacy laws. In 2011, its Playdom subsidiary paid $3 million (at that point the largest COPPA penalty ever) for collecting data from more than 1.2 million children across 20 virtual world websites. In 2021, Disney also settled a lawsuit that accused it and others of collecting and selling kids’ information via child-focused mobile apps.

In the current case, the FTC voted 3-0 to refer this current case to the Department of Justice, with Commissioners Ferguson, Holyoak, and Meador citing what they described as,

“Disney’s abuse of parents’ trust.”

Under the settlement, Disney must do more than pay up. It also has to notify parents before collecting personal information from children under 13 and obtain parents’ consent to use it. Disney must also review whether individual videos should be labeled as made for kids. However, the FTC provides a get-out clause: Disney won’t have to do this if YouTube implements age assurance technologies that determine a viewer’s age (or age category).

Age assurance is clearly something the FTC is pursuing, saying:

“This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.”


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Flock Exposes Its AI-Enabled Surveillance Cameras

2 January 2026 at 13:05

404 Media has the story:

Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces as they walk through a parking lot, down a public street, or play on a playground, or they can be controlled manually, according to marketing material on Flock’s website. We watched Condor cameras zoom in on a woman walking her dog on a bike path in suburban Atlanta; a camera followed a man walking through a Macy’s parking lot in Bakersfield; surveil children swinging on a swingset at a playground; and film high-res video of people sitting at a stoplight in traffic. In one case, we were able to watch a man rollerblade down Brookhaven, Georgia’s Peachtree Creek Greenway bike path. The Flock camera zoomed in on him and tracked him as he rolled past. Minutes later, he showed up on another exposed camera livestream further down the bike path. The camera’s resolution was good enough that we were able to see that, when he stopped beneath one of the cameras, he was watching rollerblading videos on his phone.

In 2025, age checks started locking people out of the internet

31 December 2025 at 11:49

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.​

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.​

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.​

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.​

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.​

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.​

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumventions, signalling a willingness to attack the workarounds rather than the underlying incentives.​

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.​

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.​

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

In 2025, age checks started locking people out of the internet

31 December 2025 at 11:49

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.​

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.​

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.​

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.​

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.​

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.​

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumventions, signalling a willingness to attack the workarounds rather than the underlying incentives.​

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.​

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.​

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

EFF's Investigations Expose Flock Safety's Surveillance Abuses: 2025 in Review

30 December 2025 at 20:03

Throughout 2025, EFF conducted groundbreaking investigations into Flock Safety's automated license plate reader (ALPR) network, revealing a system designed to enable mass surveillance and susceptible to grave abuses. Our research sparked state and federal investigations, drove landmark litigation, and exposed dangerous expansion into always-listening voice detection technology. We documented how Flock's surveillance infrastructure allowed law enforcement to track protesters exercising their First Amendment rights, target Romani people with discriminatory searches, and surveil women seeking reproductive healthcare.

Flock Enables Surveillance of Protesters

When we obtained datasets representing more than 12 million searches logged by more than 3,900 agencies between December 2024 and October 2025, the patterns were unmistakable. Agencies logged hundreds of searches related to political demonstrations—the 50501 protests in February, Hands Off protests in April, and No Kings protests in June and October. Nineteen agencies conducted dozens of searches specifically tied to No Kings protests alone. Sometimes searches explicitly referenced protest activity; other times, agencies used vague terminology to obscure surveillance of constitutionally protected speech.

The surveillance extended beyond mass demonstrations. Three agencies used Flock's system to target activists from Direct Action Everywhere, an animal-rights organization using civil disobedience to expose factory farm conditions. Delaware State Police queried the Flock network nine times in March 2025 related to Direct Action Everywhere actions—showing how ALPR surveillance targets groups engaged in activism challenging powerful industries.

Biased Policing and Discriminatory Searches

Our November analysis revealed deeply troubling patterns: more than 80 law enforcement agencies used language perpetuating harmful stereotypes against Romani people when searching the nationwide Flock Safety ALPR network. Between June 2024 and October 2025, police performed hundreds of searches using terms such as "roma" and racial slurs—often without mentioning any suspected crime.

Audit logs revealed searches including "roma traveler," "possible g*psy," and "g*psy ruse." Grand Prairie Police Department in Texas searched for the slur six times while using Flock's "Convoy" feature, which identifies vehicles traveling together—essentially targeting an entire traveling community without specifying any crime. According to a 2020 Harvard University survey, four out of 10 Romani Americans reported being subjected to racial profiling by police. Flock's system makes such discrimination faster and easier to execute at scale.

Weaponizing Surveillance Against Reproductive Rights

In October, we obtained documents showing that Texas deputies queried Flock Safety's surveillance data in what police characterized as a missing person investigation, but was actually an abortion case. Deputies initiated a "death investigation" of a "non-viable fetus," logged evidence of a woman's self-managed abortion, and consulted prosecutors about possible charges.

A Johnson County official ran two searches with the note "had an abortion, search for female." The second search probed 6,809 networks, accessing 83,345 cameras across nearly the entire country. This case revealed Flock's fundamental danger: a single query accesses more than 83,000 cameras spanning almost the entire nation, with minimal oversight and maximum potential for abuse—particularly when weaponized against people seeking reproductive healthcare.

Feature Updates Miss the Point

In June, EFF explained why Flock Safety's announced feature updates cannot make ALPRs safe. The company promised privacy-enhancing features like geofencing and retention limits in response to public pressure. But these tweaks don't address the core problem: Flock's business model depends on building a nationwide, interconnected surveillance network that creates risks no software update can eliminate. Our 2025 investigations proved that abuses stem from the architecture itself, not just how individual agencies use the technology.

Accountability and Community Action

EFF's work sparked significant accountability measures. U.S. Rep. Raja Krishnamoorthi and Rep. Robert Garcia launched a formal investigation into Flock's role in "enabling invasive surveillance practices that threaten the privacy, safety, and civil liberties of women, immigrants, and other vulnerable Americans."

Illinois Secretary of State Alexi Giannoulias launched an audit after EFF research showed Flock allowed U.S. Customs and Border Protection to access Illinois data in violation of state privacy laws. In November, EFF partnered with the ACLU of Northern California to file a lawsuit against San Jose and its police department, challenging warrantless searches of millions of ALPR records. Between June 5, 2024 and June 17, 2025, SJPD and other California law enforcement agencies searched San Jose's database 3,965,519 times—a staggering figure illustrating the vast scope of warrantless surveillance enabled by Flock's infrastructure.

Our investigations also fueled municipal resistance to Flock Safety. Communities from Austin to Evanston to Eugene successfully canceled or refused to renew their Flock contracts after organizing campaigns centered on our research documenting discriminatory policing, immigration enforcement, threats to reproductive rights, and chilling effects on protest. These victories demonstrate that communities—armed with evidence of Flock's harms—can challenge and reject surveillance infrastructure that threatens civil liberties.

Dangerous New Capabilities: Always-Listening Microphones

In October 2025, Flock announced plans to expand its gunshot detection microphones to listen for "human distress" including screaming. This dangerous expansion transforms audio sensors into powerful surveillance tools monitoring human voices on city streets. High-powered microphones above densely populated areas raise serious questions about wiretapping laws, false alerts, and potential for dangerous police responses to non-emergencies. After EFF exposed this feature, Flock quietly amended its marketing materials to remove explicit references to "screaming"—replacing them with vaguer language about "distress" detection—while continuing to develop and deploy the technology.

Looking Forward

Flock Safety's surveillance infrastructure is not a neutral public safety tool. It's a system that enables and amplifies racist policing, threatens reproductive rights, and chills constitutionally protected speech. Our 2025 investigations proved it beyond doubt. As we head into 2026, EFF will continue exposing these abuses, supporting communities fighting back, and litigating for the constitutional protections that surveillance technology has stripped away.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

2025 exposed the risks we ignored while rushing AI

30 December 2025 at 11:02

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

2025 exposed the risks we ignored while rushing AI

30 December 2025 at 11:02

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

EFFector Audio Speaks Up for Our Rights: 2025 Year in Review

28 December 2025 at 23:57

This year, you may have heard EFF sounding off about our civil liberties on NPR, BBC Radio, or any number of podcasts. But we also started sharing our voices directly with listeners in 2025. In June, we revamped EFFector, our long-running electronic newsletter, and launched a new audio edition to accompany it.

Providing a recap of the week's most important digital rights news, EFFector's audio companion features exclusive interviews where EFF's lawyers, activists, and technologists can dig deeper into the biggest stories in privacy, free speech, and innovation. Here are just some of the best interviews from EFFector Audio in 2025.

Unpacking a Social Media Spying Scheme

Earlier this year, the Trump administration launched a sprawling surveillance program to spy on the social media activity of millions of noncitizens—and punish those who express views it doesn't like. This fall, EFF's Lisa Femia came onto EFFector Audio to explain how this scheme works, its impact on free speech, and, importantly, why EFF is suing to stop it.

"We think all of this is coming together as a way to chill people's speech and make it so they do not feel comfortable expressing core political viewpoints protected by the First Amendment," Femia said.


Challenging the Mass Surveillance of Drivers

But Lisa was hardly the only guest talking about surveillance. In November, EFF's Andrew Crocker spoke to EFFector about Automated License Plate Readers (ALPRs), a particularly invasive and widespread form of surveillance. ALPR camera networks take pictures of every passing vehicle and upload the location information of millions of drivers into central databases. Police can then search these databases—typically without any judicial approval—to instantly reconstruct driver movements over weeks, months, or even years at a time.

"It really is going to be a very detailed picture of your habits over the course of a long period of time," said Crocker, explaining how ALPR location data can reveal where you work, worship, and many other intimate details about your life. Crocker also talked about a new lawsuit, filed by two nonprofits represented by EFF and the ACLU of Northern California, challenging the city of San Jose's use of ALPR searches without a warrant.

Similarly, EFF's Mario Trujillo joined EFFector in early November to discuss the legal issues and mass surveillance risks around face recognition in consumer devices.

Simple Tips to Take Control of Your Privacy

Online privacy isn’t dead. But tech giants have tried to make protecting it as annoying as possible. To help users take back control, we celebrated Opt Out October, sharing daily privacy tips all month long on our blog. In addition to laying down some privacy basics, EFF's Thorin Klosowski talked to EFFector about how small steps to protect your data can build up into big differences.

"This is a way to kind of break it down into small tasks that you can do every day and accomplish a lot," said Klosowski. "By the end of it, you will have taken back a considerable amount of your privacy."

User privacy was the focus of a number of EFFector interviews. In July, EFF's Lena Cohen spoke about what lawmakers, tech companies, and individuals can do to fight online tracking. That same month, Matthew Guariglia talked about precautions consumers can take before bringing surveillance devices like smart doorbells into their homes.

Digging Into the Next Wave of Internet Censorship

One of the most troubling trends of 2025 was the proliferation of age verification laws, which require online services to check, estimate, or verify users’ ages. Though these mandates claim to protect children, they ultimately create harmful censorship and surveillance regimes that put everyone—adults and young people alike—at risk.

This summer, EFF's Rin Alajaji came onto EFFector Audio to explain how these laws work and why we need to speak out against them.

"Every person listening here can push back against these laws that expand censorship," she said. "We like to say that if you care about internet freedom, this fight is yours."

This was just one of several interviews about free speech online. This year, EFFector also hosted Paige Collings to talk about the chaotic rollout of the UK's Online Safety Act and Lisa Femia (again!) to discuss the abortion censorship crisis on social media.

You can hear all these episodes and future installments of EFFector's audio companion on YouTube or the Internet Archive. Or check out our revamped EFFector newsletter by subscribing at eff.org/effector!

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

❌