Phishing scammers are posting fake “account restricted” comments on LinkedIn

14 January 2026 at 16:55

Recently, fake LinkedIn profiles have started posting comment replies claiming that a user has “engaged in activities that are not in compliance” with LinkedIn’s policies and that their account has been “temporarily restricted” until they submit an appeal through a specified link in the comment.

The comments come in different shapes and sizes, but here’s one example we found.

Your account is at risk of suspension

The accounts posting the comments all try to look like official LinkedIn bots and use various names. It’s likely they create new accounts when LinkedIn removes them. Either way, multiple accounts similar to the “Linked Very” one above were reported in a short period, suggesting automated creation and posting at scale.

The same pattern is true for the links. The shortened link used in the example above has already been disabled, while others point directly to phishing sites. Scammers often use shortened LinkedIn links to build trust, making targets believe the messages are legitimate. Because LinkedIn can quickly disable these links, attackers likely test different approaches to see which last the longest.

Here’s another example:

As a preventive measure, access to your account is temporarily restricted

Malwarebytes blocks this last link based on the IP address:

Malwarebytes blocks 103.224.182.251

If users follow these links, they are taken to a phishing page designed to steal their LinkedIn login details:

fake LinkedIn log in site
Image courtesy of BleepingComputer

A LinkedIn spokesperson confirmed to BleepingComputer they are aware of the situation:

“I can confirm that we are aware of this activity and our teams are working to take action.”

Stay safe

In situations like this, awareness is key—and now you know what to watch for. Some additional tips:

  • Don’t click on unsolicited links in private messages and comments without verifying with the trusted sender that they’re legitimate.
  • Always log in directly on the platform that you are trying to access, rather than through a link.
  • Use a password manager, which won’t auto-fill credentials on fake websites.
  • Use a real-time, up-to-date anti-malware solution with a web protection module to block malicious sites.

Pro tip: The free Malwarebytes Browser Guard extension blocks known malicious websites and scripts.
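To make the "check before you click" advice concrete, here is a minimal Python sketch of one heuristic behind such blocking: flagging links whose host is a raw IP address (like the one blocked above) or falls outside the domain the message claims to come from. The function name and `expected_domain` parameter are our own illustration, not part of any Malwarebytes product, and a real check would also consult reputation services:

```python
import ipaddress
from urllib.parse import urlparse

def looks_suspicious(url: str, expected_domain: str = "linkedin.com") -> bool:
    """Crude phishing-link heuristic: flag raw-IP hosts and hosts
    outside the domain the message claims to be from."""
    host = urlparse(url).hostname or ""
    try:
        # Raw IP addresses in place of a hostname are a classic phishing tell.
        ipaddress.ip_address(host)
        return True
    except ValueError:
        pass
    # Accept the exact domain or a true subdomain; anything else is suspect.
    return not (host == expected_domain or host.endswith("." + expected_domain))
```

For example, `looks_suspicious("http://103.224.182.251/login")` returns `True`, while a link to `www.linkedin.com` does not. Note the subdomain check requires a leading dot, so a lookalike such as `notlinkedin.com` is still flagged.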


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!


Data broker fined after selling Alzheimer’s patient info and millions of sensitive profiles

13 January 2026 at 17:05

California’s privacy regulator has fined a Texas data broker $45,000 and banned it from selling Californians’ personal information after it sold Alzheimer’s patients’ data. Texan company Rickenbacher Data LLC, which does business as Datamasters, bought and resold the names, addresses, phone numbers, and email addresses of people who suffered from serious health conditions, according to the California Privacy Protection Agency (CPPA).

The CPPA’s final order against Datamasters says that the company maintained a database containing 435,245 postal addresses for Alzheimer’s patients. But it didn’t stop there. Also up for grabs were records for 2,317,141 blind or visually impaired people, and 133,142 addiction sufferers. It also sold records for 857,449 people with bladder control issues.

Health-related data wasn’t the only category Datamasters trafficked in. The company also sold information tied to ethnicity, including so-called “Hispanic lists” containing more than 20 million names, as well as age-based “senior lists” and indicators of financial vulnerability. For example, it sold records of people holding high-interest mortgages.

And if buyers wanted data on other likely customer characteristics and actions, such as who was probably a liberal versus a conservative, Datamasters could supply that too, thanks to 3,370 “Consumer Predictor Models” spanning automotive preferences, financial activity, media use, political affiliation, and nonprofit activity.

Datamasters offers outright purchase of records from its national consumer database, which it claims covers 114 million households and 231 million individuals. Customers can also buy subscription-based updates.

California regulators began investigating Datamasters after discovering the company had failed to register as a data broker in the state, as required under California’s Delete Act. The law has required data brokers to register since January 31, 2025.

The company originally denied that it did business in California or had data on Californians. However, that claim collapsed when regulators found an Excel spreadsheet on the website listing 204,218 California student records.

Datamasters first said it had not screened its national database to remove Californians’ data. After getting a lawyer, it changed its story, asserting that it did in fact filter Californians out of the data set. That didn’t convince the CPPA though.

The regulator acknowledged that Datamasters did try to comply with Californian privacy laws, but that it

“lacked sufficient written policies and procedures to ensure compliance with the Delete Act.”

The fine imposed on Datamasters also takes into account that it hadn’t registered on the state’s data broker registry. Data brokers that don’t register are liable for $200 per day in fines, and failing to delete consumer data will incur $200 per consumer per day in fines.
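To make the penalty structure concrete, here is a purely hypothetical sketch of how exposure under the two Delete Act rules adds up. The function and the example figures are ours for illustration only; they are not taken from the Datamasters order:

```python
DAILY_REGISTRATION_FINE = 200   # $ per day a broker operates unregistered
PER_CONSUMER_DAILY_FINE = 200   # $ per consumer per day for failing to delete

def delete_act_exposure(days_unregistered: int,
                        consumers_not_deleted: int,
                        days_not_deleted: int) -> int:
    """Total potential fine in dollars under the two penalty rules."""
    registration = DAILY_REGISTRATION_FINE * days_unregistered
    deletion = PER_CONSUMER_DAILY_FINE * consumers_not_deleted * days_not_deleted
    return registration + deletion

# A hypothetical broker unregistered for 90 days that ignored 10
# deletion requests for 30 days: 200*90 + 200*10*30 = $78,000
print(delete_act_exposure(90, 10, 30))
```

The per-consumer multiplier is what makes the deletion penalty bite: in the sketch above, ten ignored requests dwarf three months of registration fines.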

Starting January 1, 2028, data brokers registered in California will also be required to undergo independent third-party compliance audits every three years.

Why selling extra-sensitive customer data is so dangerous

“History teaches us that certain types of lists can be dangerous,”

Michael Macko, the CPPA’s head of enforcement, pointed out.

Research has told us that Alzheimer’s patients are especially vulnerable to financial exploitation. If you think that scammers don’t seek out such lists, think again; criminals were found to have accessed data from at least three data brokers in the past. While there’s no suggestion that Datamasters knowingly sold data to scammers, it seems easy for people to buy data broker lists.

It also doesn’t take a PhD to see why many of these records (which, remember, the company holds about people nationwide) could be especially sensitive in the current US political climate.

There’s a broader privacy issue here, too. While many Americans might assume that the federal Health Insurance Portability and Accountability Act (HIPAA) protects their health data, it only applies to healthcare providers. Amazingly, data brokers sit outside its purview.

So what can you do to protect yourself?

Your first port of call should be your state’s data protection law. California introduced the Data Request and Opt-out Platform (DROP) system this year under the Delete Act. It’s an opt-out system for California residents to make all data brokers on the registry delete data held about them.

If you don’t live in a state that takes sensitive data seriously, your options are more limited. You could move—maybe to Europe, where privacy protections are considerably stronger.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.


Regulators around the world are scrutinizing Grok over sexual deepfakes

12 January 2026 at 15:04

Grok’s failure to block sexualized images of minors has turned a single “isolated lapse” into a global regulatory stress test for xAI’s ambitions. The response from lawmakers and regulators suggests this will not be solved with a quick apology and a hotfix.

Last week we reported on Grok’s apology after it generated an image of young girls in “sexualized attire.”

The apology followed the introduction of Grok’s paid “Spicy Mode” in August 2025, which was marketed as edgy and less censored. In practice it enabled users to generate sexual deepfake images, including content that may cross into illegal child sexual abuse material (CSAM) under US and other jurisdictions’ laws.

A report from web-monitoring tool CopyLeaks highlighted “thousands” of incidents of Grok being used to create sexually suggestive images of non-consenting celebrities.

This is starting to backfire. Reportedly, three US senators are asking Google and Apple to remove Elon Musk’s Grok and X apps from their app stores, citing the spread of nonconsensual sexualized AI images of women and minors and arguing it violates the companies’ app store rules.

In their joint letter, the senators state:

“In recent days, X users have used the app’s Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable.”

The UK government is also threatening possible action against the platform. Government officials have said they would fully support any action taken by Ofcom, the independent media regulator, against X, even if that meant UK regulators blocking the platform.

Indonesia and Malaysia already blocked Grok after its “digital undressing” function flooded the internet with suggestive and obscene manipulated images of women and minors.

As it turns out, a user prompted Grok to generate its own “apology,” which it did. After backlash over sexualized images of women and minors, Grok/X announced limits on image generation and editing for paying subscribers only, effectively paywalling those capabilities on main X surfaces.

For lawmakers already worried about disinformation, election interference, deepfakes, and abuse imagery, Grok is fast becoming the textbook case for why “move fast and break things” doesn’t mix with AI that can sexualize real people on demand.

Hopefully, the next wave of rules, ranging from EU AI enforcement to platform-specific safety obligations, will treat this incident as the baseline risk that all large-scale visual models must withstand, not as an outlier.

Keep your children safe

If you ever wondered why parents post images of their children with a smiley across their face, this is the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


Dutch companies are frequent phishing targets, but do almost nothing about it

30 September 2025 at 18:23

Recent research by SIDN, conducted in the run-up to cybersecurity month in October, reveals a worrying trend in Dutch business: many companies still fall victim to phishing, yet take few countermeasures.

Data breach in cervical cancer screening far larger than thought: at least 700,000 victims

29 August 2025 at 10:25

The data breach caused by the hack of the laboratory that carried out cervical cancer screening turns out to be far larger than thought. Sensitive personal information of at least 700,000 women was leaked, far more than was initially reported.

These are the best password apps right now

21 February 2025 at 17:05

With our Bright Stuff buying guide, we help you find the best product of the moment. This time: password managers, password apps, software for creating and storing all your passwords. Convenient, secure, and simply very sensible.

pcTattletale founder pleads guilty as US cracks down on stalkerware

9 January 2026 at 16:41

Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.

This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could open the door to further cases against people who develop, sell, or promote similar tools.

In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.

In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware is installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.

This is no longer possible. Not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.

However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.

Fleming is expected to be sentenced later this year.

Removing stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.

It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.

Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.

To scan your device:

  1. Open your Malwarebytes dashboard
  2. Start a Scan

The scan may take a few minutes.

If malware is detected, you can choose one of the following actions:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The detection will be added to the Allow List and excluded from future scans. Legitimate files are sometimes detected as malware, so we recommend reviewing scan results and only adding files you know are safe and want to keep.
  • Ignore Once. The detection is ignored for this scan only and will be flagged again during your next scan.

Malwarebytes detects pcTattletale as PUP.Optional.PCTattletale.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

pcTattletale founder pleads guilty as US cracks down on stalkerware

9 January 2026 at 16:41

Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.

This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could could open the door to further cases against people who develop, sell, or promote similar tools.

In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.

In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware is installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.

This is no longer possible. Not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.

However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.

Fleming is expected to be sentenced later this year.

Removing stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.

It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.

Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.

To scan your device:

  1. Open your Malwarebytes dashboard
  2. Start a Scan

The scan may take a few minutes.

 If malware is detected, you can choose one of the following actions:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The file detection will be added to the Allow List, and excluded from future scans. Legitimate files are sometimes detected as malware. We recommend reviewing scan results and adding files to Ignore Always that you know are safe and want to keep.
  • Ignore Once. The detection is ignored for this scan only. It will be detected again during your next scan.

Malwarebytes detects pcTattletale as PUP.Optional.PCTattletale.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Are we ready for ChatGPT Health?

9 January 2026 at 13:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

ChatGPT health

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI says it only works with partners and apps in ChatGPT Health that meet its privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is extremely sensitive. As Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, told The Record, by sharing their electronic medical records with ChatGPT Health, users in the US could effectively strip those records of their HIPAA protection, which is a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart. OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week; I’d encourage every one of them to enable 2FA too.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Disney fined $10m for mislabeling kids’ YouTube videos and violating privacy law

6 January 2026 at 13:22

Disney will pay a $10m settlement over allegations that it violated kids’ privacy rights, the Federal Trade Commission (FTC) said this week.

The agreement, first proposed in September 2025, resolves a dispute over Disney’s labeling of child-targeted content on YouTube. The thousands of YouTube videos it targets at kids make it subject to a US law called the Children’s Online Privacy Protection Act (COPPA). Enacted in 1998, COPPA is designed to protect children under the age of 13 from having their data collected and used online.

That protection matters because children are far less able to understand data collection, advertising, or profiling, and cannot meaningfully consent to it. When COPPA safeguards fail, children may be tracked across videos, served targeted ads, or profiled based on viewing habits, all without parental knowledge or approval.

In 2019, YouTube introduced a policy to help creators comply with COPPA by labeling their content as made for kids (MFK) or not made for kids (NMFK). Content labeled MFK is automatically restricted. For example, it can’t autoplay into related content, appear in the miniplayer, or be added to playlists.

This policy came about after YouTube’s own painful COPPA-related experience in 2019, when it settled for $170m with the FTC after failing to properly label content directed at children. That still ranks as the biggest ever COPPA settlement by far.

Perhaps the two most important restrictions for videos labeled MFK are these: MFK videos should only autoplay into other kid-appropriate content, preventing (at least in theory) kids from seeing inappropriate content. And advertisers are prohibited from collecting personal data from children watching those videos.

A chastened YouTube warned content creators, including Disney, that they could violate COPPA if they failed to label content correctly. They could do this in two ways: Creators could label entire channels (Disney has about 1,250 of these for its different content brands) or individual videos. So, a channel marked NMFK could still host MFK videos, but those individual videos needed to be labeled correctly.

According to the FTC, Disney’s efforts fell short and plenty of child-targeted videos were incorrectly labeled.

The court complaint stated that Disney applied blanket NMFK labels to entire YouTube channels instead of reviewing videos individually. As a result, some child-targeted videos were incorrectly labeled, allowing data collection and ad targeting that COPPA is meant to prevent. For example, the Pixar channel was labeled NMFK, but showed “very similar” videos from the Pixar Cars channel, which was labeled MFK.

The FTC said YouTube warned Disney in June 2020 that it had reclassified more than 300 of its videos as child-directed across channels including Pixar, Disney Movies, and Walt Disney Animation Studios.

This is not Disney’s first privacy rodeo

Disney has a history of tussles with child privacy laws. In 2011, its Playdom subsidiary paid $3 million (at that point the largest COPPA penalty ever) for collecting data from more than 1.2 million children across 20 virtual world websites. In 2021, Disney also settled a lawsuit that accused it and others of collecting and selling kids’ information via child-focused mobile apps.

The FTC voted 3-0 to refer the current case to the Department of Justice, with Commissioners Ferguson, Holyoak, and Meador citing what they described as,

“Disney’s abuse of parents’ trust.”

Under the settlement, Disney must do more than pay up. It also has to notify parents before collecting personal information from children under 13 and obtain parents’ consent to use it. Disney must also review whether individual videos should be labeled as made for kids. However, the FTC provides a get-out clause: Disney won’t have to do this if YouTube implements age assurance technologies that determine a viewer’s age (or age category).

Age assurance is clearly something the FTC is pursuing, saying:

“This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.”


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Flock Exposes Its AI-Enabled Surveillance Cameras

2 January 2026 at 13:05

404 Media has the story:

Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces as they walk through a parking lot, down a public street, or play on a playground, or they can be controlled manually, according to marketing material on Flock’s website. We watched Condor cameras zoom in on a woman walking her dog on a bike path in suburban Atlanta; a camera followed a man walking through a Macy’s parking lot in Bakersfield; surveil children swinging on a swingset at a playground; and film high-res video of people sitting at a stoplight in traffic. In one case, we were able to watch a man rollerblade down Brookhaven, Georgia’s Peachtree Creek Greenway bike path. The Flock camera zoomed in on him and tracked him as he rolled past. Minutes later, he showed up on another exposed camera livestream further down the bike path. The camera’s resolution was good enough that we were able to see that, when he stopped beneath one of the cameras, he was watching rollerblading videos on his phone.

In 2025, age checks started locking people out of the internet

31 December 2025 at 11:49

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.​

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.​

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.​

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.​

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.​

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​
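
To make that flow concrete, here is a minimal Python sketch of a double-blind age token. All names are hypothetical, and an HMAC stands in for the issuer’s signature purely for illustration; a real scheme would use blind signatures so the issuer cannot link a token to the site where it is later presented.

```python
import hashlib
import hmac
import json
import secrets

# Key held by the age-verification provider. In a real deployment the site
# would verify a public-key (blind) signature instead of sharing this secret.
ISSUER_KEY = secrets.token_bytes(32)

def issue_token() -> dict:
    """The issuer checks the user's age out-of-band, then returns a token
    asserting only '18+' -- no name, no document, no destination site."""
    payload = {"claim": "18+", "nonce": secrets.token_hex(16)}
    msg = json.dumps(payload, sort_keys=True).encode()
    tag = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def site_verifies(token: dict) -> bool:
    """The site checks the issuer's tag; it learns the age claim and
    nothing about who the visitor is."""
    msg = json.dumps(token["payload"], sort_keys=True).encode()
    expected = hmac.new(ISSUER_KEY, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, token["tag"]) and \
        token["payload"]["claim"] == "18+"

token = issue_token()
print(site_verifies(token))  # True
```

The point of the design is what the token does not contain: no identity reaches the site, and (with blind signatures) no site name reaches the issuer.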

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.​

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.
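
The gating logic itself is simple, which is part of why geoblocking spread so quickly. Below is a hypothetical sketch of the mechanism; real platforms consult commercial GeoIP databases, and the prefix table and policy names here are invented for illustration only.

```python
import ipaddress

# Toy IP-to-country table using documentation (TEST-NET) ranges.
GEO_PREFIXES = {
    "203.0.113.0/24": "FR",   # pretend-France
    "198.51.100.0/24": "GB",  # pretend-UK
}

# Per-country policy; anything unlisted is allowed through unchanged.
POLICY = {"FR": "block", "GB": "age_check"}

def lookup_country(ip: str):
    """Map a client address to a country code, or None if unknown."""
    addr = ipaddress.ip_address(ip)
    for prefix, country in GEO_PREFIXES.items():
        if addr in ipaddress.ip_network(prefix):
            return country
    return None

def gate(ip: str) -> str:
    """Return the action a site might take for this client address."""
    return POLICY.get(lookup_country(ip), "allow")

print(gate("203.0.113.7"))   # block
print(gate("198.51.100.9"))  # age_check
print(gate("192.0.2.1"))     # allow
```

Because the decision keys entirely off the source address, routing traffic through a VPN endpoint in an unlisted country defeats it, which is exactly the surge in VPN use described above.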

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumvention, signaling a willingness to attack the workarounds rather than the underlying incentives.

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.​

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.​

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
