An AI plush toy exposed thousands of private chats with children

3 February 2026 at 17:55

Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.

Bondu’s toy is marketed as:

“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”

What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.

What Bondu has to say about safety does not mention security or privacy:

“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”

Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?

The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.

These chat logs could have been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.

Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.

In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative for screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents “don’t feel like you have to be sitting them in front of a TV.”

So, whether it’s a foul mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using artificial intelligence in children’s toys—unless we reach a point where it can be used safely, privately, and securely, and even then, sparingly.

How to stay safe

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, I know: all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.


Alert fatigue is costing you: Why your SOC misses 1% of real threats

3 February 2026 at 15:04

Introducing the 2026 Intezer AI SOC Report for CISOs

For years, security leaders have lived with an uncomfortable truth: to date, it has simply been impossible to investigate every alert. As alert volumes exploded and teams failed to scale, SOCs, whether in-house or outsourced, normalized “acceptable risk” by deprioritizing low-severity and informational alerts.

Our latest research shows that this approach is no longer defensible.

Intezer has just released the 2026 AI SOC Report for CISOs, based on the forensic analysis of more than 25 million security alerts across live enterprise environments. The findings reveal a critical disconnect between how security teams prioritize alerts and where real threats actually originate, and the cost of that gap is far higher than most organizations realize.

Why “acceptable risk” is no longer acceptable 

Across endpoint, cloud, identity, network, and phishing telemetry, Intezer found that nearly 1% of confirmed incidents originated from alerts initially labeled as low-severity or informational. On endpoints, that figure climbed to nearly 2%.

At enterprise scale, that percentage is not noise.

For a typical organization generating roughly 450,000 alerts per year, this translates to ~50 real threats annually, about one per week, never investigated by a SOC or MDR team. These are not theoretical risks. They are real compromises hiding in plain sight, dismissed not because they were benign, but because teams lacked the capacity to look.
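
As a back-of-the-envelope check, the “one per week” figure is easy to reproduce. The sketch below assumes a confirmed-incident rate of roughly 1% of all alerts; the report itself states only the alert volume and the share of incidents that began as low-severity alerts, so treat the middle number as illustrative:

```python
# Rough reconstruction of the "~50 missed threats per year" arithmetic.
# ASSUMPTION: the confirmed-incident rate (incidents per alert) is ours,
# chosen for illustration; the other two figures come from the article.

alerts_per_year = 450_000   # typical enterprise alert volume (from the article)
incident_rate = 0.011       # assumed: ~1.1% of alerts are confirmed incidents
low_sev_share = 0.01        # ~1% of incidents began as low-severity alerts

confirmed_incidents = alerts_per_year * incident_rate   # ~4,950 per year
missed_threats = confirmed_incidents * low_sev_share    # ~50 per year

print(f"missed threats per year: {missed_threats:.0f}")
print(f"missed threats per week: {missed_threats / 52:.1f}")  # ~1
```

Change the assumed incident rate and the headline number moves with it, which is exactly why percentages like these cannot be dismissed at enterprise scale.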

What the data revealed across the attack surface

Because Intezer AI SOC investigates 100% of alerts using forensic-grade analysis, the report exposes how attackers actually operate once you remove triage bias from the equation.

Endpoint security is more fragile than reported

More than half of endpoint alerts were not automatically mitigated by endpoint protection tools. Of those, nearly 9% were confirmed malicious. Even more concerning, 1.6% of endpoints undergoing live forensic scans were still actively compromised despite being reported as “mitigated” by EDR tools.

See the full endpoint threat data → Download the 2026 AI SOC Report

Low-severity does not mean low-risk

Within endpoint alerts alone, 1.9% of low-severity and informational alerts were real incidents: exactly the alerts most SOCs never review.

Attackers favor stealth over noise

Cloud telemetry was dominated by defense evasion and persistence techniques, reflecting a shift toward long-term access, token abuse, and misuse of legitimate services rather than overt exploitation.

Phishing has moved into trusted platforms and browsers

Fewer than 6% of malicious phishing emails contained attachments. Most relied on links, language, and abuse of legitimate services such as cloud file sharing, code sandboxes, and CAPTCHA mechanisms, where traditional controls have limited visibility.

Cloud misconfigurations persist as silent risk multipliers

Most cloud posture findings stemmed from legacy or default configurations, especially in Amazon S3, including missing encryption, weak access controls, and lack of logging—issues often classified as “low severity,” yet repeatedly exploited once attackers gain a foothold.

To read the full report and all the findings, download the CISOs’ guide to AI SOC 2026 here.

Why traditional SOCs fail: capacity, fragmentation, and judging alerts by their severity

Modern SOC failures are rarely the result of a single broken tool or negligent team. They are the outcome of structural tradeoffs that every traditional SOC—internal or MDR—has been forced to make.

Capacity is the first constraint.
Human analysts do not scale linearly with alert volume. As telemetry expands across endpoint, cloud, identity, network, and SaaS, SOCs hit a hard ceiling. The only way to cope is aggressive triage: close most alerts automatically, investigate only what looks “important,” and hope severity labels align with reality. The 2026 AI SOC Report shows that this assumption is false at scale.

Tool fragmentation compounds the problem.
Most SOC stacks are collections of siloed detection tools (EDR, SIEM, identity, cloud posture, email), each optimized for a narrow signal. Severity is assigned locally, without cross-surface context or forensic validation. As a result, alerts are scored based on abstract rules, not evidence of compromise. When SOCs trust these labels blindly, they inherit the tools’ blind spots.

Process tradeoffs lock risk in place.
Once triage rules are defined, they become institutionalized. Low-severity alerts are ignored by design. MDR providers codify this into SLAs. Internal SOCs bake it into runbooks. Crucially, there is no closed-loop feedback: missed threats do not automatically improve detections, because they were never investigated in the first place.

The outcome is not an occasional failure. It is systematic, repeatable risk, embedded directly into how SOCs operate.

Real-world examples of missed threats hiding in plain sight

The data in the 2026 AI SOC Report makes clear that missed threats are not exotic edge cases. They are ordinary attacks progressing quietly through environments because no one looked.

Endpoints marked “mitigated” but still compromised
In over 1.6% of live forensic endpoint scans, Intezer found active malicious code running in memory even though the EDR had already reported the threat as resolved. These cases included stealers, RATs, and post-exploitation frameworks, often originating from low-severity alerts that never triggered deeper inspection. Without memory-level forensics, these compromises would have remained invisible.

Phishing hosted on trusted platforms
Attackers increasingly host phishing pages on legitimate developer platforms like Vercel and CodePen, or abuse trusted cloud services such as OneDrive and PayPal. The parent domains appear reputable, so alerts are downgraded or ignored. Yet behind them are live credential-harvesting pages that bypass email gateways and browser-based defenses alike.

Cloud misconfigurations as delayed breach accelerators
Many cloud posture findings, such as unencrypted S3 buckets, missing access logs, and permissive cross-account policies, rarely trigger action. But once an attacker gains any foothold, these long-standing misconfigurations dramatically accelerate lateral movement, persistence, and data exposure.

In every case, the failure was not detection. The signal existed. The failure was investigation.

How attackers deliberately exploit SOC blind spots

Attackers understand SOC economics better than most defenders.

They know which alerts generate fatigue.
They know which detections are noisy.
They know which categories are deprioritized by default.

As a result, modern attackers design their campaigns to blend into the backlog, not trigger alarms.

Stealth over speed
Cloud intrusions favor defense evasion, persistence, and token abuse over loud exploitation. These behaviors generate alerts, but rarely high-severity ones. The report shows cloud telemetry dominated by exactly these tactics, indicating attackers are optimizing for long-term access rather than immediate impact.

Living off trusted infrastructure
Phishing campaigns increasingly abuse legitimate brands, file-sharing services, CAPTCHA frameworks, and developer platforms. These environments inherit trust by default, allowing attackers to operate under severity thresholds that SOCs routinely ignore.

Multi-stage loaders and memory-only execution
On endpoints, attackers rely on layered loaders, in-memory payloads, and obfuscation techniques that evade static detections. Initial alerts may look benign or incomplete. Without forensic follow-through, SOCs miss the actual compromise entirely.

Attackers are not merely evading detection systems; they are exploiting SOC decision-making models.

What this means for your SOC operations

For CISOs and SOC leaders, the implication is stark:
Risk is no longer defined by what you detect, but by what you choose not to investigate.

If your SOC:

  • Ignores low-severity alerts by default
  • Relies on severity labels without forensic validation
  • Limits investigations based on human capacity
  • Operates without a feedback loop between outcomes and detections

Then missed threats are not anomalies; they are guaranteed.

The organizations that will reduce risk in 2026 are not adding more dashboards or rewriting triage rules. They are adopting operating models where investigation is no longer a scarce resource.

This is why AI-driven, forensic-grade SOC platforms fundamentally change the equation. When every alert is investigated:

  • Severity becomes evidence-based, not assumed
  • Detection quality improves through real-world validation
  • Attackers lose the ability to hide in “acceptable risk”
  • SOC teams regain control without scaling headcount

This is the shift behind the Intezer AI SOC model and why the concept of acceptable risk must be redefined for the modern threat landscape.

This all changes when you can investigate everything

The data in the 2026 AI SOC Report points to a different reality, one where AI-driven forensic analysis removes investigation capacity as a constraint.

When every alert is investigated:

  • “Low severity” stops being a proxy for “safe”
  • Detection quality improves through real-world validation
  • Missed threats drop from dozens per year to near zero
  • Escalations fall below 2%, without sacrificing coverage
  • Risk tolerance is defined by evidence, not exhaustion

This is the operating model behind Intezer AI SOC, powered by ForensicAI™, and it is why the definition of acceptable risk must be reset.

Download the report and join the discussion

The 2026 AI SOC Report for CISOs is grounded in:

  • 25 million alerts analyzed
  • 10 million monitored endpoints and identities
  • 82,000 forensic endpoint investigations, including live memory scans
  • Telemetry from 7 million IP addresses, 3 million domains and URLs, and over 550,000 phishing emails

All data was aggregated and anonymized across Intezer’s global enterprise customer base.

👉 Download the full report to explore the findings in detail, and
👉 Join Intezer’s research team on Wednesday, February 4th at 12 p.m. ET for a live webinar breaking down what this data means for SOC leaders and CISOs.

Because in 2026, the biggest risk is no longer what you detect; it’s what you choose not to investigate.

The post Alert fatigue is costing you: Why your SOC misses 1% of real threats appeared first on Intezer.

AT&T breach data resurfaces with new risks for customers

3 February 2026 at 12:48

When data resurfaces, it never comes back weaker. A newly shared dataset tied to AT&T shows just how much more dangerous an “old” breach can become once criminals have enough of the right details to work with.

The dataset, privately circulated since February 2, 2026, is described as AT&T customer data likely gathered over the years. It doesn’t just contain a few scraps of contact information. It reportedly includes roughly 176 million records, with…

  • Up to 148 million Social Security numbers (full SSNs and last four digits)
  • More than 133 million full names and street addresses
  • More than 132 million phone numbers
  • Dates of birth for around 75 million people
  • More than 131 million email addresses

Taken together, that’s the kind of rich, structured data set that makes a criminal’s life much easier.

On their own, any one of these data points would be inconvenient but manageable. An email address fuels spam and basic phishing. A phone number enables smishing and robocalls. An address helps attackers guess which services you might use. But when attackers can look up a single person and see name, full address, phone, email, complete or partial SSN, and date of birth in one place, the risk shifts from “annoying” to high‑impact.

That combination is exactly what many financial institutions and mobile carriers still rely on for identity checks. For cybercriminals, this sort of dataset is a Swiss Army knife.

It can be used to craft convincing AT&T‑themed phishing emails and texts, complete with correct names and partial SSNs to “prove” legitimacy. It can power large‑scale SIM‑swap attempts and account takeovers, where criminals call carriers and banks pretending to be you, armed with the answers those call centers expect to hear. It can also enable long‑term identity theft, with SSNs and dates of birth abused to open new lines of credit or file fraudulent tax returns.

The uncomfortable part is that a fresh hack isn’t always required to end up here. Breach data tends to linger, then get merged, cleaned up, and expanded over time. What’s different in this case is the breadth and quality of the profiles. They include more email addresses, more SSNs, more complete records per person. That makes the data more attractive, more searchable, and more actionable for criminals.
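
To make the “merged and expanded” point concrete, here is a minimal, purely illustrative sketch of how two partial dumps become one richer profile when joined on a shared key. The records and field names are invented; real criminal tooling works at vastly larger scale:

```python
# Toy illustration of breach-data enrichment: two partial dumps merged on
# a shared key (email) yield fuller profiles than either source alone.
# All records below are invented for illustration.

from collections import defaultdict

dump_a = [{"email": "jane@example.com", "name": "Jane Doe", "phone": "555-0100"}]
dump_b = [{"email": "jane@example.com", "ssn_last4": "1234", "dob": "1980-01-01"}]

profiles = defaultdict(dict)
for record in dump_a + dump_b:
    profiles[record["email"]].update(record)  # later fields fill earlier gaps

print(profiles["jane@example.com"])
# {'email': ..., 'name': ..., 'phone': ..., 'ssn_last4': ..., 'dob': ...}
```

Each merge pass turns scattered scraps into exactly the kind of complete record that defeats knowledge-based identity checks.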

For potential victims, the lesson is simple but important. If you have ever been an AT&T customer, treat this as a reminder that your data may already be circulating in a form that is genuinely useful to attackers. Be cautious of any AT&T‑related email or text, enable multi‑factor authentication wherever possible, lock down your mobile account with extra passcodes, and consider monitoring your credit. You can’t pull your data back out of a criminal dataset—but you can make sure it’s much harder to use against you.

What to do when your data is involved in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.


Apple’s new iOS setting addresses a hidden layer of location tracking

3 February 2026 at 12:20

Most iPhone owners have hopefully learned to manage app permissions by now, including which apps can access their location. But there’s another layer of location tracking that operates outside these controls. Your cellular carrier has been collecting your location data all along, and until now, there was nothing you could do about it.

Apple just changed this in iOS 26.3 with a new setting called “limit precise location.”

How Apple’s anti-carrier tracking system works

Cellular networks track your phone’s location based on the cell towers it connects to, in a process known as triangulation. In cities where towers are densely packed, triangulation is precise enough to track you down to a street address.
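
For the curious, the geometry is simple enough to sketch. In this simplified, assumed model, three towers at known positions and an estimated distance to each pin the phone to the intersection of three circles; real carrier positioning is considerably more sophisticated, but the principle is the same:

```python
# Simplified sketch of tower-based positioning: solve for the point whose
# distances to three known towers match the estimates. Subtracting the
# first circle equation from the others yields a linear system.

import numpy as np

towers = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 3.0]])  # tower positions (km)
dists = np.array([2.5, 2.5, 2.5])                        # estimated ranges (km)

A = 2 * (towers[1:] - towers[0])
b = (dists[0] ** 2 - dists[1:] ** 2
     + np.sum(towers[1:] ** 2, axis=1) - np.sum(towers[0] ** 2))

position, *_ = np.linalg.lstsq(A, b, rcond=None)
print(position)  # [2.  1.5]
```

The denser the towers, the tighter those circles, which is why urban tracking can reach street-address precision.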

This tracking is different from app-based location monitoring, because your phone’s privacy settings have historically been powerless to stop it. Toggle Location Services off entirely, and your carrier still knows where you are.

The new setting reduces the precision of location data shared with carriers. Rather than a street address, carriers would see only the neighborhood where a device is located. It doesn’t affect emergency calls, though, which still transmit precise coordinates to first responders. Apps like Apple’s “Find My” service, which locates your devices, and its navigation services aren’t affected, because they rely on the phone’s own location-sharing features.
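
What “neighborhood-level” means in practice can be pictured with a toy example (our illustration, not Apple’s actual mechanism): dropping decimal places from a coordinate blurs a specific address into a roughly kilometer-sized cell:

```python
# Toy illustration of precision reduction -- not Apple's implementation.
# Two decimal places of latitude/longitude is roughly a ~1 km cell.

def coarsen(lat: float, lon: float, decimals: int = 2) -> tuple[float, float]:
    return (round(lat, decimals), round(lon, decimals))

precise = (37.33182, -122.03118)  # street-address precision
print(coarsen(*precise))          # (37.33, -122.03): neighborhood precision
```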

Why is Apple doing this? Apple hasn’t said, but the move comes after years of carriers mishandling location data.

Unfortunately, cellular network operators have played fast and loose with this data. In April 2024, the FCC fined Sprint and T-Mobile (which have since merged), along with AT&T and Verizon, nearly $200 million combined for illegally sharing this location data. They sold access to customers’ location information to third-party aggregators, who then sold it on to other third parties without customer consent.

This turned into a privacy horror story for customers. One aggregator, LocationSmart, had a free demo on its website that reportedly allowed anyone to pinpoint the location of most mobile phones in North America.

Limited rollout

The feature only works with devices equipped with Apple’s custom C1 or C1X modems. That means just three devices: the iPhone Air, iPhone 16e, and the cellular iPad Pro with M5 chip. The iPhone 17, which uses Qualcomm silicon, is excluded. Apple can only control what its own modems transmit.

Carrier support is equally narrow. In the US, only Boost Mobile is participating at launch, while Verizon, AT&T, and T-Mobile are notably absent from the list, given their past record. In Germany, Telekom is on the participant list, while both EE and BT are involved in the UK. In Thailand, AIS and True are on the list. No other carriers are taking part as of today, though.

Android also offers some support

Google also introduced a similar capability with Android 15’s Location Privacy hardware abstraction layer (HAL) last year. It faces the same constraint, though: modem vendors must cooperate, and most have not. Apple and Google don’t get to control the modems in most phones. This kind of privacy protection requires vertical integration that few manufacturers possess and few carriers seem eager to enable.

Most people think controlling app permissions means they’re in control of their location. This feature highlights something many users didn’t know existed: a separate layer of tracking handled by cellular networks, and one that still offers users very limited control.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.


[updated] A fake cloud storage alert that ends at Freecash

3 February 2026 at 11:38

Last week we talked about an app that promises users they can make money testing games, or even just by scrolling through TikTok.

Imagine our surprise when we ended up on a site promoting that same Freecash app while investigating a “cloud storage” phish. We’ve all probably seen one of those. They’re common enough, and according to a recent investigation by BleepingComputer, there’s a

“large-scale cloud storage subscription scam campaign targeting users worldwide with repeated emails falsely warning recipients that their photos, files, and accounts are about to be blocked or deleted due to an alleged payment failure.”

Based on the description in that article, the email we found appears to be part of this campaign.

Cloud storage payment issue email

The subject line of the email is:

“{Recipient}. Your Cloud Account has been locked on Sat, 24 Jan 2026 09:57:55 -0500. Your photos and videos will be removed!”

This matches one of the subject lines that BleepingComputer listed.

And the content of the email:

Payment Issue – Cloud Storage

Dear User,

We encountered an issue while attempting to renew your Cloud Storage subscription.

Unfortunately, your payment method has expired. To ensure your Cloud continues without interruption, please update your payment details.

Subscription ID: 9371188

Product: Cloud Storage Premium

Expiration Date: Sat,24 Jan-2026

If you do not update your payment information, you may lose access to your Cloud Storage, which may prevent you from saving and syncing your data such as photos, videos, and documents.

Update Payment Details {link button}

Security Recommendations:

  • Always access your account through our official website
  • Never share your password with anyone
  • Ensure your contact and billing information are up to date

The link in the email leads to https://storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html#/redirect.html, which helps the scammer establish a certain amount of trust because it points to Google Cloud Storage (GCS). GCS is a legitimate service that allows authorized users to store and manage data such as files, images, and videos in buckets. However, as in this case, attackers can abuse it for phishing.

The redirect carries some parameters to the next website.

first redirect
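
For researchers who want to map a chain like this, the HTTP-level hops can be enumerated with a few lines of Python. This only captures server-side redirects; JavaScript redirects, meta refreshes, and the fake CAPTCHA step below require a browser or sandbox, and the URL here is a placeholder rather than the live lure:

```python
# Minimal sketch: list the HTTP-level redirect hops for a suspicious link.
# Placeholder URL -- never probe live phishing infrastructure outside an
# isolated analysis environment.

import requests

start_url = "https://example.com/suspicious-link"  # placeholder

resp = requests.get(start_url, timeout=10, allow_redirects=True)
for hop in resp.history:
    print(hop.status_code, "->", hop.headers.get("Location"))
print("final:", resp.url)
```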

The feed.headquartoonjpn[.]com domain was blocked by Malwarebytes. We’ve seen it before in an earlier campaign involving an Endurance-themed phish.

Endurance phish

After a few more redirects, we ended up at hx5.submitloading[.]com, where a fake CAPTCHA, once solved, triggered the final redirect to freecash[.]com.

slider captcha

The end goal of this phish likely depends on the parameters passed along during the redirects, so results may vary.

Rather than stealing credentials directly, the campaign appears designed to monetize traffic, funneling victims into affiliate offers where the operators get paid for sign-ups or conversions.

BleepingComputer noted that they were redirected to affiliate marketing websites for various products.

“Products promoted in this phishing campaign include VPN services, little-known security software, and other subscription-based offerings with no connection to cloud storage.”

How to stay safe

Ironically, the phishing email itself includes some solid advice:

  • Always access your account through our official website.
  • Never share your password with anyone.

We’d like to add:

  • Never click on links in unsolicited emails without verifying with a trusted source.
  • Use an up-to-date, real-time anti-malware solution with a web protection component.
  • Do not engage with websites that attract visitors through schemes like this.

Pro tip: Malwarebytes Scam Guard would have helped you identify this email as a scam and provided advice on how to proceed.

Redirect flow (IOCs)

storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html

feed.headquartoonjpn[.]com

revivejudgemental[.]com

hx5.submitloading[.]com

freecash[.]com

Update February 5, 2026

Almedia GmbH, the company behind the Freecash platform, reached out to us for information about the chain of redirects that led to their platform. After an investigation, they notified us that:

“Following Malwarebytes’ reporting and the additional information they shared with us, we investigated the issue and identified an affiliate operating in breach of our policies. That partner has been removed from our network.

Almedia does not sell user data, and we take compliance, user trust, and responsible advertising seriously.”


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!


Why should renters like me have to trade away our privacy just to get a roof over our heads? | Samantha Floreani

The rise in real estate tech means renters often hand over huge amounts of revealing information to digital third parties – at great risk

Would you trade your data privacy and security for housing? Thanks to the rise in real estate technologies, renters often have no choice but to hand over huge amounts of revealing information to digital third parties just to have somewhere to live. All the while we are told: trust us, we take your privacy seriously.

But recent Guardian reporting has revealed that seven popular “rent-tech” platforms have serious security vulnerabilities, leaving millions of documents containing personal information of renters exposed on the open web for years. When they were alerted to the risk, only two of the seven companies responded to say they would put additional security measures in place. Is this what taking renter privacy seriously looks like?

Continue reading...

© Photograph: Jacob Wackerhausen/Getty Images


Real estate agents in Australia using apps that leave millions of lease documents at risk, digital researcher says

Exclusive: ‘This is a blatant and disturbing disregard for the law and for people’s security,’ digital rights advocate says

Australian platforms used by real estate agents to upload documentation for renters and landlords are leaving people’s personal information exposed in hyperlinks accessible online.

An analysis of seven rent platforms provided to Guardian Australia by a researcher, who wished to remain anonymous, revealed millions of leasing documents could be accessed by threat actors.

Continue reading...

© Photograph: Carly Earl/The Guardian


In Other News: Paid for Being Jailed, Google’s $68M Settlement, CISA Chief’s ChatGPT Leak

30 January 2026 at 18:49

Other noteworthy stories that might have slipped under the radar: Apple updates platform security guide, LastPass detects new phishing wave, CISA withdraws from RSA Conference.

The post In Other News: Paid for Being Jailed, Google’s $68M Settlement, CISA Chief’s ChatGPT Leak appeared first on SecurityWeek.

Match, Hinge, OkCupid, and Panera Bread breached by ransomware group

30 January 2026 at 15:23

The ShinyHunters ransomware group has claimed the theft of data containing 10 million records belonging to the Match Group and 14 million records from bakery-café chain Panera Bread.

Claims posted by ShinyHunters

The Match Group, which runs multiple popular online dating services like Tinder, Match.com, Meetic, OkCupid, and Hinge, has confirmed a cyber incident and is investigating the data breach.

Panera Bread also confirmed that an incident occurred and has alerted authorities. “The data involved is contact information,” it said in an emailed statement to Reuters.

ShinyHunters seems to be gaining access through single sign-on (SSO) platforms and using voice-cloning techniques, which has resulted in a growing number of breaches across different companies. However, not all of these breaches have the same impact.

The impact

For the Match Group, ShinyHunters claims:

“Over 10 million records of Hinge, Match, and OkCupid usage data from Appsflyer and hundreds of internal documents.”

Match says there is no evidence that logins, financial data, or private chats were stolen, but Personally Identifiable Information (PII) and tracking data for some users are in scope. A notification process has been set in motion.

For Panera Bread, ShinyHunters claims to have compromised 14 million records containing PII.

Panera Bread reassures users that there is no indication that the hackers accessed user login credentials, financial information, or private communications.

ShinyHunters also breached Bumble, CarMax, and Edmunds, among others, but I wanted to use Panera Bread and the Match Group as two examples that have very different consequences for users.

When your activity on a dating app is compromised, the impact can be deeply personal. Concerns can range from partners, family members, or employers discovering dating profiles to the risk of doxxing. For many people, stigma around certain apps can lead to fears of being outed, accused of infidelity, or even extorted.

The impact of the Panera Bread breach will be very different. “I just ordered a sandwich and now some criminals have my home address?” Data like this is useful for enriching existing data sets. And the more criminals know, the more convincingly they can target you in phishing attempts.

Protecting yourself after a data breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

You can use Malwarebytes’ free Digital Footprint scan to find out if your private information is exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.


TikTok’s privacy update mentions immigration status. Here’s why.

30 January 2026 at 12:48

In 2026, could any five words be more chilling than “We’re changing our privacy terms”?

The timing could not have been worse for TikTok US when it sent millions of US users a mandatory privacy pop-up on January 22. The message forced users to accept updated terms if they wanted to keep using the app. Buried in that update was language about collecting “citizenship or immigration status.”

Specifically, TikTok said:

“Information You Provide may include sensitive personal information, as defined under applicable state privacy laws, such as information from users under the relevant age threshold, information you disclose in survey responses or in your user content about your racial or ethnic origin, national origin, religious beliefs, mental or physical health diagnosis, sexual life or sexual orientation, status as transgender or nonbinary, citizenship or immigration status, or financial information.”

The internet reacted badly. TikTok users took to social media, with some suggesting that TikTok was building a database of immigration status, and others pledging to delete their accounts. It didn’t help that TikTok’s US operation became a US-owned company on the same day, with Senator Ed Markey (D-Mass.) criticizing what he sees as a lack of transparency around the deal.

A legal requirement

In this case, things may be less sinister than you’d think. The language is not new—it first appeared around August 2024. And TikTok is not asking users to provide their immigration status directly.

Instead, the disclosure covers sensitive information that users might voluntarily share in videos, surveys, or interactions with AI features.

The change appears to be driven largely by California’s AB-947, signed in October 2023. The law added immigration status to the state’s definition of sensitive personal information, placing it under stricter protections. Companies are required to disclose how they process sensitive personal information, even if they do not actively seek it out.

Other social media companies, including Meta, do not explicitly mention immigration status in their privacy policies. According to TechCrunch, that difference likely reflects how specific their disclosure language is—not a meaningful difference in what data is actually collected.

One meaningful change in TikTok’s updated policy does concern location tracking. Previous versions stated that TikTok did not collect GPS data from US users. The new policy says it may collect precise location data, depending on user settings. Users can reportedly opt out of this tracking.

Read the whole board, not just one square

So, does this mean TikTok—or any social media company—deserves our trust? That’s a harder question.

There are still red flags. In April, TikTok quietly removed a commitment to notify users before sharing data with law enforcement. According to Forbes, the company has also declined to say whether it shares, or would share, user data with agencies such as the Department of Homeland Security (DHS) or Immigration and Customs Enforcement (ICE).

That uncertainty is the real issue. Social media companies are notorious for collecting vast amounts of user data, and for being vague about how it may be used later. Outrage over a particularly explicit disclosure is understandable, but the privacy problem runs much deeper than a single policy update from one company.

People have reason to worry unless platforms explicitly commit to not collecting or inferring sensitive data—and explicitly commit to not sharing it with government agencies. And even then, skepticism is healthy. These companies have a long history of changing policies quietly when it suits them.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Meta confirms it’s working on premium subscriptions for its apps

29 January 2026 at 22:06

Meta plans to test exclusive features that will be incorporated into paid versions of Facebook, Instagram, and WhatsApp. It confirmed these plans to TechCrunch.

But these plans are not to be confused with the ad-free subscription options that Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023 and framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.

From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.

This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. But only grudgingly, judging by the tone of the offer: “As part of laws in your region, you have a choice.”

The ad-free option that has been rolling out coincides with the announcement of Meta’s premium subscriptions.

That ad-free option, however, is not what Meta is talking about now.

The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.

Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.

These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much it can charge.

Meta has said these features will focus on productivity, creativity, and expanded AI.

My opinion

Unfortunately, this feels like another refusal to listen.

Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.

Users shouldn’t have to choose between being mined for behavioral data or paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.

And even this limited level of choice is only available to users in Europe.

Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?


We don’t just report on threats—we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
