An AI plush toy exposed thousands of private chats with children

3 February 2026 at 17:55

Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.

Bondu’s toy is marketed as:

“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”

What it doesn’t say is that anyone with a Gmail account could read the transcripts from virtually every child who used a Bondu toy. Without any actual hacking, simply by logging in with an arbitrary Google account, two researchers found themselves looking at children’s private conversations.

What Bondu has to say about safety does not mention security or privacy:

“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”

Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?

The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.

These chat logs could have been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.

Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.

In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative to screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents “don’t feel like you have to be sitting them in front of a TV.”

So, whether it’s a foul mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using Artificial Intelligence in children’s toys—unless we reach a point where they can be used safely, privately, and securely, and even then only sparingly.

How to stay safe

AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.

  • Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
  • Read the privacy policy. Yes, we know: all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
  • Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
  • Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
  • Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
  • Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Microsoft Is Giving the FBI BitLocker Keys

3 February 2026 at 13:05

Microsoft gives the FBI the ability to decrypt BitLocker-protected drives in response to court orders, about twenty times per year.

Users can store their BitLocker recovery keys on a device they own, but Microsoft also recommends storing them on its servers for convenience. While that means users can regain access to their data if they forget their password, or if repeated failed login attempts lock the device, it also makes those keys vulnerable to law enforcement subpoenas and warrants.

Apple’s new iOS setting addresses a hidden layer of location tracking

3 February 2026 at 12:20

Most iPhone owners have hopefully learned to manage app permissions by now, including allowing location access. But there’s another layer of location tracking that operates outside these controls. Your cellular carrier has been collecting your location data all along, and until now, there was nothing you could do about it.

Apple just changed this in iOS 26.3 with a new setting called “limit precise location.”

How Apple’s anti-carrier tracking system works

Cellular networks track your phone’s location based on the cell towers it connects to, in a process known as triangulation. In cities where towers are densely packed, triangulation is precise enough to track you down to a street address.
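The geometry behind this is straightforward: each tower’s signal measurement gives an approximate distance, and three distances pin down a single point. A toy sketch (illustrative names and idealized, noise-free distances; real networks combine tower IDs, signal timing, and antenna angles):

```python
def trilaterate(towers, dists):
    """Estimate a 2D position from three known tower coordinates and the
    measured distance to each one.

    Subtracting the first circle equation from the other two cancels the
    squared unknowns, leaving a 2x2 linear system solved with Cramer's rule.
    """
    (x1, y1), (x2, y2), (x3, y3) = towers
    d1, d2, d3 = dists
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# Three towers (coordinates in km) and the phone's distance to each:
towers = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [5.0, 65.0 ** 0.5, 45.0 ** 0.5]
print(trilaterate(towers, dists))  # the phone sits at (3.0, 4.0)
```

In a dense city, towers a few hundred meters apart make this math precise enough to resolve a street address, which is exactly the precision the new setting targets.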

This tracking is different from app-based location monitoring, because your phone’s privacy settings have historically been powerless to stop it. Toggle Location Services off entirely, and your carrier still knows where you are.

The new setting reduces the precision of location data shared with carriers. Rather than a street address, carriers would see only the neighborhood where a device is located. It doesn’t affect emergency calls, though, which still transmit precise coordinates to first responders. Apps like Apple’s “Find My” service, which locates your devices, or its navigation services, aren’t affected because they work using the phone’s location sharing feature.
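Apple hasn’t published how the coarsening works, but the effect of trading street-level for neighborhood-level precision can be sketched by snapping coordinates to a coarse grid (a hypothetical helper, not Apple’s implementation):

```python
def coarsen(lat, lon, decimals=2):
    """Round coordinates to a coarse grid. Two decimal places of latitude
    is roughly 1.1 km, so a street address blurs into a neighborhood-sized
    cell. Illustrative only; not Apple's actual algorithm.
    """
    return (round(lat, decimals), round(lon, decimals))

print(coarsen(37.33182, -122.03118))  # (37.33, -122.03)
```

The point of the sketch: the carrier still learns a location, just one too coarse to single out a building.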

Why is Apple doing this? Apple hasn’t said, but the move comes after years of carriers mishandling location data.

Unfortunately, cellular network operators have played fast and loose with this data. In April 2024, the FCC fined Sprint and T-Mobile (which have since merged), along with AT&T and Verizon, nearly $200 million combined for illegally sharing this location data. They sold access to customers’ location information to third-party aggregators, who then sold it on to other parties without customer consent.

This turned into a privacy horror story for customers. One aggregator, LocationSmart, had a free demo on its website that reportedly allowed anyone to pinpoint the location of most mobile phones in North America.

Limited rollout

The feature only works with devices equipped with Apple’s custom C1 or C1X modems. That means just three devices: the iPhone Air, iPhone 16e, and the cellular iPad Pro with M5 chip. The iPhone 17, which uses Qualcomm silicon, is excluded. Apple can only control what its own modems transmit.

Carrier support is equally narrow. In the US, only Boost Mobile is participating at launch, while Verizon, AT&T, and T-Mobile are notably absent from the list given their past record. In Germany, Telekom is on the participant list, while both EE and BT are involved in the UK. In Thailand, AIS and True are on the list. No other carriers are taking part as of today.

Android also offers some support

Google also introduced a similar capability with Android 15’s Location Privacy hardware abstraction layer (HAL) last year. It faces the same constraint, though: modem vendors must cooperate, and most have not. Apple and Google don’t get to control the modems in most phones. This kind of privacy protection requires vertical integration that few manufacturers possess and few carriers seem eager to enable.

Most people think controlling app permissions means they’re in control of their location. This feature highlights something many users didn’t know existed: a separate layer of tracking handled by cellular networks, and one that still offers users very limited control.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.
