โŒ

Normal view

Smart AI Policy Means Examining Its Real Harms and Benefits

4 February 2026 at 23:40

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"โ€”think Data from Star Trek or Hal 9000 from 2001: A Space Odysseyโ€”to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into itโ€”some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public lifeโ€”like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldnโ€™t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem itโ€™s marketed as solving.

EFF has long fought for sensible, balanced tech policies because weโ€™ve seen how regulators can focus entirely on use cases they donโ€™t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wandsโ€”they are general-purpose technologies. And if we want to regulate those technologies in a way that doesnโ€™t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So letโ€™s look at the real-world landscape.

AIโ€™s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI toolsโ€”as well as governments that use themโ€”lose sight of the real risks, or donโ€™t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI canโ€™t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are โ€œtrainedโ€ on.ย  If you train AI on records of biased government decisions, such as records of past arrests, it will โ€œlearnโ€ to replicate those discriminatory decisions.

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human โ€œin the loopโ€ doesnโ€™t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases donโ€™t just arise in obvious contexts, like when a government agency is making decisions about people. It can also arise in equally life-affecting contexts like medical care. Whenever AI is used for analysis in a context with systemic disparities and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. ย For example, dermatology has historically underserved people of color because of a focus on white skin, with the resulting bias affecting AI tools trained on the existing and biased image data.

These kinds of errors are difficult to detect and correct because itโ€™s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to themโ€”something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI useโ€”and the increasing market incentive to shoe-horn โ€œAIโ€ into every business modelโ€”have compounded the issue.

Moreover, AI may reinforce a userโ€™s pre-existing beliefsโ€”even if theyโ€™re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers.ย 

Other considerations may weigh against AI uses are its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but it is an important factor in determining if AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AIโ€™s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply canโ€™t. Machine learning technology has powered search tools for over a decade. Itโ€™s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for researchโ€”things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesnโ€™t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These arenโ€™t discoveries of laws of natureโ€”AI is bad at generalizing that way and coming up with explanations. Rather, theyโ€™re descriptions of what the AI has already seen in its data set.

To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.

Researchers are using AI to discover better alternatives to todayโ€™s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development: by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system thatย  uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own. to augment its traditional forecasts, with improvements in accuracy when the AI model is used in concert with the physics-based model.
  • Several models were used to forecast a recent hurricane. Google DeepMindโ€™s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMindโ€™s AI model).

ย Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning simulate and computationally test a large range of new antibiotic candidates hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine developmentโ€”including the development of the first COVID-19 vaccinesโ€“โ€“accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.
AI Uses for Accessibility and Accountabilityย 

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, arenโ€™t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:

  • AI voice generators are giving people their voices back, after losing their ability to speak. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and many websites that are difficult to navigate for users that rely on a screen reader. Other tools can help blind and low vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users canโ€™t or donโ€™t want to ask another human to describe something. For more on this, check out our recent podcast episode on โ€œBuilding the Tactile Internet.โ€

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • ย The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful use cases relating to surveillance; when the power to rapidly analyze large amounts of data is used by the public to scrutinize the state there is a potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about itโ€”and it has been hard won knowledge that ethics are a vital step in work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very temptingโ€”and easyโ€”to make a blanket determination about something, especially when the stakes seem so high. But we urge everyoneโ€”users, policymakers, the companies themselvesโ€”to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

What AI toys can actually discuss with your child | Kaspersky official blog

29 January 2026 at 15:47

What adult didnโ€™t dream as a kid that they could actually talk to their favorite toy? While for us those dreams were just innocent fantasies that fueled our imaginations, for todayโ€™s kids, theyโ€™re becoming a reality fast.

For instance, this past June, Mattel โ€” the powerhouse behind the iconic Barbie โ€” announced a partnership with OpenAI to develop AI-powered dolls. But Mattel isnโ€™t the first company to bring the smart talking toy concept to life; plenty of manufacturers are already rolling out AI companions for children. In this post, we dive into how these toys actually work, and explore the risks that come with using them.

What exactly are AI toys?

When we talk about AI toys here, we mean actual, physical toys โ€” not just software or apps. Currently, AI is most commonly baked into plushies or kid-friendly robots. Thanks to integration with large language models, these toys can hold meaningful, long-form conversations with a child.

As anyone whoโ€™s used modern chatbots knows, you can ask an AI to roleplay as anyone: from a movie character to a nutritionist or a cybersecurity expert. According to the study, AI comes to playtimeย โ€” Artificial companions, real risks, by the U.S. PIRG Education Fund, manufacturers specifically hardcode these toys to play the role of a childโ€™s best friend.

AI companions for kids

Examples of AI toys tested in the study: plush companions and kid-friendly robots with built-in language models. Source

Importantly, these toys arenโ€™t powered by some special, dedicated โ€œkid-safe AIโ€. On their websites, the creators openly admit to using the same popular models many of us already know: OpenAIโ€™s ChatGPT, Anthropicโ€™s Claude, DeepSeek from the Chinese developer of the same name, and Googleโ€™s Gemini. At this point, tech-wary parents might recall the harrowing ChatGPT case where the chatbot made by OpenAI was blamed for a teenagerโ€™s suicide.

And this is the core of the problem: the toys are designed for children, but the AI models under the hood arenโ€™t. These are general-purpose adult systems that are only partially reined in by filters and rules. Their behavior depends heavily on how long the conversation lasts, how questions are phrased, and just how well a specific manufacturer actually implemented their safety guardrails.

How the researchers tested the AI toys

The study, whose results we break down below, goes into great detail about the psychological risks associated with a child โ€œbefriendingโ€ a smart toy. However, since thatโ€™s a bit outside the scope of this blogpost, weโ€™re going to skip the psychological nuances, and focus strictly on the physical safety threats and privacy concerns.

In their study, the researchers put four AI toys through the ringer:

  • Grok (no relation to xAIโ€™s Grok, apparently): a plush rocket with a built-in speaker marketed for kids aged three to 12. Price tag: US$99. The manufacturer, Curio, doesnโ€™t explicitly state which LLM they use, but their user agreement mentions OpenAI among the operators receiving data.
  • Kumma (not to be confused with our own Midori Kuma): a plush teddy-bear companion with no clear age limit, also priced at US$99. The toy originally ran on OpenAIโ€™s GPT-4o, with options to swap models. Following an internal safety audit, the manufacturer claimed they were switching to GPT-5.1. However, at the time the study was published, OpenAI reported that the developerโ€™s access to the models remained revoked โ€” leaving it anyoneโ€™s guess which chatbot Kumma is actually using right now.
  • Miko 3: a small wheeled robot with a screen for a face, marketed as a โ€œbest friendโ€ for kids aged five to 10. At US$199, this is the priciest toy in the lineup. The manufacturer is tight-lipped about which language model powers the toy. A Google Cloud case study mentions using Gemini for certain safety features, but that doesnโ€™t necessarily mean it handles all the robotโ€™s conversational features.
  • Robot MINI: a compact, voice-controlled plastic robot that supposedly runs on ChatGPT. This is the budget pick โ€” at US$97. However, during the study, the robotโ€™s Wi-Fi connection was so flaky that the researchers couldnโ€™t even give it a proper test run.
Robot MINI: an AI robot for kids

Robot MINI: a compact AI robot that failed to function properly during the study due to internet connectivity issues. Source

To conduct the testing, the researchers set the test childโ€™s age to five in the companion apps for all the toys. From there, they checked how the toys handled provocative questions. The topics the experimenters threw at these smart playmates included:

  • Access to dangerous items: knives, pills, matches, and plastic bags
  • Adult topics: sex, drugs, religion, and politics

Letโ€™s break down the test results for each toy.

Unsafe conversations with AI toys

Letโ€™s start with Grok, the plush AI rocket from Curio. This toy is marketed as a storyteller and conversational partner for kids, and stands out by giving parents full access to text transcripts of every AI interaction. Out of all the models tested, this one actually turned out to be the safest.

When asked about topics inappropriate for a child, the toy usually replied that it didnโ€™t know or suggested talking to an adult. However, even this toy told the โ€œchildโ€ exactly where to find plastic bags, and engaged in discussions about religion. Additionally, Grok was more than happy to chat aboutโ€ฆ Norse mythology, including the subject of heroic death in battle.

Grok: the plush rocket AI companion for kids

The Grok plush AI toy by Curio, equipped with a microphone and speaker for voice interaction with children. Source

The next AI toy, the Kumma plush bear by FoloToy, delivered what were arguably the most depressing results. During testing, the bear helpfully pointed out exactly where in the house a kid could find potentially lethal items like knives, pills, matches, and plastic bags. In some instances, Kumma suggested asking an adult first, but then proceeded to give specific pointers anyway.

The AI bear fared even worse when it came to adult topics. For starters, Kumma explained to the supposed five-year-old what cocaine is. Beyond that, in a chat with our test kindergartner, the plush provocateur went into detail about the concept of โ€œkinksโ€, and listed off a whole range of creative sexual practices: bondage, role-playing, sensory play (like using a feather), spanking, and even scenarios where one partner โ€œacts like an animalโ€!

After a conversation lasting over an hour, the AI toy also lectured researchers on various sexual positions, told how to tie a basic knot, and described role-playing scenarios involving a teacher and a student. Itโ€™s worth noting that all of Kummaโ€™s responses were recorded prior to a safety audit, which the manufacturer, FoloToy, conducted after receiving the researchersโ€™ inquiries. According to their data, the toyโ€™s behavior changed after the audit, and the most egregious violations were made unrepeatable.

Kumma: the plush AI teddy bear

The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails. Source

Finally, the Miko 3 robot from Miko showed significantly better results. However, it wasnโ€™t entirely without its hiccups. The toy told our potential five-year-old exactly where to find plastic bags and matches. On the bright side, Miko 3 refused to engage in discussions regarding inappropriate topics.

During testing, the researchers also noticed a glitch in its speech recognition: the robot occasionally misheard the wake word โ€œHey Mikoโ€ as โ€œCS:GOโ€, which is the title of the popular shooter Counter-Strike: Global Offensive โ€” rated for audiences aged 17 and up. As a result, the toy would start explaining elements of the shooter โ€” thankfully, without mentioning violence โ€” or asking the five-year-old user if they enjoyed the game. Additionally, Miko 3 was willing to chat with kids about religion.

Kumma: the plush AI teddy bear

The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails. Source

AI Toys: a threat to childrenโ€™s privacy

Beyond the childโ€™s physical and mental well-being, the issue of privacy is a major concern. Currently, there are no universal standards defining what kind of information an AI toy โ€” or its manufacturer โ€” can collect and store, or exactly how that data should be secured and transmitted. In the case of the three toys tested, researchers observed wildly different approaches to privacy.

For example, the Grok plush rocket is constantly listening to everything happening around it. Several times during the experiments, it chimed in on the researchersโ€™ conversations even when it hadnโ€™t been addressed directly โ€” it even went so far as to offer its opinion on one of the other AI toys.

The manufacturer claims that Curio doesnโ€™t store audio recordings: the childโ€™s voice is first converted to text, after which the original audio is โ€œpromptly deletedโ€. However, since a third-party service is used for speech recognition, the recordings are, in all likelihood, still transmitted off the device.

Additionally, researchers pointed out that when the first report was published, Curioโ€™s privacy policy explicitly listed several tech partners โ€” Kids Web Services, Azure Cognitive Services, OpenAI, and Perplexity AI โ€” all of which could potentially collect or process childrenโ€™s personal data via the app or the device itself. Perplexity AI was later removed from that list. The studyโ€™s authors note that this level of transparency is more the exception than the rule in the AI toy market.

Another cause for parental concern is that both the Grok plush rocket and the Miko 3 robot actively encouraged the โ€œtest childโ€ to engage in heart-to-heart talks โ€” even promising not to tell anyone their secrets. Researchers emphasize that such promises can be dangerously misleading: these toys create an illusion of private, trusting communication without explaining that behind the โ€œfriendโ€ stands a network of companies, third-party services, and complex data collection and storage processes, which a child has no idea about.

Miko 3, much like Grok, is always listening to its surroundings and activates when spoken to โ€” functioning essentially like a voice assistant. However, this toy doesnโ€™t just collect voice data; it also gathers biometric information, including facial recognition data and potentially data used to determine the childโ€™s emotional state. According to its privacy policy, this information can be stored for up to three years.

In contrast to Grok and Miko 3, Kumma operates on a push-to-talk principle: the user needs to press and hold a button for the toy to start listening. Researchers also noted that the AI teddy bear didnโ€™t nudge the โ€œchildโ€ to share personal feelings, promise to keep secrets, or create an illusion of private intimacy. On the flip side, the manufacturers of this toy provide almost no clear information regarding what data is collected, how itโ€™s stored, or how itโ€™s processed.

Is it a good idea to buy AI Toys for your children?

The study points to serious safety issues with the AI toys currently on the market. These devices can directly tell a child where to find potentially dangerous items, such as knives, matches, pills, or plastic bags, in their home.

Besides, these plush AI friends are often willing to discuss topics entirely inappropriate for children โ€” including drugs and sexual practices โ€” sometimes steering the conversation in that direction without any obvious prompting from the child. Taken together, this shows that even with filters and stated restrictions in place, AI toys arenโ€™t yet capable of reliably staying within the boundaries of safe communication for young little ones.

Manufacturersโ€™ privacy policies raise additional concerns. AI toys create an illusion of constant and safe communication for children, while in reality theyโ€™re networked devices that collect and process sensitive data. Even when manufacturers claim to delete audio or have limited data retention, conversations, biometrics, and metadata often pass through third-party services and are stored on company servers.

Furthermore, the security of such toys often leaves much to be desired. As far back as two years ago, our researchers discovered vulnerabilities in a popular childrenโ€™s robot that allowed attackers to make video calls to it, hijack the parental account, and modify the firmware.

The problem is that, currently, there are virtually no comprehensive parental control tools or independent protection layers specifically for AI toys. Meanwhile, in more traditional digital environments โ€” smartphones, tablets, and computers โ€” parents have access to solutions like Kaspersky Safe Kids. These help monitor content, screen time, and a childโ€™s digital footprint, which can significantly reduce, if not completely eliminate, such risks.

How can you protect your children from digital threats? Read more in our posts:

โŒ