This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.
In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldnβt have.
Agentic browsers
Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilitiesβespecially to prompt injection attacks. With great AI power comes great responsibility, and risk. If youβre thinking about using an AI browser, itβs worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.
Mimicry
The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAIβs Atlas and Perplexityβs Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.
Misconfiguration
And then thereβs this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.
Toys
We saw a plush teddy bear promising βwarmth, fun, and a little extra curiosityβ that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didnβt just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained βknots for beginners,β and referenced roleplay scenarios involving children and adults.
Misinterpretation
Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a schoolβs AI system mistook a boyβs empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.
Data breaches
Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users werenβt clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.
So, what should we do?
Weβve said it before and weβll probably say it again:Β We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether theyβre safe or not.
As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? Whatβs the potential downside? Sometimes itβs worth doing things the slower, safer way.
We donβt just report on privacyβwe offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by usingΒ Malwarebytes Privacy VPN.