Reading view

2025 exposed the risks we ignored while rushing AI

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

  •  

2025 exposed the risks we ignored while rushing AI

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

  •  

Vulnerability Scanning with Nmap 

Nmap, also known as Network Mapper, is a commonly used network scanning tool. As penetration testers, Nmap is a tool we use daily that is indispensable for verifying configurations and identifying potential vulnerabilities.

The post Vulnerability Scanning with Nmap  appeared first on Black Hills Information Security, Inc..

  •  

Messing With Web Attackers With SpiderTrap (Cyber Deception)

Hello and welcome! My name is John Strand. In this video, we’re going to be talking about using SpiderTrap to entrap and ensnare any web application pentesters or hackers that […]

The post Messing With Web Attackers With SpiderTrap (Cyber Deception) appeared first on Black Hills Information Security, Inc..

  •  

Securing the Cloud: A Story of Research, Discovery, and Disclosure

Jordan Drysdale // tl;dr BHIS made some interesting discoveries while working with a customer to audit their Amazon Web Services (AWS) infrastructure. At the time of the discovery, we found […]

The post Securing the Cloud: A Story of Research, Discovery, and Disclosure appeared first on Black Hills Information Security, Inc..

  •  

Tap Into Your Valuable DNS Data

Joff Thyer // The Domain Name System (DNS) is the single most important protocol on the Internet. The distributed architecture of DNS name servers and resolvers has resulted in a […]

The post Tap Into Your Valuable DNS Data appeared first on Black Hills Information Security, Inc..

  •  

WEBCAST: Blue Team-Apalooza

Kent Ickler & Jordan Drysdale // Preface We had a sysadmin and security professional “AA” meeting on November 8, 2018. We met and discussed things that seem to be painfully […]

The post WEBCAST: Blue Team-Apalooza appeared first on Black Hills Information Security, Inc..

  •  

WEBCAST: There and Back Again – A Pathfinder’s Tale

Matthew Toussain// Portswigger’s Burpsuite has become the tool of choice for web application penetration testers. OWASP’s Zed Attack Proxy (ZAP) not only fights in the same weight class but also […]

The post WEBCAST: There and Back Again – A Pathfinder’s Tale appeared first on Black Hills Information Security, Inc..

  •  

How to Build a Soft Access Point in Ubuntu 16.04

David Fletcher// This blog post is going to illustrate setting up a software access point (AP) on Ubuntu 16.04.  Having the ability to create a software AP can be very […]

The post How to Build a Soft Access Point in Ubuntu 16.04 appeared first on Black Hills Information Security, Inc..

  •  

How to Use Nmap with Meterpreter

Brian Fehrman // You’ve sent your phishing ruse, the target has run the Meterpreter payload, and you have shell on their system. Now what? If you follow our blogs, you […]

The post How to Use Nmap with Meterpreter appeared first on Black Hills Information Security, Inc..

  •  
❌