Normal view

Are we ready for ChatGPT Health?

9 January 2026 at 13:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

ChatGPT health

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI only uses partners and apps in ChatGPT Health that meet OpenAI’s privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is very sensitive and as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart. OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. I’d encourage them to do the same.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Are we ready for ChatGPT Health?

9 January 2026 at 13:26

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.

ChatGPT health

OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI only uses partners and apps in ChatGPT Health that meet OpenAI’s privacy and security requirements, which, by design, shifts a great deal of trust onto ChatGPT Health itself.

Users should realize that health information is very sensitive and as Sara Geoghegan, senior counsel at the Electronic Privacy Information Center told The Record: by sharing their electronic medical records with ChatGPT Health, users in the US could effectively remove the HIPAA protection from those records, which is a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

Should you decide to try this new feature out, we would advise you to proceed with caution and take the advice to enable 2FA for ChatGPT to heart. OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. I’d encourage them to do the same.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Global AI adoption in 2025 — A widening digital divide

Read the full Global AI Adoption Report.

Global adoption of artificial intelligence continued to rise in the second half of 2025, increasing by 1.2 percentage points compared to the first half of the year, with roughly one in six people worldwide now using generative AI tools, remarkable progress for a technology that only recently entered mainstream use. 

To track this trend, we measure AI diffusion as the share of people worldwide who have used a generative AI product during the reported period. This measure is derived from aggregated and anonymized Microsoft telemetry and then adjusted to reflect differences in OS and device-market share, internet penetration, and country population. Additional details on the methodology are available in our AI Diffusion technical paper.[1]

No single metric is perfect, and this one is no exception. Through the Microsoft AI Economy Institute, we continue to refine how we measure AI diffusion globally, including how adoption varies across countries in ways that best advance priorities such as scientific discovery and productivity gains. For this report, we rely on the strongest cross-country measure available today, and we expect to complement it over time with additional indicators as they emerge and mature. 

Despite progress in AI adoption, the data shows a widening divide: adoption in the Global North grew nearly twice as fast as in the Global South. As a result, 24.7 percent of the working age population in the Global North is now using these tools, compared to only 14.1 percent in the Global South.  

Countries that have invested early in digital infrastructure, AI skilling, and government adoption, such as the United Arab Emirates, Singapore, Norway, Ireland, France, and Spain, continue to lead. The UAE extended its lead as the #1 ranked country, with 64.0 percent of the working age population using AI at the end of 2025, compared to 59.4 percent earlier in the year. The UAE has opened a lead of more than three percentage points over Singapore, which continues in second place with 60.9 percent adoption.

 

The second half of the year in the United States shows that leadership in innovation and infrastructure, while critical, does not by themselves lead to broad AI adoption. The U.S. leads in both AI infrastructure and frontier model development, but it fell from 23rd to 24th place in AI usage among the working age population, with a 28.3 percent usage rate. It lags far behind smaller, more highly digitized and AI-focused economies. 

South Korea stands out as the clearest end-of-year success story. It surged seven spots in the global rankings, climbing from 25th to 18th, driven by government policies, improved frontier model capabilities in the Korean language, and consumer-facing features that resonated with the population. Generative AI is now used in schools, workplaces, and public services, and South Korea has become one of ChatGPT’s fastest-growing markets, leading OpenAI to open an office in Seoul.[2] 

 

A parallel development reshaping the global landscape in 2025 was the rapid rise of DeepSeek, an open-source AI platform that has gained significant traction in markets long underserved by traditional providers. By releasing its model under an open-source MIT license and offering a completely free chatbot, DeepSeek removed both financial and technical barriers that limit access to advanced AI. Its strongest adoption, not surprisingly has emerged across China, Russia, Iran, Cuba, and Belarus. But perhaps even more notable is DeepSeek’s surging popularity across Africa, where it is aided by strategic promotion and partnerships with firms such as Huawei.[3]

This rapid evolution underscores an increasingly important dimension of AI competition between the United States and China, involving a race to promote adoption of their respective national models. DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026. DeepSeek’s ascent also underscores a broader truth: the global diffusion of AI is influenced by accessibility factors, and the next wave of users may come from communities that have historically had limited access to technological progress. The challenge ahead is ensuring that innovation spreads in ways that help narrow divides rather than deepen them.

[1]A. Misra, J. Wang, S. McCullers, K. White, and J. L. Ferres, “Measuring AI Diffusion: A Population-Normalized Metric for Tracking Global AI Usage,” Nov. 04, 2025, arXiv: arXiv:2511.02781. doi: 10.48550/arXiv.2511.02781..

[2] OpenAI Korea set to launch next month – The Korea Times.” https://www.koreatimes.co.kr/business/companies/20250828/openai-korea-set-to-launch-next-month

[3] S. Rai, L. Prinsloo, and H. Nyambura, “China’s DeepSeek Is Beating Out OpenAI and Google in Africa (1).” Bloomberg News..

The post Global AI adoption in 2025 — A widening digital divide appeared first on Microsoft On the Issues.

AI & Humans: Making the Relationship Work

8 January 2026 at 13:05

Leaders of many organizations are urging their teams to adopt agentic AI to improve efficiency, but are finding it hard to achieve any benefit. Managers attempting to add AI agents to existing human teams may find that bots fail to faithfully follow their instructions, return pointless or obvious results or burn precious time and resources spinning on tasks that older, simpler systems could have accomplished just as well.

The technical innovators getting the most out of AI are finding that the technology can be remarkably human in its behavior. And the more groups of AI agents are given tasks that require cooperation and collaboration, the more those human-like dynamics emerge.

Our research suggests that, because of how directly they seem to apply to hybrid teams of human and digital workers, the most effective leaders in the coming years may still be those who excel at understanding the timeworn principles of human management.

We have spent years studying the risks and opportunities for organizations adopting AI. Our 2025 book, Rewiring Democracy, examines lessons from AI adoption in government institutions and civil society worldwide. In it, we identify where the technology has made the biggest impact and where it fails to make a difference. Today, we see many of the organizations we’ve studied taking another shot at AI adoption—this time, with agentic tools. While generative AI generates, agentic AI acts and achieves goals such as automating supply chain processes, making data-driven investment decisions or managing complex project workflows. The cutting edge of AI development research is starting to reveal what works best in this new paradigm.

Understanding Agentic AI

There are four key areas where AI should reliably boast superhuman performance: in speed, scale, scope and sophistication. Again and again, the most impactful AI applications leverage their capabilities in one or more of these areas. Think of content-moderation AI that can scan thousands of posts in an instant, legislative policy tools that can scale deliberations to millions of constituents, and protein-folding AI that can model molecular interactions with greater sophistication than any biophysicist.

Equally, AI applications that don’t leverage these core capabilities typically fail to impress. For example, Google’s AI Overviews irritate many of its users when the overviews obscure information that could be more efficiently consumed straight from the web results that the AI attempted to synthesize.

Agentic AI extends these core advantages of AI to new tasks and scenarios. The most familiar AI tools are chatbots, image generators and other models that take a single action: ask one question, get one answer. Agentic systems solve more complex problems by using many such AI models and giving each one the capability to use tools like retrieving information from databases and perform tasks like sending emails or executing financial transactions.

Because agentic systems are so new and their potential configurations so vast, we are still learning which business processes they will fit well with and which they will not. Gartner has estimated that 40 per cent of agentic AI projects will be cancelled within two years, largely because they are targeted where they can’t achieve meaningful business impact.

Understanding Agentic AI behavior

To understand the collective behaviors of agentic AI systems, we need to examine the individual AIs that comprise them. When AIs make mistakes or make things up, they can behave in ways that are truly bizarre. But when they work well, the reasons why are sometimes surprisingly relatable.

Tools like ChatGPT drew attention by sounding human. Moreover, individual AIs often behave like individual people, responding to incentives and organizing their own work in much the same ways that humans do. Recall the counterintuitive findings of many early users of ChatGPT and similar large language models (LLMs) in 2022: They seemed to perform better when offered a cash tip, told the answer was really important or were threatened with hypothetical punishments.

One of the most effective and enduring techniques discovered in those early days of LLM testing was ‘chain-of-thought prompting,’ which instructed AIs to think through and explain each step of their analysis—much like a teacher forcing a student to show their work. Individual AIs can also react to new information similar to individual people. Researchers have found that LLMs can be effective at simulating the opinions of individual people or demographic groups on diverse topics, including consumer preferences and politics.

As agentic AI develops, we are finding that groups of AIs also exhibit human-like behaviors collectively. A 2025 paper found that communities of thousands of AI agents set to chat with each other developed familiar human social behaviors like settling into echo chambers. Other researchers have observed the emergence of cooperative and competitive strategies and the development of distinct behavioral roles when setting groups of AIs to play a game together.

The fact that groups of agentic AIs are working more like human teams doesn’t necessarily indicate that machines have inherently human-like characteristics. It may be more nurture than nature: AIs are being designed with inspiration from humans. The breakthrough triumph of ChatGPT was widely attributed to using human feedback during training. Since then, AI developers have gotten better at aligning AI models to human expectations. It stands to reason, then, that we may find similarities between the management techniques that work for human workers and for agentic AI.

Lessons From the Frontier

So, how best to manage hybrid teams of humans and agentic AIs? Lessons can be gleaned from leading AI labs. In a recent research report, Anthropic shared the practical roadmap and published lessons learned while building its Claude Research feature, which uses teams of multiple AI agents to accomplish complex reasoning tasks. For example, using agents to search the web for information and calling external tools to access information from sources like emails and documents.

Advancements in agentic AI enabling new offerings like Claude Research and Amazon Q are causing a stir among AI practitioners because they reveal insights from the frontlines of AI research about how to make agentic AI and the hybrid organizations that leverage it more effective. What is striking about Anthropic’s report is how transparent it is about all the hard-won lessons learned in developing its offering—and the fact that many of these lessons sound a lot like what we find in classic management texts:

LESSON 1: DELEGATION MATTERS.

When Anthropic analyzed what factors lead to excellent performance by Claude Research, it turned out that the best agentic systems weren’t necessarily built on the best or most expensive AI models. Rather, like a good human manager, they need to excel at breaking down and distributing tasks to their digital workers.

Unlike human teams, agentic systems can enlist as many AI workers as needed, onboard them instantly and immediately set them to work. Organizations that can exploit this scalability property of AI will gain a key advantage, but the hard part is assigning each of them to contribute meaningful, complementary work to the overall project.

In classical management, this is called delegation. Any good manager knows that, even if they have the most experience and the strongest skills of anyone on their team, they can’t do it all alone. Delegation is necessary to harness the collective capacity of their team. It turns out this is crucial to AI, too.

The authors explain this result in terms of ‘parallelization’: Being able to separate the work into small chunks allows many AI agents to contribute work simultaneously, each focusing on one piece of the problem. The research report attributes 80 per cent of the performance differences between agentic AI systems to the total amount of computing resources they leverage.

Whether or not each individual agent is the smartest in the digital toolbox, the collective has more capacity for reasoning when there are many AI ‘hands’ working together. In addition to the quality of the output, teams working in parallel get work done faster. Anthropic says that reconfiguring its AI agents to work in parallel improved research speed by 90 per cent.

Anthropic’s report on how to orchestrate agentic systems effectively reads like a classical delegation training manual: Provide a clear objective, specify the output you expect and provide guidance on what tools to use, and set boundaries. When the objective and output format is not clear, workers may come back with irrelevant or irreconcilable information.

LESSON 2: ITERATION MATTERS.

Edison famously tested thousands of light bulb designs and filament materials before arriving at a workable solution. Likewise, successful agentic AI systems work far better when they are allowed to learn from their early attempts and then try again. Claude Research spawns a multitude of AI agents, each doubling and tripling back on their own work as they go through a trial-and-error process to land on the right results.

This is exactly how management researchers have recommended organizations staff novel projects where large teams are tasked with exploring unfamiliar terrain: Teams should split up and conduct trial-and-error learning, in parallel, like a pharmaceutical company progressing multiple molecules towards a potential clinical trial. Even when one candidate seems to have the strongest chances at the outset, there is no telling in advance which one will improve the most as it is iterated upon.

The advantage of using AI for this iterative process is speed: AI agents can complete and retry their tasks in milliseconds. A recent report from Microsoft Research illustrates this. Its agentic AI system launched up to five AI worker teams in a race to finish a task first, each plotting and pursuing its own iterative path to the destination. They found that a five-team system typically returned results about twice as fast as a single AI worker team with no loss in effectiveness, although at the cost of about twice as much total computing spend.

Going further, Claude Research’s system design endowed its top-level AI agent—the ‘Lead Researcher’—with the decision authority to delegate more research iterations if it was not satisfied with the results returned by its sub-agents. They managed the choice of whether or not they should continue their iterative search loop, to a limit. To the extent that agentic AI mirrors the world of human management, this might be one of the most important topics to watch going forward. Deciding when to stop and what is ‘good enough’ has always been one of the hardest problems organizations face.

LESSON 3: EFFECTIVE INFORMATION SHARING MATTERS.

If you work in a manufacturing department, you wouldn’t rely on your division chief to explain the specs you need to meet for a new product. You would go straight to the source: the domain experts in R&D. Successful organizations need to be able to share complex information efficiently both vertically and horizontally.

To solve the horizontal sharing problem for Claude Research, Anthropic innovated a novel mechanism for AI agents to share their outputs directly with each other by writing directly to a common file system, like a corporate intranet. In addition to saving on the cost of the central coordinator having to consume every sub-agent’s output, this approach helps resolve the information bottleneck. It enables AI agents that have become specialized in their tasks to own how their content is presented to the larger digital team. This is a smart way to leverage the superhuman scope of AI workers, enabling each of many AI agents to act as distinct subject matter experts.

In effect, Anthropic’s AI Lead Researchers must be generalist managers. Their job is to see the big picture and translate that into the guidance that sub-agents need to do their work. They don’t need to be experts on every task the sub-agents are performing. The parallel goes further: AIs working together also need to know the limits of information sharing, like what kinds of tasks don’t make sense to distribute horizontally.

Management scholars suggest that human organizations focus on automating the smallest tasks; the ones that are most repeatable and that can be executed the most independently. Tasks that require more interaction between people tend to go slower, since the communication not only adds overhead, but is something that many struggle to do effectively.

Anthropic found much the same was true of its AI agents: “Domains that require all agents to share the same context or involve many dependencies between agents are not a good fit for multi-agent systems today.” This is why the company focused its premier agentic AI feature on research, a process that can leverage a large number of sub-agents each performing repetitive, isolated searches before compiling and synthesizing the results.

All of these lessons lead to the conclusion that knowing your team and paying keen attention to how to get the best out of them will continue to be the most important skill of successful managers of both humans and AIs. With humans, we call this leadership skill empathy. That concept doesn’t apply to AIs, but the techniques of empathic managers do.

Anthropic got the most out of its AI agents by performing a thoughtful, systematic analysis of their performance and what supports they benefited from, and then used that insight to optimize how they execute as a team. Claude Research is designed to put different AI models in the positions where they are most likely to succeed. Anthropic’s most intelligent Opus model takes the Lead Researcher role, while their cheaper and faster Sonnet model fulfills the more numerous sub-agent roles. Anthropic has analyzed how to distribute responsibility and share information across its digital worker network. And it knows that the next generation of AI models might work in importantly different ways, so it has built performance measurement and management systems that help it tune its organizational architecture to adapt to the characteristics of its AI ‘workers.’

Key Takeaways

Managers of hybrid teams can apply these ideas to design their own complex systems of human and digital workers:

DELEGATE.

Analyze the tasks in your workflows so that you can design a division of labour that plays to the strength of each of your resources. Entrust your most experienced humans with the roles that require context and judgment and entrust AI models with the tasks that need to be done quickly or benefit from extreme parallelization.

If you’re building a hybrid customer service organization, let AIs handle tasks like eliciting pertinent information from customers and suggesting common solutions. But always escalate to human representatives to resolve unique situations and offer accommodations, especially when doing so can carry legal obligations and financial ramifications. To help them work together well, task the AI agents with preparing concise briefs compiling the case history and potential resolutions to help humans jump into the conversation.

ITERATE.

AIs will likely underperform your top human team members when it comes to solving novel problems in the fields in which they are expert. But AI agents’ speed and parallelization still make them valuable partners. Look for ways to augment human-led explorations of new territory with agentic AI scouting teams that can explore many paths for them in advance.

Hybrid software development teams will especially benefit from this strategy. Agentic coding AI systems are capable of building apps, autonomously making improvements to and bug-fixing their code to meet a spec. But without humans in the loop, they can fall into rabbit holes. Examples abound of AI-generated code that might appear to satisfy specified requirements, but diverges from products that meet organizational requirements for security, integration or user experiences that humans would truly desire. Take advantage of the fast iteration of AI programmers to test different solutions, but make sure your human team is checking its work and redirecting the AI when needed.

SHARE.

Make sure each of your hybrid team’s outputs are accessible to each other so that they can benefit from each others’ work products. Make sure workers doing hand-offs write down clear instructions with enough context that either a human colleague or AI model could follow. Anthropic found that AI teams benefited from clearly communicating their work to each other, and the same will be true of communication between humans and AI in hybrid teams.

MEASURE AND IMPROVE.

Organizations should always strive to grow the capabilities of their human team members over time. Assume that the capabilities and behaviors of your AI team members will change over time, too, but at a much faster rate. So will the ways the humans and AIs interact together. Make sure to understand how they are performing individually and together at the task level, and plan to experiment with the roles you ask AI workers to take on as the technology evolves.

An important example of this comes from medical imaging. Harvard Medical School researchers have found that hybrid AI-physician teams have wildly varying performance as diagnosticians. The problem wasn’t necessarily that the AI has poor or inconsistent performance; what mattered was the interaction between person and machine. Different doctors’ diagnostic performance benefited—or suffered—at different levels when they used AI tools. Being able to measure and optimize those interactions, perhaps at the individual level, will be critical to hybrid organizations.

In Closing

We are in a phase of AI technology where the best performance is going to come from mixed teams of humans and AIs working together. Managing those teams is not going to be the same as we’ve grown used to, but the hard-won lessons of decades past still have a lot to offer.

This essay was written with Nathan E. Sanders, and originally appeared in Rotman Management Magazine.

Securing Vibe Coding Tools: Scaling Productivity Without Scaling Risk

8 January 2026 at 12:00

AI-generated code looks flawless until it isn't. Unit 42 breaks down how to expose these invisible flaws before they turn into your next breach.

The post Securing Vibe Coding Tools: Scaling Productivity Without Scaling Risk appeared first on Unit 42.

Grok apologizes for creating image of young girls in “sexualized attire”

5 January 2026 at 13:11

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. Reportedly, in a separate post on X, Grok described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increased when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, while on one hand we see geo-blocking due to national and state content restrictions, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than what lawmakers are currently trying to regulate. In effect, centralized age‑verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results when cybercriminals would abuse this type of weakness to defraud or extort parents with fabricated explicit content of their young ones. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Grok apologizes for creating image of young girls in “sexualized attire”

5 January 2026 at 13:11

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. Reportedly, in a separate post on X, Grok described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increased when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, while on one hand we see geo-blocking due to national and state content restrictions, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than what lawmakers are currently trying to regulate. In effect, centralized age‑verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results when cybercriminals would abuse this type of weakness to defraud or extort parents with fabricated explicit content of their young ones. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 29 – January 4)

5 January 2026 at 09:02

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

A week in security (December 29 – January 4)

5 January 2026 at 09:02

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Flock Exposes Its AI-Enabled Surveillance Cameras

2 January 2026 at 13:05

404 Media has the story:

Unlike many of Flock’s cameras, which are designed to capture license plates as people drive by, Flock’s Condor cameras are pan-tilt-zoom (PTZ) cameras designed to record and track people, not vehicles. Condor cameras can be set to automatically zoom in on people’s faces as they walk through a parking lot, down a public street, or play on a playground, or they can be controlled manually, according to marketing material on Flock’s website. We watched Condor cameras zoom in on a woman walking her dog on a bike path in suburban Atlanta; a camera followed a man walking through a Macy’s parking lot in Bakersfield; surveil children swinging on a swingset at a playground; and film high-res video of people sitting at a stoplight in traffic. In one case, we were able to watch a man rollerblade down Brookhaven, Georgia’s Peachtree Creek Greenway bike path. The Flock camera zoomed in on him and tracked him as he rolled past. Minutes later, he showed up on another exposed camera livestream further down the bike path. The camera’s resolution was good enough that we were able to see that, when he stopped beneath one of the cameras, he was watching rollerblading videos on his phone.

How AI made scams more convincing in 2025

2 January 2026 at 11:16

This blog is part of a series where we highlight new or fast-evolving threats in consumer security. This one focuses on how AI is being used to design more realistic campaigns, accelerate social engineering, and how AI agents can be used to target individuals.

Most cybercriminals stick with what works. But once a new method proves effective, it spreads quickly—and new trends and types of campaigns follow.

In 2025, the rapid development of Artificial Intelligence (AI) and its use in cybercrime went hand in hand. In general, AI allows criminals to improve the scale, speed, and personalization of social engineering through realistic text, voice, and video. Victims face not only financial loss, but erosion of trust in digital communication and institutions.

Social engineering

Voice cloning

One of the main areas where AI improved was in the area of voice-cloning, which was immediately picked up by scammers. In the past, they would mostly stick to impersonating friends and relatives. In 2025, they went as far as impersonating senior US officials. The targets were predominantly current or former US federal or state government officials and their contacts.

In the course of these campaigns, cybercriminals used test messages as well as AI-generated voice messages. At the same time, they did not abandon the distressed-family angle. A woman in Florida was tricked into handing over thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

AI agents

Agentic AI is the term used for individualized AI agents designed to carry out tasks autonomously. One such task could be to search for publicly available or stolen information about an individual and use that information to compose a very convincing phishing lure.

These agents could also be used to extort victims by matching stolen data with publicly known email addresses or social media accounts, composing messages and sustaining conversations with people who believe a human attacker has direct access to their Social Security number, physical address, credit card details, and more.

Another use we see frequently is AI-assisted vulnerability discovery. These tools are in use by both attackers and defenders. For example, Google uses a project called Big Sleep, which has found several vulnerabilities in the Chrome browser.

Social media

As mentioned in the section on AI agents, combining data posted on social media with data stolen during breaches is a common tactic. Such freely provided data is also a rich harvesting ground for romance scams, sextortion, and holiday scams.

Social media platforms are also widely used to peddle fake products, AI generated disinformation, dangerous goods,  and drop-shipped goods.

Prompt injection

And then there are the vulnerabilities in public AI platforms such as ChatGPT, Perplexity, Claude, and many others. Researchers and criminals alike are still exploring ways to bypass the safeguards intended to limit misuse.

Prompt injection is the general term for when someone inserts carefully crafted input, in the form of an ordinary conversation or data, to nudge or force an AI into doing something it wasn’t meant to do.

Malware campaigns

In some cases, attackers have used AI platforms to write and spread malware. Researchers have documented campaign where attackers leveraged Claude AI to automate the entire attack lifecycle, from initial system compromise through to ransom note generation, targeting sectors such as government, healthcare, and emergency services.

Since early 2024, OpenAI says it has disrupted more than 20 campaigns around the world that attempted to abuse its AI platform for criminal operations and deceptive campaigns.

Looking ahead

AI is amplifying the capabilities of both defenders and attackers. Security teams can use it to automate detection, spot patterns faster, and scale protection. Cybercriminals, meanwhile, are using it to sharpen social engineering, discover vulnerabilities more quickly, and build end-to-end campaigns with minimal effort.

Looking toward 2026, the biggest shift may not be technical but psychological. As AI-generated content becomes harder to distinguish from the real thing, verifying voices, messages, and identities will matter more than ever.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

How AI made scams more convincing in 2025

2 January 2026 at 11:16

This blog is part of a series where we highlight new or fast-evolving threats in consumer security. This one focuses on how AI is being used to design more realistic campaigns, accelerate social engineering, and how AI agents can be used to target individuals.

Most cybercriminals stick with what works. But once a new method proves effective, it spreads quickly—and new trends and types of campaigns follow.

In 2025, the rapid development of Artificial Intelligence (AI) and its use in cybercrime went hand in hand. In general, AI allows criminals to improve the scale, speed, and personalization of social engineering through realistic text, voice, and video. Victims face not only financial loss, but erosion of trust in digital communication and institutions.

Social engineering

Voice cloning

One of the main areas where AI improved was in the area of voice-cloning, which was immediately picked up by scammers. In the past, they would mostly stick to impersonating friends and relatives. In 2025, they went as far as impersonating senior US officials. The targets were predominantly current or former US federal or state government officials and their contacts.

In the course of these campaigns, cybercriminals used test messages as well as AI-generated voice messages. At the same time, they did not abandon the distressed-family angle. A woman in Florida was tricked into handing over thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

AI agents

Agentic AI is the term used for individualized AI agents designed to carry out tasks autonomously. One such task could be to search for publicly available or stolen information about an individual and use that information to compose a very convincing phishing lure.

These agents could also be used to extort victims by matching stolen data with publicly known email addresses or social media accounts, composing messages and sustaining conversations with people who believe a human attacker has direct access to their Social Security number, physical address, credit card details, and more.

Another use we see frequently is AI-assisted vulnerability discovery. These tools are in use by both attackers and defenders. For example, Google uses a project called Big Sleep, which has found several vulnerabilities in the Chrome browser.

Social media

As mentioned in the section on AI agents, combining data posted on social media with data stolen during breaches is a common tactic. Such freely provided data is also a rich harvesting ground for romance scams, sextortion, and holiday scams.

Social media platforms are also widely used to peddle fake products, AI generated disinformation, dangerous goods,  and drop-shipped goods.

Prompt injection

And then there are the vulnerabilities in public AI platforms such as ChatGPT, Perplexity, Claude, and many others. Researchers and criminals alike are still exploring ways to bypass the safeguards intended to limit misuse.

Prompt injection is the general term for when someone inserts carefully crafted input, in the form of an ordinary conversation or data, to nudge or force an AI into doing something it wasn’t meant to do.

Malware campaigns

In some cases, attackers have used AI platforms to write and spread malware. Researchers have documented campaign where attackers leveraged Claude AI to automate the entire attack lifecycle, from initial system compromise through to ransom note generation, targeting sectors such as government, healthcare, and emergency services.

Since early 2024, OpenAI says it has disrupted more than 20 campaigns around the world that attempted to abuse its AI platform for criminal operations and deceptive campaigns.

Looking ahead

AI is amplifying the capabilities of both defenders and attackers. Security teams can use it to automate detection, spot patterns faster, and scale protection. Cybercriminals, meanwhile, are using it to sharpen social engineering, discover vulnerabilities more quickly, and build end-to-end campaigns with minimal effort.

Looking toward 2026, the biggest shift may not be technical but psychological. As AI-generated content becomes harder to distinguish from the real thing, verifying voices, messages, and identities will matter more than ever.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

2025 exposed the risks we ignored while rushing AI

30 December 2025 at 11:02

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

2025 exposed the risks we ignored while rushing AI

30 December 2025 at 11:02

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

New cybersecurity laws and trends in 2026 | Kaspersky official blog

19 December 2025 at 17:20

The outgoing year of 2025 has significantly transformed our access to the Web and the ways we navigate it. Radical new laws, the rise of AI assistants, and websites scrambling to block AI bots are reshaping the internet right before our eyes. So what do you need to know about these changes, and what skills and habits should you bring with you into 2026? As is our tradition, we’re framing this as eight New Year’s resolutions. What are we pledging for 2026?…

Get to know your local laws

Last year was a bumper crop for legislation that seriously changed the rules of the internet for everyday users. Lawmakers around the world have been busy:

  • Banning social media for teens
  • Introducing strict age verification (think scanning your ID) procedures to visit certain categories of websites
  • Requiring explicit parental consent for minors to access many online services
  • Applying pressure through blocks and lawsuits against platforms that wouldn’t comply with existing child protection laws — with Roblox finding itself in a particularly bright spotlight

Your best bet is to get news from sites that report calmly and without sensationalism, and to review legal experts’ commentaries. You need to understand what obligations fall on you, and, if you have underage children — what changes for them.

You might face difficult conversations with your kids about new rules for using social media or games. It’s crucial that teenage rebellion doesn’t lead to dangerous mistakes such as installing malware disguised as a “restriction-bypassing mod”, or migrating to small, unmoderated social networks. Safeguarding the younger generation requires reliable protection on their computers and smartphones, alongside parental control tools.

But it’s not just about simple compliance with laws. You’ll almost certainly encounter negative side effects that lawmakers didn’t anticipate.

Master new methods of securing access

Some websites choose to geoblock certain countries entirely to avoid the complexities of complying with regional regulations. If you’re certain your local laws allow access to the content, you can bypass these geoblocks by using a VPN. You need to select a server in a country where the site is accessible.

It’s important to choose a service that doesn’t just offer servers in the right locations, but actually enhances your privacy — as many free VPNs can effectively compromise it. We recommend Kaspersky VPN Secure Connection.

Brace for document leaks

While age verification can be implemented in different ways, it often involves websites using a third-party verification service. On your first login attempt, you’ll be redirected to a separate site to complete one of several checks: take a photo of your ID or driver’s license, use a bank card, or nod and smile for a video, and so on.

The mere idea of presenting a passport to access adult websites is deeply unpopular with many people on principle. But beyond that, there’s a serious risk of data leaks. These incidents are already a reality: data breaches have impacted a contractor used to verify Discord users, as well as service providers for TikTok and Uber. The more websites that require this verification, the higher the risk of a leak becomes.

So what can you do?

  • Prioritize services that don’t require document uploads. Instead, look for those utilizing alternative age verification methods such as a micro-transaction charge to a payment card, confirmation through your bank or another trusted external provider, or behavioral/biometric analysis.
  • Pick the least sensitive and easiest-to-replace document you have, and use only that one for all verifications. “Least sensitive” in this case means containing minimal personal data, and not referencing other primary identifiers like a national ID number.
  • Use a separate, dedicated email address and phone number in combination with that document. For the sites and services that don’t verify your identity, use completely different contact details. This makes it much harder for your data to be easily pieced together from different leaks.

Learn scammers’ new playbook

It’s highly likely that under the guise of “age verification”, scammers will begin phishing for personal and payment data, and pushing malware onto visitors. After all, it’s very tempting to simply copy and paste some text on your computer instead of uploading a photo of your passport. Currently, ClickFix attacks are mostly disguised as CAPTCHA checks, but age verification is the logical next step for these schemes. How to lower these risks?

  • Carefully check any websites that require verification. Do not complete the verification if you’ve already done it for that service before, or if you landed on the verification page via a link from a messaging app, search engine, or ad.
  • Never download apps or copy and paste text for verification. All legitimate services operate within the browser window, though sometimes desktop users are asked to switch to a smartphone to complete the check.
  • Analyze and be suspicious of any situation that requires entering a code received via a messaging app or SMS to access a website or confirm an action. This is often a scheme to hijack your messaging account or another critical service.
  • Install reliable security software on all your computers and smartphones to help block access to scam sites. We recommend Kaspersky Premium — it provides: a secure VPN, malware protection, alerts if your personal data appears in public leaks, a password manager, parental controls, and much more.

Cultivate healthy AI usage habits

Even if you’re not a fan of AI, you’ll find it hard to avoid: it’s literally being shoved into each everyday service: Android, Chrome, MS Office, Windows, iOS, Creative Cloud… the list is endless. As with fast food, television, TikTok, and other easily accessible conveniences, the key is striking a balance between the healthy use of these assistants and developing an addiction.

Identify the areas where your mental sharpness and personal growth matter most to you. A person who doesn’t run regularly lowers their fitness level. Someone who always uses GPS navigation gets worse at reading paper maps. Wherever you value the work of your mind, offloading it to AI is a path to losing your edge. Maintain a balance: regularly do that mental work yourself — even if AI can do it well — from translating text to looking up info on Wikipedia. You don’t have to do it all the time, but remember to do it at least some of the time. For a more radical approach, you can also disable AI services wherever possible.

Know where the cost of a mistake is high. Despite developers’ best efforts, AI can sometimes deliver completely wrong answers with total confidence. These so-called hallucinations are unlikely to be fully eradicated anytime soon. Therefore, for important documents and critical decisions, either avoid using AI entirely, or scrutinize its output with extreme care. Check every number, every comma.

In other areas, feel free to experiment with AI. But even for seemingly harmless uses, remember that mistakes and hallucinations are a real possibility.

How to lower the risk of leaks. The more you use AI, the more of your information goes to the service provider. Whenever possible, prioritize AI features that run entirely on your device. This category includes things like the protection against fraudulent sites in Chrome, text translation in Firefox, the rewriting assistant in iOS, and so on. You can even run a full-fledged chatbot locally on your own computer.

AI agents need close supervision. The agentic capabilities of AI — where it doesn’t just suggest but actively does work for you — are especially risky. Thoroughly research the risks in this area before trusting an agent with online shopping or booking a vacation. And use modes where the assistant asks for your confirmation before entering personal data — let alone buying anything.

Audit your subscriptions and plans

The economics of the internet is shifting right before our eyes. The AI arms race is driving up the cost of components and computing power, tariffs and geopolitical conflicts are disrupting supply chains, and baking AI features into familiar products sometimes comes with a price hike. Practically any online service can get more expensive overnight — sometimes by double-digit percentages. Some providers are taking a different route, moving away from a fixed monthly fee to a pay-per-use model for things like songs downloaded or images generated.

To avoid nasty surprises when you check your bank statement, make it a habit to review the terms of all your paid subscriptions at least three or four times a year. You might find that a service has updated its plans and that you need to downgrade to a simpler one. Or a service might have quietly signed you up for an extra feature you’re not even aware of — and you need to disable it. Some services might be better switched to a free tier or canceled altogether. Financial literacy is becoming a must-have skill for managing your digital spending.

To get a complete picture of your subscriptions and truly understand how much you’re spending on digital services each month or year, it’s best to track them all in one place. A simple Excel or Google Docs spreadsheet works, but a dedicated app like SubsCrab is more convenient. It sends reminders for upcoming payments, shows all your spending month-by-month, and can even help you find better deals on the same or similar services.

Prioritize the longevity of your tech

The allure of powerful new processors, cameras, and AI features might tempt you to buy a new smartphone or laptop in 2026, but planning for making it last for several years should be a priority. There are a few reasons…

First, the pace of meaningful new features has slowed, and the urge to upgrade frequently has diminished for many. Second, gadget prices have risen significantly due to more expensive chips, labor, and shipping — making major purchases harder to justify. Furthermore, regulations like those in the EU now require easily replaceable batteries in new devices, meaning the part that wears out the fastest in a phone will be simpler and cheaper to swap out yourself.

So, what does it take to make sure your smartphone or laptop reliably lasts several years?

  • Physical protection. Use cases, screen protectors, and maybe even a waterproof pouch.
  • Proper storage. Avoid extreme temperatures, don’t leave it baking in direct sun or freezing overnight in a car at -15°C.
  • Battery care. Avoid regularly draining it to single-digit percentages.
  • Regular software updates. This is the trickiest part. Updates are essential for security to protect your phone or laptop from new types of attacks. However, updates can sometimes cause slowdowns, overheating, or battery drain. The prudent approach is to wait about a week after a major OS update, check feedback from users of your exact model, and only install it if the coast seems clear.

Secure your smart home

The smart home is giving way to a new concept: the intelligent home. The idea is that neural networks will help your home make its own decisions about what to do and when, all for your convenience — without needing pre-programmed routines. Thanks to the Matter 1.3 standard, a smart home can now manage not just lights, TVs, and locks, but also kitchen appliances, dryers, and even EV chargers! Even more importantly, we’re seeing a rise in devices where Matter over Thread is the native, primary communication protocol, like the new IKEA KAJPLATS lineup. Matter-powered devices from different vendors can see and communicate with each other. This means you can, say, buy an Apple HomePod as your smart home central hub and connect Philips Hue bulbs, Eve Energy plugs, and IKEA BILRESA switches to it.

All of this means that smart and intelligent homes will become more common — and so will the ways to attack them. We have a detailed article on smart home security, but here are a few key tips relevant in light of the transition to Matter.

  • Consolidate your devices into a single Matter fabric. Use the minimum number of controllers, for example, one Apple TV + one smartphone. If a TV or another device accessible to many household members acts as a controller, be sure to use password security and other available restrictions for critical functions.
  • Choose a hub and controller from major manufacturers with a serious commitment to security.
  • Minimize the number of devices connecting your Matter fabric to the internet. These devices — referred to as Border Routers — must be well-protected from external cyberattacks, for example, by restricting their access at the level of your home internet router.
  • Regularly audit your home network for any suspicious, unknown devices. In your Matter fabric, this is done via your controller or hub, and in your home network — via your primary router or a feature like Smart Home Monitor in Kaspersky Premium.

Intezer named a top-tier Solutions Partner in the Microsoft AI Cloud partner program

17 December 2025 at 14:56

Security teams that rely on Microsoft know the power of a deeply integrated security stack. Today, we’re proud to announce an important milestone that further strengthens that ecosystem.

Intezer has been named a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program (MAICPP), a designation reserved for solutions that meet Microsoft’s highest standards for security, architecture, and seamless cloud integration.

This recognition follows a successful Microsoft technical audit and certifies the Intezer Forensic AI SOC platform as trusted, Microsoft-validated software designed to deliver real security outcomes for modern SOC teams.

Join AI SOC Live on January 6th to see how to maximize your Microsoft Security investment with  Forensic AI SOC. January 6th | 9am PT | 12pm EST.

Strengthening Microsoft-driven SOCs with Forensic AI

Microsoft security tools generate powerful signals, but signals alone don’t equal outcomes. SOC teams still face alert overload, limited context, and the constant risk that real threats hide in low- or medium-severity alerts.

The Intezer Forensic AI SOC platform was built to solve this problem.

Intezer strengthens the outcomes of Microsoft-driven SOCs by combining agentic AI with automated forensic investigation, enriching Microsoft alerts with deep technical evidence and cross-platform context. The platform investigates alerts from and across:

  • Microsoft Defender for Endpoint
  • Microsoft Defender for Identity (Entra ID)
  • Microsoft Defender for Office 365 and reported phishing
  • Microsoft Sentinel
  • Microsoft Defender for Cloud
  • Non-Microsoft security tools across endpoint, identity, cloud, email, and network environments

Instead of triaging only “high severity” alerts, Intezer investigates every alert with automated querying of Microsoft Sentinel, whenever needed, to enrich alerts, correlate logs, and validate activity. This provides visibility into every incident without manual lookups or switching tools.

How Intezer delivers better SOC outcomes on Microsoft

24/7 AI-powered triage and investigation

Intezer automatically triages and investigates 100% of alerts, including low- and medium-severity alerts that are commonly ignored. By mirroring how expert human analysts investigate incidents, using multiple AI models combined with deterministic forensics, Intezer delivers speed without sacrificing accuracy.

Less than 4% alerts escalated, higher confidence decisions

Across Microsoft and non-Microsoft alerts, fewer than 4% are escalated to human analysts. Each verdict is backed by forensic evidence, reducing noise, eliminating guesswork, and enabling analysts to focus only on what truly matters.

Faster response with native Microsoft actions

Intezer enables automated remediation directly through Microsoft tools, including:

  • Device isolation via Defender for Endpoint
  • User lockout through Entra ID
  • Email quarantine in Defender for Office 365
  • Interactive response via Microsoft Teams

This tight integration allows teams to move from alert to action in minutes, without switching tools or workflows.

Built to maximize the value of Microsoft security investments

“This designation reflects our commitment to helping organizations get the most out of their Microsoft security investments,” said Itai Tevet, CEO and co-founder of Intezer.
“As a top-tier Solutions Partner in the Microsoft AI Cloud Partner Program, we deliver AI-powered, forensic-grade investigations that strengthen the security outcomes of SOC teams using Defender, Sentinel, and the broader Microsoft Security Suite. We help teams move from alerts to clear, confident decisions in minutes.”

Intezer customers can also purchase directly through the Microsoft Azure Marketplace and apply existing Azure credits, simplifying procurement and accelerating time to value.

What the MAICPP designation means for security teams

The Microsoft AI Cloud Partner Program recognizes partners whose solutions are proven to work at scale across the Microsoft Cloud. Achieving top-tier Solutions Partner status signals that Intezer:

  • Meets Microsoft’s highest standards for security, reliability, and architectural excellence
  • Integrates deeply and natively across the Microsoft Security Suite
  • Delivers validated customer impact for organizations operating on Microsoft infrastructure

For customers, this designation provides confidence that Intezer is not just compatible with Microsoft security, but purpose-built to extend and elevate it.

Why this matters now

As SOCs face increasing alert volumes, tighter budgets, and a growing shortage of skilled analysts, automation alone is no longer enough. Security teams need forensic-grade AI that can explain why an alert matters, not just label it.

The MAICPP designation confirms that Intezer delivers exactly that:

  • Enterprise-grade accuracy
  • Microsoft-validated integrations
  • Proven SOC efficiency at scale

For organizations running on Microsoft, Intezer is now officially recognized as a trusted partner to help transform alerts into outcomes.

Learn more about Intezer Forensic AI SOC for Microsoft or get started today through the Azure Marketplace.

The post Intezer named a top-tier Solutions Partner in the Microsoft AI Cloud partner program appeared first on Intezer.

Partnering with Precision in 2026

17 December 2025 at 14:00

If 2025 proved anything, it’s that no one wins alone in cybersecurity. AI-driven threats accelerated, and environments grew more complex while enterprises pushed hard for simplicity, integrated protection and security outcomes that deliver measurable results and meaningful value.

In response, we saw our partners around the globe lean into integration, treat AI as a built-in advantage and use the strength of our ecosystem as a force multiplier. The result: What could have been a disruptive year instead became one defined by growth and learning across our partner community.

Now, those lessons are guiding how Palo Alto Networks plans to partner with even greater precision in 2026. We remain a channel-first company that’s all-in on our ecosystem and united with our partners in a shared purpose to protect our customers’ digital future. But we also intend to double down in several areas in the year ahead, and we’re asking our partners to join us in doing the same.

1. Simplifying Security Through Integration

One message from customers that came through loud and clear in 2025 is that complexity is the enemy of resilience. Many enterprises are grappling with tool sprawl – multiple consoles, disconnected policies and overlapping investments that slow down their teams when speed and agility matter most.

The partners who delivered some of the most transformative results for organizations this year were those who chose integration over complexity and collaboration over siloed tools. With a laser focus on simplifying security, they were able to help customers:

  • Consolidate fragmented point tools onto a unified security platform.
  • Align visibility across the network, cloud and security operations center (SOC), so teams can respond faster.
  • Build architectures with zero trust and AI-powered detection at the core.

We saw this simplifying-security trend through integration across our ecosystem. Partners unified cloud security and detection workflows through Cortex® Cloud™ and Cortex. Teams modernized network architectures with tighter integration across our platform. We expect this activity to only accelerate in the coming year as our cloud security offerings continue to evolve.

When we innovate together, customers gain stronger defenses and a faster time-to-value. That’s why Palo Alto Networks has invested so heavily in platformization. When you connect our capabilities across network security, cloud security and security operations (wrapping them with your consulting, delivery and managed services) customers can experience something fundamentally better. With fewer gaps and clearer signals, they can build a security posture that’s built for the speed of modern threats.

In 2026, deep integration will remain a cornerstone of how we partner with precision. We’ll continue aligning our portfolio, programs and joint engagement model, so you can build offerings that reduce complexity for customers and create stronger differentiation for your business.

2. Making AI a Built-in Advantage

At Palo Alto Networks, our approach to AI in cybersecurity is straightforward. We believe AI must be embedded, not bolted on. It has to live in the data, analytics and workflows your teams rely on every day. That’s the thinking behind Precision AI®, and it’s why we built AI capabilities into our platform’s core.

Partners who treated AI as a platform capability rather than a standalone tool delivered some of the strongest outcomes for customers in 2025. They were able to meet customers’ needs and deliver business outcomes in a single, unified approach. They helped organizations:

  • Detect and respond to threats faster with AI-assisted analytics.
  • Use automation to streamline change, investigation and response workflows.
  • Tie AI to tangible outcomes, such as reduced risk, higher productivity and a better user experience.

In 2026, we’ll double down on AI across the platform and invest in the tools, content and enablement you need to bring those capabilities to life. Our focus is on making it easier for you to build AI-powered services that are repeatable and aligned to the outcomes customers expect.

Upcoming program changes reflect that intent. We’ll promote next-generation security as a growth engine and invest in ways that strengthen partner profitability across consulting services, resale, quality delivery, technical support and managed security services.

3. Ensuring Our Ecosystem Can Be a Growth Engine for Everyone

As AI raised the bar for both attackers and defenders in 2025, the partners who leaned into platformization and outcome-driven services were the ones who helped customers stay ahead of the curve. Those successes are now shaping how we strengthen and scale the partner ecosystem in 2026.

Our ecosystem isn’t just a route to market; it’s intended to be an economic engine for everyone involved. This year, many partners grew their business by building practices around our platform and aligning their services with where customers needed the most support: strategy, implementation, optimization, ongoing operations. We saw especially strong momentum from partners’ expansions:

  • Consulting and advisory services around zero trust and AI-driven transformation.
  • Resale opportunities centered on platform consolidation and next-generation security.
  • Quality delivery and technical support that keep deployments reliable and current.
  • Managed security services that give customers 24/7 protection and expert oversight.

These achievements reflect the value exchange at the heart of our ecosystem. Palo Alto Networks invests in platformization, AI and enablement, while our partners bring delivery expertise, regional insight and service innovation. Together, we create outcomes neither of us could deliver alone.

In 2026, we plan to build on that momentum and drive even greater partner profitability. Program evolutions will focus on growth across the full lifecycle, from initial design and implementation to long-term operation and optimization. We’re also expanding collaboration with our technology alliances to build new joint offerings and solution plays that the ecosystem can take to market together.

When we combine our platform, your expertise and the capabilities of our Alliance partners, then customers gain more paths to adopt next-generation security with confidence, and you gain more opportunities to develop differentiated, high-value practices.

Keeping Customers at the Center

At the heart of every partner collaboration is the customer, of course. Everything we build, integrate and advance together starts and ends with protecting them. This year, ecosystem alignment delivered measurable impact for our customers across industries. When partners lead with integrated solutions anchored in our platform, organizations saw visible improvements:

  • Faster deployment of secure solutions.
  • Reduced complexity with unified visibility.
  • Greater confidence in defending against today’s AI-driven threats.

We saw this firsthand in joint wins across cloud security transformations, zero trust modernization and AI-assisted threat detection. When our ecosystem moves together, customers can move faster, operate more securely and achieve meaningful outcomes. Customer success is the foundation of everything we do as a partner-led organization, and it will remain our North Star in 2026.

Partnering with Precision in 2026 and Beyond

What we learned and achieved together in 2025 points us toward a clear focus for 2026 to advance ecosystem-led innovation, so we can deliver outcomes that matter most to our customers.

With that mission in mind, we will focus on the following four priorities:

  • Deeper Integration – Expanding API partnerships and strengthening interoperability across the platform.
  • Co-Innovation – Enabling partners to build solutions tailored to industry needs and use cases.
  • Empowered Enablement – Investing in learning, automation and AI capabilities that fuel differentiated, profitable services.
  • Simplified Engagement – Streamlining programs and tools, so that partnering with us is faster and more rewarding.

These priorities highlight the real strength of our ecosystem: How platformization, AI and partner expertise come together to enable what we could not build alone.

Finally, to our partners and customers, thank you. Your trust, collaboration and commitment push us to innovate boldly and continuously. As we enter the new year, I’m excited about what we’ll build together. When we align our AI-powered platform, our partner programs and your expertise in delivery, services and managed security, we can deliver something far greater than a set of solutions.

We’re a powerful team that’s not just defending against what’s next; we’re defining the future of cybersecurity. And together, we’re unstoppable.

Partners, join us in shaping the next chapter of secure, AI-powered innovations. Connect with your Channel Business Manager to align on 2026 opportunities, upcoming program updates and ways we can elevate customer outcomes together. Visit the partner portal to learn more.


Key Takeaways

  • Integration beats complexity.
    Unifying technology, data and expertise drove the strongest outcomes in 2025, helping partners reduce risk and accelerate time-to-value for customers.
  • AI is a built-in advantage.
    By tapping into AI embedded across our cybersecurity platform, partners can address security and business outcomes simultaneously and deliver repeatable, profitable, AI-powered services.
  • The partner ecosystem is a growth engine, and together, we’re unstoppable.
    Our 2026 priorities focus on deeper integration, coinnovation, empowered enablement and simplified engagement that drive partner profitability and stronger customer outcomes.

The post Partnering with Precision in 2026 appeared first on Palo Alto Networks Blog.

Comprehensive Google SecOps migration checklist for CISOs and SOC leaders

10 December 2025 at 13:49

There’s a clear trend emerging with many organizations transitioning from legacy SIEMs to Google SecOps. While the Google SIEM platform is powerful, in our experience working with enterprise clients, that power only reveals itself when security leaders make three early decisions correctly:

  • Detection strategy: Whether to migrate existing rules or start fresh with a green-field approach.
  • Data onboarding: How to scale ingestion across multi-cloud environments without breaking pipelines.
  • Operating model: Building workflows that prevent “alert debt” from piling up on day one.

The strategic message is clear. Treat SIEM detection management with the same diligence you treat core security architecture, and augment your analysts with AI-powered triage so your humans can focus on higher-order investigations.

Here’s a practical checklist for discovery, migration, and operational success, designed for CISOs and SOC leaders evaluating a move to Google SecOps.

NOTE: This blog post is relevant to anyone considering a Chronicle SIEM migration as Google SecOps is the new Google branding for Chronicle.

The tl;dr version of the Google SIEM migration checklist 

PhaseKey focus
Pre-MigrationInventory, pain-point assessment, business justification
MigrationTool selection, data ingestion, rule/dashboard migration, Integration, governance & risk
Post-MigrationMeasurement of success, continuous improvement, cost optimisation, governance & reporting

Full Google SecOps migration checklist

Let’s dive into the details for each phase of the migration process.

Pre-migration checklist: Establishing the baseline

  1. Inventory current environment
    • Catalogue all data sources feeding Splunk: log types, volumes (GB/day), retention policies, on-prem vs cloud vs multi-cloud.
    • Map all current detections, dashboards, reports, playbooks, SOAR workflows.
    • Identify any compliance/regulatory retention obligations (audit logs, legal hold).
    • Establish current licensing costs, infrastructure (forwarders, indexers), staffing.
  2. Assess SIEM performance & pain points
    • Are you seeing cost escalation vs benefit (slower detection, high false positives, low automation)?
    • Is the SIEM struggling with data volume growth, scalability, multi-cloud telemetry?
    • Are SOC analysts spending more time on infrastructure/configuration than investigations?
    • Are you able to integrate newer requirements (cloud workloads, containers, IoT/OT, multi-cloud) effectively? This 451 Research report indicates many orgs run multiple SIEMs due to tool sprawl.
  3. Define business & security objectives
    • What do you hope to achieve? E.g., faster detection/response, lower cost, improved coverages, cloud alignment.
    • What are the key metrics: mean time to detect (MTTD), mean time to respond (MTTR), cost-per-alert, false positive rate, regulatory coverage, etc.
    • What is your target SOC maturity in e.g., 12-24 months? Are you planning a cloud-first strategy, heavier automation/AI, less on-prem infrastructure?
  4. Build the migration justification
    • Prepare a comparative TCO/ROI: legacy SIEM vs cloud-native. Google SecOps materials claim e.g., “ingest and analyse your data at Google speed and scale” and highlight cost benefit.
    • Understand what it will cost to migrate: re-write detections, dashboards, data flows, training, potential downtime.
    • Present risk assessment: What happens if you don’t migrate (risk of obsolete tool, scaling failure, cost spirals)? The “Great SIEM Migration” guide argues that legacy tools may become “dinosaurs”.

Migration-phase checklist: Executing the transition

  1. Select migration path & vendor/partner support
  2. Data ingestion, normalization & compatibility
    • Ensure: all of your log types/sources in Splunk are supported by the new platform. Google SecOps supports ingestion of Splunk CIM logs.
    • Plan for data mapping: Splunk field names, dashboards, custom fields → new schema.
    • Address historic data: Will you migrate archives? Will you keep Splunk as store-only? Community posts warn that mapping old archives can be complex.
    • Validate performance: test ingestion, query latency, retention policies on the new platform.
  3. Detection rules, dashboards, SOAR workflows
    • Catalogue existing detection rules, dashboards, SOAR playbooks in Splunk.
    • Determine which can be reused, which need rewriting. Ensure parity: detection coverage, mapping to MITRE ATT&CK, business use-cases. Splunk claims strong out-of-box detection library.
    • Build and test new rules/playbooks in Google SecOps; validate they meet or exceed current performance (MTTD, MTTR, false positives).
    • Ensure analyst training and new workflows are adopted: new UI, new query language, new incident-investigation flows (Google SecOps offers “Gemini in security operations” natural-language assistant).
  4. Integration & ecosystem fit
    • Ensure that Google SecOps integrates with your existing tool-stack (EDR, identity, network, cloud logs, SOAR, threat intel). Google advertises 300+ SOAR integrations.
    • Confirm multi-cloud/on-prem data ingestion: check vendor statements.
    • Validate APIs, custom connectors, forwarder architecture. Splunk vs Google SecOps comparison note: Splunk emphasizes hybrid flexibility.
  5. Governance, compliance & retention
    • Check how historic data will be retained, archived, accessed, both for compliance (audits/regulators) and investigations.
    • Confirm where the data resides (region/residency rules), encryption, access controls. Google SecOps claims to treat all data as first-party.
    • Align on SLAs, incident response metrics, roles & responsibilities.
    • Define cut-over strategy: Will Splunk be decommissioned or kept in read-only mode? Define freeze date, dual-runs, parallel operations.
  6. Risk management & business continuity
    • Define fallback/rollback plans: If the new platform fails, do you have the old SIEM in warm standby?
    • Monitor for data loss/misalignment during migration (NXLog warns of risks).
    • Communicate to stakeholders: SOC analysts, business units, auditors. Ensure training and change-management.
    • Set benchmarks and metrics: Time to detect/resolve in new platform vs old; cost per alert; staff utilisation; alert volumes; false positives.

Post-migration checklist: Optimizing & sustaining value

  1. Validate outcomes & measure success
    • Measure MTTD, MTTR, alert volumes, analyst productivity pre- and post-migration.
    • Compare actual cost savings vs business case.
    • Assess detection coverage: Are all critical use-cases still covered? Are any gaps emerging?
    • Run periodic health checks (some vendors like CardinalOps offer detection-rule health monitoring with MITRE ATT&CK coverage for Google SecOps).
  2. Continuous improvement & SOC maturity evolution
    • SOC maturity doesn’t stop at migration. Use freed-up resources to focus on advanced use-cases (threat hunting, proactive detection, automation, investigations).
    • Tune detection rules, remove noise, refine playbooks.
    • Leverage AI/natural-language features (Google SecOps touts “Gemini in security operations”).
    • Plan for future: hybrid/multi-cloud expansions, new telemetry sources, OT/IoT, supply-chain threats.
  3. Decommission legacy infrastructure & optimise cost
    • If the migration path included decommissioning the old SIEM (or reducing its role), ensure you turn off unneeded licences/infra.
    • Monitor the cost model of the new platform: ingestion volumes, retention policies—ensure you don’t inadvertently pay for excess.
    • Re-allocate resources: freed licences, server hardware, staff time — invest into SOC capability rather than maintenance.
  4. Governance, audit and stakeholder reporting
    • Update your SOC governance frameworks: incident-response playbooks, escalation paths, KPIs aligned with the new platform.
    • Communicate to board/executive leadership key outcomes: improved detection/response, cost rationalization, strategic alignment.
    • Ensure audit/compliance reports reflect the new tooling (document changes, validate controls).
    • Set up periodic reviews of tool performance, vendor roadmap, SOC maturity.

Final thoughts

Migrating to Google SecOps isn’t a simple platform swap, it’s a redesign of how your SOC operates. The upside: cost efficiency, scale, and automation can be immediate. The risks: migration complexity, content gaps, and operational disruption are real and must be managed deliberately.

As a CISO or SOC leader, treat this as a transformation program. Use the table and/or the full Checklist above to drive decisions; follow a strategic landing plan to sequence work; and anchor on the three non-negotiables outlined above:

  1. A clear detection strategy (migrate only if the value is there; rebuild the rest in YARA-L),
  2. Data onboarding at scale with a parser matrix and cost guardrails, and
  3. An operating model that prevents alert debt from day one through automation and measurable KPIs.

If you want help getting there faster, we can provide a SIEM jumpstart (curated + bespoke YARA-L rules, MITRE gap analysis and coverage, detection reviews, continuous improvement with Intezer engineers), a parser/ingestion plan for multi-cloud, and of course, Intezer Forensic AI SOC’s triage to meet on day-one, 100% alert coverage with full auditability so your analysts focus on the few cases that truly need their context and expertise.

Learn more about how Intezer can help you with your SecOps migration.

The post Comprehensive Google SecOps migration checklist for CISOs and SOC leaders appeared first on Intezer.

❌