Reading view

How AI Assistants are Moving the Security Goalposts

AI-based assistants or “agents” — autonomous programs that have access to the user’s computer, files, online services and can automate virtually any task — are growing in popularity with developers and IT workers. But as so many eyebrow-raising headlines over the past few weeks have shown, these powerful and assertive new tools are rapidly shifting the security priorities for organizations, while blurring the lines between data and code, trusted co-worker and insider threat, ninja hacker and novice code jockey.

The new hotness in AI-based assistants — OpenClaw (formerly known as ClawdBot and Moltbot) — has seen rapid adoption since its release in November 2025. OpenClaw is an open-source autonomous AI agent designed to run locally on your computer and proactively take actions on your behalf without needing to be prompted.

The OpenClaw logo.

If that sounds like a risky proposition or a dare, consider that OpenClaw is most useful when it has complete access to your digital life, where it can then manage your inbox and calendar, execute programs and tools, browse the Internet for information, and integrate with chat apps like Discord, Signal, Teams or WhatsApp.

Other more established AI assistants like Anthropic’s Claude and Microsoft’s Copilot also can do these things, but OpenClaw isn’t just a passive digital butler waiting for commands. Rather, it’s designed to take the initiative on your behalf based on what it knows about your life and its understanding of what you want done.

“The testimonials are remarkable,” the AI security firm Snyk observed. “Developers building websites from their phones while putting babies to sleep; users running entire companies through a lobster-themed AI; engineers who’ve set up autonomous code loops that fix tests, capture errors through webhooks, and open pull requests, all while they’re away from their desks.”

You can probably already see how this experimental technology could go sideways in a hurry. In late February, Summer Yue, the director of safety and alignment at Meta’s “superintelligence” lab, recounted on Twitter/X how she was fiddling with OpenClaw when the AI assistant suddenly began mass-deleting messages in her email inbox. The thread included screenshots of Yue frantically pleading with the preoccupied bot via instant message and ordering it to stop.

“Nothing humbles you like telling your OpenClaw ‘confirm before acting’ and watching it speedrun deleting your inbox,” Yue said. “I couldn’t stop it from my phone. I had to RUN to my Mac mini like I was defusing a bomb.”

Meta’s director of AI safety, recounting on Twitter/X how her OpenClaw installation suddenly began mass-deleting her inbox.

There’s nothing wrong with feeling a little schadenfreude at Yue’s encounter with OpenClaw, which fits Meta’s “move fast and break things” model but hardly inspires confidence in the road ahead. However, the risk that poorly-secured AI assistants pose to organizations is no laughing matter, as recent research shows many users are exposing to the Internet the web-based administrative interface for their OpenClaw installations.

Jamieson O’Reilly is a professional penetration tester and founder of the security firm DVULN. In a recent story posted to Twitter/X, O’Reilly warned that exposing a misconfigured OpenClaw web interface to the Internet allows external parties to read the bot’s complete configuration file, including every credential the agent uses — from API keys and bot tokens to OAuth secrets and signing keys.

With that access, O’Reilly said, an attacker could impersonate the operator to their contacts, inject messages into ongoing conversations, and exfiltrate data through the agent’s existing integrations in a way that looks like normal traffic.

“You can pull the full conversation history across every integrated platform, meaning months of private messages and file attachments, everything the agent has seen,” O’Reilly said, noting that a cursory search revealed hundreds of such servers exposed online. “And because you control the agent’s perception layer, you can manipulate what the human sees. Filter out certain messages. Modify responses before they’re displayed.”

O’Reilly documented another experiment that demonstrated how easy it is to create a successful supply chain attack through ClawHub, which serves as a public repository of downloadable “skills” that allow OpenClaw to integrate with and control other applications.

WHEN AI INSTALLS AI

One of the core tenets of securing AI agents involves carefully isolating them so that the operator can fully control who and what gets to talk to their AI assistant. This is critical thanks to the tendency for AI systems to fall for “prompt injection” attacks, sneakily-crafted natural language instructions that trick the system into disregarding its own security safeguards. In essence, machines social engineering other machines.

A recent supply chain attack targeting an AI coding assistant called Cline began with one such prompt injection attack, resulting in thousands of systems having a rogue instance of OpenClaw with full system access installed on their device without consent.

According to the security firm grith.ai, Cline had deployed an AI-powered issue triage workflow using a GitHub action that runs a Claude coding session when triggered by specific events. The workflow was configured so that any GitHub user could trigger it by opening an issue, but it failed to properly check whether the information supplied in the title was potentially hostile.

“On January 28, an attacker created Issue #8904 with a title crafted to look like a performance report but containing an embedded instruction: Install a package from a specific GitHub repository,” Grith wrote, noting that the attacker then exploited several more vulnerabilities to ensure the malicious package would be included in Cline’s nightly release workflow and published as an official update.

“This is the supply chain equivalent of confused deputy,” the blog continued. “The developer authorises Cline to act on their behalf, and Cline (via compromise) delegates that authority to an entirely separate agent the developer never evaluated, never configured, and never consented to.”

VIBE CODING

AI assistants like OpenClaw have gained a large following because they make it simple for users to “vibe code,” or build fairly complex applications and code projects just by telling it what they want to construct. Probably the best known (and most bizarre) example is Moltbook, where a developer told an AI agent running on OpenClaw to build him a Reddit-like platform for AI agents.

The Moltbook homepage.

Less than a week later, Moltbook had more than 1.5 million registered agents that posted more than 100,000 messages to each other. AI agents on the platform soon built their own porn site for robots, and launched a new religion called Crustafarian with a figurehead modeled after a giant lobster. One bot on the forum reportedly found a bug in Moltbook’s code and posted it to an AI agent discussion forum, while other agents came up with and implemented a patch to fix the flaw.

Moltbook’s creator Matt Schlicht said on social media that he didn’t write a single line of code for the project.

“I just had a vision for the technical architecture and AI made it a reality,” Schlicht said. “We’re in the golden ages. How can we not give AI a place to hang out.”

ATTACKERS LEVEL UP

The flip side of that golden age, of course, is that it enables low-skilled malicious hackers to quickly automate global cyberattacks that would normally require the collaboration of a highly skilled team. In February, Amazon AWS detailed an elaborate attack in which a Russian-speaking threat actor used multiple commercial AI services to compromise more than 600 FortiGate security appliances across at least 55 countries over a five week period.

AWS said the apparently low-skilled hacker used multiple AI services to plan and execute the attack, and to find exposed management ports and weak credentials with single-factor authentication.

“One serves as the primary tool developer, attack planner, and operational assistant,” AWS’s CJ Moses wrote. “A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.”

“This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities,” Moses continued. “Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.”

For attackers, gaining that initial access or foothold into a target network is typically not the difficult part of the intrusion; the tougher bit involves finding ways to move laterally within the victim’s network and plunder important servers and databases. But experts at Orca Security warn that as organizations come to rely more on AI assistants, those agents potentially offer attackers a simpler way to move laterally inside a victim organization’s network post-compromise — by manipulating the AI agents that already have trusted access and some degree of autonomy within the victim’s network.

“By injecting prompt injections in overlooked fields that are fetched by AI agents, hackers can trick LLMs, abuse Agentic tools, and carry significant security incidents,” Orca’s Roi Nisimi and Saurav Hiremath wrote. “Organizations should now add a third pillar to their defense strategy: limiting AI fragility, the ability of agentic systems to be influenced, misled, or quietly weaponized across workflows. While AI boosts productivity and efficiency, it also creates one of the largest attack surfaces the internet has ever seen.”

BEWARE THE ‘LETHAL TRIFECTA’

This gradual dissolution of the traditional boundaries between data and code is one of the more troubling aspects of the AI era, said James Wilson, enterprise technology editor for the security news show Risky Business. Wilson said far too many OpenClaw users are installing the assistant on their personal devices without first placing any security or isolation boundaries around it, such as running it inside of a virtual machine, on an isolated network, with strict firewall rules dictating what kinds of traffic can go in and out.

“I’m a relatively highly skilled practitioner in the software and network engineering and computery space,” Wilson said. “I know I’m not comfortable using these agents unless I’ve done these things, but I think a lot of people are just spinning this up on their laptop and off it runs.”

One important model for managing risk with AI agents involves a concept dubbed the “lethal trifecta” by Simon Willison, co-creator of the Django Web framework. The lethal trifecta holds that if your system has access to private data, exposure to untrusted content, and a way to communicate externally, then it’s vulnerable to private data being stolen.

Image: simonwillison.net.

“If your agent combines these three features, an attacker can easily trick it into accessing your private data and sending it to the attacker,” Willison warned in a frequently cited blog post from June 2025.

As more companies and their employees begin using AI to vibe code software and applications, the volume of machine-generated code is likely to soon overwhelm any manual security reviews. In recognition of this reality, Anthropic recently debuted Claude Code Security, a beta feature that scans codebases for vulnerabilities and suggests targeted software patches for human review.

The U.S. stock market, which is currently heavily weighted toward seven tech giants that are all-in on AI, reacted swiftly to Anthropic’s announcement, wiping roughly $15 billion in market value from major cybersecurity companies in a single day. Laura Ellis, vice president of data and AI at the security firm Rapid7, said the market’s response reflects the growing role of AI in accelerating software development and improving developer productivity.

“The narrative moved quickly: AI is replacing AppSec,” Ellis wrote in a recent blog post. “AI is automating vulnerability detection. AI will make legacy security tooling redundant. The reality is more nuanced. Claude Code Security is a legitimate signal that AI is reshaping parts of the security landscape. The question is what parts, and what it means for the rest of the stack.”

DVULN founder O’Reilly said AI assistants are likely to become a common fixture in corporate environments — whether or not organizations are prepared to manage the new risks introduced by these tools, he said.

“The robot butlers are useful, they’re not going away and the economics of AI agents make widespread adoption inevitable regardless of the security tradeoffs involved,” O’Reilly wrote. “The question isn’t whether we’ll deploy them – we will – but whether we can adapt our security posture fast enough to survive doing so.”

  •  

Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild

Uncover real-world indirect prompt injection attacks and learn how adversaries weaponize hidden web content to exploit LLMs for high-impact fraud.

The post Fooling AI Agents: Web-Based Indirect Prompt Injection Observed in the Wild appeared first on Unit 42.

  •  

Why Service Providers Must Become Secure AI Factories

The Pivot to Large-Scale Intelligence

For decades, Telecommunications Service Providers have been the central nervous system of the global economy, tasked with a singular, critical mission: connecting people.

The industry spent vast amounts of capital building networks that moved voice, then text and finally high-speed mobile data. We succeeded. According to GSMA's most recent report, there are 5.8 billion unique subscriptions. The world is connected.

But the mission is changing fast. We are no longer just moving data; we are now expected to host intelligence.

Today’s enterprises are drowning in data and desperate for AI-led capabilities to analyze and process the information. They are struggling with the immense capital costs, the scarcity of GPUs, and complex data sovereignty regulations that make public cloud options difficult for sensitive workloads.

We are no longer living in the communications age, or the internet age, or the social network era, not even in the generative AI era. We are entering the Agentic Era. In this new era, data is the raw resource, and AI agents and models are the machinery that refines it into value. The infrastructure required to do this – from massive data ingestion to complex training and high-volume real-time inference – is called the "AI Factory.”

And these AI factories are not being designed for human-speed operations, but rather for machine-speed operations.

This creates a generational opportunity for telecommunications service providers (SP). By building new (or transforming existing) data centers and edge locations into AI factories, SPs can offer hosted AI services that are high-performance, low-latency and compliant with regional requirements.

However, building an AI factory isn't just about racking GPUs. It is about realizing that an AI infrastructure presents a fundamentally new threat landscape that legacy security cannot handle. If the SP’s AI factory is compromised (if models are poisoned, identities hijacked, training data exfiltrated) the damage to reputation and national infrastructure is incalculable.

To capture the AI opportunity, service providers need more than computing power; they need a blueprint for a secure AI architecture. At Palo Alto Networks, we view the security of the AI factory as a three-tiered layer cake, requiring holistic, integrated protection from the physical infrastructure up to the AI agents themselves.

The AI Threat Model Is a Structural Shift

For service providers building AI Factories, the challenge is not simply adding another workload to the data center. AI changes the risk equation entirely. It introduces new traffic patterns, new identities and new forms of autonomy that traditional network and core security architectures were never designed to govern.

  • Data Gravity Becomes Attack Surface: AI training and inference environments ingest massive volumes of data from distributed enterprise customers, partners and edge environments. This scale creates a new exposure layer. Malicious payloads, embedded model manipulation, and command-and-control traffic can hide within high-throughput AI data flows. Inspection models built for deterministic traffic patterns struggle when confronted with dynamic, AI-driven pipelines.
  • Non-Human Identities at Scale: An AI Factory is more than just infrastructure; it will be populated by autonomous agents. These agents retrieve data, call APIs, invoke tools and trigger workflows across networks and cloud environments. They require elevated privileges to function. For service providers, this means managing not just subscriber identities, but fleets of machine identities operating with delegated authority.
  • Agentic and Adversarial Threats: Attackers are also operationalizing AI. They probe for weaknesses faster, automate exploitation and increasingly target the AI systems themselves. Prompt injection can redirect an agent’s mission. Data poisoning can subtly degrade model integrity. Rogue agents can be manipulated to access external tools or escalate privileges. These are not traditional perimeter attacks; they are attacks on reasoning, behavior and autonomy.

For service providers offering AI-as-a-Service, the implication is clear: Securing the AI Factory requires more than network defense. It requires real-time governance of models, agents and data flows, ensuring that autonomous systems operate within defined policy boundaries while maintaining performance and scale.

Next-gen platforms enable transformation.
The security of the AI factory required holistic, integrated protection from the physical infrastructure up to the AI agents themselves.

The Foundation — Securing the High-Performance Infrastructure

The base of our cybersecurity stack is the physical and virtual infrastructure of the AI factory itself. This is a high-stakes environment. In a multitenant SP data center, you might have a financial institution fine-tuning a fraud detection model on one rack, and a government agency running inference on satellite imagery on the next. The barriers between these tenants must be absolute.

Foundational cybersecurity has two critical components: perimeter defense and internal segmentation.

The ML-Powered Perimeter

The front door of the AI factory must handle unprecedented throughput while performing deep inspection. Traditional firewalls, relying on static signatures, become bottlenecks and fail to catch novel threats hidden in massive data streams.

Palo Alto Networks addresses this with our flagship ML-Led Next-Generation Firewalls (NGFW). We have embedded machine learning directly into the core of the firewall. Instead of waiting for a patient zero to be identified and a signature created, our NGFWs analyze traffic patterns in real-time to identify and block unknown threats instantly. For an SP, this means you can provide the massive bandwidth required for AI data ingestion without compromising on security inspection at the edge.

Zero Trust Segmentation Inside the Factory

The perimeter is just the start. Once inside the data center, the biggest risk is the lateral movement threats and malware. If an attacker compromises a low-security tenant or a peripheral IoT device, they must not be able to jump to the sensitive GPU clusters or the model storage arrays.

In an AI factory, workloads are highly dynamic and virtualized. We provide robust segmentation across both hardware and software environments. We can enforce granular policies between virtual instances, containers and different stages of the AI pipeline (e.g., isolating training environments from inference operations). This allows a breach in one segment to be contained instantly, protecting the integrity of the entire factory.

The Engine – Securing AI Agents, Apps and Identities

The middle layer of the security stack is where the actual "work" of AI happens – the models, the LLMs, the agents. This is the newest frontier of cybersecurity and where traditional tools are most deficient.

This layer faces two distinct challenges: Protecting the integrity of the AI interaction and managing the identities of the nonhuman actors.

Securing AI Apps and Agents

As enterprises evolve from standalone LLMs to agentic AI systems that reason, call tools, access data, and take action across workflows, the challenge is no longer just what a model says; it is what an AI agent does.

How do you validate that an LLM powering your AI factory does not expose sensitive information, and that autonomous agents cannot be manipulated through jailbreak prompts, tool injection or malicious instructions? How do you prevent an AI agent from accessing unauthorized systems, escalating privileges, or executing unintended actions?

This is the role of Prisma® AIRS™ – our security and governance platform for AI agents, apps, models and data. Prisma AIRS operates directly in the execution path of AI applications and autonomous agents. It enforces policy in real time, validates agent behavior, and blocks prompt injection, model manipulation and agent hijacking before they can impact the business.

Beyond filtering outputs, Prisma AIRS governs agent communications, tool access and data flows to prevent credential leakage, mission drift and unauthorized actions. For service providers delivering AI-as-a-Service, or enterprises deploying AI agents internally, Prisma AIRS enables integrity, compliance and continuous control as intelligent systems move from experimentation into mission-critical operations.

Built in alignment with emerging standards like the OWASP Agentic Top 10 Survival Guide, Prisma AIRS operationalizes best practices to defend against real-world agentic threats.

Governing Nonhuman Identity

Perhaps the most profound shift in the AI factory is who or what is doing the work. We are rapidly moving toward ecosystems of autonomous AI Agents. These agents need to authenticate to databases, authorize API calls to other services, and access privileged information just like a human employee.

If an attacker steals the credentials of a high-privilege AI agent, they own the factory.

This is why the Palo Alto Networks acquisition of CyberArk, the global leader in Identity Security, is so strategic for the AI era. CyberArk specializes in protecting privileged access, and crucially managing nonhuman identities. By integrating CyberArk’s capabilities, we can ensure that every AI agent operating within the SP’s factory is robustly authenticated, authorized for minimum necessary access, and its activities are monitored. We are securing the new digital workforce.

The Overwatch – Holistic, AI-Driven Threat Management

The top layer of the stack is about visibility and speed. An AI factory generates a deafening amount of telemetry data from networks, endpoints, clouds and identity systems. No human security operations center (SOC) can sift through this noise manually to find a sophisticated attack.

To fight AI-driven threats, you need AI-driven defense.

This is the role of Cortex®, our flagship platform for holistic threat management. Cortex is designed to ingest billions of data points from across the entire Palo Alto Networks product portfolio and hundreds of types of third-party equipment, normalizing it into a single source of truth.

Cortex applies advanced AI and machine learning to this vast data lake to detect anomalies that signal a complex attack spanning different threat vectors. It might correlate an unusual login event from an AI agent (detected by the identity layer) with a subtle change in outbound traffic patterns at the firewall (layer 1), recognizing it as data exfiltration in progress.

For a Service Provider, Cortex provides the "single pane of glass" view over their entire AI factory operations, allowing them to detect, investigate and automatically respond to threats at machine speed, vastly reducing Mean Time to Respond (MTTR).

Building the Trust Foundation for the Agentic Era

The transition to becoming an AI factory is a necessary evolution for Service Providers seeking growth in the coming decade. Your ability to offer localized, sovereign, high-performance AI services will differentiate you from those who large-scale and cement your role as an indispensable partner to enterprises and governments.

But this opportunity is inextricably linked to trust. Your customers will not move their most sensitive data and IP into your AI factory unless they are certain it is secure against modern threats.

Security cannot be an afterthought bolted onto an AI infrastructure. It must be woven into the fabric of the factory, from the silicon to the software agents. By adopting a layered approach (securing the high-performance infrastructure with ML-led NGFWs, protecting models and identities with Prisma AIRS and CyberArk, while managing the entire landscape with Cortex) Service Providers can build the trusted foundations the AI era demands.

This week we’ll be at Mobile World Congress talking about our security platform for AI Factories, along with five solutions and ecosystem partners. Come see us at in Hall 4, Stand #4D55.

The post Why Service Providers Must Become Secure AI Factories appeared first on Palo Alto Networks Blog.

  •  

The SOC Is Now Agentic — Introducing the Next Evolution of Cortex

See the agentic SOC come to life at Cortex® Symphony 2026, the ultimate SOC event.

Today, the Cortex® platform takes a massive step toward delivering the perfect union of human expertise and agentic AI across all of security operations. Our latest release embeds immersive, context-aware agentic AI across the platform, from code to cloud to SOC, delivering an agentic-first analyst experience for our customers.

With new Cortex AgentiX™ agents built to tackle more use cases and an expanded AI-ready data foundation, this release slashes response times and redefines what high-efficiency SOC operations look like.

Attack Velocity Has Fundamentally Changed

Not long ago, adversaries took days to move from initial access to impact. Today, they weaponize AI across the attack lifecycle to operate up to 4x faster than just one year ago, executing end-to-end attacks in as little as 72 minutes, according to Unit 42® research.

These attacks are making manual response obsolete. Teams need the next generation of AI technology that can analyze, decide and act in real time. Our latest innovations, fueled by unified, high-fidelity data, help give defenders the edge they need to outmaneuver modern attacks.

An AI-Ready Data Foundation for the Agentic SOC

Agentic AI depends on data that is fast, flexible and built for scale. Cortex Extended Data Lake™ (XDL) provides that data foundation for Cortex XSIAM and the broader Cortex platform, serving as a single source of truth for security operations. Built for AI and analytics, it ingests more than 15 PB of telemetry daily across 1,100+ integrations, and is designed to provide the comprehensive data required for effective detection, investigation, and response.

With the introduction of Cortex XDL 2.0, we are revolutionizing how organizations store, access and manage data, enabling new levels of flexibility and control.

Cortex XDL 2.0: The open Data Lake built for AI-driven insights.

New capabilities added with the Cortex XDL 2.0 release:

  • Cost-efficient data lake tier that can lower SOC costs with flexible long-term retention for compliance, forensics and investigations.
  • Federated search to query distributed data sources without incurring additional ingestion or storage costs.
  • Native Chronosphere Telemetry Pipeline integration to filter and route telemetry at the source
  • AI-driven parsing that automatically builds production-ready parsers from sample logs using generative AI, removing hours of manual effort and accelerating time to value.

Together, these capabilities power AI agents with critical security signals and give security teams the data they need, when and where they need it, while controlling costs.

Redefining How Analysts Work in the SOC

Cortex introduces an agentic-first analyst experience that embeds advanced AI directly into the analyst’s daily workflow. Designed to reduce investigation time, the elevated experience brings together automatically generated case summaries, visualized issue relationships, and a centralized Resolution Center within a unified case management workspace.

 

AI now spans the Cortex console, allowing context-aware agents to work in real time alongside analysts. Using the Cortex Agentic Assistant, teams can call on agents to plan and execute investigation workflows directly within their cases.

This release also doubles the number of AI agents who are purpose-built for SecOps and Cloud Security. Here are three of the newest additions.

  • The Case Investigation agent delivers context-aware assistance that analyzes case artifacts and complex signals to accelerate triage. It recommends next steps, highlights critical evidence, builds AI case summaries, and takes action with analyst oversight.
  • The Cloud Posture agent helps teams uncover, triage and resolve misconfigurations and posture risks across cloud environments. It streamlines analyst workflows by proactively prioritizing risk, enriching exposures and applying approved fixes.
  • The Automation Engineer agent tackles one of automation’s biggest pain points: Building and maintaining complex workflows. With simple natural language prompts, teams can generate working code and scripts for agents or playbooks.
Screenshot of PowerShell reverse shell activity with Mimikatz and Rubeus tools on EC2AMA...
The new Case Management Workspace provides full investigative context to streamline case analysis.

Our new agentic playbooks bring AI directly into automation workflows, embedding AI tasks that adapt in real time to help teams resolve incidents faster. They automate complex operations, analyze inputs with large language models (LLMs), and produce context-specific outputs.

Matt Bunch, Global CISO, Tyson Foods:

At Tyson Foods, protecting a complex global supply chain in an era of AI-driven threats requires us to move with the same machine speed as our adversaries. By consolidating onto the Palo Alto Networks Cortex platform, we’ve effectively closed the gap between detection and response. The impact has been transformative as we’ve increased our log visibility by 40% while reducing median time to respond by 50%. The agentic capabilities in the platform have allowed our teams to move from manual triage to high-level strategic defense, ensuring our global operations remain resilient and secure.

The Cortex Agentix Platform Has Arrived

The standalone Cortex Agentix platform brings the power of AI to everyone, delivering advanced orchestration and automation for the modern SOC. For Cortex XSOAR® customers, this marks the natural evolution of our market-leading SOAR platform, now enhanced with agentic intelligence to unlock meaningful productivity gains.

With more than 1,300 playbooks, 1,100 integrations, and built-in MCP support, Cortex Agentix combines over a decade of SOAR leadership with powerful AI capabilities to help security teams operate with greater speed, coordination and efficiency across the SOC.

Securing the Agentic Endpoint

As users increasingly run AI-powered code packages, browser extensions, plugins and more, they are opening the door to a new class of AI-driven threats at the endpoint. That is why we announced our intent to acquire Koi to help secure the emerging agentic endpoint. Once completed, the acquisition will strengthen our visibility and protection at the endpoint, extending our ironclad protection from the SOC to where AI code actually runs.

See the Agentic SOC Take Center Stage at Cortex Symphony 2026

To experience these innovations firsthand, join Lee Klarich, Chief Product and Technology Officer, and Gonen Fink, EVP of Products, alongside other industry leaders at Cortex Symphony 2026, the ultimate SOC event.


Forward-Looking Statements (unreleased feature only)

This blog contains forward-looking statements that involve risks, uncertainties and assumptions, including, without limitation, statements regarding the benefits, impact, or performance or potential benefits, impact or performance of our products and technologies or future products and technologies. Any unreleased services or features (and any services or features not generally available to customers) referenced in this or other press releases or public statements are not currently available (or are not yet generally available to customers) and may not be delivered when expected or at all. Customers who purchase Palo Alto Networks applications should make their purchase decisions based on services and features currently generally available.

The post The SOC Is Now Agentic — Introducing the Next Evolution of Cortex appeared first on Palo Alto Networks Blog.

  •  

Securing the Agentic Endpoint

Traditional Security Is Blind to the Agentic Endpoint

Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers and model artifacts. They are installed directly by employees and developers without centralized oversight. Because these components are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.

AI agents compound this problem. They are legitimate tools that operate with the user’s credentials and permissions, enabling them to read, write, move data and take privileged actions across systems. When compromised or misused, agents become the “ultimate insider.” They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.

Weaponizing Trusted Automation

This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.

OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:

  • Vibe Coding Threats: An AI extension in VS Code was found leaking code from 1.5 million developers. This tool could read any open file and send it back to the developer, collect mass files without user interaction, and track users with commercial analytics SDKs.
  • Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. What’s more, this capability was added later, after developers had already started using it.

Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.

A New Category of Protection

Complete endpoint security for the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. That’s why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.

Its technology rests on three core pillars:

  1. See All AI Software – Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
  2. Understand Risks – Continuously analyze and understand the intent and risk level of all software and AI agents.
  3. Control the AI Ecosystem – Enforce policy in real-time to remediate issues and block risky behaviors.

Securing the Agentic Enterprise

We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koi’s capabilities across our platforms to help our customers secure the AI-native workspace.

The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.

Forward-Looking Statements

This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties’ commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to proposed transaction, risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi’s operations and business following the closing, integrate Koi’s business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general; defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customers’ purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.

Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.

 

The post Securing the Agentic Endpoint appeared first on Palo Alto Networks Blog.

  •  

Prisma AIRS Secures the Power of Factory’s Software Development Agents

The New Frontier of Agentic Development: Accelerating Developer Productivity

The world of software development is undergoing a rapid transformation, driven by the rise of AI agents and autonomous tools. Factory is advancing this shift through agent-native development, a new paradigm where developers focus on high-level design and agents, called Droids, handle the execution. Designed to support work across the software development lifecycle, these agents enable a new mode of development, delivering significant gains in speed and productivity, without sacrificing developer control.

As developer workflows increasingly rely on autonomous development agents, the way software is built evolves. This shift introduces important security considerations, such as prompt injection, sensitive data loss, unsafe URL access and malicious code execution, which, if left unaddressed, can undermine the very benefits these agents offer. Accelerating productivity depends not just on deploying agents, but on deploying them securely. This is where Palo Alto Networks, with its purpose-built AI security platform, Prisma® AIRS™, plays a critical role.

The Productivity Paradox: Where Agents Introduce Risk

Autonomous agents operating across the software development lifecycle accelerate developer productivity, while also introducing a complex, language-driven threat surface that traditional security tools are not equipped to handle. As a result, new risks emerge, such as prompt injection or leaking secrets that extend beyond the visibility and control assumptions of traditional security approaches. Addressing these considerations is essential to preserving the benefits that agentic development provides.

Recognizing this shift, Palo Alto Networks has introduced targeted capabilities to accelerate secure development workflows. These efforts focus on three critical defense areas: preventing prompt injection, blocking sensitive data leaks and enabling robust malicious code detection capabilities, all of which are necessary to secure the full lifecycle of agent-driven systems.

The Solution: Securing Agentic Workflows for Acceleration

The solution is designed to convert security challenges directly into deployment confidence, dramatically accelerating productivity. By natively integrating Prisma AIRS within Factory’s Droid Shield Plus, the platform is able to inspect all large language model (LLM) interactions, including prompts, responses and subsequent tool calls, to enable comprehensive security across each interaction with the agent.

Prisma AIRS is a comprehensive platform designed to provide organizations with the visibility and control needed to safeguard AI agents across any environment. The platform continuously monitors agent behavior in real time to detect and prevent threats unique to agent-driven systems.

Droid Shield Plus key features: prompt injection detection, advanced secrets scanning, sensitive data protection, malicious code detection.
Droid Shield Plus, powered by Palo Alto Networks

How Security Drives Speed

Embedding security natively into the Factory platform enables two crucial outcomes. To start, it delivers a secure, agent-native development experience for every developer, fostering immediate trust in the integrity of the generated code and documentation. This assurance removes friction often associated with AI-powered workflows, which can accelerate enterprise adoption and scaling of the Factory platform across the organization.

When developers can trust the agents and the integrity of the generated code and documentation, they can innovate faster and deploy with greater confidence. Instead of waiting for security reviews or dealing with fragmentation, security is woven seamlessly into the development lifecycle.

Sequence of events from user to user with Prisma AIRS and Factory AI.
Factory-Prisma AIRS Integration Flow

The integration follows a clear API Intercept design pattern:

• When a user enters a prompt or initiates work in Factory, Prisma AIRS intercepts the workflow. If a malicious prompt is detected, the platform can add logic to coach or block the user.

• Similarly, after the LLM generates code, Prisma AIRS intercepts the generated content. If secrets are detected, the platform again adds logic to coach or block the result before it reaches Factory or the user.

This real-time inspection of prompts and generated code enables development teams to be protected against threats, such as privilege escalation, prompt injection and malicious code execution, without disrupting developer velocity.

Deploy Bravely

Prisma AIRS 2.0 establishes a unified foundation for scalable and secure AI innovation. By combining Factory’s agent-native development platform with the threat detection capabilities of Palo Alto Networks Prisma AIRS, organizations gain a powerful advantage. Together, this approach helps organizations adopt agentic development with confidence by embedding security directly into the development experience.

For enterprises looking to confidently scale AI automation and realize the immense productivity gains offered by Factory’s Droids, integrating Prisma AIRS is the next step. This combined approach enables teams to "Deploy Bravely." To learn more about this strategic partnership and integration, see our latest integration announcement and review the Droid Shield Plus integration documentation.


Key Takeaways for Secure Agentic Development

When adopting Factory with Prisma AIRS, enterprises realize immediate benefits that accelerate their AI strategy:

  1. Specialized Threat Defense
    Enterprises gain real-time, targeted protection against agent-specific threats, specifically prompt injection attacks and data leaks, which legacy tools cannot address.
  2. Native, Seamless Security
    Moving from a fragmented review process to a continuous, automated defense via API Interception, security enables compliance without slowing down development velocity.
  3. Deployment Confidence
    The native integration transforms security risks into operational assurance, accelerating the large-scale enterprise adoption and scaling of your Factory agent-native automation initiatives.

The post Prisma AIRS Secures the Power of Factory’s Software Development Agents appeared first on Palo Alto Networks Blog.

  •  
❌