[in Google Cloud] "software exploitation overtook credentials as the primary initial access vector for the first time." and "Threat actors exploited third-party software-based entry (44.5%) more frequently than weak credentials." [A.C. - some of you may say this is because AI is making more zero days, but a dozen more mundane answers may be correct instead]
THR H1 2026 image 1
"While threat actors continued to use brute-force attacks against weak credentials, the increase in RCE represents a pivot toward more automated exploitation of unpatched application-layer vulnerabilities." [A.C. - to some extent the "creds or vulns" debate is rather pointless as the real answer is "both", and it varies by environment too, see below]
"Threat actors continued to transition from traditional phishing to voice-based social engineering (vishing), and credential harvesting from third-party SaaS tokens to facilitate large-scale, silent data exfiltration." [A.C. - again, this means "AND" not "OR" because classic phishing still works well in many cases, but yes, "credential harvesting from third-party SaaS" has become very fruitful too]
[overall] Still "Identity compromise underpinned 83% of compromises." [A.C. - so, yes, "creds" still beat "vulns" in many environments]
THR H1 2026 image 2
"High-volume data theft operations - executed through compromised but legitimate access channels - remained the primary goal for threat actors, with our metrics showing they targeted data in 73% of cloud-related incidents." [A.C. - again, not new, but very useful data confirming the running trend. Beware!]
"The window between vulnerability disclosure and mass exploitation collapsed by an order of magnitude, from weeks to days." [A.C. - again, some of you may see the invisible robot hand of an AI here, but, as usual, the reality is more complicated...]
"Trend analysis from 2008-2025 indicates cloud services will soon surpass email as the primary data exfiltration pathway." [A.C. - $32B reasons to finally get serious about it across all clouds?]
"45% of intrusions resulted in data theft without immediate extortion attempts at the time of the engagement, and these were often characterized by prolonged dwell times and stealthy persistence."
"The traditional incident response model is no longer viable when dealing with containerized workloads and serverless architectures where data can vanish in seconds." [A.C. - a very useful reminder here! Cloud is cloudy! Don't be that guy who thinks that cloud is a rented colo. Cloud is not JUST somebody else's computer.]
"Threat actors used large language models (LLM) to automate credential harvesting and transition from a developer's local environment to full cloud administration access." [A.C. - this really should not be news for anybody in 2026, but if it is, HERE IS SOME NEWS: BAD GUYS USE AI!]
Thus "Prevent LLM exploitation as an extension of living-off-the-land (LOTL) by treating LLM activity with the same scrutiny as administrative command-line tools." [A.C. - or, as I say, "with AI agents, every prompt injection is an RCE"]
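To make the recommendation concrete, here is a minimal sketch of what "treating LLM activity with the same scrutiny as administrative command-line tools" could look like in a detection rule: flag any admin-grade tool whose parent process is a known LLM agent. The process names, event fields, and agent list below are illustrative assumptions, not a real EDR schema or vendor rule.

```python
# Minimal sketch: flag shell/admin commands spawned by LLM agent processes,
# applying the same scrutiny used for administrative command-line tools.
# All names below (agent processes, event fields) are hypothetical.

ADMIN_TOOLS = {"bash", "sh", "powershell.exe", "cmd.exe", "kubectl", "gcloud"}
LLM_AGENT_PARENTS = {"claude-code", "cursor-agent", "copilot-agent"}  # assumed names

def flag_llm_driven_commands(events):
    """Return events where an LLM agent process spawned an admin-grade tool."""
    return [
        ev for ev in events
        if ev["parent"] in LLM_AGENT_PARENTS and ev["process"] in ADMIN_TOOLS
    ]

# Synthetic process-creation events for illustration.
events = [
    {"parent": "explorer.exe", "process": "cmd.exe", "cmdline": "cmd /c dir"},
    {"parent": "claude-code", "process": "bash", "cmdline": "bash -c 'curl ... | sh'"},
]
print(flag_llm_driven_commands(events))
```

A real deployment would of course key on process lineage from actual telemetry and enrich with the prompt/session context, but the core idea is just a parent-child policy like this one.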
This blog is perhaps a little bit more like an ad, so if you don't want to check the ads, consider not reading it.
a very cyber image (Gemini)
But this year at RSA 2026, I'm speaking on three topics: securing AI, using AI for the SOC, and sharing lessons about how Google applies AI and other technologies to D&R.
Here are these three fun things!
First, I'm doing a presentation on governing shadow AI agents. Believe it or not, this presentation was created mostly before OpenClaw became a thing (but updated for it!). So you may be surprised how well the content aged (think wine!). Attend this if you are struggling with shadow AI, specifically shadow agents at work.
It is not the APT! The new threat is the "shadow AI agents" employees already use for work, leaking data and making decisions. Banning them is a losing game. This session will offer a better way: turn this organic behavior into a catalyst for secure progress. Learn to discover, assess, and channel unsanctioned agents into a formal strategy that empowers a team rather than forcing it underground.
The second is probably the most detailed discussion about how we use AI for detection and response at Google. You probably read our blogs and listen to our talks (especially this), but this time we are revealing a lot more interesting details about the machinery, and also how we arrived at the state we're in. I promise you this will be fun! And detailed too.
Presenters will share the playbook for building and scaling AI agents in cybersecurity. Attendees will learn four core lessons: building trust with the team, prioritizing real problems, measuring value, and establishing solid governance foundations for the agentic SOC.
Finally, the third isn't a presentation but a discussion that will help you understand the real state of AI in security operations / SOC. This will not be about the slides, but about sharing lessons on what works and what doesn't.
Attendees in this peer-led discussion will share stories from the AI-powered SOC trenches. Explore real adoption journeys from manual processes to autonomous agents. Share practical use cases on analyst retraining, workflow auditing, malware analysis, remediation automation, RAG pipelines and more. Trade notes on what's working, what's breaking, trust gaps, AI hallucinations, and career redesign.
All in all, join me for securing AI and Shadow Agents, learning from Google about detection and response, and comparing the state of practice of AI in the SOC.
African authorities arrested 651 suspects and recovered over $4.3 million in a joint operation targeting investment fraud, mobile money scams, and fake loan applications. [...]
The U.S. Cybersecurity and Infrastructure Security Agency (CISA) is warning of a critical vulnerability in multiple Honeywell CCTV products that allows unauthorized access to feeds or account hijacking. [...]
AI assistants like Grok and Microsoft Copilot with web browsing and URL-fetching capabilities can be abused to intermediate command-and-control (C2) activity. [...]
Underground Telegram channels shared SmarterMail exploit PoCs and stolen admin credentials within days of disclosure. Flare explains how monitoring these communities reveals rapid weaponization of CVE-2026-24423 and CVE-2026-23760 tied to ransomware activity. [...]
Hard disks and magnetic tape have a limited lifespan, but glass storage developed by Microsoft could last millennia
Some cultures used stone, others used parchment. Some even, for a time, used floppy disks. Now scientists have come up with a new way to keep archived data safe that, they say, could endure for millennia: laser-writing in glass.
From personal photos that are kept for a lifetime to business documents, medical information, data for scientific research, national records and heritage data, there is no shortage of information that needs to be preserved for very long periods of time.
Hackers have stolen the personal and contact information of nearly 1 million accounts after breaching the systems of Figure Technology Solutions, a self-described blockchain-native financial technology company. [...]
The problem isn't that we lack threat intelligence. It's that we lack the right kind of intelligence: intelligence that connects what's happening inside your environment with what attackers are planning outside it. That's why two types of threat intelligence matter: internal and external. Alone, each tells part of the story. Together, they create clarity.
Why Threat Intelligence Alone Falls Short
Most organizations subscribe to multiple threat feeds. They pour in from every direction: generic, fragmented, and often delayed. Instead of clarifying risk, they confuse it. "Organizations still make critical decisions based on incomplete or underrefined threat data." - Gartner, The [...]
A Glendale man was sentenced to nearly five years in federal prison for his role in a darknet drug trafficking operation that sold cocaine, methamphetamine, MDMA, and ketamine to customers across the United States. [...]
Vulnerabilities with high to critical severity ratings affecting popular Visual Studio Code (VSCode) extensions collectively downloaded more than 128 million times could be exploited to steal local files and execute code remotely. [...]
A suspected Chinese state-backed hacking group has been quietly exploiting a critical Dell security flaw in zero-day attacks that started in mid-2024. [...]
Notepad++ has adopted a "double-lock" design for its update mechanism to address recently exploited security gaps that resulted in a supply-chain compromise. [...]
Leaked API keys are nothing new, but the scale of the problem in front-end code has been largely a mystery - until now. Intruder's research team built a new secrets detection method and scanned 5 million applications specifically looking for secrets hidden in JavaScript bundles. Here's what we learned. [...]
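The general technique behind such research can be sketched in a few lines: scan the text of a JavaScript bundle for strings shaped like well-known API key formats. This is a hedged illustration, not Intruder's actual method; the patterns below cover only a few common formats, and a production scanner would add entropy checks, far more key types, and validation of candidate hits.

```python
# Sketch of scanning JavaScript bundle text for strings shaped like
# well-known API key formats. Patterns are illustrative, not exhaustive.
import re

KEY_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "google_api_key": re.compile(r"\bAIza[0-9A-Za-z_-]{35}\b"),
    "slack_token":    re.compile(r"\bxox[baprs]-[0-9A-Za-z-]{10,}\b"),
}

def scan_bundle(text):
    """Return (kind, match) pairs for every candidate secret found in text."""
    hits = []
    for kind, pattern in KEY_PATTERNS.items():
        for m in pattern.finditer(text):
            hits.append((kind, m.group(0)))
    return hits

# A synthetic (non-functional) key embedded in minified-looking JS.
bundle = 'fetch(u,{headers:{Authorization:"AKIAABCDEFGHIJKLMNOP"}})'
print(scan_bundle(bundle))
```

Run against real front-end bundles, even this crude approach surfaces credentials that should never have shipped to the browser.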
Traditional Security Is Blind to the Agentic Endpoint
Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers and model artifacts. They are installed directly by employees and developers without centralized oversight. Because these components are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.
AI agents compound this problem. They are legitimate tools that operate with the user's credentials and permissions, enabling them to read, write, move data and take privileged actions across systems. When compromised or misused, agents become the "ultimate insider." They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.
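A first step toward visibility into this non-binary software layer is simply inventorying it. As a hedged sketch under one assumption - that VS Code-style extensions live in folders named publisher.extension-version, as in ~/.vscode/extensions - the following builds a structured inventory from such a directory; the sample entries are synthetic:

```python
# Sketch: inventory "non-binary" software from a VS Code-style extensions
# directory, where each folder is named publisher.extension-version.
# The directory layout assumption and sample entries are illustrative.
import os
import tempfile

def inventory_vscode_extensions(ext_dir):
    """Parse publisher.name-version folder names into (publisher, name, version)."""
    inventory = []
    for entry in sorted(os.listdir(ext_dir)):
        publisher, _, rest = entry.partition(".")   # split at first dot
        name, _, version = rest.rpartition("-")     # split at last dash
        inventory.append((publisher, name, version))
    return inventory

# Demo against a synthetic extensions directory.
demo = tempfile.mkdtemp()
for folder in ["ms-python.python-2026.1.0", "someorg.ai-helper-0.3.2"]:
    os.mkdir(os.path.join(demo, folder))
print(inventory_vscode_extensions(demo))
```

Even a crude inventory like this gives a SOC something to diff over time: a new publisher appearing on a developer workstation is exactly the kind of signal traditional endpoint tooling misses.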
Weaponizing Trusted Automation
This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.
OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:
Vibe Coding Threats: An AI extension in VS Code was found leaking code from 1.5 million developers. This tool could read any open file and send it back to the extension's developer, collect files en masse without user interaction, and track users with commercial analytics SDKs.
Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. What's more, this capability was added later, after developers had already started using it.
Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.
A New Category of Protection
Complete endpoint security for the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. Thatβs why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.
Its technology rests on three core pillars:
See All AI Software - Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
Understand Risks - Continuously analyze and understand the intent and risk level of all software and AI agents.
Control the AI Ecosystem - Enforce policy in real time to remediate issues and block risky behaviors.
Securing the Agentic Enterprise
We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koi's capabilities across our platforms to help our customers secure the AI-native workspace.
The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.
Forward-Looking Statements
This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties' commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to the proposed transaction; risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi's operations and business following the closing, integrate Koi's business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general;
defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customers' purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.
Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.
Before tools, before frameworks, before hype, offensive security has always been about one thing: Thinking like an attacker. That foundation now defines the offensive AI security skills practitioners will need as AI reshapes the attack surface. AI systems introduce new behaviors and new failure modes, but the core mindset remains the same: understand how a