
Google Cloud Security Threat Horizons Report #13 (H1 2026) Is Out!

This is my completely informal, uncertified, unreviewed and otherwise completely unofficial blog inspired by my reading of our next Cloud Threat Horizons Report, #13 (full version, no info to enter!) that we just released (the official blog for #1 report, my unofficial blogs for #2, #3, #4, #5, #6, #7, #8, #9, #10, #11 and #12).

My favorite quotes from the report follow below:

  • [in Google Cloud] “software exploitation overtook credentials as the primary initial access vector for the first time.” and “Threat actors exploited third-party software-based entry (44.5%) more frequently than weak credentials.” [A.C. — some of you may say this is because AI is making more zero days, but a dozen more mundane answers may be correct instead]
THR H1 2026 image 1
  • “While threat actors continued to use brute-force attacks against weak credentials, the increase in RCE represents a pivot toward more automated exploitation of unpatched application-layer vulnerabilities.” [A.C. — to some extent the “creds or vulns” debate is rather pointless as the real answer is “both”, and it varies by environment too, see below]
  • “Threat actors continued to transition from traditional phishing to voice-based social engineering (vishing), and credential harvesting from third-party SaaS tokens to facilitate large-scale, silent data exfiltration.” [A.C. — again, this means “AND” not “OR” because classic phishing still works well in many cases, but yes, “credential harvesting from third-party SaaS” has become very fruitful too]
  • [overall] Still “Identity compromise underpinned 83% of compromises.” [A.C. — so, yes, “creds” still beat “vulns” in many environments]
THR H1 2026 image 2
  • “High-volume data theft operations — executed through compromised but legitimate access channels — remained the primary goal for threat actors, with our metrics showing they targeted data in 73% of cloud-related incidents.” [A.C. — again, not new, but very useful data confirming the running trend. Beware!]
  • “The window between vulnerability disclosure and mass exploitation collapsed by an order of magnitude, from weeks to days.” [A.C. — again, some of you may see the invisible robot hand of an AI here, but, as usual, the reality is more complicated…]
  • “Trend analysis from 2008–2025 indicates cloud services will soon surpass email as the primary data exfiltration pathway.” [A.C. — $32B reasons to finally get serious about it across all clouds?]
  • “45% of intrusions resulted in data theft without immediate extortion attempts at the time of the engagement, and these were often characterized by prolonged dwell times and stealthy persistence.”
  • “The traditional incident response model is no longer viable when dealing with containerized workloads and serverless architectures where data can vanish in seconds.” [A.C. — a very useful reminder here! Cloud is cloudy! Don’t be that guy who thinks that cloud is a rented colo. Cloud is not JUST somebody else’s computer.]
  • “Threat actors used large language models (LLM) to automate credential harvesting and transition from a developer’s local environment to full cloud administration access.” [A.C. — this really should not be news for anybody in 2026, but if it is, HERE IS SOME NEWS: BAD GUYS USE AI!]
  • Thus “Prevent LLM exploitation as an extension of living-off-the-land (LOTL) by treating LLM activity with the same scrutiny as administrative command-line tools.” [A.C. — or, as I say, “with AI agents, every prompt injection is an RCE”; a rough sketch of that kind of scrutiny follows below]
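
To make that last point a bit more concrete, here is a minimal, purely illustrative sketch (mine, not from the report) of what “treating LLM activity with the same scrutiny as administrative command-line tools” could look like in a detection pipeline. The tool names, event fields, and sensitive paths below are assumptions for illustration only.

```python
# Minimal sketch: apply the same scrutiny to LLM/agent tooling as to admin CLI tools.
# Tool names, event fields, and paths are illustrative assumptions, not from the report.

SCRUTINIZED_BINARIES = {
    # classic admin / LOTL tooling
    "powershell.exe", "wmic.exe", "ssh", "kubectl", "gcloud",
    # hypothetical LLM / agent CLIs, watched with the same weight
    "llm-cli", "agent-runner", "mcp-server",
}

SENSITIVE_PATH_HINTS = (".aws/credentials", ".config/gcloud", ".ssh/", ".env")


def flag_event(event: dict) -> list[str]:
    """Return reasons an execution event deserves analyst attention."""
    reasons = []
    binary = event.get("process", "").lower()
    if binary in SCRUTINIZED_BINARIES:
        reasons.append(f"scrutinized tool executed: {binary}")
        for path in event.get("files_read", []):
            if any(hint in path for hint in SENSITIVE_PATH_HINTS):
                reasons.append(f"{binary} read sensitive path: {path}")
        if event.get("spawned_shell"):
            reasons.append(f"{binary} spawned an interactive shell")
    return reasons


if __name__ == "__main__":
    sample = {
        "process": "agent-runner",
        "files_read": ["/home/dev/.aws/credentials"],
        "spawned_shell": True,
    }
    for reason in flag_event(sample):
        print("ALERT:", reason)
```

The point is not the specific names but the symmetry: the same playbook you already run for PowerShell or kubectl abuse applies just as well to agent runtimes and LLM CLIs.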

Now, go and read the CTHR 13 report!

Google Cloud Security Threat Horizons Report #13 (H1 2026) Is Out! was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.


My Really Fun RSA 2026 Presentations!

This blog is perhaps a little bit more like an ad, so if you don’t want to check the ads, consider not reading it.

a very cyber image (Gemini)

But this year at RSA 2026, I’m speaking on three topics: securing AI, using AI for SOC, and sharing lessons about how Google applies AI and other technologies to D&R.

Here are the three fun things!

First, I’m doing a presentation on governing shadow AI agents. Believe it or not, this presentation was created mostly before OpenClaw became a thing (but updated for it!). So you may be surprised how well the content aged (think wine!). Attend this if you are struggling with shadow AI, specifically shadow agents at work.

Shadow Agents: A Pragmatist’s Guide to Governing Unsanctioned AI — [STR-W08]

  • Wednesday, Mar 25, 1:15 PM — 2:05 PM PDT

It is not the APT! The new threat is the “shadow AI agents” employees already use for work, leaking data and making decisions. Banning them is a losing game. This session will offer a better way: turn this organic behavior into a catalyst for secure progress. Learn to discover, assess, and channel unsanctioned agents into a formal strategy that empowers a team rather than forcing it underground.

The second is probably the most detailed discussion about how we use AI for detection and response at Google. You probably read our blogs and listen to our talks (especially this), but this time we are revealing a lot more interesting details about the machinery and also how we arrived at the state we’re in. I promise you this will be fun! And detailed too.

This Is How We Do It: Building AI Agents for Cybersecurity and Defense — [PART3-M07]

  • Monday, Mar 23, 2:20 PM — 3:10 PM PDT

Presenters will share the playbook for building and scaling AI agents in cybersecurity. Attendees will learn four core lessons: building trust with the team, prioritizing real problems, measuring value, and establishing solid governance foundations for the agentic SOC.

Finally, the third isn’t a presentation but a discussion that will help you understand the real state of AI in security operations / SOC. It will not be about the slides, but about sharing lessons on what works and what doesn’t.

AI in SecOps: Sharing Lessons Learned for Adoption Maturity — [CXN-R05]

  • Thursday, Mar 26, 12:20 PM — 1:10 PM PDT

Attendees in this peer-led discussion will share stories from the AI-powered SOC trenches. Explore real adoption journeys from manual processes to autonomous agents. Share practical use cases on analyst retraining, workflow auditing, malware analysis, remediation automation, RAG pipelines and more. Trade notes on what’s working, what’s breaking, trust gaps, AI hallucinations, and career redesign.

All in all, join me for securing AI and Shadow Agents, learning from Google about detection and response, and comparing the state of practice of AI in the SOC.

See you there!

P.S. Yes, we will also be podcasting from the show.

Related:

RSA 2025: AI’s Promise vs. Security’s Past — A Reality Check


My Really Fun RSA 2026 Presentations! was originally published in Anton on Security on Medium, where people are continuing the conversation by highlighting and responding to this story.


Stone, parchment or laser-written glass? Scientists find new way to preserve data

Hard disks and magnetic tape have a limited lifespan, but glass storage developed by Microsoft could last millennia

Some cultures used stone, others used parchment. Some even, for a time, used floppy disks. Now scientists have come up with a new way to keep archived data safe that, they say, could endure for millennia: laser-writing in glass.

From personal photos that are kept for a lifetime to business documents, medical information, data for scientific research, national records and heritage data, there is no shortage of information that needs to be preserved for very long periods of time.

Continue reading...

© Photograph: Tetra Images/Erik Isakson/Getty Images


Two Types of Threat Intelligence That Make Security Work

The problem isn’t that we lack threat intelligence. It’s that we lack the right kind of intelligence, intelligence that connects what’s happening inside your environment with what attackers are planning outside it. That’s why two types of threat intelligence matter: internal and external. Alone, each tells part of the story. Together, they create clarity.

Why Threat Intelligence Alone Falls Short

Most organizations subscribe to multiple threat feeds. They pour in from every direction: generic, fragmented, and often delayed. Instead of clarifying risk, they confuse it. “Organizations still make critical decisions based on incomplete or underrefined threat data.” — Gartner, The […]
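
To make the “internal plus external” argument concrete, here is a minimal sketch (my illustration, not from the Check Point post) of the basic correlation it implies: indicators from an external feed become far more actionable when they also show up in your own telemetry. All indicators, sources, and field names below are made up for illustration.

```python
# Minimal sketch of combining internal and external threat intelligence:
# indicators from an external feed matter most when they also appear in
# your own telemetry. All data and field names are illustrative.

# Indicators observed inside the environment (e.g., exported from a SIEM).
internal_sightings = {
    "203.0.113.7": {"source": "proxy logs", "hits": 42},
    "198.51.100.9": {"source": "dns logs", "hits": 3},
}

# Indicators from an external feed, with reported campaign context.
external_feed = {
    "203.0.113.7": "credential-harvesting infrastructure",
    "192.0.2.55": "scanning infrastructure",
}

# The overlap is where external context meets internal evidence.
prioritized = {
    ioc: {**internal_sightings[ioc], "context": external_feed[ioc]}
    for ioc in internal_sightings.keys() & external_feed.keys()
}

for ioc, detail in prioritized.items():
    print(f"{ioc}: {detail['hits']} internal hits ({detail['source']}); feed context: {detail['context']}")
```

Real pipelines do this at scale, with scoring and enrichment, but the core move is exactly this intersection of internal evidence and external context.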

The post Two Types of Threat Intelligence That Make Security Work appeared first on Check Point Blog.


Securing the Agentic Endpoint

Traditional Security Is Blind to the Agentic Endpoint

Modern endpoints are no longer defined only by executables. Increasingly, endpoint behavior is shaped by non-binary software, such as code packages, browser extensions, IDE plugins, scripts, local servers (including MCP), containers and model artifacts. They are installed directly by employees and developers without centralized oversight. Because these components are not classic binaries, they often fall outside the visibility and control of traditional endpoint security tooling.
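
As a rough illustration of how different this layer is from classic binary inventory, the sketch below (an assumption-laden example, not any vendor’s implementation) enumerates a few of these non-binary sources on a developer workstation. The paths are common defaults treated as assumptions, and the MCP config location is hypothetical.

```python
# Rough sketch: inventory a few classes of non-binary software that
# traditional endpoint tooling often overlooks. Paths are common defaults
# treated as assumptions; real coverage would need many more sources.
from pathlib import Path

HOME = Path.home()

INVENTORY_SOURCES = {
    "vscode_extensions": HOME / ".vscode" / "extensions",
    "pip_user_packages": HOME / ".local" / "lib",
    "mcp_server_config": HOME / ".config" / "mcp" / "servers.json",  # hypothetical location
}


def collect_inventory() -> dict[str, list[str]]:
    """List what exists under each source; empty list if the path is absent."""
    found: dict[str, list[str]] = {}
    for label, path in INVENTORY_SOURCES.items():
        if path.is_dir():
            found[label] = sorted(p.name for p in path.iterdir())
        elif path.is_file():
            found[label] = [path.name]
        else:
            found[label] = []
    return found


if __name__ == "__main__":
    for label, items in collect_inventory().items():
        print(f"{label}: {len(items)} item(s)")
```

Even this toy inventory makes the gap visible: none of these items would appear in a traditional executable-centric allowlist.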

AI agents compound this problem. They are legitimate tools that operate with the user’s credentials and permissions, enabling them to read, write, move data and take privileged actions across systems. When compromised or misused, agents become the β€œultimate insider.” They can autonomously discover, invoke and even install additional components at machine speed, accelerating risk across an already expanding, largely unmanaged software layer.

Weaponizing Trusted Automation

This is not a future concern. The recent viral emergence of OpenClaw serves as a cautionary tale for the agentic era. Developed by a single individual in just one week, it rapidly secured millions of downloads while gaining broad permissions across users' emails, filesystems and shells. Within days, researchers identified 135,000 exposed instances and more than 800 malicious skills in its marketplace, underscoring how a single unvetted agent can create an immediate, global attack surface.

OpenClaw is not an outlier. Recent research highlights how quickly this risk is materializing:

  • Vibe Coding Threats: An AI extension in VS Code was found leaking code from 1.5 million developers. This tool could read any open file and send it back to the extension’s developer, collect files in bulk without user interaction, and track users with commercial analytics SDKs.
  • Malicious MCP Server: Koi documented the first malicious Model Context Protocol (MCP) server in the wild. When developers added a specific skill to tools like Claude Code or Cursor, it silently forwarded every email to the plugin creator. What’s more, this capability was added later, after developers had already started using it.

Compounding this risk is the fact that autonomous agent actions are often difficult to trace or reconstruct, leaving Security Operations Centers (SOCs) without the visibility they need when an incident occurs.

A New Category of Protection

Complete endpoint security for the rapidly expanding risk of agentic AI calls for a new category of protection: Agentic Endpoint Security. That’s why we announced our intent to acquire Koi, a pioneer in this space. Koi is designed to eliminate blind spots across the AI-native ecosystem and help organizations govern agentic tools safely.

Its technology rests on three core pillars:

  1. See All AI Software – Gain complete visibility into the AI tools, agents and non-binary software running in your environment.
  2. Understand Risks – Continuously analyze and understand the intent and risk level of all software and AI agents.
  3. Control the AI Ecosystem – Enforce policy in real-time to remediate issues and block risky behaviors.

Securing the Agentic Enterprise

We are convinced that Agentic Endpoint Security will soon become a standard requirement for enterprise security. Upon closing the proposed acquisition, we intend to integrate Koi’s capabilities across our platforms to help our customers secure the AI-native workspace.

The wave of AI agents approaching the enterprise cannot be held back. Instead, we must offer secure tools that enable companies to confidently embrace agentic innovation.

Forward-Looking Statements

This blog post contains forward-looking statements that involve risks, uncertainties, and assumptions, including, but not limited to, statements regarding the anticipated benefits and impact of the proposed acquisition of Koi on Palo Alto Networks, Koi and their customers. There are a significant number of factors that could cause actual results to differ materially from statements made in this blog post, including, but not limited to: the effect of the announcement of the proposed acquisition on the parties’ commercial relationships and workforce; the ability to satisfy the conditions to the closing of the acquisition, including the receipt of required regulatory approvals; the ability to consummate the proposed acquisition on a timely basis or at all; significant and/or unanticipated difficulties, liabilities or expenditures relating to proposed transaction, risks related to disruption of management time from ongoing business operations due to the proposed acquisition and the ongoing integration of other recent acquisitions; our ability to effectively operate Koi’s operations and business following the closing, integrate Koi’s business and products into our products following the closing, and realize the anticipated synergies in the transaction in a timely manner or at all; changes in the fair value of our contingent consideration liability associated with acquisitions; developments and changes in general market, political, economic and business conditions; failure of our platformization product offerings; risks associated with managing our growth; risks associated with new product, subscription and support offerings; shifts in priorities or delays in the development or release of new product or subscription or other offerings or the failure to timely develop and achieve market acceptance of new products and subscriptions, as well as existing products, subscriptions and support offerings; failure of our product offerings or business strategies in general; defects, errors, or vulnerabilities in our products, subscriptions or support offerings; our customers’ purchasing decisions and the length of sales cycles; our ability to attract and retain new customers; developments and changes in general market, political, economic, and business conditions; our competition; our ability to acquire and integrate other companies, products, or technologies in a successful manner; our debt repayment obligations; and our share repurchase program, which may not be fully consummated or enhance shareholder value, and any share repurchases which could affect the price of our common stock.

Additional risks and uncertainties that could affect our financial results are included under the captions "Risk Factors" and "Management's Discussion and Analysis of Financial Condition and Results of Operations" in our Quarterly Report on Form 10-Q filed with the SEC on November 20, 2025, which is available on our website at investors.paloaltonetworks.com and on the SEC's website at www.sec.gov. Additional information will also be set forth in other filings that we make with the SEC from time to time. All forward-looking statements in this blog post are based on information available to us as of the date hereof, and we do not assume any obligation to update the forward-looking statements provided to reflect events that occur or circumstances that exist after the date on which they were made.


The post Securing the Agentic Endpoint appeared first on Palo Alto Networks Blog.


The Skills That Will Matter for Offensive AI Security in 2026

Before tools, before frameworks, before hype, offensive security has always been about one thing: Thinking like an attacker. That foundation now defines the offensive AI security skills practitioners will need as AI reshapes the attack surface. AI systems introduce new behaviors and new failure modes, but the core mindset remains the same: understand how a

The post The Skills That Will Matter for Offensive AI Security in 2026 appeared first on OffSec.
