The threat intelligence landscape is often dominated by talk of sophisticated TTPs (tactics, techniques, and procedures), zero-day vulnerabilities, and ransomware. While these technical threats are formidable, they are still operated by human beings, and it is the human element that often provides the most critical breakthroughs in attributing attacks and de-anonymizing the threat actors behind them.
Here is how OSINT techniques, leveraged by Flashpoint’s expansive data capabilities, can dismantle illegal threat actor campaigns by turning a technical investigation into a human one.
Leveraging OPSEC as a Mindset
In a technical context, OPSEC is a risk management process that identifies seemingly innocuous pieces of information that, when gathered by an adversary, could be pieced together to reveal a larger, sensitive picture.
In the webinar, we break down the OPSEC mindset into three core pillars that every practitioner, and threat actor, must navigate. When these pillars fail, the investigation begins.
Analyzing the Signature: Every human has a digital signature, such as the way they type (stylometry), the times they are active, and the tools they prefer.
Identity Masking & Persona Management: This involves ensuring that your investigative identity has zero overlap with your real life. A common failure is using the same browser for personal use and investigative research, which allows cookies to bridge the two identities.
Traffic Obfuscation: Even with a VPN, certain behaviors such as posting on a dark web forum and then using that same connection to check personal banking can expose an IP address, linking it to a practitioner or threat actor.
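The stylometric signature mentioned in the first pillar can be approximated with very little code. The sketch below is purely illustrative (it is not a Flashpoint tool, and the sample posts are invented): it compares character trigram frequencies of two writing samples with cosine similarity, a common baseline technique for linking personas by writing style.

```python
from collections import Counter
import math

def trigrams(text):
    """Character trigram counts for a lowercased text sample."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine_similarity(a, b):
    """Cosine similarity between two trigram count vectors (0.0 to 1.0)."""
    common = set(a) & set(b)
    dot = sum(a[g] * b[g] for g in common)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Invented samples: two posts by the same persona vs. an unrelated author
forum_post = "yo, anyone got fresh combolists? hmu asap, serious buyers only"
old_post   = "yo, got fresh dumps for sale. hmu asap, serious inquiries only"
unrelated  = "The quarterly report indicates sustained growth across all divisions."

print(cosine_similarity(trigrams(forum_post), trigrams(old_post)))   # higher score
print(cosine_similarity(trigrams(forum_post), trigrams(unrelated)))  # lower score
```

Real stylometric attribution layers many more features (function words, punctuation habits, active hours), but even this toy measure shows why reusing turns of phrase across personas is an OPSEC failure.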
“Effective OPSEC isn’t about the tools you use; it’s about what breadcrumbs you are leaving behind that hackers, investigation subjects, or literally anyone could find about you.”
Joshua Richards, founder of Osint Praxis
Leveraging the Mindset for CTI
Understanding the OPSEC mindset allows security teams to think like the target. When we know the psychological traps attackers fall into, we know exactly where to look for their mistakes.
| Assumption | The Mindset Trap | The Investigative Reality |
| --- | --- | --- |
| Insignificant | “I’m not a high-value target; no one is looking for me.” | Automated Aggression: Hackers use scripts to scan millions of accounts. You aren’t “chosen”; you are “discovered” via automation. |
| Invisible | “I don’t have a LinkedIn or X account, so I don’t have a footprint.” | Shadow Data: Public birth records, property taxes, and historical data breaches create a footprint you didn’t even build yourself. |
| Invincible | “I have 2FA and complex passwords; I’m unhackable.” | Session Hijacking: Infostealer malware steals “session tokens” (cookies). This allows an actor to be you in a browser without ever needing your 2FA code. |
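The “Invincible” failure mode is why defenders increasingly bind sessions to more than the cookie itself. The sketch below is a minimal, hypothetical illustration (in-memory store, invented function names, not any specific vendor’s control) of server-side session binding, where a stolen token replayed from a different client fails validation even though the token is valid.

```python
import hashlib
import secrets

# In-memory session store: token -> fingerprint hash (illustrative only)
SESSIONS = {}

def fingerprint(ip, user_agent):
    """Hash of client attributes observed at login time."""
    return hashlib.sha256(f"{ip}|{user_agent}".encode()).hexdigest()

def login(ip, user_agent):
    """Issue a session token bound to the client's fingerprint."""
    token = secrets.token_urlsafe(32)
    SESSIONS[token] = fingerprint(ip, user_agent)
    return token

def validate(token, ip, user_agent):
    """A replayed token fails when the fingerprint no longer matches."""
    bound = SESSIONS.get(token)
    return bound is not None and bound == fingerprint(ip, user_agent)

token = login("203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)")
print(validate(token, "203.0.113.7", "Mozilla/5.0 (Windows NT 10.0)"))  # True: same client
print(validate(token, "198.51.100.9", "curl/8.4"))                      # False: token replayed elsewhere
```

In practice, raw IP binding breaks for mobile users behind changing networks, so real deployments lean on richer device signals and continuous re-evaluation, but the principle is the same: the cookie alone should never be the whole identity.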
During the webinar, Joshua shares a masterclass in how leveraging these concepts can turn a vague dark web threat into a real-world arrest. Check out the on-demand webinar to see exactly how the investigation started on Torum, a dark web forum, and ended with an arrest that saved the lives of two individuals.
Turn the Tables Using Flashpoint
The insights shared in this session powerfully illustrate that even the most dangerous threat actors are rarely as anonymous as they believe. Their downfall isn’t usually a failure of their technical prowess, but a failure of their mindset. By understanding these OSINT techniques, intelligence practitioners can transform a sea of digital noise into a clear path toward attribution.
The most effective way to dismantle threats is to bridge the gap between technical indicators and human behavior. Whether your teams are conducting high-stakes OSINT or protecting your own organization’s digital footprint, every breadcrumb counts. By leveraging Flashpoint’s expansive threat intelligence collections and real-time data, you can stay one step ahead of adversaries. Request a demo to learn more.
This article is based on a conversation with Nikesh Arora on the 100th episode of the Threat Vector podcast.
David Moulton interviews Nikesh Arora on the Threat Vector podcast.
"Most technologists think about technology, not about cybersecurity," Nikesh Arora says. "Cybersecurity is kind of like insurance. Let's go make great things happen, and let's make sure on the way we purchase insurance."
Coming from the CEO of the world's largest cybersecurity company, it's the quiet part said out loud, and it explains why AI deployment is racing ahead while security scrambles to keep up.
Earlier this year, Arora spoke with a CIO entirely focused on AI deployment challenges: building viable products, training models, measuring customer impact. Security never came up once. "If you're still going through the motions, trying to understand, ‘Can I actually make this thing work?’ you're not worried about security," Arora notes. The logic is brutal but consistent: Why secure something that might not even function?
The conversation covers:
Why the gap between innovation and security keeps widening.
How to read inflection points before they're obvious.
What separates organizations that prepare from those that scramble.
The Gap That Keeps Growing
The disconnect isn't new. It's the same psychology that makes airport security feel like overhead – necessary friction that slows down what should be seamless. But with AI, the gap is widening at an unprecedented pace.
Consider the infrastructure buildup happening right now. Nvidia has become a $4 trillion company selling chips that can't stay in stock. Hundreds of billions of dollars are flowing into AI compute infrastructure. Cloud providers are buying out entire methane gas companies to power their data centers.
Yet organizations are treating AI security as something to bolt on later. That same CIO told Arora: "We worked on some stuff ourselves, and we're just jerry-rigging some things to make sure this happens securely."
Arora's response:
Jerry-rig, production, and security don't work together as three terms.
Reading Signals Before They're Obvious
Arora has watched enough technology cycles to recognize the pattern. "You start seeing signs early, and then you look around, you don't see enough impact. You say, okay, maybe this is going to be just a passing shower. But you don't realize that over time this thing's getting more and more momentum."
The signs around AI are adding up:
Individual behavior has shifted.
Arora went from never talking to ChatGPT or Gemini to conducting 10-15 conversations daily. During a recent Tokyo trip, he used Gemini as his primary navigation tool, asking it to rank sumo wrestling shows for his kids rather than "trying to go read 14 websites and figure out what makes sense."
The spend is massive and accelerating.
Not just chips: entire energy infrastructures are being rebuilt to support AI compute needs.
Consumer and enterprise adoption are both surging.
From coding assistants to business analysis, use cases are expanding faster than security models can adapt.
"This thing's going to change our life fundamentally," Arora tells Moulton. "We're not seeing it at scale in our customers just yet. That doesn't mean we can sit back and wait."
Arora understands the risks involved in being late to new technology.
You have to not just anticipate where the trend is going. You have to prepare your organization and the resources to get there. Otherwise, the risk is that Silicon Valley will go fund those people who are thinking purely about the new world... and one of them's going to hit. Then you'll be two years behind with no organization, no resources deployed against it.
The Bets That Paid Off
When Arora joined Palo Alto Networks seven and a half years ago, he wrote two words on a piece of paper: cloud and AI. The company was a firewall business. Those two inflection points would require fundamental transformation, and, just as with AI now, being late was not an option.
If you don't get the network transformation right, 80% of our business will falter.
That insight drove a strategic bet on moving from point products to platform thinking, consolidating security tools rather than adding to the sprawl.
The platform approach wasn't about vendor consolidation for its own sake. It was about correlation. Unit 42® data shows that 70% of incidents now span three or more attack surfaces. When attacks move across endpoints, networks, cloud services and applications simultaneously, fragmented security creates gaps that attackers exploit ruthlessly.
Today we have coverage for 80 plus percent of the industry, which means our customers can come talk to us about a myriad of problems, and we can actually cross-correlate across all the different things we do.
With AI deployments touching every part of the technology stack, that cross-correlation becomes essential. Data flows between training environments and production systems. Models access APIs across cloud and on-premises infrastructure. Applications consume AI services from multiple providers. Security that can't see and correlate across that entire landscape will miss the threats that matter most.
First Principles Over Tradition
What drives Arora's ability to spot inflection points isn't just pattern recognition; it's his refusal to accept how things have always been done.
His pet peeve: "Somebody said, well, this is how we've traditionally done it." The response reveals his approach: "You use the word traditional. I use the historical context saying, yeah, sure, they used to dig fields with picks and shovels, and now they use tractors."
This thinking drove Palo Alto Networks to reimagine SOC performance. The industry accepted four days as the normal time to detect and remediate security incidents. Arora called that unacceptable. "We need to get it to be real time."
The result was a fundamentally different architecture that analyzes data as it arrives rather than waiting for problems to appear, enabling 1-minute detection and response instead of four days.
Traditionally, SOCs would analyze the problem when the problem appeared. We said forget it. We're going to analyze everything to see if there's a problem. That architecture fundamentally transformed what we do compared to everybody else in the market.
The same first-principles approach needs to apply to AI security. Organizations can't simply extend existing security models and hope they work.
What Comes Next
With ransomware attacks now completing in as little as 25 minutes (100 times faster than just three years ago, according to Unit 42 research) reactive security simply can't keep pace. Organizations need security that thinks and responds at machine speed, built into AI deployments from day one.
"AI has become the biggest inflection point in current technology," Arora observes. Organizations are too busy deploying to worry about security. That's human nature. But it's also the moment when security teams need to stay in lockstep.
The question isn't whether to secure AI; it's whether security will be designed in or bolted on. The former takes strategic thinking now. The latter takes crisis management later.
Our job at Palo Alto and our industry is to make sure as they go build these experimental ideas into real production capability that we're staying in lockstep with them and saying, ‘Oh, by the way, here's something that can secure what you just built in a way that is not gonna get you into trouble.’
Listen to the full conversation between Nikesh Arora and David Moulton, senior director of thought leadership for Cortex® and Unit 42, on the 100th episode of Threat Vector.
For the past week, the massive “Internet of Things” (IoT) botnet known as Kimwolf has been disrupting The Invisible Internet Project (I2P), a decentralized, encrypted communications network designed to anonymize and secure online communications. I2P users started reporting disruptions in the network around the same time the Kimwolf botmasters began relying on it to evade takedown attempts against the botnet’s control servers.
Kimwolf is a botnet that surfaced in late 2025 and quickly infected millions of systems, turning poorly secured IoT devices like TV streaming boxes, digital picture frames and routers into relays for malicious traffic and abnormally large distributed denial-of-service (DDoS) attacks.
I2P is a decentralized, privacy-focused network that allows people to communicate and share information anonymously.
“It works by routing data through multiple encrypted layers across volunteer-operated nodes, hiding both the sender’s and receiver’s locations,” the I2P website explains. “The result is a secure, censorship-resistant network designed for private websites, messaging, and data sharing.”
On February 3, I2P users began complaining on the organization’s GitHub page about tens of thousands of routers suddenly overwhelming the network, preventing existing users from communicating with legitimate nodes. Users reported a rapidly increasing number of new routers joining the network that were unable to transmit data, and that the mass influx of new systems had overwhelmed the network to the point where users could no longer connect.
I2P users complaining about service disruptions from a rapidly increasing number of routers suddenly swamping the network.
When one I2P user asked whether the network was under attack, another user replied, “Looks like it. My physical router freezes when the number of connections exceeds 60,000.”
A graph shared by I2P developers showing a marked drop in successful connections on the I2P network around the time the Kimwolf botnet started trying to use the network for fallback communications.
The same day that I2P users began noticing the outages, the individuals in control of Kimwolf posted to their Discord channel that they had accidentally disrupted I2P after attempting to join 700,000 Kimwolf-infected bots as nodes on the network.
The Kimwolf botmaster openly discusses what they are doing with the botnet in a Discord channel with my name on it.
Although Kimwolf is known as a potent weapon for launching DDoS attacks, the outages caused this week by some portion of the botnet attempting to join I2P are what’s known as a “Sybil attack,” a threat in peer-to-peer networks where a single entity can disrupt the system by creating, controlling, and operating a large number of fake, pseudonymous identities.
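The mechanics of a Sybil flood are easy to see with a toy model. The sketch below uses invented, illustrative numbers (not measurements of I2P, which profiles peers rather than selecting them uniformly): it computes the chance that a randomly built multi-hop path avoids attacker-controlled nodes as fake identities join the network.

```python
def honest_path_probability(honest, sybil, path_length=3):
    """Probability that every hop in a randomly chosen path is honest,
    assuming uniform peer selection with no Sybil defenses."""
    total = honest + sybil
    return (honest / total) ** path_length

# Roughly I2P-scale honest population vs. a growing Sybil flood
for sybil in (0, 20_000, 100_000, 700_000):
    p = honest_path_probability(honest=20_000, sybil=sybil)
    print(f"{sybil:>7} fake routers -> {p:.2%} of 3-hop paths fully honest")
```

Under these toy assumptions, matching the honest population one-for-one already drops fully honest paths to 12.5%, and a flood of 700,000 fake routers makes them vanishingly rare, which is why even a partial join attempt by a botnet of Kimwolf's size can swamp a network of I2P's scale.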
Indeed, the number of Kimwolf-infected routers that tried to join I2P this past week was many times the network’s normal size. I2P’s Wikipedia page says the network consists of roughly 55,000 computers distributed throughout the world, with each participant acting as both a router (to relay traffic) and a client.
However, Lance James, founder of the New York City-based cybersecurity consultancy Unit 221B and the original founder of I2P, told KrebsOnSecurity the entire I2P network now consists of between 15,000 and 20,000 devices on any given day.
An I2P user posted this graph on Feb. 10, showing tens of thousands of routers — mostly from the United States — suddenly attempting to join the network.
Benjamin Brundage is the founder of Synthient, a startup that tracks proxy services and was the first to document Kimwolf’s unique spreading techniques. Brundage said the Kimwolf operator(s) have been trying to build a command-and-control network that can’t easily be taken down by security companies and network operators working together to combat the spread of the botnet.
Brundage said the people in control of Kimwolf have been experimenting with using I2P and a similar anonymity network — Tor — as a backup command-and-control network, although there have been no reports of widespread disruptions in the Tor network recently.
“I don’t think their goal is to take I2P down,” he said. “It’s more they’re looking for an alternative to keep the botnet stable in the face of takedown attempts.”
The Kimwolf botnet created challenges for Cloudflare late last year when it began instructing millions of infected devices to use Cloudflare’s domain name system (DNS) settings, causing control domains associated with Kimwolf to repeatedly displace Amazon, Apple, Google and Microsoft in Cloudflare’s public ranking of the most frequently requested websites.
James said the I2P network is still operating at about half of its normal capacity, and that a new release is rolling out which should bring some stability improvements over the next week for users.
Meanwhile, Brundage said the good news is Kimwolf’s overlords appear to have quite recently alienated some of their more competent developers and operators, leading to a rookie mistake this past week that caused the botnet’s overall numbers to drop by more than 600,000 infected systems.
“It seems like they’re just testing stuff, like running experiments in production,” he said. “But the botnet’s numbers are dropping significantly now, and they don’t seem to know what they’re doing.”
Palo Alto Networks is proud to enter a strategic partnership with the RAF Association.
For over 90 years, the Royal Air Forces Association (RAFA) has championed a simple yet profound belief: No member of the RAF community should ever be left without the help they need. Serving personnel, veterans and their families, the RAF Association provides crucial welfare support, responding to increasingly complex needs in an era of operational changes and challenges, including persistent global deployment.
Delivering on their mission today requires not only compassion and expertise but also resilient digital foundations. To strengthen and future-proof its operations, RAFA has entered into a strategic partnership with Palo Alto Networks. Together, we are modernising the Association's cyber security posture through a secure-by-design, zero trust architecture to enhance organisational resilience, secure sensitive beneficiary data, and improve operational agility. This helps ensure they can focus on their mission of support, not security management.
As Nick Bunting OBE, Secretary General at the RAF Association, puts it:
Cybersecurity is essential to safeguarding the trust people place in our organisation. This transformation will give us greater protection for our data and systems, ensuring that our services remain dependable and that our organisation is secure, resilient and ready for the future. Strong digital security is not just a technical requirement, it is a fundamental part of how we uphold our duty of care to every individual who relies on us.
RAF Association & Palo Alto Networks Team (left to right): Gareth Turner, Tom Brookes, Nick Bunting OBE, Phil Sherwin, Ali Redfern, Darren Bisbey, Alistair Wildman
Securing the Mission
The RAF Association operates in a distributed environment comprising headquarters functions, remote caseworkers, and more than 20 RAFAKidz nursery sites, supported by a growing portfolio of cloud-based services. In this context, cybersecurity is not simply an IT concern. It is a safeguarding imperative.
Disruption to systems or a compromise of sensitive beneficiary data could directly impact RAFA’s ability to deliver services and maintain the trust of the communities it supports. By consolidating fragmented legacy tools into a unified platform, this partnership ensures the Association’s digital evolution aligns security controls with GDPR obligations and safeguarding requirements.
Digital Resilience with a Unified Platform for Visibility and Control
To support RAFA's lean IT operational model, this transformation will move them away from fragmented legacy tools toward a unified platform approach. The deployment of Prisma® SASE (secure access service edge) and Cortex XDR® will provide RAFA with consistent visibility and control across users, devices, applications and data, regardless of location. This consolidation replaces complexity with clarity, allowing the organisation to inspect traffic for threats in real time. Security policies are now enforced continuously, threats are detected and contained faster, and access to critical systems is governed by zero trust principles without compromising the user experience.
As Phil Sherwin, Chief Information Officer, at the RAF Association states:
Our data is one of our most valuable assets and the protection of that data, as we continue to provide life-changing support to members of the RAF community, is our most important priority. This partnership will move us into the next generation of security tools that adopt zero trust principles and is a crucial step on our journey to providing a layered approach to data protection.
One of the most critical aspects of this modernisation is supporting RAFA’s diverse workforce, particularly within the RAFAKidz nursery sites. These environments rely on non-desk-based staff using iPads and mobile devices to get their critical work done.
Using zero-touch provisioning and the Prisma Browser™, we are enabling secure, seamless connectivity for unmanaged devices. This ensures that nursery staff can access necessary SaaS applications safely without complex login hurdles or manual configuration, improving their agility and allowing them to focus on caring for children rather than troubleshooting technology.
Creating Operational Advantage by Scaling Operations with AI and Automation
As a charity, RAFA has a responsibility to ensure resources are used efficiently. A critical goal of this partnership is to improve productivity and allow the organisation to scale its services without increasing the IT burden.
By adopting Strata™ Cloud Manager with AIOps (artificial intelligence for IT operations), RAFA is shifting from reactive security operations to proactive, automated management. Machine learning helps identify configuration risks and performance issues before they affect users, while standardized policies enable the secure, consistent onboarding of new sites. This shift is projected to reduce operational overhead by 40–50%, enabling RAFA to scale its support network cost-effectively.
A Resilient Future
This partnership is about more than deploying technology. It is about ensuring RAFA remains resilient, trusted and capable of supporting the RAF community for decades to come.
As Darren Bisbey, Head of Group Information Security for the RAF Association, puts it:
We live in an era where digital threats are accelerating in both scale and sophistication, creating unprecedented challenges for organisations. Our partnership with Palo Alto is a statement of intent, reflecting our unwavering commitment to building the most secure environments possible for our data.
At Palo Alto Networks, we are honored to support RAFA in this journey, providing the digital armour and operational advantage necessary to protect those who serve and have served.
As Alistair Wildman, Palo Alto Networks CEO for Northern Europe states:
For over 90 years, RAFA has been a lifeline for the RAF community; it is our privilege to ensure that legacy endures in a digital-first world. By embracing a unified, AI-driven platform, RAFA is moving beyond complex, fragmented security to a posture that is Secure by Design. This partnership allows them to navigate today’s threat landscape with confidence, ensuring their resources remain focused where they belong: on the families who need them.
Key Takeaways
Digital Resilience – Strategic Shift to Zero Trust Architecture: RAFA is modernizing its cybersecurity posture by implementing a comprehensive zero trust architecture. This transition involves moving from fragmented legacy tools to a unified platform approach, deploying Prisma® SASE and Cortex XDR for 360-degree visibility and complete control over access and traffic.
Interoperability – Secure, Seamless Access for Diverse Workforce: The partnership ensures operational agility by simplifying security for non-desk-based staff, particularly at the RAFAKidz nursery sites. Solutions like zero-touch provisioning and the Prisma Access Browser enable secure, seamless connectivity for unmanaged devices, allowing nursery staff to focus on their critical work without complex login or configuration issues.
Creating Operational Advantage – Efficiency and Scalability through AI and Automation: RAFA is leveraging technology to scale services efficiently and reduce operational overhead. By using Strata Cloud Manager with AIOps (Artificial Intelligence for IT Operations), the organization can shift to proactive management and automating remediation, which is projected to reduce operational overhead by 40–50%.
How China’s “Walled Garden” is Redefining the Cyber Threat Landscape
In our latest webinar, Flashpoint unpacks the architecture of the Chinese threat actor cyber ecosystem—a parallel offensive stack fueled by government mandates and a commercialized hacker-for-hire industry.
For years, the global cybersecurity community has operated under the assumption that technical information was a matter of public record. Security research was openly discussed and shared through a culture of global transparency. Today, that reality has fundamentally shifted. Flashpoint is witnessing a growing opacity—a “Walled Garden”—around Chinese data. As a result, the capabilities of Chinese threat actors and APTs have reached an industrialized scale.
In Flashpoint’s recent on-demand webinar, “Mapping the Adversary: Inside the Chinese Pentesting Ecosystem,” our analysts explain how China’s state policies surrounding zero-day vulnerability research have effectively shut out the cyber communities that once provided a window into Chinese tradecraft. Those communities haven’t disappeared, however. Rather, they have been absorbed by the state to develop a mature, self-sustaining offensive stack capable of targeting global infrastructure.
Understanding the Walled Garden: The Shift from Disclosure to Nationalization
The “Walled Garden” is the direct result of a regulatory shift in 2021: China’s Regulations on the Management of Security Vulnerabilities (RMSV). While the gradual walling off of China’s data is the cumulative result of years of regulatory and policy strategy, the 2021 RMSV marks the critical turning point that effectively nationalized China’s vulnerability research capabilities. Under the RMSV, any individual or organization in China that discovers a new flaw must report it to the Ministry of Industry and Information Technology (MIIT) within 48 hours. Crucially, researchers are prohibited from sharing technical details with third parties—especially foreign entities—or selling them before a patch is issued.
It is important to note that this mandate is not limited to Chinese-based software or hardware; it applies to any vulnerability discovered, as long as the discoverer is a Chinese-based organization or national. This effectively treats software vulnerabilities as a national strategic resource for China. By centralizing this data, the Chinese government ensures it has an early window into zero-day exploits before the global defensive community.
For defenders, this means that by the time a vulnerability is public, there is a high probability it has already been analyzed and potentially weaponized within China’s state-aligned apparatus.
The Indigenous Kill Chain: Reconnaissance Beyond Shodan
Flashpoint analysts have observed that within this Walled Garden, traditional Western reconnaissance tools are losing their effectiveness. Chinese threat actors are utilizing an indigenous suite of cyberspace search engines that create a dangerous information asymmetry, allowing them to peer at defender infrastructure while shielding their own domestic base from Western scrutiny.
While Shodan remains the go-to resource for security teams, Flashpoint has seen Chinese threat actors favor three IoT search engines that offer them a massive home-field advantage:
FOFA: Specializes in deep fingerprinting for middleware and Chinese-specific signatures, often indexing dorks for new vulnerabilities weeks before they appear in the West.
ZoomEye: Built for high-speed automation, offering APIs that integrate with AI systems to move from discovery to verified target in minutes.
360 Quake: Provides granular, real-time mapping through a CLI with an AI engine for complex asset portraits.
In the full session, we demonstrate exactly how Chinese operators use these tools to fuse reconnaissance and exploitation into a single, automated step—a capability most Western EDRs aren’t yet tuned to detect.
Building a State-Aligned Offensive Stack
Leveraging their knowledge of vulnerabilities and zero-day exploits, the illicit Chinese ecosystem is building tools designed to dismantle the specific technologies that power global corporate data centers and business hubs.
In the webinar, our analysts explain purpose-built cyber weapons designed to hunt VMware vCenter servers that support one-click shell uploads via vulnerabilities like Log4Shell. Beyond the initial exploit, Flashpoint highlights the rising use of Behinder (Ice Scorpion)—a sophisticated web shell management tool. Behinder has become a staple for Chinese operators because it encrypts command-and-control (C2) traffic, allowing attackers to evade conventional inspection and deep packet analytics.
Strengthen Your Defenses Against the Chinese Offensive Stack with Flashpoint
By understanding this “Walled Garden” architecture, defenders can move beyond generic signatures and begin to hunt for the specific TTPs—such as high-entropy C2 traffic and proprietary Chinese scanning patterns—that define the modern Chinese threat actor.
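One of those TTPs, high-entropy C2 traffic, can be scored directly. The sketch below is a generic Shannon-entropy heuristic (not Flashpoint's detection logic, and the sample payloads are stand-ins): encrypted payloads such as Behinder's approach the 8-bits-per-byte maximum, while plaintext protocols sit far lower.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Bits of entropy per byte; 8.0 is the maximum, reached by uniform random bytes."""
    if not data:
        return 0.0
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

plaintext = b"GET /index.html HTTP/1.1\r\nHost: example.com\r\n\r\n" * 20
encrypted_like = bytes(range(256)) * 4  # stand-in for AES-encrypted payload bytes

print(f"plaintext HTTP : {shannon_entropy(plaintext):.2f} bits/byte")
print(f"encrypted-like : {shannon_entropy(encrypted_like):.2f} bits/byte")
```

Entropy alone produces false positives (compressed images and TLS traffic are also high-entropy), so in practice this score is combined with context such as destination reputation, URI patterns, and beaconing cadence.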
How can Flashpoint help? Flashpoint’s cyber threat intelligence platform cuts through the generic feed overload and delivers unrivaled primary-source data, AI-powered analysis, and expert human context.
The early weeks of 2026 have already made one thing clear: Government cybersecurity is in a new phase, shaped not by incremental change but by the rapid integration of AI into core public-sector missions. AI systems are now embedded in critical infrastructure, federal service delivery, research environments, and state and local operations. At the same time, nation-state adversaries are leveraging AI to accelerate intrusion, scale deception and manipulate trusted systems in ways not possible even a year ago.
As Senior Vice President of Public Sector at Palo Alto Networks, I see a decisive shift underway. Defending the public sector in 2026 means navigating a world where security depends on verifying identity, securing data and governing AI-driven systems that act without human intervention. Success now hinges on architectures that assume automation, operations that prioritize coordination, and governance frameworks capable of managing AI at mission scale.
Here are the developments that will define the year ahead.
Federal Government
1. AI-Native Security Must Become Integral to Federal Operations
AI in federal environments is no longer an experiment. Agencies are now designing workflows, SOC missions and cloud architectures around AI-driven detection and response. The emphasis is shifting from supplementing human analysts to building systems that maintain visibility, correlate threats, and respond autonomously when human capacity is limited. This builds on what we forecasted last year, when federal cybersecurity teams began using AI to replace manual workflows and drive down detection and response times.
The shift will be practical. Federal teams must plan to deploy AI systems that correlate logs, identify behavioral anomalies, prioritize threats, and suppress noise before analysts ever see an alert. Manual, ticket-based workflows will no longer meet federal timelines for investigation or reporting, particularly as adversaries automate more phases of attack.
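The noise-suppression step described above often starts as a simple statistical gate in front of the analyst queue. The toy sketch below (hypothetical sources and thresholds, not a federal SOC implementation) flags only log sources whose current event volume is a z-score outlier against their own historical baseline, so routine volume never becomes an alert.

```python
import statistics

def anomalous_sources(baseline, current, z_threshold=3.0):
    """Return (source, z-score) pairs whose current event count deviates
    sharply from that source's own historical baseline."""
    flagged = []
    for source, history in baseline.items():
        mean = statistics.mean(history)
        stdev = statistics.pstdev(history) or 1.0  # guard flat baselines
        z = (current.get(source, 0) - mean) / stdev
        if abs(z) >= z_threshold:
            flagged.append((source, round(z, 1)))
    return flagged

# Invented hourly event counts per log source
baseline = {
    "vpn-gateway": [120, 115, 130, 125, 118],
    "mail-relay":  [400, 390, 410, 405, 395],
    "dev-server":  [10, 12, 9, 11, 10],
}
current = {"vpn-gateway": 122, "mail-relay": 398, "dev-server": 240}

print(anomalous_sources(baseline, current))  # only dev-server is flagged
```

Production systems model far richer features (identities, sequences of actions, peer groups), but the principle is the same: baseline each entity against itself and surface only the deviations.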
2. Identity Emerges as the Central Federal Security Challenge
The biggest shift in 2026 will be the collapse of the boundary between “identity” and “attack surface.” Deepfake technologies now operate in real time. AI-generated voices and video can impersonate senior leaders at a level undetectable by traditional controls. Machine identities continue to proliferate; they will outnumber human identities this year. And autonomous agents can initiate high-impact actions without human oversight. This reflects a broader crisis of authenticity now reshaping how enterprises defend identity itself.
Identity abuse will no longer be limited to credential theft. This turns identity into a systemic risk. One compromised identity (human, machine or agent) can cascade through automated systems with little friction. Federal programs will need to prioritize continuous identity verification, stronger proofing and governance frameworks that validate the legitimacy of both human and AI-driven activity.
3. AI Systems Must Be Secure-by-Design
Stemming from the clear mandate in the AI Action Plan (and subsequent work by NIST to develop an AI/Cyber Profile on top of the existing Cybersecurity Framework), agencies will steadily integrate AI security into their deployment of AI technologies.
This imperative is critical as AI systems are susceptible to novel threats. Data poisoning of training sets, manipulated inputs and hidden instructions in untrusted datasets compromise the intelligence that agencies rely on for analysis, planning and mission support. To support the security of this AI-first moment, Palo Alto Networks was proud to make its AI security platform, Prisma® AIRS™, available through the GSA OneGov initiative.
4. Nation-State Operations Expand Through AI Automation
Adversaries will use AI to compress the time between reconnaissance, exploitation and lateral movement. We expect rapidly increasing use of AI to chain vulnerabilities, tailor social engineering campaigns, and generate malware variants that adapt in real time.
The focus will broaden beyond IT networks. AI will be used to disrupt OT systems and target sensitive research environments. Foreign intelligence services will weaponize AI to blur the line between intrusion and information operations, producing hybrid campaigns that attack both systems and the legitimacy of institutions.
5. Autonomous SOC Capabilities Become Essential
Federal SOCs will evolve from human-centered command centers to hybrid operations where autonomous agents run major components of the detection and response mission. These agents will triage alerts, enforce containment, and initiate predefined responses.
This evolution comes with risk. AI agents with broad authority can be misused or manipulated if not properly governed. Agencies will need safeguards to track agent behavior, enforce least privilege on agents, and prevent misuse through runtime monitoring and “AI firewall” controls designed to stop malicious prompts and unauthorized actions. The same pressures are shaping enterprise security, where controls like AI firewalls and circuit breaker mechanisms are becoming standard practice. Automation will only strengthen federal security if paired with rigorous oversight and continuous validation of agent activity.
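The safeguards named above (least privilege per agent plus an "AI firewall" screening untrusted input) can be sketched as a deny-by-default authorization check. The agent names, action sets and regex patterns below are illustrative assumptions; production AI firewalls use model-based classifiers rather than keyword patterns:

```python
import re

# Hypothetical per-agent least-privilege policy: each agent may only
# invoke the actions it was explicitly granted.
AGENT_POLICIES = {
    "triage-agent": {"read_logs", "open_ticket"},
    "containment-agent": {"isolate_host", "read_logs"},
}

# Crude patterns for instruction-override attempts in untrusted input.
INJECTION_PATTERNS = [
    re.compile(r"ignore (all )?previous instructions", re.I),
    re.compile(r"disregard your (rules|policy)", re.I),
]

def authorize(agent, action, untrusted_input=""):
    """Return (allowed, reason); deny by default."""
    if any(p.search(untrusted_input) for p in INJECTION_PATTERNS):
        return False, "possible prompt injection in input"
    if action not in AGENT_POLICIES.get(agent, set()):
        return False, f"{agent} is not granted '{action}'"
    return True, "ok"

print(authorize("triage-agent", "isolate_host"))       # denied: not granted
print(authorize("containment-agent", "isolate_host",
                "please IGNORE previous instructions"))  # denied: injection
print(authorize("containment-agent", "isolate_host"))  # allowed
```

The design choice that matters is the default: an agent's authority is an explicit grant that runtime monitoring can audit, never an implicit capability.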
6. Shared and Federated SOC Structures Gain Momentum
As threats scale, agencies will increasingly operate through shared or federated security structures. Instead of isolated SOCs, agencies will adopt analytics layers capable of correlating activity across departments and exchanging findings in real time.
This shift will reduce redundancy and provide faster insight into nation-state campaigns that cross federal boundaries. Early adopters will establish shared analytic and response frameworks that allow agencies to coordinate without sacrificing mission-specific control. Civilian agencies will lead early adoption with broader participation across defense and national security stakeholders expected later in the year.
7. The Post-Quantum Deadline Becomes Immediate
In 2026, post-quantum cryptography planning will move to implementation. Accelerated advances in quantum computing and AI-based cryptanalysis will push agencies to transition from pilot efforts to mandated modernization.
Agencies will focus on discovering where vulnerable algorithms are used, replacing outdated libraries, and implementing crypto-agility so systems can evolve without major redesigns. Systems with unpatchable cryptographic components will be flagged for full replacement, forcing agencies to reconcile years of accumulated “crypto debt.”
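Crypto-agility in practice means callers never name a concrete algorithm, so a migration is one registry change rather than a redesign. Here is a minimal sketch using hash algorithms as a stand-in (the profile name and registry shape are illustrative assumptions):

```python
import hashlib

# Crypto-agility sketch: callers reference an abstract "profile",
# never a concrete algorithm, so migrating (e.g., to a post-quantum
# suite) means updating one registry entry, not every system.
REGISTRY = {
    "signing-hash": {"current": "sha256"},
}

def digest(profile: str, data: bytes) -> str:
    algo = REGISTRY[profile]["current"]
    # Carry the algorithm name with the value so stored digests
    # remain identifiable after a migration.
    return f"{algo}:{hashlib.new(algo, data).hexdigest()}"

print(digest("signing-hash", b"payload"))

# Migration is a single registry change.
REGISTRY["signing-hash"]["current"] = "sha3_256"
print(digest("signing-hash", b"payload"))
```

The same indirection applies to signatures and key exchange: discovering where each profile is used is exactly the inventory work described above, and "crypto debt" is code that hardcodes the algorithm instead of the profile.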
8. Data Trust and Cloud Workload Protection Become Priority Missions
The rise of AI workloads will force agencies to rethink how they protect data. Infrastructure controls alone cannot detect when training data has been manipulated or when model outputs no longer reflect real-world conditions.
Agencies will unify developer and security workflows and use tools like Data Security Posture Management and AI security posture management (AI-SPM) to track data lineage and enforce protections at runtime. Enterprises are addressing the same issue by bringing development and security teams together under shared data governance models. Ensuring model trustworthiness will become a mission-support requirement, not just a security objective.
9. Platform Consolidation Becomes Necessary
Fragmented tools cannot support the visibility and oversight required for AI governance. Executives will push for platform consolidation to unify network, identity, cloud, endpoint and AI security. Integrated platforms will gain favor because they enable consistent policy enforcement and a single operational picture across increasingly automated environments.
State, Local and Educational Institutions
1. AI Adoption Splits SLED into Distinct Tiers
In 2026, disparities in funding and technical capacity will widen. Some states will deploy AI across security operations, citizen services and identity verification. Others will struggle to maintain legacy systems.
Well-resourced jurisdictions will reduce response times and improve resilience. Underfunded ones will remain exposed to ransomware and disruption. Without targeted modernization efforts, a national divide in SLED cybersecurity maturity will deepen.
2. Regional Models Become the Practical Path Forward
Silos are no longer sustainable. SLED organizations will rely on shared SOCs, regional threat intelligence hubs and coordinated incident response agreements. States will formalize partnerships to share expertise, reduce costs and defend interconnected systems. This evolution represents the maturation of the “team sport” mentality we predicted in 2025. These models reflect operational reality: Compromised data or infrastructure in one jurisdiction often creates immediate risk for its neighbors.
3. Higher Education Redesigns Its Security Baseline
Universities will classify cybersecurity alongside energy, research infrastructure and physical security as essential institutional functions. Secure browser adoption, stronger vendor oversight and centralized identity governance will become the norm.
AI research environments will receive increased scrutiny, and universities participating in federally funded research will face stricter compliance requirements to prevent data poisoning and model manipulation. Institutions with large research portfolios will prioritize securing lab environments where AI models are trained and evaluated.
4. K–12 Systems Enter a New Phase of Security Oversight
States will introduce new security mandates for K–12 environments, covering MFA, network segmentation, secure browsers, identity verification and foundational zero trust principles. AI-enabled ransomware will remain a threat. Smaller districts will adopt managed services or regional support structures as they confront growing operational and compliance demands. Districts that modernize identity controls and browser security will significantly reduce their exposure compared to those reliant on legacy tools. Building on the regulatory momentum we predicted in 2025, K–12 institutions will continue moving from defensive posture to proactive security adoption.
5. Local Governments Face Escalating AI-Driven Ransomware
Municipal governments remain high-value targets due to limited staffing and aging infrastructure. AI gives threat actors the ability to automate reconnaissance, craft targeted phishing messages, and identify vulnerabilities with little effort.
Attacks timed to public safety incidents or weather emergencies will increase, meaning local governments will need stronger identity controls, automated endpoint protection and access to managed detection and response. Operational continuity will depend on reducing time-to-detect and time-to-contain, capabilities that smaller municipalities cannot achieve without external support.
6. Managed Services and Platform Consolidation Become Standard
As technical demands grow, SLED organizations will move toward managed SOC models and consolidated vendor ecosystems. Platforms that integrate data protection, threat detection, identity governance and AI oversight will gain traction. Point tools without interoperability will decline. Budget-constrained environments will favor comprehensive platforms that reduce operational burden and simplify compliance.
7. Identity and Data Trust Become Central SLED Priorities
SLED organizations manage sensitive student records, election data and social services information. These environments are increasingly strained by the rapid growth of machine identities and AI-driven applications.
Synthetic identities and AI-generated credentials will be used to infiltrate systems with limited oversight. Continuous identity verification, data lineage tracking and posture management will become essential to prevent fraud, service disruption and data manipulation. Identity assurance and data integrity will become the foundation of public trust at the state and local level.
Between 2024 and 2026, Flashpoint analysts have observed the financial sector as a top target of threat actors, with 406 publicly disclosed victims falling prey to ransomware attacks alone—representing seven percent of all ransomware victim listings during that period.
However, ransomware is just one piece of the complex threat actor puzzle. The financial sector is also grappling with threats stemming from sophisticated Advanced Persistent Threat (APT) groups, the risks associated with third-party compromises, the illicit trade in initial access credentials, the ever-present danger of insider threats, and the emerging challenge of deepfake and impersonation fraud.
Why Finance?
The financial sector has long been one of the most attractive targets for threat actors, consistently ranking among the most targeted industries globally.
These institutions manage massive volumes of sensitive data—from high-value financial transactions and confidential customer information to vast sums of capital, making them especially lucrative for threat actors seeking financial gain. Additionally, the urgency and criticality of financial operations increases the chances that victim organizations will succumb to extortion and ransom demands.
Even beyond direct financial incentives, the financial sector remains an attractive target due to its deep interconnectivity with other industries. This means that malicious actors may target financial institutions simply to gain information about another organization, as a single data breach can have far-reaching and cascading consequences for partners and third parties.
The Threat Actors Targeting the Financial Sector
To understand the complexities of the financial threat landscape, organizations need a comprehensive understanding of the key players involved. The following threat actors represent some of the most prominent and active groups targeting the financial sector between April 2024 and April 2025:
Akira Ransomware
Active since March 2023, Akira has demonstrated increasingly sophisticated tactics and has targeted a significant number of victims across various sectors. Between April 2024 and April 2025, they targeted 34 organizations within the financial sector. Evidence suggests a potential link to the defunct Conti ransomware group. Akira commonly gains initial access through compromised credentials, Virtual Private Network (VPN) vulnerabilities, and Remote Desktop Protocol (RDP). They employ a double extortion model, exfiltrating data before encryption.
LockBit Ransomware
A long-standing and highly prolific RaaS group operating since at least September 2019, LockBit continued to be a major threat to the financial sector, claiming 29 publicly disclosed victims between April 2024 and April 2025. LockBit utilizes various initial access methods, including phishing, exploitation of known vulnerabilities, and compromised remote services.
Most notably, in June 2024, LockBit claimed it gained access to the US Federal Reserve, stating that they exfiltrated 33 TB of data. However, Flashpoint analysts found that the data posted on the Federal Reserve listing appears to belong to another victim, Evolve Bank & Trust.
FIN7
This financially motivated threat actor group, originating from Eastern Europe and active since at least 2015, focuses on stealing payment card data. They employ social engineering tactics and create elaborate infrastructure to achieve their goals, reportedly generating over $1 billion USD in revenue between 2015 and 2021. Their targets within the financial sector include interbank transfer systems (SWIFT, SAP), ATM infrastructure, and point-of-sale (POS) terminals. Initial access is often gained through phishing and exploiting public-facing applications.
Scattered Spider
Emerging in 2022, Scattered Spider has quickly become known for its rapid exploitation of compromised environments, particularly targeting financial services, cryptocurrency services, and more. They are notorious for using SMS phishing and fake Okta single sign-on pages to steal credentials and move laterally within networks. Their primary motivation is financial gain.
Lazarus Group
This advanced persistent threat (APT) group, backed by the North Korean government, has demonstrated a broad range of targets, including cryptocurrency exchanges and financial institutions. Their campaigns are driven by financial profit, cyberespionage, and sabotage. Lazarus Group employs sophisticated spear-phishing emails, malware disguised in image files, and watering-hole attacks to gain initial access.
Top Attack Vectors Facing the Financial Sector
How are these prolific threat actor groups gaining a foothold into financial data and systems? Between April 2024 and April 2025, Flashpoint analysts observed 6,406 posts pertaining to financial sector access listings within our forum collections. Flashpoint intelligence shows malicious actors capitalizing on third-party compromises, initial access brokers, and insider threats, among other attack vectors:
Third-Party Compromise
Ransomware attacks targeting third-party vendors can have a direct and significant impact on financial institutions through data exposure and compromised credentials. The Clop ransomware gang’s mass exploitation of the MOVEit Transfer vulnerability in 2023 serves as a stark reminder of this risk.
Initial Access Brokers (IABs)
Initial Access Brokers specialize in gaining initial access to networks and selling that access to other threat groups, including ransomware operators. Their tactics include phishing, the use of information-stealing malware, and exploiting RDP credentials, posing a significant risk to financial entities.
Insider Threat
Malicious insiders, whether recruited or acting independently, can provide direct access to sensitive data and systems within financial institutions. Telegram has emerged as a prominent platform for advertising and recruiting insider services targeting the financial sector.
Deepfake and Impersonation
The increasing sophistication and accessibility of AI tools are enabling new forms of fraud. Deepfakes can bypass traditional security measures by creating convincing audio and video impersonations. While still evolving, this threat vector, along with other impersonation tactics like BEC and vishing, presents a growing concern for the financial sector. Within the past year, analysts observed 1,238 posts across fraud-related Telegram channels discussing impersonation of individuals working for financial institutions.
Defend Against Financial Threats Using Flashpoint
The financial sector remains a high-value target, facing a persistent and evolving array of threats. Understanding the tactics, techniques, and procedures (TTPs) of these top threat actors, as well as the broader threat landscape, is crucial for financial institutions to develop and implement effective security strategies.
Flashpoint is proud to offer a dedicated threat intelligence solution for banks and financial institutions. Our platform combines comprehensive data collection, AI-powered analysis, and expert human insight to deliver actionable intelligence, safeguarding your critical assets and operations. Request a demo today to see how our intelligence can empower your security team.
Why the GSA OneGov Agreement Is a Game-Changer for Federal Cybersecurity
The mission to modernize government IT is accelerating at lightning speed, largely thanks to the transformative power of artificial intelligence (AI). Federal agencies are strategically leveraging AI to boost efficiency, enhance citizen services, and strengthen national security – a vision fully supported by the administration’s AI Action Plan.
At Palo Alto Networks, we are all-in on helping agencies deploy AI bravely and securely. Because the challenge isn't just about using AI for cyberdefense, but also about defending AI itself. We appreciate the U.S. General Services Administration (GSA) recognizing the critical need for scalable, efficient solutions.
That is precisely why the GSA OneGov Initiative is a massive, game-changing step forward. We are proud to be the first pure-play cybersecurity vendor to secure a OneGov agreement with the GSA. This strategic alliance simplifies and standardizes the process for agencies to access our world-class, AI-powered security platform, ensuring security is foundational to this crucial modernization mission.
The Wake-Up Call: The Silent Threat of AI Agent Corruption
If you needed a clear sign that AI has fundamentally shifted the cybersecurity landscape, our own Unit 42 research provides it. The new reality isn't just about hackers using AI in their attacks; it’s also about how internal AI provides another attack surface for threat actors.
The most insidious new threat we've observed is AI Agent Smuggling, where malicious attackers use AI agents to exploit other agents. Our Unit 42 research highlights two major vectors:
Indirect Prompt Injection: A security risk in LLMs where a user crafts input containing deceptive instructions to manipulate the model’s behavior, which can lead to unauthorized data access or unintended actions.
Agent Session Smuggling: An attack that exploits vulnerabilities in agent-to-agent communication, injecting malicious instructions into a conversation and hiding them among otherwise benign client requests and server responses.
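One defense against session smuggling is to treat every inter-agent message as inert data unless it is authenticated by a known principal. The sketch below uses an HMAC over the message body; the agent names, key store and message fields are illustrative assumptions, not a description of Unit 42's findings or any specific framework:

```python
import hashlib
import hmac

TRUSTED_KEYS = {"planner-agent": b"shared-secret"}  # hypothetical key store

def verify(msg) -> bool:
    """Check that the message's MAC was produced by a trusted sender."""
    key = TRUSTED_KEYS.get(msg["sender"])
    if key is None:
        return False
    expected = hmac.new(key, msg["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, msg["mac"])

def classify(msg) -> str:
    # Only authenticated messages may carry instructions; everything
    # else is demoted to data before it reaches the model.
    return "instruction" if verify(msg) else "data"

body = "summarize the incident report"
mac = hmac.new(b"shared-secret", body.encode(), hashlib.sha256).hexdigest()
print(classify({"sender": "planner-agent", "body": body, "mac": mac}))   # instruction
print(classify({"sender": "unknown-agent", "body": "ignore prior rules",
                "mac": "0" * 64}))                                       # data
```

Authentication alone does not stop a compromised trusted peer, which is why runtime monitoring of agent behavior remains necessary alongside message-level controls.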
This confirms our core belief as stated in a recent secure AI by Design blog: The AI ecosystem (the models, data and infrastructure) is now a complex, expanding attack surface that traditional perimeter defenses were simply not designed to protect.
As I’ve said before, “If you’re deploying AI, you must deploy AI security.”
Secure AI by Design: A Strategic Alliance with GSA
The GSA’s OneGov Initiative aims to streamline procurement and drive down costs by leveraging the purchasing power of the entire federal government. This is more than an agreement; it’s a direct response to the call for a “secure-by-design” approach to federal AI adoption, ensuring that security is foundational, not an afterthought. It puts industry-leading AI security tools into the hands of our cyber defenders today.
Under the Hood: Technical Capabilities for the AI Ecosystem
To counter the autonomous threats we’re seeing, we provide a platform that protects the entire AI lifecycle, from the developer's keyboard to the data center.
1. Runtime Protection for AI Workloads
Securing the AI supply chain requires visibility across every stage, especially during runtime when models are processing sensitive data.
Prisma® AIRS™ delivers comprehensive security for the entire AI lifecycle in one unified platform. It allows organizations to deploy traditional apps, as well as AI applications, models and agents, with confidence by reducing risk from misuse, data loss and sophisticated AI-driven threats. Prisma AIRS provides a clear, connected view of assets in multicloud environments, so teams can eliminate silos, accelerate responses, and scale cloud and AI apps securely.
Our Cloud-Native Application Protection Platform (CNAPP) has achieved the FedRAMP High designation, making it the preferred Code to Cloud solution to secure the entire application lifecycle from development to runtime. Our industry-leading CNAPP eliminates silos to deliver comprehensive visibility and best-in-class protection across multicloud environments.
2. Protecting Users and Data at the Edge
Even the most advanced AI defenses are undermined if users accessing applications and data are left vulnerable outside corporate security boundaries. The explosive growth of generative AI tools and the unseen behavior of AI agents are amplifying data exposure risks.
Prisma SASE (secure access service edge) secures all users, apps, devices and data, no matter where they are and no matter where applications reside.
Prisma Access (FedRAMP High Authorized) and Prisma Browser (FedRAMP Moderate Authorized) integrate security capabilities, like zero trust network access (ZTNA), secure web gateway (SWG) and cloud access security broker (CASB), to provide a unified policy framework and a consistent user experience.
This approach helps agencies outpace the speed of AI-driven threats, safeguarding critical data and simplifying operations for a frictionless user experience. It ensures that the human element interacting with the AI is protected by the most stringent security controls available.
Deploy AI Bravely
The GSA OneGov agreement is a pivotal moment that provides federal agencies with the cost-effective, streamlined access they need to deploy AI with confidence. By leveraging our unified, AI-powered platform, government organizations can stop reacting to threats and start building secure-by-design AI environments. We are committed to remaining a key partner in this strategic initiative and helping the government achieve its mission outcomes safely.
For more information and access to promotional offers for new contracts signed on or before January 31, 2028, federal agencies can visit the GSA OneGov website.
Modernizing Vulnerability Sharing for a New Class of Threats
In cybersecurity, vulnerability information sharing frameworks have long assumed that threats exploit flaws in software or systems and can be resolved with patches or configuration updates. AI and machine learning (ML) models upend that premise, as adversarial attacks like poisoning and evasion target the unique way AI models process information. The resulting risks, such as poisoning of training datasets and evasion of deployed models, are not conventional software vulnerabilities and fall outside the scope of traditional cybersecurity taxonomies like the Common Vulnerabilities and Exposures (CVE) Program.
There is a need to bridge the gap between the existing cybersecurity vulnerability sharing structure and burgeoning efforts to catalog security risks to AI systems. Provisions in the White House AI Action Plan, which Palo Alto Networks supports, call for the creation of an AI Information Sharing and Analysis Center (AI-ISAC), reinforcing the importance of addressing that disconnect. This integration is essential, as leveraging the existing, widely adopted cybersecurity infrastructure will be the fastest path to ensuring these new standards are accepted and operationalized.
Established Construct for Vulnerability Management and Disclosure
The global cybersecurity community relies on a mature infrastructure for sharing standardized vulnerability intelligence. Central to this ecosystem is the CVE List, established in 1999 as the authoritative catalog of cybersecurity vulnerabilities. Through CVE IDs and a network of CVE Numbering Authorities (CNAs), this framework enables consistent vulnerability documentation and disclosure.
While this infrastructure has served the cybersecurity community effectively for over two decades, it was designed around traditional threat models that AI systems substantially upend. Attacks on AI systems represent a critical departure from traditional cybersecurity threats as they operate insidiously, subtly corrupting core reasoning processes, causing persistent, systemic failures, some of which only become evident over time. Most traditional cybersecurity tools are not equipped to recognize those breakdowns because they assume deterministic behavior and rules-based logic. AI systems defy those assumptions because AI is probabilistic, not deterministic. Consequently, attacks on AI models may remain hidden for extended periods.
Unlike traditional cybersecurity threats that target code, adversarial AI attacks target the underlying data and algorithms that govern how AI systems learn, reason and make decisions. Consider the following predominant adversarial attack methodologies on machine learning:
Poisoning attacks inject malicious data into training datasets, corrupting the model's learning process and creating deliberate vulnerabilities or degraded performance.
Inference-related attacks exploit model outputs to extract sensitive information or learn about its training data. This includes model inversion, which reconstructs sensitive data from the model's outputs, as well as membership inference, which identifies whether specific data points were used in training.
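A poisoning attack can be shown end to end on a toy model. The example below is a pure-Python illustration with synthetic data: a handful of label-flipped training points drag a nearest-centroid classifier's "benign" centroid toward the malicious region, flipping a prediction without touching any code.

```python
# Tiny illustration of label-flipping data poisoning against a
# nearest-centroid classifier (synthetic 1-D feature values).
def centroid(points):
    return sum(points) / len(points)

def train(data):
    """data: list of (value, label) pairs; model = per-class centroid."""
    by_label = {}
    for x, y in data:
        by_label.setdefault(y, []).append(x)
    return {y: centroid(xs) for y, xs in by_label.items()}

def predict(model, x):
    return min(model, key=lambda y: abs(x - model[y]))

clean = [(1.0, "benign"), (2.0, "benign"),
         (9.0, "malicious"), (10.0, "malicious")]
model = train(clean)
print(predict(model, 7.0))  # classified as malicious

# Poison: attacker injects mislabeled points, shifting the "benign"
# centroid from 1.5 to 5.0 and changing the decision boundary.
poisoned = clean + [(9.0, "benign"), (8.0, "benign")]
model_p = train(poisoned)
print(predict(model_p, 7.0))  # now misclassified as benign
```

The model still trains and runs normally, which is exactly why such corruption evades tools built for deterministic, rules-based failures: the only symptom is a quietly shifted decision boundary.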
The expansion of existing security frameworks and programs is necessary to cover the enumeration, disclosure and downstream management of security risks to AI systems.
Advancing AI Security Through the AI Action Plan
In July, the Administration unveiled the AI Action Plan, an innovation-first framework balancing AI advancement with security imperatives. The Plan prioritizes Secure-by-Design AI technologies and applications, strengthened critical infrastructure cybersecurity and protection of commercial and government AI innovations.
Notably, it recommends establishing an AI Information Sharing and Analysis Center (AI-ISAC) to facilitate threat intelligence sharing across U.S. critical infrastructure sectors and encourages sharing known AI vulnerabilities, “tak[ing] advantage of existing cyber vulnerability sharing mechanisms.” These provisions affirm that AI security underpins American leadership in the field and, where possible, should be built upon existing frameworks.
Redefining Boundaries for AI Threats
To position the CVE Program for the AI-driven future, Palo Alto Networks is engaging directly with industry and program stakeholders to chart the path forward. Traditionally, the CVE Program serves as an ecosystem-wide central warning system, providing a unified source of truth for security risks. AI systems need a comparable security risk catalog and identification system, as they currently fall outside the traditional scope of the CVE Program, which has focused exclusively on vulnerabilities rather than on malicious components. The program’s historical aperture excludes harmful artifacts, such as backdoored AI models or poisoned datasets, which represent fundamentally different attack vectors and in turn create security blind spots.
Securing AI’s Promise
The United States leads in AI innovation and must equally lead in securing it. As momentum builds behind the AI Action Plan and the establishment of the AI-ISAC, we have a critical window to shape information sharing frameworks of the future. The goal is to ensure that cybersecurity and AI security infrastructure advance in unison with the technology itself. Integrating new AI vulnerability standards into trusted frameworks like the CVE Program aligns with industry focus and needs. Through proactive, coordinated action, we can unlock AI’s full promise while safeguarding the models that are embedded in the critical systems on which our nation depends.
The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.
Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurity and cybersecurity for AI.
I was honored to cover these key principles in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services,” convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity. We assessed how existing laws and frameworks apply in the age of AI.
The Defense Advantage Is AI-Powered Security Operations
Attacks have become faster, with the time from compromise to data exfiltration now 100 times faster than four years ago. The financial sector bears disproportionate risk, given the value of its data and interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.
Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42® have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.
To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.
Security operations centers (SOC) have long been overwhelmed by traditional alerts and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.
The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.
AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of this necessity, consider our own security operations. The Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that down to a single actionable incident.
Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:
One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.
These improvements are critical to stopping threat actors. But none of this would be possible without AI.
Securing the New AI Attack Surface
As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.
Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.
Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.
A Secure AI by Design
Even with an understanding of the risks, many organizations struggle with a lack of clarity on what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed Secure AI by Design, a policy roadmap that integrates security throughout the entire AI lifecycle.
A proactive stance ensures security is a feature, not an afterthought, which is crucial for building trust, maintaining compliance and mitigating risks. The approach addresses the four most pressing imperatives organizations face in AI adoption:
1. Secure the use of external AI tools.
2. Secure the underlying AI infrastructure and data.
3. Safely build and deploy AI applications.
4. Monitor and control AI agents.
The Path Forward
For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.
Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.
Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.
Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.
Today, we’re excited to announce expanded service availability for Dedicated Local Zones, giving customers more choice and control without compromise. In addition to the data residency, sovereignty, and data isolation benefits they already enjoy, the expanded service list gives customers additional options for compute, storage, backup, and recovery.
Dedicated Local Zones are AWS infrastructure fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center. They help customers across the public sector and regulated industries meet security and compliance requirements for sensitive data and applications through a private infrastructure solution configured to meet their needs. Dedicated Local Zones can be operated by local AWS personnel and offer the same benefits of AWS Local Zones, such as elasticity, scalability, and pay-as-you-go pricing, with added security and governance features.
Since launch, Dedicated Local Zones have supported a core set of compute, storage, database, containers, and other services and features for local processing. We continue to innovate and expand our offerings based on what we hear from customers to help meet their unique needs.
More choice and control without compromise
The following new services and capabilities deliver greater flexibility for customers to run their most critical workloads while maintaining strict data residency and sovereignty requirements.
New generation instance types
To support complex workloads in AI and high-performance computing, customers can now use newer generation instance types, including Amazon Elastic Compute Cloud (Amazon EC2) generation 7 with accelerated computing capabilities.
AWS storage options
AWS storage options provide two storage classes: Amazon Simple Storage Service (Amazon S3) Express One Zone, which offers high-performance storage for customers’ most frequently accessed data, and Amazon S3 One Zone-Infrequent Access, which is designed for data that is accessed less frequently and is ideal for backups.
Advanced block storage capabilities are delivered through Amazon Elastic Block Store (Amazon EBS) gp3 and io1 volumes, which customers can use to store data within a specific perimeter to support critical data isolation and residency requirements. By using the latest AWS general purpose SSD volumes (gp3), customers can provision performance independently of storage capacity with an up to 20% lower price per gigabyte than existing gp2 volumes. For intensive, latency-sensitive transactional workloads, such as enterprise databases, provisioned IOPS SSD (io1) volumes provide the necessary performance and reliability.
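As an illustration of provisioning gp3 performance independently of capacity, a volume’s size, IOPS, and throughput can each be set explicitly with the AWS CLI. The Availability Zone name and performance values below are hypothetical placeholders; the actual zone name depends on your Dedicated Local Zone configuration.

```shell
# Hypothetical example: create a 500 GiB gp3 volume with IOPS and
# throughput tuned independently of its size.
aws ec2 create-volume \
  --availability-zone us-east-1-xyz-1a \
  --volume-type gp3 \
  --size 500 \
  --iops 6000 \
  --throughput 250
```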
Backup and recovery capabilities
We have added backup and recovery capabilities through Amazon EBS Local Snapshots, which provides robust support for disaster recovery, data migration, and compliance. Customers can create backups within the same geographical boundary as EBS volumes, helping meet data isolation requirements. Customers can also create AWS Identity and Access Management (IAM) policies for their accounts to enable storing snapshots within the Dedicated Local Zone. To automate the creation and retention of local snapshots, customers can use Amazon Data Lifecycle Manager (DLM).
Customers can use local Amazon Machine Images (AMIs) to create and register AMIs while maintaining underlying local EBS snapshots within Dedicated Local Zones, helping achieve adherence to data residency requirements. By creating AMIs from EC2 instances or registering AMIs using locally stored snapshots, customers maintain complete control over their data’s geographical location.
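For example, an AMI can be registered directly from a locally stored snapshot with the AWS CLI. The image name, device name, and snapshot ID below are hypothetical placeholders.

```shell
# Hypothetical example: register an AMI whose root volume is backed by
# an EBS snapshot stored locally in the Dedicated Local Zone.
aws ec2 register-image \
  --name "dlz-base-image" \
  --architecture x86_64 \
  --root-device-name /dev/xvda \
  --block-device-mappings '[{"DeviceName":"/dev/xvda","Ebs":{"SnapshotId":"snap-0123456789abcdef0"}}]'
```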
One of GovTech Singapore’s key focuses is on the nation’s digital government transformation and enhancing the public sector’s engineering capabilities. Our collaboration with GovTech Singapore involved configuring their Dedicated Local Zones with specific services and capabilities to support their workloads and meet stringent regulatory requirements. This architecture addresses data isolation and security requirements and ensures consistency and efficiency across Singapore Government cloud environments.
With the availability of the new AWS services with Dedicated Local Zones, government agencies can simplify operations and meet their digital sovereignty requirements more effectively. For instance, agencies can use Amazon Relational Database Service (Amazon RDS) to create new databases rapidly. Amazon RDS in Dedicated Local Zones helps simplify database management by automating tasks such as provisioning, configuring, backing up, and patching. This collaboration is just one example of how AWS innovates to meet customer needs and configures Dedicated Local Zones based on specific requirements.
Chua Khi Ann, Director of GovTech Singapore’s Government Digital Products division, who oversees the Cloud Programme, shared: “The deployment of Dedicated Local Zones by our Government on Commercial Cloud (GCC) team, in collaboration with AWS, now enables Singapore government agencies to host systems with confidential data in the cloud. By leveraging cloud-native services like advanced storage and compute, we can achieve better availability, resilience, and security of our systems, while reducing operational costs compared to on-premises infrastructure.”
Get started with Dedicated Local Zones
AWS understands that every customer has unique digital sovereignty needs, and we remain committed to offering customers the most advanced set of sovereignty controls and security features available in the cloud. Dedicated Local Zones are designed to be customizable, resilient, and scalable across different regulatory environments, so that customers can drive ongoing innovation while meeting their specific requirements.
Ready to explore how Dedicated Local Zones can support your organization’s digital sovereignty journey? Visit AWS Dedicated Local Zones to learn more.
At Amazon Web Services, we’re committed to deeply understanding the evolving needs of both our customers and regulators, and rapidly adapting and innovating to meet them. The upcoming AWS European Sovereign Cloud will be a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty requirements. The AWS European Sovereign Cloud expands on the same strong foundation of security, privacy, and compliance controls that apply to other AWS Regions around the globe with additional governance, technical, and operational measures to address stringent European customer and regulatory expectations. Sovereignty is the defining feature of the AWS European Sovereign Cloud and we’re using an independently validated framework to meet our customers’ requirements for sovereignty, while delivering the scalability and functionality you expect from the AWS Cloud.
Today, we’re pleased to share further details about the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). This reference framework aligns sovereignty criteria across multiple domains such as governance independence, operational control, data residency and technical isolation. Working backwards from our customers’ sovereign use cases, we aligned controls to each of the criteria and the AWS European Sovereign Cloud is undergoing an independent third-party audit to verify the design and operations of these controls conform to AWS sovereignty commitments. Customers and partners can also leverage the ESC-SRF as a foundation upon which they can build their own complementary sovereignty criteria and controls when using the AWS European Sovereign Cloud.
To clearly explain how the AWS European Sovereign Cloud meets sovereignty expectations, we’re publishing the ESC-SRF in AWS Artifact, including the criteria and control mapping. In AWS Artifact, our self-service audit artifact retrieval portal, you have on-demand access to AWS security and compliance documents and AWS agreements. You can now use the ESC-SRF to define best practices for your own use case, map these to controls, and illustrate how you meet and even exceed the sovereign needs of your customers.
A transparent and validated sovereignty model
The ESC-SRF has been built from customer feedback, regulatory requirements across the European Union (EU), industry frameworks, AWS contractual commitments, and partner input. ESC-SRF is industry and sector agnostic, as it’s written to address fundamental sovereignty needs and expectations at the foundational layer of our cloud offerings with additional sovereignty-specific requirements and controls that apply exclusively to the AWS European Sovereign Cloud. Each criterion is implemented through sovereign controls that will be independently validated by a third-party auditor.
The framework builds on core AWS security capabilities, including encryption, key management, access governance, AWS Nitro System-based isolation, and internationally recognized compliance certifications. The framework adds sovereign-specific governance, technical, and operational measures such as independent EU corporate structures, dedicated EU trust and certificate services, operations by AWS EU-resident personnel, strict residency for customer data and customer created metadata, separation from all other AWS Regions, and incident response operated within the EU.
These controls are the basis of a dedicated AWS European Sovereign Cloud System and Organization Controls (SOC) 2 attestation. The ESC-SRF establishes a solid foundation for sovereignty of the cloud, so that customers can focus on defining sovereignty measures in the cloud that are tailored to their goals, regulatory needs, and risk posture.
How you can use the ESC-SRF
The ESC-SRF describes how AWS implements and validates sovereignty controls in the AWS European Sovereign Cloud. AWS treats each criterion as binding and its implementation will be validated by an independent third-party auditor in 2026. While most customers don’t operate at the size and scale of AWS, you can use the ESC-SRF as both an assurance model and a reference framework you can adapt to your specific use cases.
From an assurance perspective, it provides end-to-end visibility for each sovereignty criterion through to its technical implementation. We will also provide third-party validation in the AWS European Sovereign Cloud SOC 2 report. Customers can use this report with internal auditors, external assessors, supervisory authorities, and regulators. This can reduce the need for ad hoc evidence requests and gives customers evidence to demonstrate clear and enforceable sovereignty assurances.
From a design perspective, you can refer to the framework when shaping your own sovereignty architecture, selecting configurations, and defining internal controls to meet regulatory, contractual, and mission-specific requirements. Because the ESC-SRF is industry and sector agnostic, you can apply its criteria to suit your own unique needs; depending on your sovereign use case, not every criterion may apply. The ESC-SRF can also be used in conjunction with AWS Well-Architected, which can help you learn, measure, and build using architectural best practices. Where appropriate, you can create your own version of the ESC-SRF, map it to controls, and have them tested by a third party. To download the ESC-SRF, visit AWS Artifact (login required).
A strong, clear foundation
The publication of the ESC-SRF is part of our ongoing commitment to delivering on the AWS Digital Sovereignty Pledge through transparency, helping customers meet their evolving sovereignty needs with assurances designed, implemented, and validated entirely within the EU. With the framework, customers can build solutions in the AWS European Sovereign Cloud with confidence and a clear understanding of how they can meet their sovereignty goals using AWS.
For more information about the AWS European Sovereign Cloud, visit aws.eu.
If you have feedback about this post, submit comments in the Comments section below.
Beyond the Malware: Inside the Digital Empire of a North Korean Threat Actor
In this post, Flashpoint reveals how an infostealer infection on a North Korean threat actor’s machine exposed their digital operational security failures and reliance on AI. Leveraging Flashpoint intelligence, we pivot from a single persona to a network of fake identities and companies targeting the Web3 and crypto industry.
Last week, Hudson Rock published a blog on “Trevor Greer,” a persona tied to a North Korean IT Worker. Flashpoint shared additional insights with our clients back in July, and we’re now making those findings public.
Trevor Greer, a North Korean operative, was identified via an infostealer infection on their own machine. Information-stealing malware, also known as infostealers or stealers, is designed to scrape passwords and cookies from unsuspecting victims. Stealers (like LummaC2 or RedLine) are typically used by cybercriminals to steal login credentials from everyday users and sell them on the dark web. It is rare to see them infect the machines of a state-sponsored advanced persistent threat (APT) group.
However, when adversaries unknowingly infect themselves, they can expose valuable insights into the inner workings of their campaigns. Leveraging Flashpoint intelligence sourced from the leaked logs of “Trevor Greer,” our analysts uncovered a myriad of fake identities and companies used by DPRK APTs.
Finding Trevor Greer
Flashpoint analysts have been tracking the Trevor Greer email address since December 2024 in relation to the “Contagious Interview” campaign, in which threat actors operated as LinkedIn recruiters to target Web3 developers, resulting in the deployment of multiple stealers compromising developer Web3 wallets. Flashpoint also identified the specific persona’s involvement in a campaign in which North Korean threat actors posed as IT freelance workers and applied for jobs at legitimate companies before compromising the organizations internally.
ByBit Compromise
The Bybit compromise in late February 2025 further fueled Flashpoint’s investigations into the Trevor Greer email address. Bybit, a cryptocurrency exchange, suffered a critical incident in which North Korean actors stole US $1.5 billion worth of cryptocurrency. In the aftermath, Silent Push researchers identified the persona “Trevor Greer” associated with the email address trevorgreer9312@gmail[.]com, which registered the domain “Bybit-assessment[.]com” prior to the compromise.
A later report claimed that the domain “getstockprice[.]com” was involved in the compromise. Despite these domain discrepancies, both investigations attributed the attack to North Korean advanced persistent threat (APT) nexus groups.
Tracing the Infection
Using Flashpoint’s vast intelligence collections, we performed a full investigation of compromised virtual private servers (VPS), revealing the actor’s potential involvement in several other operations, including remote IT work, several self-made blockchain and cryptocurrency exchange companies, and a potential crypto scam dating back to 2022.
Flashpoint analysts also discovered that the Trevor Greer email address was linked to domains infected with information-stealing malware.
What the Logs Revealed
Analysts extracted information about the associated infected host from Trevor Greer, revealing possible tradecraft and tools used. Analysts further identified specific indicators of compromise (IOCs) used in the campaigns mentioned above, as well as email addresses used by the actor for remote work.
The data painted a vivid picture of how these threat actors operate:
Preparation for “Contagious Interviews”
The browser history revealed the actor logging into Willo, a legitimate video interview platform. This suggests the actor was conducting reconnaissance to clone the site for the “Contagious Interview” campaign, where they lured Web3 developers into fake job interviews to deploy malware.
Reliance on AI Tools
The logs exposed the actor’s reliance on AI to bridge the language gap. The operator frequently accessed ChatGPT and Quillbot, likely using them to write convincing emails, build resumes, and generate code for their malware.
Pivoting: One Node to a Network
By analyzing the “Trevor Greer” logs, we were able to pivot to other personas and campaigns involved in the operation.
Fake Employment: The logs contained credentials for freelance platforms, such as Upwork and Freelancer, associated with other aliases, including “Kenneth Debolt” and “Fabian Klein.” This confirmed the actor was part of a broader scheme to infiltrate Western companies as remote IT workers.
Fake Companies: The data linked the actor to fake corporate entities, such as Block Bounce (blockbounce[.]xyz), a sham crypto trading firm set up to appear legitimate to potential victims.
Developer Personas: The infection data linked the actor to the GitHub account svillalobosdev, which had been active in open source projects to build credibility before the attack.
Legitimate Platforms & Tools: Analysts observed the actor using job boards such as Dice and HRapply[.]com, freelance platforms such as Upwork and Freelancer, and direct applications through company Workday sites. To improve their resume, the actor used resumeworded[.]com or cakeresume[.]com. For correspondence, the threat actor likely relied on a mix of ChatGPT and Quillbot, as found in infected-host logins, to ensure they sounded human. During interviews, analysts determined that they potentially used Speechify.
Deep & Dark Web Resources: The actor also likely purchased Social Security numbers (SSNs) from SSNDOB24[.]com, a site for acquiring Social Security data.
Disrupt Threat Actors Using Flashpoint
The “Trevor Greer” case study illustrates a critical shift in modern threat intelligence. We are no longer limited to analyzing the malware adversaries deploy; sometimes, we can analyze the adversaries themselves.
Using their own tools against them, Flashpoint transformed a faceless state-sponsored entity into a tangible user with bad habits, sloppy OPSEC, and a trail of digital breadcrumbs. Behind every sophisticated APT campaign is a human operator, and sometimes, they click the wrong link too.
Request a demo today to delve deeper into the tactics, techniques, and procedures of advanced persistent threats and learn how Flashpoint’s intelligence strengthens your defenses.
Hackers stole personal information of 6.6m people but outsourcing firm did not shut device targeted for 58 hours
The outsourcing company Capita has been fined £14m for data protection failings after hackers stole the personal information of 6.6 million people, including staff details and those of its clients’ customers.
John Edwards, the UK information commissioner who levied the fine, said the March 2023 data theft from the group and companies it supported, including 325 pension providers, caused anxiety and stress for those affected.
Join us for this one-hour Black Hills Information Security webcast with Joseph, Security Analyst, as he shares with you what he's discovered and learned about the Dark Web, so you never ever ever have to go there for yourself.
This webcast was originally published on September 12, 2024. In this video, Kirsten Gross and James Marrs discuss how logging strategies can affect cyber investigations, specifically focusing on Windows logs. […]
How to Combat Check Fraud: Leveraging Intelligence to Prevent Financial Loss
Criminals increasingly steal checks and sell them on illicit online marketplaces, where check fraud-related services are common. Intelligence is helping the financial sector fight back.
Checks are one of the most vulnerable legacy payment methods. Check fraud can actively affect the bottom lines (and reputations) of banks, financial services organizations, government entities, and many other organizations that utilize checks. According to the Financial Crimes Enforcement Network (FinCEN), fraud—including check fraud—is “the largest source of illicit proceeds in the US” as well as “one of the most significant money laundering threats to the United States.”
Targeting the mail
Criminals target the US mail system to steal a variety of checks. In fact, there is a nationwide surge in check fraud schemes targeting the US mail and shipping system, as threat actors continue to steal, alter, and sell checks through illicit means and channels.
These range from personal checks and tax refund checks to government or government assistance-related checks (e.g., Social Security payments). Business checks are also a primary target because they are often written for larger amounts, and it may take longer for the victim to identify fraudulent activity.
In 2022 alone, US banks filed 680,000 check fraud-related suspicious activity reports (SARs). This represents a nearly two-fold increase from 2021 (which itself represents a 23 percent YoY increase from 2020). This surge in check fraud has been exacerbated by Covid-19 Economic Impact Payments (EIPs) under the CARES Act, which presented threat actors with a new avenue to attempt to commit fraud.
Related Reading
This Is What Covid Fraud Looks Like: Targeting Government Relief Funding
In order to mitigate and ultimately prevent check-fraud-related risks, it’s crucial for financial intelligence and fraud teams to understand what threat actors seek, how they work, and where they operate.
This begins, as we detail below, with intelligence into the communities, forums, and marketplaces where check fraud occurs, as well as the tools that enable deep understanding, timely insights, and measurable action.
Below is an intelligence narrative, in three acts, that tells the story of how transactions involving some of the above examples could play out.
Act I: Obtain
Threat actors are known to remove mail from individuals’ mailboxes and parcel lockers using blue box “arrow” master keys. These arrow keys are often stolen from USPS employees, which has led to numerous incidents of harassment, threats, and even violence. Generally, arrow keys are sold within illicit community chats and/or the deep and dark web, often fetching upwards of $3,000 per key.
In general, when it comes to check fraud, threat actors may sell or seek:
Mailbox keys
Stolen checks
Check alteration services (physical and digital)
Synthetic identity provisioning
Drop account sharing
Counterfeit check creation
Writing a check with insufficient funds behind it
Insider access
A screenshot of Flashpoint’s Ignite platform, showing the results of an OCR-driven search for stolen checks.
Act II: Alter
Check alteration comes in two forms: “washing” and “cooking.”
Washing refers to the process of altering a check by chemically removing ink and replacing the newly empty spaces with a different value, recipient name, or another fraud-enabling alteration.
Cooking involves digitally scanning the check and altering text or values through digital means.
Act III: Monetize
Threat actors will deposit the fraudulent check and rapidly withdraw the funds from an ATM, or sell a stolen or altered check on an illicit marketplace or chat group, and then receive payment, often via cryptocurrency.
Four key elements of actionable check fraud intelligence
Financial institutions should rely on four essential intelligence-led technologies, tools, or capabilities to effectively combat check fraud.
1) Visibility and access to illicit communities and channels
To prevent check fraud, organizations should focus on a few key places. Financially motivated threat actors operate and share information on messaging apps like Telegram and other open-source channels, as well as illicit marketplaces on the deep and dark web. Therefore, it is imperative for financial intelligence and fraud teams to have access to the most relevant check fraud-related threats across the internet.
Keep in mind, however, that accessing these communities is not always straightforward and, if done carelessly, can compromise an investigation.
2) Timeliness and curated alerting
Intelligence is often only as good as it is relevant. Flashpoint enables security and intelligence practitioners to surface the most important, mission-critical intelligence through our real-time alerting capability, which notifies users when keywords and phrases related to their mission appear, such as check fraud-related lingo and activity.
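As a simplified illustration of how keyword alerting works in general (this is not Flashpoint’s implementation, and the watchlist terms are invented for the example), a matcher can compile a list of phrases once and then flag any collected message that contains one of them:

```python
import re

# Illustrative watchlist of check fraud-related phrases (invented examples).
WATCHLIST = ["check washing", "arrow key", "mobile deposit", "drop account"]

def build_matcher(terms):
    """Compile the watchlist into a single case-insensitive pattern."""
    return re.compile("|".join(re.escape(t) for t in terms), re.IGNORECASE)

def alerts(messages, matcher):
    """Return only the messages that mention a watchlist phrase."""
    return [m for m in messages if matcher.search(m["text"])]
```

A real alerting pipeline would add deduplication, source metadata, and analyst routing, but the core mechanism, matching curated terms against a message stream in real time, is the same.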
Essential Reading
The Flashpoint Guide to Card Fraud for the Financial Services Sector
In addition to real-time alerts, analysts can rely on curated alerting and saved searches to track topics of long-term interest. Flashpoint Ignite enables analysts to research particular accounts and their recent activity, and to match transactions to their respective ATM slips and institution addresses. This helps ensure the accuracy of the information found within these communities and marketplaces before raising any alarms, as many scammers post false content.
This approach is particularly valuable as check fraudsters often share crucial information such as preferred methodologies, social media handles, and geolocations that can aid in identifying malicious activities. In addition, by closely observing newly emerging trends, such as the evolution of pandemic relief fraud to refund fraud to check fraud, analysts can proactively develop robust preventative measures to mitigate risks before these tactics become widespread.
3) Actionable OCR and Video Search
To provide “material proof,” cyber threat actors often post an image of a check in a chat application or marketplace in hopes of increasing the likelihood of a successful transaction. Optical Character Recognition (OCR) technology can capture important information about check fraud attempts, since actors often share images of the fraudulent check or subsequent monetization transactions. OCR alerts are customizable with the financial institution’s name and common phrases used on checks to enhance accuracy.
Images of fraudulent checks provide valuable insights into the fraud attempt, including the check’s unique identifier, the account holder’s name, the bank’s name and address, and the endorsement signature. By analyzing these details, financial institutions and law enforcement agencies can identify patterns and leads that can help them track down the perpetrators and prevent future fraudulent activity.
Related Resource
The Risk-Reducing Power of Flashpoint Video Search
Moreover, ATM withdrawal slips can offer critical information about the transaction, such as the location of the ATM, the time of the deposit, and the type of account used. This data is useful when taking appropriate measures to prevent similar attempts and protect customers’ assets. With the help of advanced technologies like Flashpoint’s OCR, institutions can quickly extract and analyze this information to generate real-time alerts and take prompt action to prevent monetary losses.
An essential investigative component, Flashpoint’s industry-first video search technology, like its OCR capability, enables fraud and cyber threat intelligence (CTI) teams to surface logos, text, explicit content, and other critical intelligence to enhance investigations.
Combat check fraud with Flashpoint
Flashpoint delivers the intelligence that enables financial institutions to combat check fraud at scale. With timely, actionable, and accurate intelligence, financial institutions can mitigate and prevent financial loss, protect customer assets, and track down perpetrators. Get a free trial today to learn how:
A financial services customer detected more than $4M in illicitly marketed assets, including checks and compromised accounts, using Flashpoint’s OCR capabilities.
A customer received 125 actionable alerts in a single month, equating to over $15M in potentially averted losses.
An automated alert enabled a customer to identify a threat actor’s specific operations, saving them over $5M.
Raymond Felch // Preface: I began my exploration of reverse-engineering firmware a few weeks back (see “JTAG – Micro-Controller Debugging“), and although I made considerable progress finding and identifying the […]
Raymond Felch // Being an embedded firmware engineer for most of my career, I quickly became fascinated when I learned about reverse engineering firmware using JTAG. I decided to […]