Reading view

Securing the AI Frontier

Why the GSA OneGov Agreement Is a Game-Changer for Federal Cybersecurity

The mission to modernize government IT is accelerating at lightning speed, largely thanks to the transformative power of artificial intelligence (AI). Federal agencies are strategically leveraging AI to boost efficiency, enhance citizen services, and strengthen national security – a vision fully supported by the administration’s AI Action Plan.

At Palo Alto Networks, we are all-in on helping agencies deploy AI bravely and securely. Because the challenge isn't just about using AI for cyberdefense, but also about defending AI itself. We appreciate the U.S. General Services Administration (GSA) recognizing the critical need for scalable, efficient solutions.

That is precisely why the GSA OneGov Initiative is a massive, game-changing step forward. We are proud to be the first pure-play cybersecurity vendor to secure a OneGov agreement with the GSA. This strategic alliance simplifies and standardizes the process for agencies to access our world-class, AI-powered security platform, ensuring security is foundational to this crucial modernization mission.

The Wake-Up Call: The Silent Threat of AI Agent Corruption

If you needed a clear sign that AI has fundamentally shifted the cybersecurity landscape, our own Unit 42 research provides it. The new reality isn't just about hackers using AI in their attacks; it’s also about how internal AI provides another attack surface for threat actors.

The most insidious new threat we've observed is AI Agent Smuggling, where malicious attackers use AI agents to exploit other agents. Our Unit 42 research highlights two major vectors:

  • Indirect Prompt Injection: A security risk in LLMs where a user crafts input containing deceptive instructions to manipulate the model’s behavior, which can lead to unauthorized data access or unintended actions.
  • Agent Session Smuggling: Exploit vulnerabilities in agent-to-agent communication, injecting malicious instructions into a conversation, hiding them among otherwise benign client requests and server responses.

This confirms our core belief as stated in a recent secure AI by Design blog: The AI ecosystem (the models, data and infrastructure) is now a complex, expanding attack surface that traditional perimeter defenses were simply not designed to protect.

As I’ve said before, “If you’re deploying AI, you must deploy AI security.”

Secure AI by Design: A Strategic Alliance with GSA

The GSA’s OneGov Initiative aims to streamline procurement and drive down costs by leveraging the purchasing power of the entire federal government. This is more than an agreement; it’s a direct response to the call for a "secure-by-design" approach to federal AI adoption. This agreement simplifies and standardizes the process for agencies to access our world-class, AI-powered security platform, ensuring that security is foundational, not an afterthought. It provides industry leading AI security tools into the hands of our cyber defenders today.

Under the Hood: Technical Capabilities for the AI Ecosystem

To counter the autonomous threats we’re seeing, we provide a platform that protects the entire AI lifecycle, from the developer's keyboard to the data center.

1. Runtime Protection for AI Workloads

Securing the AI supply chain requires visibility across every stage, especially during runtime when models are processing sensitive data.

  • Prisma® AIRS™ delivers comprehensive security for the entire AI lifecycle, in one unified platform. It allows organizations to deploy traditional apps as well as AI applications, models and agents with confidence by reducing risk from misuse, data loss and sophisticated AI-driven threats. Prisma AIRS provides a clear, connected view of assets in multicloud environments, so teams can eliminate silos, accelerate responses, as well as scale cloud and AI apps securely.
  • Our Cloud-Native Application Protection Platform (CNAPP) has achieved the FedRAMP High designation, making it the preferred Code to Cloud™ solution to secure the entire application lifecycle from development to runtime. Our industry-leading CNAPP eliminates silos to deliver comprehensive visibility and best-in-class protection across multicloud environments.

2. Protecting Users and Data at the Edge

Even the most advanced AI defenses are undermined if users accessing applications and data are left vulnerable outside corporate security boundaries. The explosive growth of generative AI tools and the unseen behavior of AI agents are amplifying data exposure risks.

  • Prisma SASE (secure access service edge) secures all users, apps, devices and data, no matter where they are and no matter where applications reside.
    • Prisma Access (FedRAMP High Authorized) and Prisma Browser™ (FedRAMP-Moderate Authorized) integrate security capabilities, like zero trust network access (ZTNA), secure web gateway (SWG) and cloud access security broker (CASB), to provide a unified policy framework and a consistent user experience.
  • This approach helps agencies outpace the speed of AI-driven threats, safeguarding critical data and simplifying operations for a frictionless user experience. It ensures that the human element interacting with the AI is protected by the most stringent security controls available.

Deploy AI Bravely

The GSA OneGov agreement is a pivotal moment that provides federal agencies with the cost-effective, streamlined access they need to deploy AI with confidence. By leveraging our unified, AI-powered platform, government organizations can stop reacting to threats and start building secure-by-design AI environments. We are committed to remaining a key partner in this strategic initiative and helping the government achieve its mission outcomes safely.

For more information and access to promotional offers for new contracts signed on or before January 31, 2028, federal agencies can visit the GSA OneGov website.

The post Securing the AI Frontier appeared first on Palo Alto Networks Blog.

  •  

Bridging Cybersecurity and AI

Modernizing Vulnerability Sharing for a New Class of Threats

In cybersecurity, vulnerability information sharing frameworks have long assumed that conventional threats exploit flaws in software or systems, and they can be resolved with patches or configuration updates. AI and machine learning (ML) models upend that premise as adversarial attacks, like poisoning and evasion, target the unique way AI models process information. Consequently, the risks for AI systems include tactics like model poisoning (from evasion attacks) in datasets and training, which are not conventional software vulnerabilities. These new vulnerabilities fall outside the scope of traditional cybersecurity taxonomies like the Common Vulnerabilities and Exposures (CVE) Program.

There is a need to bridge the gap between the existing cybersecurity vulnerability sharing structure and burgeoning efforts to catalog security risks to AI systems. Provisions in the White House AI Action Plan, which Palo Alto Networks supports, call for the creation of an AI Information Sharing and Analysis Center (AI-ISAC), reinforcing the importance of addressing that disconnect. This integration is essential, as leveraging the existing, widely adopted cybersecurity infrastructure will be the fastest path to ensuring these new standards are accepted and operationalized.

Established Construct for Vulnerability Management and Disclosure

The global cybersecurity community relies on a mature infrastructure for sharing standardized vulnerability intelligence. Central to this ecosystem is the CVE List, established in 1999 as the authoritative catalog of cybersecurity vulnerabilities. Through CVE IDs and a network of CVE Numbering Authorities (CNAs), this framework enables consistent vulnerability documentation and disclosure.

Similarly, the Common Vulnerability Scoring System (CVSS) provides standardized severity assessments, allowing security teams to prioritize responses. Together with resources like the National Vulnerability Database (NVD) and CISA’s KEV Catalog catalog, these tools form the backbone of global vulnerability management, information sharing and coordinated disclosure.

Why AI Breaks the Traditional Model

While this infrastructure has served the cybersecurity community effectively for over two decades, it was designed around traditional threat models that AI systems substantially upend. Attacks on AI systems represent a critical departure from traditional cybersecurity threats as they operate insidiously, subtly corrupting core reasoning processes, causing persistent, systemic failures, some of which only become evident over time. Most traditional cybersecurity tools are not equipped to recognize those breakdowns because they assume deterministic behavior and rules-based logic. AI systems defy those assumptions because AI is probabilistic, not deterministic. Consequently, attacks on AI models may remain hidden for extended periods.

Unlike traditional cybersecurity threats that target code, adversarial AI attacks target the underlying data and algorithms that govern how AI systems learn, reason and make decisions. Consider the following predominant adversarial attack methodologies on machine learning:

  • Poisoning attacks inject malicious data into training datasets, corrupting the model's learning process and creating deliberate vulnerabilities or degraded performance.
  • Inference-related attacks exploit model outputs to extract sensitive information or learn about its training data. This includes model inversion, which reconstructs sensitive data from the model's outputs, as well as membership inference, which identifies whether specific data points were used in training.

The expansion of existing security frameworks and programs is necessary to cover the enumeration, disclosure and downstream management of security risks to AI systems.

Advancing AI Security Through the AI Action Plan

In July, the Administration unveiled the AI Action Plan, an innovation-first framework balancing AI advancement with security imperatives. The Plan prioritizes Secure-by-Design AI technologies and applications, strengthened critical infrastructure cybersecurity and protection of commercial and government AI innovations.

Notably, it recommends establishing an AI Information Sharing and Analysis Center (AI-ISAC) to facilitate threat intelligence sharing across U.S. critical infrastructure sectors and encourages sharing known AI vulnerabilities, “tak[ing] advantage of existing cyber vulnerability sharing mechanisms.” These provisions affirm that AI security underpins American leadership in the field and, where possible, should be built upon existing frameworks.

Redefining Boundaries for AI Threats

To position the CVE Program for the AI-driven future, Palo Alto Networks is engaging directly with industry and program stakeholders to chart the path forward. Traditionally, the CVE Program serves as an ecosystem-wide central warning system. It provides a unified source of truths for security risks. A security risk catalog and identification system are needed for AI systems, as they currently fall outside the traditional scope of the CVE Program that has focused exclusively on vulnerabilities rather than on malicious components. The historical aperture of the current CVE Program excludes harmful artifacts, such as backdoored AI models or poisoned datasets, which represent fundamentally different attack vectors, in turn creating security blind spots.

Securing AI’s Promise

The United States leads in AI innovation and must equally lead in securing it. As momentum builds behind the AI Action Plan and the establishment of the AI-ISAC, we have a critical window to shape information sharing frameworks of the future. The goal is to ensure that cybersecurity and AI security infrastructure advance in unison with the technology itself. Integrating new AI vulnerability standards into trusted frameworks like the CVE Program aligns with industry focus and needs. Through proactive, coordinated action, we can unlock AI’s full promise while safeguarding the models that are embedded in the critical systems on which our nation depends.

The post Bridging Cybersecurity and AI appeared first on Palo Alto Networks Blog.

  •  

From the Hill: The AI-Cybersecurity Imperative in Financial Services

The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.

Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurity and cybersecurity for AI.

I was honored to cover these key principals in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled “From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services” convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity. We assessed how existing laws and frameworks apply in the age of AI.

The Defense Advantage Is AI-Powered Security Operations

Attacks have become faster, with the time from compromise to data exfiltration now 100 times faster than four years ago. The financial sector bears disproportionate risk, given the value of its data and interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.

Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42® have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.

To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.

Security operations centers (SOC) have long been overwhelmed by traditional alerts and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.

The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.

AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of this necessity, consider our own security operations. Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that down to a single actionable incident.

Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:

  • One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
  • Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
  • A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.

These improvements are critical to stopping threat actors. But none of this would be possible without AI.

Securing the New AI Attack Surface

As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.

Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.

Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.

A Secure AI by Design

Even with an understanding of the risks, many organizations struggle with the lack of clarity on what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed a Secure AI by Design policy roadmap that provides organizations with a comprehensive roadmap that integrates security throughout the entire AI lifecycle.

A proactive stance ensures security is a feature, not an afterthought, crucial for building trust, maintaining compliance and mitigating risks. The approach addresses four imperatives organizations most pressingly face in AI adoption:

1. Secure the use of external AI tools.

2. Secure the underlying AI infrastructure and data.

3. Safely build and deploy AI applications.

4. Monitor and control AI agents.

The Path Forward

For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.

Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.

Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.

Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.

The post From the Hill: The AI-Cybersecurity Imperative in Financial Services appeared first on Palo Alto Networks Blog.

  •  

Meet digital sovereignty needs with AWS Dedicated Local Zones expanded services

At Amazon Web Services (AWS), we continue to invest in and deliver digital sovereignty solutions to help customers meet their most sensitive workload requirements. To address the regulatory and digital sovereignty needs of public sector and regulated industry customers, we launched AWS Dedicated Local Zones in 2023, with the Government Technology Agency of Singapore (GovTech Singapore) as our first customer.

Today, we’re excited to announce expanded service availability for Dedicated Local Zones, giving customers more choice and control without compromise. In addition to the data residency, sovereignty, and data isolation benefits they already enjoy, the expanded service list gives customers additional options for compute, storage, backup, and recovery.

Dedicated Local Zones are AWS infrastructure fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center. They help customers across the public sector and regulated industries meet security and compliance requirements for sensitive data and applications through a private infrastructure solution configured to meet their needs. Dedicated Local Zones can be operated by local AWS personnel and offer the same benefits of AWS Local Zones, such as elasticity, scalability, and pay-as-you-go pricing, with added security and governance features.

Since being launched, Dedicated Local Zones have supported a core set of compute, storage, database, containers, and other services and features for local processing. We continue to innovate and expand our offerings based on what we hear from customers to help meet their unique needs.

More choice and control without compromise

The following new services and capabilities deliver greater flexibility for customers to run their most critical workloads while maintaining strict data residency and sovereignty requirements.

New generation instance types

To support complex workloads in AI and high-performance computing, customers can now use newer generation instance types, including Amazon Elastic Compute Cloud (Amazon EC2) generation 7 with accelerated computing capabilities.

AWS storage options

AWS storage options provide two storage classes including Amazon Simple Storage Service (Amazon S3) Express One Zone, which offers high-performance storage for customers’ most frequently accessed data, and Amazon S3 One Zone-Infrequent Access, which is designed for data that is accessed less frequently and is ideal for backups.

Advanced block storage capabilities are delivered through Amazon Elastic Block Store (Amazon EBS) gp3 and io1 volumes, which customers can use to store data within a specific perimeter to support critical data isolation and residency requirements. By using the latest AWS general purpose SSD volumes (gp3), customers can provision performance independently of storage capacity with an up to 20% lower price per gigabyte than existing gp2 volumes. For intensive, latency-sensitive transactional workloads, such as enterprise databases, provisioned IOPS SSD (io1) volumes provide the necessary performance and reliability.

Backup and recovery capabilities

We have added backup and recovery capabilities through Amazon EBS Local Snapshots, which provides robust support for disaster recovery, data migration, and compliance. Customers can create backups within the same geographical boundary as EBS volumes, helping meet data isolation requirements. Customers can also create AWS Identity and Access Management (IAM) policies for their accounts to enable storing snapshots within the Dedicated Local Zone. To automate the creation and retention of local snapshots, customers can use Amazon Data Lifecycle Manager (DLM).

Customers can use local Amazon Machine Images (AMIs) to create and register AMIs while maintaining underlying local EBS snapshots within Dedicated Local Zones, helping achieve adherence to data residency requirements. By creating AMIs from EC2 instances or registering AMIs using locally stored snapshots, customers maintain complete control over their data’s geographical location.

Dedicated Local Zones meet the same high AWS security standards and sovereign-by-design principles that apply to AWS Regions and Local Zones. For instance, the AWS Nitro System provides the foundation with hardware- and software-level security. This is complemented by AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM) for encryption management, Amazon Inspector, Amazon GuardDuty, and AWS Shield to help protect workloads, and AWS CloudTrail for audit logging of user and API activity across AWS accounts.

Continued innovation with GovTech Singapore

One of GovTech Singapore’s key focuses is on the nation’s digital government transformation and enhancing the public sector’s engineering capabilities. Our collaboration with GovTech Singapore involved configuring their Dedicated Local Zones with specific services and capabilities to support their workloads and meet stringent regulatory requirements. This architecture addresses data isolation and security requirements and ensures consistency and efficiency across Singapore Government cloud environments.

With the availability of the new AWS services with Dedicated Local Zones, government agencies can simplify operations and meet their digital sovereignty requirements more effectively. For instance, agencies can use Amazon Relational Database Service (Amazon RDS) to create new databases rapidly. Amazon RDS in Dedicated Local Zones helps simplify database management by automating tasks such as provisioning, configuring, backing up, and patching. This collaboration is just one example of how AWS innovates to meet customer needs and configures Dedicated Local Zones based on specific requirements.

Chua Khi Ann, Director of GovTech Singapore’s Government Digital Products division, who oversees the Cloud Programme, shared:
“The deployment of Dedicated Local Zones by our Government on Commercial Cloud (GCC) team, in collaboration with AWS, now enables Singapore government agencies to host systems with confidential data in the cloud. By leveraging cloud-native services like advanced storage and compute, we can achieve better availability, resilience, and security of our systems, while reducing operational costs compared to on-premises infrastructure.”

Get started with Dedicated Local Zones

AWS understands that every customer has unique digital sovereignty needs, and we remain committed to offering customers the most advanced set of sovereignty controls and security features available in the cloud. Dedicated Local Zones are designed to be customizable, resilient, and scalable across different regulatory environments, so that customers can drive ongoing innovation while meeting their specific requirements.

Ready to explore how Dedicated Local Zones can support your organization’s digital sovereignty journey? Visit AWS Dedicated Local Zones to learn more.

TAGS: AWS Digital Sovereignty Pledge, Digital Sovereignty, Security Blog, Sovereign-by-design, Public Sector, Singapore, AWS Dedicated Local Zones

Max Peterson Max Peterson
Max is the Vice President of AWS Sovereign Cloud. He leads efforts to help public sector organizations modernize their missions with the cloud while meeting necessary digital sovereignty requirements. Max previously oversaw broader digital sovereignty efforts at AWS and served as the VP of AWS Worldwide Public Sector with a focus on empowering government, education, healthcare, and nonprofit organizations to drive rapid innovation.
Stéphane Israël Stéphane Israël
Stéphane is the Managing Director of the AWS European Sovereign Cloud and Digital Sovereignty. He is responsible for the management and operations of the AWS European Sovereign Cloud GmbH, including infrastructure, technology, and services, and leads broader worldwide digital sovereignty efforts at AWS. Prior to AWS, he was the CEO of Arianespace, where he oversaw numerous successful space missions, including the launch of the James Webb Space Telescope.
  •  

Exploring the new AWS European Sovereign Cloud: Sovereign Reference Framework

At Amazon Web Services, we’re committed to deeply understanding the evolving needs of both our customers and regulators, and rapidly adapting and innovating to meet them. The upcoming AWS European Sovereign Cloud will be a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty requirements. The AWS European Sovereign Cloud expands on the same strong foundation of security, privacy, and compliance controls that apply to other AWS Regions around the globe with additional governance, technical, and operational measures to address stringent European customer and regulatory expectations. Sovereignty is the defining feature of the AWS European Sovereign Cloud and we’re using an independently validated framework to meet our customers’ requirements for sovereignty, while delivering the scalability and functionality you expect from the AWS Cloud.

Today, we’re pleased to share further details about the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). This reference framework aligns sovereignty criteria across multiple domains such as governance independence, operational control, data residency and technical isolation. Working backwards from our customers’ sovereign use cases, we aligned controls to each of the criteria and the AWS European Sovereign Cloud is undergoing an independent third-party audit to verify the design and operations of these controls conform to AWS sovereignty commitments. Customers and partners can also leverage the ESC-SRF as a foundation upon which they can build their own complementary sovereignty criteria and controls when using the AWS European Sovereign Cloud.

To clearly explain how the AWS European Sovereign Cloud meets sovereignty expectations, we’re publishing the ESC-SRF in AWS Artifact including the criteria and control mapping. In AWS Artifact, our self-service audit artifact retrieval portal, you have on-demand access to AWS security and compliance documents and AWS agreements. You can now use the ESC-SRF to define best practices for your own use case, map these to controls, and illustrate how you meet and even exceed sovereign needs of your customers.

A transparent and validated sovereignty model

The ESC-SRF has been built from customer feedback, regulatory requirements across the European Union (EU), industry frameworks, AWS contractual commitments, and partner input. ESC-SRF is industry and sector agnostic, as it’s written to address fundamental sovereignty needs and expectations at the foundational layer of our cloud offerings with additional sovereignty-specific requirements and controls that apply exclusively to the AWS European Sovereign Cloud. Each criterion is implemented through sovereign controls that will be independently validated by a third-party auditor.

The framework builds on core AWS security capabilities, including encryption, key management, access governance, AWS Nitro System-based isolation, and internationally recognized compliance certifications. The framework adds sovereign-specific governance, technical, and operational measures such as independent EU corporate structures, dedicated EU trust and certificate services, operations by AWS EU-resident personnel, strict residency for customer data and customer created metadata, separation from all other AWS Regions, and incident response operated within the EU.

These controls are the basis of a dedicated AWS European Sovereign Cloud System and Organization Controls (SOC) 2 attestation. The ESC-SRF establishes a solid foundation for sovereignty of the cloud, so that customers can focus on defining sovereignty measures in the cloud that are tailored to their goals, regulatory needs, and risk posture.

How you can use the ESC-SRF

The ESC-SRF describes how AWS implements and validates sovereignty controls in the AWS European Sovereign Cloud. AWS treats each criterion as binding and its implementation will be validated by an independent third-party auditor in 2026. While most customers don’t operate at the size and scale of AWS, you can use the ESC-SRF as both an assurance model and a reference framework you can adapt to your specific use cases.

From an assurance perspective, it provides end-to-end visibility for each sovereignty criterion through to its technical implementation. We will also provide third-party validation in the AWS European Sovereign Cloud SOC 2 report. Customers can use this report with internal auditors, external assessors, supervisory authorities, and regulators. This can reduce the need for ad-hoc evidence requests and supports customers by providing them with evidence to demonstrate clear and enforceable sovereignty assurances.

From a design perspective, you can refer to the framework when shaping your own sovereignty architecture, selecting configurations, and defining internal controls to meet regulatory, contractual, and mission-specific requirements. Because the ESC-SRF is industry and sector agnostic, you can apply criteria from the framework to suit your own unique needs. Depending on your sovereign use case, not all criteria may apply to your use case sovereign needs. The ESC-SRF can also be used in conjunction with AWS Well-Architected which can help you learn, measure, and build using architectural best practices. Where appropriate you can create your version of the ESC-SRF, map to controls, and have them tested by a third party. To download the ESC-SRF, visit AWS Artifact (login required).

A strong, clear foundation

The publication of the ESC-SRF is part of our ongoing commitment to delivering on the AWS Digital Sovereignty Pledge through transparency and assurances to help customers meet their evolving sovereignty needs with assurances designed, implemented, and validated entirely within the EU. Within the framework, customers can build solutions in the AWS European Sovereign Cloud with confidence and a strong understanding of how they are able to meet their sovereignty goals using AWS.

For more information about the AWS European Sovereign Cloud, visit aws.eu.


If you have feedback about this post, submit comments in the Comments section below.

Andreas Terwellen

Andreas Terwellen

Andreas is a Senior Manager in security audit assurance at AWS, based in Frankfurt, Germany. His team is responsible for third-party and customer audits, attestations, certifications, and assessments across Europe. Previously, he was a CISO in a DAX-listed telecommunications company in Germany. He also worked for various consulting companies managing large teams and programs across multiple industries and sectors.

  •  
❌