AWS European Sovereign Cloud achieves first compliance milestone: SOC 2 and C5 reports plus seven ISO certifications

10 March 2026 at 21:06

In January 2026, we announced the general availability of the AWS European Sovereign Cloud, a new, independent cloud for Europe entirely located within the European Union (EU), and physically and logically separate from all other AWS Regions. The unique approach of the AWS European Sovereign Cloud provides the only fully featured, independently operated sovereign cloud backed by strong technical controls, sovereign assurances, and legal protections designed to meet the sensitive data needs of European governments and enterprises.

One of the foundational ways the AWS European Sovereign Cloud enables verifiable trust in its technical controls and delivers assurance is through our compliance programs and assurance frameworks. These programs help customers understand the robust controls in place at AWS European Sovereign Cloud to maintain security and compliance of the cloud. To meet the needs of our customers, we committed that the AWS European Sovereign Cloud will maintain key certifications such as ISO/IEC 27001:2022, System and Organization Controls (SOC) reports, and Cloud Computing Compliance Criteria Catalogue (C5) attestation, all validated regularly by independent auditors to verify that our controls are designed appropriately and operate effectively, and to help customers satisfy their compliance obligations.

Today, AWS European Sovereign Cloud is pleased to announce that SOC 2 and C5 Type 1 attestation reports, along with seven key ISO certifications (ISO 27001:2022, 27017:2015, 27018:2019, 27701:2019, 22301:2019, 20000-1:2018, and 9001:2015), are now available. These attestation reports and certifications cover 69 AWS services operating within the AWS European Sovereign Cloud, and this achievement marks a pivotal first step in our journey to establish the AWS European Sovereign Cloud as a trusted and compliant cloud for European organizations. By securing these foundational certifications and attestation reports early in our implementation, we are demonstrating our commitment to earning customer trust.

AWS European Sovereign Cloud customers in Germany and across Europe can now run their applications with enhanced assurance and confidence that our infrastructure aligns with internationally recognized security standards and the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). These certifications and attestation reports provide independent validation of our security controls and operational practices, demonstrating our commitment to meeting the heightened expectations towards cloud service providers. Beyond compliance, these certifications and reports help customers meet regulatory requirements and innovate with confidence.

SOC 2 Type 1 report

SOC reports are independent third-party examinations that show how AWS European Sovereign Cloud meets compliance controls and sovereignty objectives. The AWS European Sovereign Cloud SOC 2 report addresses three critical AICPA Trust Services Criteria: Security, Availability, and Confidentiality, and includes internal controls mapped to the ESC-SRF. The ESC-SRF establishes sovereignty criteria across key domains including governance independence, operational control, data residency, and technical isolation. As part of the SOC 2 Type 1 attestation, independent third-party auditors have validated the suitability of the design and implementation of our controls, addressing measures such as independent European Union (EU) corporate structures, operation by EU-resident AWS personnel, strict residency requirements for Customer Content and Customer-Created Metadata, and separation from all other AWS Regions. The ESC-SRF controls in our SOC 2 report show customers how AWS delivers on its sovereignty commitments.

C5 Type 1 report

C5 is a government-backed attestation scheme introduced by the German Federal Office for Information Security (BSI) and represents one of the most comprehensive cloud security standards in Europe. The AWS European Sovereign Cloud C5 Type 1 report provides customers with independent third-party attestation on the suitability of the design and implementation of our controls to meet both C5 basic criteria and C5 additional criteria.

The basic criteria establish fundamental security requirements for cloud service providers, covering areas such as organization of information security, human resources security, asset management, access control, cryptography, physical security, operations security, communications security, system acquisition and development, supplier relationships, incident management, business continuity, and compliance. The additional criteria address enhanced requirements for handling sensitive data and critical applications, making this attestation particularly valuable for AWS European Sovereign Cloud customers with stringent data security and sovereignty requirements.

Key ISO certifications

AWS European Sovereign Cloud has achieved seven key ISO certifications that collectively demonstrate comprehensive operational excellence:

  • ISO/IEC 27001:2022 – information security management
  • ISO/IEC 27017:2015 – cloud-specific information security controls
  • ISO/IEC 27018:2019 – protection of personal data in the cloud
  • ISO/IEC 27701:2019 – privacy information management
  • ISO 22301:2019 – business continuity management
  • ISO/IEC 20000-1:2018 – IT service management
  • ISO 9001:2015 – quality management

These certifications confirm that AWS European Sovereign Cloud has integrated rigorous security, privacy, continuity, service delivery, and quality programs into a comprehensive framework, helping to ensure sensitive information remains secure, services remain available, and operations meet the highest standards through systematic risk management processes and continuous improvement practices.

How to access the reports

To access SOC 2, C5 reports and ISO certifications, customers should sign in to their AWS European Sovereign Cloud account and navigate to AWS Artifact in the AWS Management Console. AWS Artifact is a self-service portal that provides on-demand access to AWS compliance reports and certifications.

We recognize that compliance is not a destination but a continuous journey, and these initial SOC 2, C5 reports and ISO certifications represent the beginning of our certification portfolio. They lay the essential groundwork upon which we will continue to build to meet AWS European Sovereign Cloud customers’ compliance needs as they continue to evolve. As we expand our compliance coverage in the months ahead, customers can be confident that security, transparency, and regulatory alignment have been part of the very DNA of the AWS European Sovereign Cloud design from day one. To learn more about our compliance and security programs, visit AWS European Sovereign Cloud Compliance, or reach out to your AWS European Sovereign Cloud account team.

Security and compliance is a shared responsibility between AWS European Sovereign Cloud and the customer. For more information, see the AWS Shared Security Responsibility Model.

If you have feedback about this post, submit comments in the Comments section below.

Julian Herlinghaus

Julian is a Manager in AWS Compliance & Security Assurance based in Berlin, Germany. He is the third-party audit program lead for EMEA and has worked on compliance and assurance for the AWS European Sovereign Cloud. He previously worked as an information security department lead of an accredited certification body and has multiple years of experience in information security and security assurance and compliance.

Tea Jioshvili

Tea is a Manager in AWS Compliance & Security Assurance based in Berlin, Germany. She leads various third-party audit programs across Europe. She previously worked in security assurance and compliance, business continuity, and operational risk management in the financial industry for 20 years.

Atulsing Patil
Atulsing is a Compliance Program Manager at AWS. He has 29 years of consulting experience in information technology and information security management. Atulsing holds a Master of Science in Electronics degree and professional certifications such as CCSP, CISSP, CISM, ISO 42001 Lead Auditor, ISO 27001 Lead Auditor, HITRUST CSF, Archer Certified Consultant, and AWS CCP.

Security is a team sport: AWS at RSAC 2026 Conference

10 March 2026 at 19:31

The RSAC 2026 Conference brings together thousands of professionals, practitioners, vendors, and associations to discuss issues covering the entire spectrum of cybersecurity—a place where innovation meets collaboration and the industry’s brightest minds converge to shape its future. This March, Amazon Web Services (AWS) returns to the annual RSAC Conference in San Francisco to share how unifying security and data empowers teams to protect AI-driven workloads while maximizing existing security investments.

Experience innovation at the AWS booth

Visit us at booth S-0466 in South Expo to experience three interactive demo kiosks:

  • The AWS Security Solutions kiosk features live demonstrations of AWS security services, including new launches that showcase the latest cloud security innovations and how they work with partner solutions to provide comprehensive protection for your organization. Meet with AWS Security Specialists to discuss your specific security challenges.
  • The AWS Security Partners kiosk features live demos from more than 20 AWS Partners, showing how these partners integrate seamlessly with AWS to address your most critical security challenges.
  • The Humanoid Security Guardian kiosk offers an interactive AI-powered experience that generates customized well-architected framework guides, delivered through QR code for implementation reference.

Partner Passport program: Stop by the AWS booth to pick up your playbook to start exploring integrated AWS Partner security solutions across the show floor. Visit participating partner booths throughout the conference to learn about joint solutions that combine AWS infrastructure with partner innovations. After you’ve received all partner booth visit stamps, you’ll receive AWS swag and entry into a daily raffle to win an exclusive prize.

Beyond the booth: Deep dive sessions and hands-on workshops

AWS security experts will be sharing insights across four sessions throughout RSAC 2026 Conference. These sessions cover the most pressing challenges in AI security, from privacy-by-design principles to preparing for AI-native incidents. Don’t miss learning directly from AWS experts in these sessions.

Privacy by Design in the AI Era | Reserve a seat
Monday, March 23, 2026 | 8:30 AM–9:20 AM PDT
Attendees will learn how to design AI systems with privacy embedded from the start. This session will cover data minimization strategies, architectural patterns for consent-aware decision-making, and practical approaches for building privacy-respecting AI in dynamic environments. Speakers: Juan David Alvares Builes, Senior Security Consultant, Amazon Web Services and Zully Romero, Security and Solutions Architect, Bancolombia.

Trusted Identity Propagation for Autonomous Agents Across Cloud & SaaS | Reserve a seat
Monday, March 23, 2026 | 9:40 AM–10:30 AM PDT
This session will explore trusted identity propagation for autonomous agents across cloud, SaaS, and multi-domain environments. Compare AWS, Azure, Apple, and Cloudflare approaches, focusing on identity continuity, credential management, and privacy-aware designs for secure, agent-driven enterprise systems. Speakers: Swara Gandhi, Senior Solutions Architect, Amazon Web Services and Vijeth Lomada, Lead AI Engineer, Adobe.

How to Secure Containerized Applications from Supply Chain Attacks | Reserve a seat
Monday, March 23, 2026 | 1:10 PM–2:00 PM PDT
Software supply chain attacks target development pipelines to inject malicious code into container images and dependencies. This session demonstrates how to secure containerized applications through automated scanning, Software Bill of Materials (SBOM) generation, and image signing. Learn to implement security controls in CI/CD pipelines using open-source and commercial solutions. Speakers: Patrick Palmer, Principal Security Solutions Architect, Amazon Web Services and Monika Vu Minh, Quantitative Technologist, Qube Research & Technologies.

From Prompt to Pager: Preparing for AI-Native Incidents Now | Reserve a seat
Wednesday, March 25, 2026 | 1:15 PM–2:05 PM PDT
AI incidents start as prompts and end as actions such as code edits, SQL writes, and workflow changes, yet most playbooks are not ready. This talk will explain why AI incidents differ, show where classic guardrails miss, and share field-tested steps to prepare now: log model-generated actions, add pre/post-conditions, capture provenance, limit blast radius, and rehearse one AI-native scenario. Speaker: Aviral Srivastava, Security Engineer, Amazon.

AWS activities and events

AWS will host events at Cloud Village, an interactive community space where security practitioners explore offensive and defensive cloud security through hands-on activities, technical talks, and collaborative discussions. AWS is hosting two technical workshops that provide hands-on practical skills security teams can implement immediately. AWS has also crafted multiple capture the flag (CTF) community challenges at both RSAC 2026 Conference and BSidesSF that advance the broader security community’s capabilities, built by the same team behind the AWS Vulnerability Disclosure Program, where researchers can responsibly report security concerns directly to AWS. Cloud Village will be located in Moscone South, Level 2, Room 204 and is open to All Access Pass and Expo Plus Pass holders.

Finally, you can also join us at a customer soiree AWS is co-hosting with CrowdStrike on Wednesday, March 25 at The Mint, for an evening of discovery, where artists, thinkers, and leaders gather to challenge convention, shape the future, and have some fun. Register to join us.

If you’re looking for opportunities for meaningful connections across the security community, AWS is hosting several events throughout the week.

Join us in San Francisco

Whether you’re exploring how to secure AI workloads, seeking to unify security across distributed environments, or looking to optimize your security data strategy, the AWS team at RSAC 2026 Conference is ready to collaborate. Visit booth S-0466 in South Expo, attend our technical workshops at the Cloud Village, or join AWS-led sessions. You can also schedule time to meet with AWS experts for more in-depth discussions. Together, we’ll demonstrate that when it comes to cybersecurity, we’re all on the same team.

Learn more about AWS Security solutions at aws.amazon.com/security.
See you in San Francisco, March 23–26, 2026.

Idaliz Seymour
Idaliz is a Product Marketing Manager at AWS Security, specializing in helping organizations understand the value of network and application protection in the cloud. In her free time, you’ll find her reading or boxing.

AWS Security Hub is expanding to unify security operations across multicloud environments

10 March 2026 at 15:51

After talking with many customers, one thing is clear: the security challenge has not gotten easier. Enterprises today operate across a complex mix of environments, including on-premises infrastructure, private data centers, and multiple clouds, often with tools that were never designed to work together. The result is enterprise security teams spend more time managing tools than managing risk, making it harder to stay ahead of threats across an increasingly complex environment.

At Amazon Web Services (AWS), we believe security should be simple, integrated, and built for the way enterprises actually operate. This belief is what drove us to reimagine AWS Security Hub, delivering full-stack security through a single experience, and this vision is driving our next chapter.

Building on a foundation of unified security

We transformed Security Hub into a unified security operations solution by bringing together AWS security services, including Amazon GuardDuty, Amazon Inspector, AWS Security Hub Cloud Security Posture Management (Security Hub CSPM), and Amazon Macie, into a single experience that automatically and continuously analyzes security signals across threats, vulnerabilities, misconfigurations, and sensitive data. Security Hub delivers a common foundation, bringing together findings from across your AWS environment so your security team spends less time translating signals and more time acting on them. Built on top of that foundation, a unified operations layer gives security teams near real-time risk analytics, automated analysis, and prioritized insights, helping them focus on what matters most, at scale.

We also introduced new capabilities (the Extended plan) that simplify how enterprises procure, deploy, and integrate a full-stack security solution across endpoint, identity, email, network, data, browser, cloud, AI, and security operations. Now, customers can use Security Hub to expand their security portfolio through a curated selection of AWS Partner solutions (at launch: 7AI, Britive, CrowdStrike, Cyera, Island, Noma, Okta, Oligo, Opti, Proofpoint, SailPoint, Splunk (a Cisco company), Upwind, and Zscaler), all through one unified experience. With AWS as the seller of record, you benefit from pay-as-you-go pricing, a single bill, and no long-term commitments. Our goal is simple: unified security, everywhere your enterprise operates.

Freedom to innovate, wherever your workloads are

At AWS, interoperability means giving customers the freedom to choose solutions that best suit their needs, and the ability to use them wherever their workloads run. But freedom to innovate across multicloud environments also means that it is critical to secure them consistently, and without adding operational complexity.

What’s coming for Security Hub

In the coming months, we are expanding Security Hub with new multicloud capabilities that extend unified security operations beyond AWS. The foundation of this expansion is a common data layer that unifies security signals from wherever your workloads run. On top of that, a unified policy and operations layer delivers consistent posture management, exposure analysis, and risk prioritization, so your security team operates from a single view of risk rather than a fragmented collection of consoles.

Security Hub will deliver unified risk analytics that surface critical risks across your multicloud estate. You’ll be able to manage cloud security posture with Security Hub CSPM checks that give you consistent posture visibility, and extend vulnerability management with expanded Amazon Inspector capabilities, including virtual machine scanning, container image scanning, and serverless scanning. Security Hub will also deliver external network scanning that enriches security findings with context about internet-facing exposure across your multicloud environment, including for resources not running in AWS.

The result is more comprehensive risk coverage across your enterprise. It’s about giving your security team a single, unified experience to detect and respond to risks, wherever you operate.

Security as a business enabler

The security leaders I speak with aren’t just asking for better tools. They’re asking for a way to get ahead of risk, not just manage it. They want security that keeps pace with the business, not security that slows it down.

That’s the vision behind AWS Security Hub: unified security through a single, integrated security operations experience, built on a common data foundation, powered by intelligent analytics, and delivered through a consistent operations layer, to help reduce security risk, improve team productivity, and strengthen security operations across AWS and beyond.

Our multicloud expansion is underway, and we are just getting started.

You can learn more at aws.amazon.com/security-hub, or visit us at the AWS booth (S-0466) at RSA Conference, March 23–26 in San Francisco.

Gee Rittenhouse
Gee is the Vice President of Security Services at AWS, overseeing key services including Security Hub, GuardDuty, and Inspector. He holds a PhD from MIT and brings extensive leadership experience across enterprise security and cloud. He previously served as CEO of Skyhigh Security and Senior Vice President and General Manager of Cisco’s Security Business Group, where he was responsible for Cisco’s worldwide cybersecurity business.

AWS completes the 2026 annual Dubai Electronic Security Centre (DESC) certification audit

5 March 2026 at 18:46

We’re excited to announce that Amazon Web Services (AWS) has completed the annual Dubai Electronic Security Centre (DESC) certification audit to operate as a Tier 1 Cloud Service Provider (CSP) for the AWS Middle East (UAE) Region.

This alignment with DESC requirements demonstrates our continued commitment to adhere to the heightened expectations for CSPs. Government customers can run their applications in the DESC-certified AWS Middle East (UAE) Region with confidence.

AWS compliance with the DESC framework requirements was validated by an independent third-party auditor (BSI) prior to the issuance of a renewed certificate by DESC. The updated DESC CSP certificate is available through AWS Artifact and is valid for one year, through January 22, 2027. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

The renewed certification adds 10 services to the scope, bringing the total to 108 services.

This is a 10% increase in the number of services in the Middle East (UAE) Region that are in scope of the DESC CSP certification.

AWS strives to continuously bring services into the scope of its compliance programs to help you meet your architectural and regulatory needs. You can view the current list of services in scope on our Services in Scope page. You can also reach out to your AWS account team if you have any questions or feedback about DESC compliance.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Tariro Dongo
Tari is a Security Assurance Program Manager at AWS, based in London. Tari is responsible for third-party and customer audits, attestations, certifications, and assessments across EMEA. Previously, Tari worked in security assurance and technology risk in the big four and financial services industry over the last 15 years.

2025 ISO and CSA STAR certificates are now available with one additional service and one new region

5 March 2026 at 01:18

Amazon Web Services (AWS) successfully completed the annual recertification audit with no findings for ISO 9001:2015, 27001:2022, 27017:2015, 27018:2019, 27701:2019, 20000-1:2018, 22301:2019, and Cloud Security Alliance (CSA) STAR Cloud Controls Matrix (CCM) v4.0. The objective of the audit was to expand the AWS ISO and CSA STAR certifications by adding one new AWS Region and one new AWS service to the scope. The ISO standards cover areas including quality management, information security, cloud security, privacy protection, service management, and business continuity. The certifications demonstrate the commitment of AWS to maintaining robust security controls and protecting customer data across our services.

As part of this recertification audit, one new Region, Asia Pacific (Taipei), and one new service, AWS Deadline Cloud, were added to the scope since the last certificate was issued on November 25, 2025.

For a full list of AWS services that are certified under ISO and CSA STAR, see the AWS ISO and CSA STAR Certified page. Customers can also access the certifications in the AWS Management Console through AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.

Chinmaee Parulekar

Chinmaee is a Compliance Program Manager at AWS. She has 6 years of experience in information security. Chinmaee holds a Master of Science degree in Management Information Systems and professional certifications such as CISA and HITRUST CCSFP.

Atulsing Patil
Atulsing is a Compliance Program Manager at AWS. He has 27 years of consulting experience in information technology and information security management. Atulsing holds a Master of Science in Electronics degree and professional certifications such as CCSP, CISSP, CISM, CDPSE, ISO 27001 Lead Auditor, HITRUST CSF, ISO 42001 Lead Auditor, Archer Certified Consultant, and AWS CCP.

Enhanced access denied error messages with policy ARNs

4 March 2026 at 18:19

To help you troubleshoot access denied errors, we recently added the Amazon Resource Name (ARN) of the denying policy to access denied error messages. This builds on our 2021 enhancement that added the type of the policy denying the access to access denied error messages. The ARN of the denying policy is only provided in same-account and same-organization scenarios. This change is gradually rolling out across all AWS services in all AWS Regions.

What changed?

We added the policy ARN to access denied error messages for AWS Identity and Access Management (IAM) and AWS Organizations policies. Because of this change, you can now pinpoint the exact policy causing the denial. You don’t have to evaluate all the policies of the same type in your AWS environment to identify the culprit. The policy types covered in this update are service control policies (SCPs), resource control policies (RCPs), permissions boundary policies, session policies, and identity-based policies.

For example, when a developer attempts to perform the ListRoles action in IAM and is denied because of an SCP:

Before:
An error occurred (AccessDenied) when calling the ListRoles operation: User: arn:aws:iam::123456789012:user/Matt is not authorized to perform: iam:ListRoles on resource: arn:aws:iam::123456789012:role/* with an explicit deny in a service control policy

Enhanced:
An error occurred (AccessDenied) when calling the ListRoles operation: User: arn:aws:iam::123456789012:user/Matt is not authorized to perform: iam:ListRoles on resource: arn:aws:iam::123456789012:role/* with an explicit deny in a service control policy: arn:aws:organizations::987654321098:policy/o-qv5af4abcd/service_control_policy/p-2kgnabcd
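For teams that automate triage, the policy type and the new ARN suffix can be pulled out of the message text. A minimal Python sketch, where the regular expression and helper name are our own illustration rather than an AWS-provided utility:

```python
import re

# Matches the policy-type phrase followed by the appended ARN, e.g.
# "... explicit deny in a service control policy: arn:aws:organizations::..."
DENY_PATTERN = re.compile(r"explicit deny in an? ([a-z -]+ policy): (arn:aws:\S+)")

def extract_denying_policy(message: str):
    """Return (policy_type, policy_arn) from an enhanced access denied
    message, or None if the message carries no policy ARN (the older format)."""
    match = DENY_PATTERN.search(message)
    return match.groups() if match else None

msg = (
    "An error occurred (AccessDenied) when calling the ListRoles operation: "
    "User: arn:aws:iam::123456789012:user/Matt is not authorized to perform: "
    "iam:ListRoles on resource: arn:aws:iam::123456789012:role/* with an "
    "explicit deny in a service control policy: "
    "arn:aws:organizations::987654321098:policy/o-qv5af4abcd/"
    "service_control_policy/p-2kgnabcd"
)
print(extract_denying_policy(msg))
```

Because older-format messages simply return None, the same helper can route both formats in an automated workflow.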

How this enhancement works

This enhancement is designed with three principles:

  • Limited scope – same account and same organization only: Policy ARNs are included only when the request originates from either the same AWS account or the same organization as the policy, which limits how far this information can flow.
  • Additional context in the form of the ARN only, not policy content: The added context is only the policy ARN, a resource identifier, not the policy document itself. It does not reveal the policy’s permissions or the conditions you would have to update to grant access. Users still need appropriate permissions to read the policy content or take action.
  • No change to authorization logic: This enhancement only affects the error message displayed, not the authorization decision-making process. The same policies deny or allow access as before, and we are not changing how the decision is made.

How this benefits you

This accelerates troubleshooting across your organization. Previously, when you received an access denied error from a policy, for example an SCP, you had to review all SCPs in your organization, determine which applied to the account, and evaluate each one—a process that could take time. Now, with the specific SCP ARN included in the error message, whoever has the necessary permission can review the identified SCP and more quickly resolve the issue. This precision reduces the investigative burden. Clear error messages with policy ARNs also improve communication between teams who need access and teams who troubleshoot issues by providing a common reference point, eliminating ambiguity and reducing back-and-forth communication. Lastly, when validating security controls, the policy ARN in access denied errors provides immediate confirmation of which policy is enforcing the restriction, enabling customers to quickly verify their policies are correctly denying access.

How you can use the new information

Let’s say you’re trying to describe your Amazon Relational Database Service (Amazon RDS) snapshots in the us-east-2 Region by calling this API:
aws rds describe-db-snapshots --region us-east-2

Unfortunately, you get an access denied error. The error message shows:
An error occurred (AccessDenied) when calling the DescribeDBSnapshots operation: User: arn:aws:sts::123456789012:assumed-role/ReadOnly/ReadOnlySession is not authorized to perform: rds:DescribeDBSnapshots on resource: arn:aws:rds:us-east-2:123456789012:snapshot:* with an explicit deny in a service control policy: arn:aws:organizations::987654321098:policy/o-qv5af4abcd/service_control_policy/p-lvi9abcd

The error message gives you context to understand what happened:

  • It’s an explicit deny. This means a policy explicitly denies this action in this specific context.
  • The deny comes from the SCP with this ARN: arn:aws:organizations::987654321098:policy/o-qv5af4abcd/service_control_policy/p-lvi9abcd

Here’s how you can troubleshoot this error:

  1. Ensure you have necessary permission to view the SCP. If you don’t, contact your administrator and provide the message that includes the policy ARN.
  2. If you have the necessary permission, go to the AWS Management Console for AWS Organizations to access the SCP.
  3. Check for a Deny statement for the action. In the preceding example, the action is rds:DescribeDBSnapshots.
  4. You can alter the statement to remove the Deny if it’s no longer applicable. For more information, see Update a service control policy (SCP).
  5. Retry your operation. Repeat the troubleshooting process if you get other access denied errors caused by different policies.
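For step 2, it can help to break the Organizations policy ARN into its parts: the account field is the management account ID, followed by the organization ID, the policy type, and the policy ID. A small illustrative parser (the helper name is our own):

```python
def parse_org_policy_arn(arn: str) -> dict:
    """Split an AWS Organizations policy ARN of the form
    arn:aws:organizations::{management-account-id}:policy/{org-id}/{policy-type}/{policy-id}
    into its components."""
    prefix, resource = arn.split(":policy/", 1)
    org_id, policy_type, policy_id = resource.split("/")
    return {
        "management_account_id": prefix.split(":")[-1],  # account field of the ARN
        "organization_id": org_id,
        "policy_type": policy_type,
        "policy_id": policy_id,
    }

print(parse_org_policy_arn(
    "arn:aws:organizations::987654321098:policy/o-qv5af4abcd/"
    "service_control_policy/p-lvi9abcd"
))
```

With the policy ID in hand, an administrator with Organizations read access can retrieve the policy directly, for example with aws organizations describe-policy --policy-id p-lvi9abcd.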

When will this change become available?

This update is gradually rolling out across all AWS services in all AWS Regions, beginning in early 2026.

Need more assistance?

If you have any questions or issues, contact AWS Support or your Technical Account Manager (TAM).

Stella Hie

Stella is a Senior Technical Product Manager for AWS Identity and Access Management (IAM). She specializes in improving developer experience and tooling while maintaining strong security standards. Her work focuses on making IAM straightforward to use and improving the troubleshooting experience for AWS customers. In her free time, she enjoys playing piano and bouldering.

2025 FINMA ISAE 3000 Type II attestation report available with 183 services in scope

3 March 2026 at 20:30

Amazon Web Services (AWS) is pleased to announce the issuance of the Swiss Financial Market Supervisory Authority (FINMA) Type II attestation report with 183 services in scope.

FINMA has published several requirements and guidelines about engaging outsourced services for regulated financial services customers in Switzerland.

An independent third-party audit firm issued the report to assure customers that the AWS control environment is appropriately designed and operating effectively to support adherence to FINMA requirements.

The latest report covers the 12-month period from October 1, 2024 to September 30, 2025 for the following circulars:

  • 2018/03 Outsourcing – banks, insurance companies and selected financial institutions under FinIA
  • 2023/01 Operational risks and resilience – banks
  • Business Continuity Management (BCM) minimum standards proposed by the Swiss Insurance Association.

AWS has added five services to the current FINMA scope.

Customers can find the FINMA ISAE 3000 report on AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Security and compliance is a shared responsibility between AWS and the customer. When customers move their computer systems and data to the cloud, security responsibilities are shared between the customer and the cloud service provider. For more information, see the AWS Shared Security Responsibility Model.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Tariro Dongo
Tari is a Security Assurance Program Manager at AWS, based in London. Tari is responsible for third-party and customer audits, attestations, certifications, and assessments across EMEA. Previously, Tari worked in security assurance and technology risk in the big four and financial services industry over the last 15 years.

2025 PiTuKri ISAE 3000 Type II attestation report available with 183 services in scope

3 March 2026 at 18:17

Amazon Web Services (AWS) is pleased to announce the issuance of the Criteria to Assess the Information Security of Cloud Services (PiTuKri) Type II attestation report with 183 services in scope.

The Finnish Transport and Communications Agency (Traficom) Cyber Security Centre published PiTuKri, which consists of 52 criteria that provide guidance across 11 domains for assessing the security of cloud service providers.

An independent third-party audit firm issued the report to assure customers that the AWS control environment is appropriately designed and operating effectively to demonstrate adherence to PiTuKri requirements. This attestation demonstrates the AWS commitment to meet security expectations for cloud service providers set by Traficom.

The latest report covers a 12-month period from October 1, 2024 to September 30, 2025. AWS has added the following five services to the current PiTuKri scope:

Customers can find the PiTuKri ISAE 3000 report on AWS Artifact. AWS Artifact is a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact.

Security and compliance is a shared responsibility between AWS and the customer. When customers move their computer systems and data to the cloud, security responsibilities are shared between the customer and the cloud service provider. For more information, see the AWS Shared Security Responsibility Model.

To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Tariro Dongo
Tari is a Security Assurance Program Manager at AWS, based in London. Tari is responsible for third-party and customer audits, attestations, certifications, and assessments across EMEA. Previously, Tari worked in security assurance and technology risk in the big four and financial services industry over the last 15 years.

Understanding IAM for Managed AWS MCP Servers

2 March 2026 at 17:12

As AI agents become part of your development workflows on Amazon Web Services (AWS), you want them to work with your existing AWS Identity and Access Management (IAM) permissions, not force you to build a separate permissions model. At the same time, you need the flexibility to apply different governance controls when an AI agent makes an API call compared to when a developer does it directly. In this post, we show you how to use new standardized IAM context keys for AWS-managed remote Model Context Protocol (MCP) servers, a simplified authorization model that works like the AWS CLI and SDKs you already use, and upcoming VPC endpoint support for network perimeter controls.

Overview

At re:Invent 2025, we launched four AWS-managed remote MCP servers (AWS, EKS, ECS, and SageMaker) in preview. AWS hosts and manages remote MCP servers, removing the need for local installation and maintenance while providing automatic updates, resiliency, scalability, and complete audit logging through AWS CloudTrail. For example, with the AWS MCP Server you can access AWS documentation and execute calls to over 15,000 AWS APIs, helping AI agents perform multi-step tasks like setting up VPCs or configuring Amazon CloudWatch alarms.

We heard from customers that, as AI agents become more integrated into development workflows, you want these workflows to work with existing AWS permissions without having to reconfigure IAM policies or create separate permissions models for AI. At the same time, you want the flexibility to apply different governance controls for AI actions compared to direct human actions. We recently introduced two standardized IAM context keys (aws:ViaAWSMCPService and aws:CalledViaAWSMCP) that give you this control. These context keys work consistently across all AWS-managed remote MCP servers, so you can implement defense-in-depth security, maintain detailed audit trails, and meet compliance requirements by differentiating between calls using AI solutions and human-initiated actions. In addition, we heard from customers the need to simplify the authorization model. Starting soon, you will no longer need separate MCP-specific IAM actions (such as aws-mcp:InvokeMCP) to interact with AWS-managed MCP servers. This aligns with how the AWS Command Line Interface (AWS CLI) and AWS SDKs work today, reducing configuration overhead, while your existing IAM policies continue to control what actions can be performed. Looking ahead, we’re adding VPC endpoint support for AWS-managed MCP servers so you can connect directly from your VPC, providing enhanced security through two-stage authorization and network perimeter controls for customers who need to enforce identity and network perimeters.

Using IAM to differentiate between human-driven and AI-driven actions

To give you fine-grained control over AI solutions using MCP servers, we’ve introduced two standardized IAM context keys. These keys work consistently across all AWS-managed MCP servers:

  • aws:ViaAWSMCPService (boolean): Set to true when the request comes through an AWS-managed MCP server. Use this to allow or deny all MCP-initiated actions.
  • aws:CalledViaAWSMCP (string, single valued): Contains the service principal name of the MCP server (for example, aws-mcp.amazonaws.com, eks-mcp.amazonaws.com, and ecs-mcp.amazonaws.com). Use this to allow or deny actions from specific MCP servers. As new MCP servers become available, this context key will include their service principal names, allowing you to configure fine-grained access to your AWS resources through IAM policies and SCPs.

For organizations that want to completely disable MCP server access across their organization or specific organizational units, you can use a service control policy (SCP) to deny all or some actions when accessed through MCP servers:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyAllActionsViaMCP",
      "Effect": "Deny",
      "Action": "*",
      "Resource": "*",
      "Condition": {
        "Bool": {
          "aws:ViaAWSMCPService": "true"
        }
      }
    }
  ]
}
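To see how this condition behaves, here is a simplified sketch (an illustration, not the real IAM policy evaluation engine) of the Bool operator: the Deny only matches when the context key is present and equal, so direct CLI/SDK calls, where the key is absent, are unaffected.

```python
# Simplified sketch of Bool condition matching; not the real IAM engine.

def bool_condition_matches(condition, context):
    """True only if every Bool key is present in the context and equal.
    A missing context key means the condition (and thus the Deny) does not match."""
    for key, expected in condition.get("Bool", {}).items():
        actual = context.get(key)
        if actual is None or str(actual).lower() != str(expected).lower():
            return False
    return True

deny_condition = {"Bool": {"aws:ViaAWSMCPService": "true"}}

# A request arriving through an AWS-managed MCP server matches the Deny...
print(bool_condition_matches(deny_condition, {"aws:ViaAWSMCPService": "true"}))  # True
# ...while a direct CLI/SDK call (context key absent) is unaffected.
print(bool_condition_matches(deny_condition, {}))  # False
```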

In another example, you can allow AI agents using AWS MCP Server to read Amazon Simple Storage Service (Amazon S3) buckets but deny delete operations. The AWS MCP Server provides the aws___call_aws tool, which can execute any AWS API operation, including Amazon S3 operations:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowS3ReadOperations",
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": "*"
    },
    {
      "Sid": "DenyDeleteWhenAccessedViaMCP",
      "Effect": "Deny",
      "Action": [
        "s3:DeleteObject",
        "s3:DeleteBucket"
      ],
      "Resource": "*",
      "Condition": {
        "Bool": {
          "aws:ViaAWSMCPService": "true"
        }
      }
    }
  ]
}

You can also restrict access to specific AWS-managed MCP servers. For example, allow EKS operations only when called through the EKS MCP server, not through the AWS MCP server:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEKSOperationsViaEKSMCP",
      "Effect": "Allow",
      "Action": "eks:*",
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:CalledViaAWSMCP": "eks-mcp.amazonaws.com"
        }
      }
    },
    {
      "Sid": "DenyEKSOperationsViaOtherMCP",
      "Effect": "Deny",
      "Action": "eks:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:CalledViaAWSMCP": "eks-mcp.amazonaws.com"
        }
      }
    }
  ]
}

Understanding the changes for public endpoint authorization

Based on feedback, we’re simplifying the authorization model to work like the AWS CLI and SDKs you already use. Moving forward, the MCP server adds the standardized IAM context keys (aws:ViaAWSMCPService and aws:CalledViaAWSMCP) to your request and forwards it to the downstream AWS service. The MCP server will still authenticate your request using SigV4 as before. Now, the downstream service performs the authorization check using your existing IAM policies, which can reference these context keys for fine-grained control. This means your AI agents work with your existing AWS credentials and service-level permissions, eliminating the need for separate MCP-specific IAM actions and reducing configuration overhead. The following diagram illustrates how this simplified authorization flow works:
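The flow can be sketched as follows; the function names and data shapes are illustrative stand-ins, not real AWS APIs:

```python
# Conceptual sketch of the simplified authorization flow; nothing here is a
# real AWS API or the actual MCP server implementation.

def handle_via_mcp(request, mcp_principal, authorize):
    """MCP server: authenticate the caller (SigV4, elided here), stamp the
    standardized context keys onto the request, and forward it downstream."""
    context = {
        "aws:ViaAWSMCPService": "true",
        "aws:CalledViaAWSMCP": mcp_principal,  # e.g. "aws-mcp.amazonaws.com"
    }
    # The downstream service, not the MCP server, performs the authorization
    # check against the caller's existing IAM policies.
    return authorize(request["caller"], request["action"], context)

# Example policy callback: deny s3:DeleteObject when the call came via MCP.
def demo_authorize(caller, action, context):
    if action == "s3:DeleteObject" and context.get("aws:ViaAWSMCPService") == "true":
        return "Deny"
    return "Allow"

req = {"caller": "dev-role", "action": "s3:DeleteObject"}
print(handle_via_mcp(req, "aws-mcp.amazonaws.com", demo_authorize))  # Deny
```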

Figure 1: Authorization flow for managed MCP servers.

Using IAM with MCP servers and VPC endpoints

We also heard from customers in regulated industries who need additional network-level controls for AI agent access. Customers in industries like financial services and healthcare require private network communication to meet compliance mandates. To meet these requirements, AWS will also add VPC endpoint support for AWS-managed MCP servers in the future. You can use VPC endpoints to keep all AI agent traffic within your private network, eliminating exposure through the public internet. When you configure a VPC endpoint, the MCP server performs an authorization check at the VPC endpoint level before forwarding requests to downstream AWS services. This creates a defense-in-depth approach where you control access at both the network perimeter (VPC endpoint) and the service level (IAM policies). You can combine VPC endpoints with the aws:ViaAWSMCPService and aws:CalledViaAWSMCP context keys to implement layered security controls that meet your organization’s specific governance and compliance requirements. Additional details on context keys and example patterns will be available when support for VPC endpoints is launched.

Things to consider

When implementing IAM authorization for MCP servers, you need to make decisions about deployment patterns, policy design, and operational practices. Here are key considerations to help you choose the right approach for your organization.

  • Designing IAM policies: Only give access that is needed, and refine policies and remove unused access over time. Use context keys to differentiate calls using AI solutions from direct developer actions.
  • Security and compliance: VPC endpoints help meet requirements for private network communication in regulated industries.
  • Getting started: Start with the deployment pattern that matches your current needs. Begin with restrictive IAM policies and relax them as you understand your AI agents’ requirements. Monitor CloudTrail logs to see what actions your AI agents perform and use the data to refine your policies over time.
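For the monitoring step, one simple approach is to tally which actions appear in your CloudTrail records so unused permissions can be pruned. The sketch below uses CloudTrail's standard eventSource/eventName fields on a hand-built list of records; how MCP-initiated calls are flagged in your logs is not shown here and may differ.

```python
# Illustrative sketch: tally actions from CloudTrail-style event records to
# refine IAM policies over time. The sample events are hypothetical.
from collections import Counter

def actions_used(events):
    """Count service:action pairs (e.g. 's3:GetObject') across events."""
    return Counter(f"{e['eventSource'].split('.')[0]}:{e['eventName']}" for e in events)

events = [
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "s3.amazonaws.com", "eventName": "GetObject"},
    {"eventSource": "eks.amazonaws.com", "eventName": "DescribeCluster"},
]
print(actions_used(events).most_common(1))  # [('s3:GetObject', 2)]
```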

Conclusion

You now have the control to govern AI agent access to your AWS resources through AWS-managed MCP Server using the same IAM policies and tools you already trust. The standardized IAM context keys (aws:ViaAWSMCPService and aws:CalledViaAWSMCP) are available across all AWS-managed MCP servers, giving you fine-grained control to differentiate calls using AI solutions from direct developer actions at the service level. In upcoming releases, AWS managed MCP servers will work without separate IAM actions over public endpoints and simplify your IAM policy management. We will also provide support for VPC endpoints with enhanced security through two-stage authorization and network perimeter controls for customers who need additional access restrictions. See the documentation for your specific AWS-managed MCP server to confirm whether it supports the new public endpoint authorization model and VPC endpoints. Whether you’re building AI coding assistants or agentic applications, start implementing these controls today to secure your AI workflows while maintaining the flexibility to define access rules that match your organization’s security posture.

Riggs Goodman III
Riggs is a Principal Partner Solution Architect at AWS. His current focus is on AI security and networking, providing technical guidance, architecture patterns, and leadership for customers and partners to build AI workloads on AWS. Internally, Riggs focuses on driving overall technical strategy and innovation across AWS service teams to address customer and partner challenges.
Shreya Jain

Shreya is a Senior Technical Product Manager in AWS Identity. She is energized by bringing clarity and simplicity to complex ideas. When she’s not applying her creative energy at work, you’ll find her at Pilates, dancing, or discovering her next favorite coffee shop.

Praneeta Prakash
Praneeta is a Senior Product Manager at AWS Developer Tools, where she drives innovation at the intersection of cloud infrastructure and developer experience. She works on strategic initiatives that shape how developers interact with cloud infrastructure, particularly in the evolving landscape of AI-native development. Her work centers on making AWS more accessible and intuitive for developers of all skill levels, from frontend engineers building their first cloud application to experienced teams scaling production systems.
Brian Ruf Khaled Sinno
Khaled is a Principal Engineer at Amazon Web Services. His current focus is on Identity and Access Management in AWS and more generally on providing identity and security controls for customers in the cloud. In the past, he has worked on availability and security within AWS RDS (i.e. databases) while also contributing more broadly to the security space of database and search services. Prior to AWS, Khaled led large engineering teams in the FinTech industry, working on distributed systems in finance and trading platforms.

AWS successfully completed its first surveillance audit for ISO 42001:2023 with no findings

26 February 2026 at 23:45

In November 2024, Amazon Web Services (AWS) was the first major cloud service provider to announce the ISO/IEC 42001 accredited certification for AI services, covering: Amazon Bedrock, Amazon Q Business, Amazon Textract, and Amazon Transcribe.

In November 2025, AWS successfully completed its first surveillance audit for ISO 42001:2023, Artificial Intelligence Management System with no findings.

This demonstrates the continual commitment of AWS to responsible AI practices. With this independent validation, our customers can gain further assurances around the AWS commitment to responsible AI and their ability to build and operate AI applications responsibly using AWS services.

For a full list of AWS services that are certified under ISO and CSA STAR, see the AWS ISO and CSA STAR Certified page. Customers can also access the certifications in the AWS Management Console through AWS Artifact.

If you have feedback about this post, submit comments in the Comments section below.
 

Atulsing Patil
Atulsing is a Compliance Program Manager at AWS. He has 27 years of consulting experience in information technology and information security management. Atulsing holds a Master of Science in Electronics degree and professional certifications such as CCSP, CISSP, CISM, CDPSE, ISO 27001 Lead Auditor, HITRUST CSF, Archer Certified Consultant, and AWS CCP.

Inside AWS Security Agent: A multi-agent architecture for automated penetration testing

26 February 2026 at 23:11

AI agents have traditionally faced three core limitations: they can’t retain learned information or operate autonomously beyond short periods, and they require constant supervision. AWS addresses these limitations with frontier agents—a new category of AI that performs complex reasoning, multi-step planning, and autonomous execution for hours or days. Multi-agent collaboration has emerged as a powerful approach that helps tackle complex workflows that require multiple steps and diverse expertise—such as in software development where agents handle code generation, review, and testing; in scientific research where agents collaborate on literature review, experimental design, and data analysis; and in cybersecurity where specialized agents perform reconnaissance, vulnerability analysis, and exploit validation.

In this post, we discuss how we’ve used this technology to deliver automated penetration testing, something that can traditionally take weeks and is resource intensive. We also provide a technical deep-dive into the architecture of the penetration testing component built into AWS Security Agent.

The concept of automated security testing isn’t new—penetration testing tools and vulnerability scanners have existed for decades. However, with recent advancements in large language models (LLMs), frontier agents are designed to reason about application behavior, adapt strategies based on feedback, and understand context in ways that traditional tools can’t. By creating a network of specialized agents, we can address increasingly complex security challenges: one agent maps the attack surface while others analyze business logic flaws, validate findings, and prioritize vulnerabilities based on actual exploitability. The exploitability context comes from the combination of actual exploit attempts by swarm agent workers, independent re-validation by specialized validators, and LLM-driven scoring according to the common vulnerability scoring system (CVSS).

We’ve developed automated penetration testing for the AWS Security Agent. This capability includes a multi-agent penetration testing system that orchestrates specialized security agents to work collaboratively on vulnerability detection. The system begins with multiple types of scanning to establish baseline coverage, then conducts broad reconnaissance using static, predefined tasks to map the application surface and identify initial attack vectors. Building on these findings, our agentic system dynamically generates focused test tasks tailored to the specific application context—reasoning about discovered endpoints, business logic patterns, and potential vulnerability chains to create targeted security tests that adapt based on application responses. By combining these specialized capabilities, the system can tackle complex security scenarios across major risk categories. Beyond single-vulnerability detection, the system performs complex chained attacks—for instance, combining an information disclosure flaw with privilege escalation to access sensitive resources, or chaining insecure direct object references (IDOR) with authentication bypass.

Figure 1: Diagram of the AWS Security Agent penetration testing component.

System architecture

This section describes the major components of the system. The following subsections cover authentication and initial access, baseline scanning, multi-phased exploration with the specialized agent swarm, and validation with report generation.

Authentication and initial access

The system begins with an intelligent sign-in component that handles authentication across diverse application architectures. This component combines LLM-based reasoning with deterministic mechanisms to locate sign-in pages, attempt provided credentials, and maintain authenticated sessions for subsequent testing phases. The approach adapts to different application structures and target environments automatically and uses a browser tool. The developer can optionally provide a custom sign-in prompt tailored to the target application.

Baseline scanning phase

Following authentication, the system initiates comprehensive baseline scanning through parallel execution of specialized scanners. For black-box testing, the network scanner conducts automated web application security testing, generating raw traffic interactions and identifying candidate vulnerable endpoints. In white-box settings, the code scanner additionally performs deep source code analysis when repositories are available, producing descriptive documentation across multiple categories. Additional specialized scanners complement these capabilities to identify vulnerabilities across multiple dimensions and establish initial security coverage.

Multi-phased exploration

The system employs two distinct exploration approaches that work in concert. Managed execution operates with predefined static tasks across major risk categories like cross-site scripting, insecure direct object reference, privilege escalation, and so on. This component systematically helps ensure comprehensive coverage by executing curated tasks for each risk type. In the next phase, guided exploration takes a dynamic, intelligence-driven approach. This component ingests discovered endpoints, validated findings, and code analysis documentation to reason about application-specific attack opportunities. It operates in two stages: first generating a contextual penetration testing plan by identifying unexplored resources and potential vulnerability chains, then programmatically managing the execution of these dynamically generated tasks. The guided explorer runs with adaptive tasks that evolve based on application responses and discovered patterns.

Specialized agent swarm
Both exploration approaches dispatch work to specialized swarm worker agents—each configured for specific risk types and equipped with comprehensive penetration testing toolkits including code executors, web fuzzers, NVD vulnerability database search for Common Vulnerabilities and Exposures (CVE) intelligence, and vulnerability-specific tools. These workers execute assigned tasks with timeout management and structured reporting.
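The dispatch pattern described above, parallel workers with per-task timeouts and structured reports, can be sketched as follows; the worker here is a trivial stand-in for the post's specialized agents.

```python
# Minimal sketch of dispatching swarm worker tasks with timeout management
# and structured reporting; the worker is an illustrative stand-in.
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_swarm(tasks, worker, timeout_s=5):
    """Run each task in parallel; emit a structured report per task."""
    reports = []
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = {pool.submit(worker, t): t for t in tasks}
        for fut, task in futures.items():
            try:
                result = fut.result(timeout=timeout_s)
                reports.append({"task": task, "status": "done", "finding": result})
            except FutureTimeout:
                reports.append({"task": task, "status": "timeout", "finding": None})
    return reports

reports = run_swarm(["xss", "idor"], worker=lambda t: f"no {t} found")
print([r["status"] for r in reports])  # ['done', 'done']
```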

Validation and report generation

When specialized agents identify potential security risks, they generate structured reports containing the vulnerability type, affected endpoints, exploitation evidence, and technical context. However, automated penetration testing faces a critical challenge: LLM agents can produce plausible-sounding findings that require rigorous validation. Candidate findings undergo validation through both deterministic validators and specialized LLM-based agents that attempt active exploitation. We employ assertion-based validation techniques where natural language assertions written by security experts encode deep knowledge about real attack behaviors, requiring explicit, structured proof that’s significantly harder to circumvent than narrow deterministic checks. Validated findings undergo Common Vulnerability Scoring System (CVSS) analysis for severity assessment, then are synthesized into final reports with validation results, severity scores, and exploitation evidence—designed to deliver actionable, high-confidence vulnerabilities for effective remediation.
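The gist of assertion-based validation, reject a plausible-sounding finding unless it carries explicit, structured proof, can be illustrated with a toy gate; the required fields below are hypothetical, not the agent's actual report schema.

```python
# Toy illustration of assertion-based validation: a finding passes only if
# it includes concrete evidence fields. Field names are hypothetical.
REQUIRED_PROOF = ("vulnerability_type", "endpoint", "request", "response_evidence")

def validate_finding(finding):
    """Reject findings that lack any piece of structured, explicit proof."""
    return all(finding.get(field) for field in REQUIRED_PROOF)

print(validate_finding({
    "vulnerability_type": "IDOR",
    "endpoint": "/orders/1",
    "request": "GET /orders/2 as user A",
    "response_evidence": "200 OK containing user B's order data",
}))  # True
print(validate_finding({"vulnerability_type": "XSS"}))  # False
```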

Benchmarking

To evaluate our system, we performed human evaluation in addition to automatic benchmarking. We conducted analysis on real-world trajectories and created a taxonomy of error patterns. By spotting frequent error patterns, we were able to iterate on our solution. We report results on the CVE Bench public benchmark, which is a collection of vulnerable web applications containing 40 critical-severity CVEs from the National Vulnerability Database used to evaluate AI agents on real-world exploits. Each application includes automatic exploit references, and LLM-based agents attempt to execute attacks that trigger the vulnerabilities.

We measure success through the attack success rate (ASR) metric, defined as the rate of successful exploitation of application vulnerabilities. CVE Bench uses a grader that the agent can query to verify exploit success and provides explicit capture-the-flag (CTF) instructions. We evaluate in three configurations:

  1. With CTF instructions and grader checks after each tool call, achieving 92.5% on CVE Bench v2.0 (we note that some challenges involve blind exploitation where the agent cannot verify success without this feedback).
  2. Without CTF instructions or grader feedback, achieving 80%—which better reflects real-world conditions where the agent must self-validate through observable outcomes. We also observed that the agent was able to identify some CVEs based on the LLM’s parametric knowledge, as shown in the following bash command where the model explicitly references a CVE by name.
  3. Therefore, we ran an additional experiment using an LLM whose knowledge cutoff date predates CVE Bench v1.0 release, achieving 65% ASR.
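Assuming each rate is computed over all 40 CVE Bench challenges, the three results correspond to 37, 32, and 26 successful exploits respectively; a quick check of the arithmetic:

```python
# ASR = successful exploits / total challenges, as a percentage.
def attack_success_rate(successes, total):
    return 100.0 * successes / total

print(attack_success_rate(37, 40))  # 92.5
print(attack_success_rate(32, 40))  # 80.0
print(attack_success_rate(26, 40))  # 65.0
```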

The following code example shows an LLM agent demonstrating parametric knowledge of CVE-2023-37999 from its training data, then issuing a bash command to check exploitation prerequisites.

# HT Mega 2.2.0 has a known vulnerability – CVE-2023-37999
# It has an unauthenticated privilege escalation via the REST API settings endpoint
# Let's check if registration is enabled
curl -s http://target:9090/wp-login.php?action=register -I | head -10

We’re committed to pushing the frontier of security vulnerability detection by continuously evaluating our agent and staying competitive with newer, more challenging benchmarks.

Optimizing testing and compute budget

One challenge for penetration testing is determining the balance between exploitation and exploration. A depth-first approach can spend too much compute on specific directions, lowering vulnerability coverage under a fixed compute budget; breadth-first search, by contrast, is unlikely to discover deep vulnerabilities that require testing multiple approaches. A balance between the two is therefore needed to maximize coverage for a given compute budget, and our system design takes a hybrid approach. A more efficient dynamic solution that generalizes across various vulnerabilities and different web applications remains an open research question.

Another challenge with penetration testing is non-determinism. Because of the underlying LLMs, the output of penetration test runs can vary from one run to another. Having different findings across multiple runs can lead to confusion. One option to mitigate this is to perform multiple runs and consolidate the findings across them.
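The consolidation idea can be sketched as a simple recurrence filter: run the agent several times and keep only findings that appear in at least k runs. This is an illustrative sketch, not the product's actual consolidation logic.

```python
# Sketch of consolidating findings across non-deterministic runs: keep only
# findings that recur in at least min_runs runs. Finding keys are hypothetical.
from collections import Counter

def consolidate(runs, min_runs=2):
    """runs: list of per-run finding sets; return findings seen often enough."""
    counts = Counter(f for run in runs for f in set(run))
    return {f for f, c in counts.items() if c >= min_runs}

runs = [
    {"idor:/orders", "xss:/search"},
    {"idor:/orders"},
    {"idor:/orders", "sqli:/login"},
]
print(consolidate(runs))  # {'idor:/orders'}
```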

Conclusion

The multi-agent architecture presented in this post demonstrates how you can use specialized agents that can collaborate to tackle complex penetration testing workflows—from intelligent authentication and baseline scanning through managed and guided exploration phases, culminating in rigorous validation. By orchestrating these specialized components with adaptive task generation and assertion-based validation, the system delivers comprehensive security coverage that evolves based on application-specific context and discovered patterns.

AWS Security Agent is now in public preview. For more information, see Getting Started with AWS Security Agent.

If you have feedback about this post, submit comments in the Comments section below.

Tamer Alkhouli
Tamer is an Amazon Web Services Senior Applied Scientist with over 13 years in NLP across academia and industry. He earned a PhD in machine translation from RWTH Aachen University under Hermann Ney. Across his career, he has built systems in machine translation, conversational AI, and foundation models. At AWS, he has contributed to Amazon Lex, Titan foundation models, Amazon Bedrock Agents, and the AWS Security Agent.

Divya Bhargavi
Divya is a Senior Applied Scientist at AWS on the Security Agent team. Her work focuses on designing agentic architectures for vulnerability discovery and exploit validation, with emphasis on developing robust benchmarking frameworks and evaluation methodologies for security agents in adversarial contexts. Prior to this, she led scientific engagements at the AWS Generative AI Innovation Center.

Daniele Bonadiman
Daniele is a Senior Applied Scientist at AWS, where he works on AWS Security Agent. Daniele holds a PhD in Applied Machine Learning and Natural Language Processing from the University of Trento. During his time at AWS, Daniele has contributed to several AI initiatives focusing on conversational AI, agent orchestration, and code interpretation for AI agents.

Yilun Cui
Yilun is a Principal Engineer at AWS working on Agentic AI. Yilun has had over a decade of experience building tools for developers and he is passionate about applying AI throughout the software development lifecycle to help software developers build faster and deliver better products.

Dr. Yi Zhang
Yi is a Principal Applied Scientist at AWS. With over 25 years of industrial and academic research experience, Yi’s research focuses on the development of conversational and interactive multi-agent systems and syntactic and semantic understanding of natural language. He has been leading the research effort behind the development of multiple AWS services such as AWS Security Agent and Amazon Bedrock Agent.

AI-augmented threat actor accesses FortiGate devices at scale

20 February 2026 at 21:27

Commercial AI services are enabling even unsophisticated threat actors to conduct cyberattacks at scale—a trend Amazon Threat Intelligence has been tracking closely. A recent investigation illustrates this shift: Amazon Threat Intelligence observed a Russian-speaking financially motivated threat actor leveraging multiple commercial generative AI services to compromise over 600 FortiGate devices across more than 55 countries from January 11 to February 18, 2026. No exploitation of FortiGate vulnerabilities was observed—instead, this campaign succeeded by exploiting exposed management ports and weak credentials with single-factor authentication, fundamental security gaps that AI helped an unsophisticated actor exploit at scale. This activity is distinguished by the threat actor’s use of multiple commercial GenAI services to implement and scale well-known attack techniques throughout every phase of their operations, despite their limited technical capabilities. AWS infrastructure was not observed to be involved in this campaign. Amazon Threat Intelligence is sharing these findings to help the broader security community defend against this activity.

This investigation highlights how commercial AI services can lower the technical barrier to entry for offensive cyber capabilities. The threat actor in this campaign is not known to be associated with any advanced persistent threat group with state-sponsored resources. They are likely a financially motivated individual or small group who, through AI augmentation, achieved an operational scale that would have previously required a significantly larger and more skilled team. Yet, based on our analysis of public sources, they successfully compromised multiple organizations’ Active Directory environments, extracted complete credential databases, and targeted backup infrastructure, a potential precursor to ransomware deployment. Notably, when this actor encountered hardened environments or more sophisticated defensive measures, they simply moved on to softer targets rather than persisting, underscoring that their advantage lies in AI-augmented efficiency and scale, not in deeper technical skill.

As we expect this trend to continue in 2026, organizations should anticipate that AI-augmented threat activity will continue to grow in volume from both skilled and unskilled adversaries. Strong defensive fundamentals remain the most effective countermeasure: patch management for perimeter devices, credential hygiene, network segmentation, and robust detection for post-exploitation indicators.

Campaign overview

Through routine threat intelligence operations, Amazon Threat Intelligence identified infrastructure hosting malicious tooling associated with this campaign. The threat actor had staged additional operational files on the same publicly accessible infrastructure, including AI-generated attack plans, victim configurations, and source code for custom tooling. This inadequate operational security provided comprehensive visibility into the threat actor’s methodologies and the specific ways they leverage AI throughout their operations. It’s like an AI-powered assembly line for cybercrime, helping less skilled workers produce at scale.

The threat actor compromised globally dispersed FortiGate appliances, extracting full device configurations that yielded credentials, network topology information, and device configuration information. They then used these stolen credentials to connect to victim internal networks and conduct post-exploitation activities including Active Directory compromise, credential harvesting, and attempts to access backup infrastructure, consistent with pre-ransomware operations.

Initial access: Mass credential abuse

The threat actor’s initial access vector was credential-based access to FortiGate management interfaces exposed to the internet. Analysis of the actor’s tooling revealed systematic scanning for management interfaces across ports 443, 8443, 10443, and 4443, followed by authentication attempts using commonly reused credentials.

FortiGate configuration files represent high-value targets because they contain:

  • SSL-VPN user credentials with recoverable passwords
  • Administrative credentials
  • Complete network topology and routing information
  • Firewall policies revealing internal architecture
  • IPsec VPN peer configurations

The threat actor developed AI-assisted Python scripts to parse, decrypt, and organize these stolen configurations.
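The kind of configuration parsing described above can be sketched as follows. This is a hypothetical illustration, not the actor’s code: the sample text is a heavily simplified stand-in for a FortiGate `config user local` block, and real configurations store passwords in Fortinet’s ENC format with far more surrounding structure.

```python
import re

# Simplified stand-in for a FortiGate "config user local" block;
# real configurations are much larger and use Fortinet's ENC format.
SAMPLE_CONFIG = """
config user local
    edit "vpn-alice"
        set type password
        set passwd ENC XXXXREDACTEDXXXX
    next
    edit "vpn-bob"
        set type password
        set passwd ENC YYYYREDACTEDYYYY
    next
end
"""

def extract_vpn_users(config_text):
    """Return a list of (username, has_stored_password) tuples."""
    users = []
    current = None
    for line in config_text.splitlines():
        line = line.strip()
        m = re.match(r'edit "([^"]+)"', line)
        if m:
            current = [m.group(1), False]
        elif line.startswith("set passwd ENC") and current:
            current[1] = True
        elif line == "next" and current:
            users.append(tuple(current))
            current = None
    return users

print(extract_vpn_users(SAMPLE_CONFIG))
# [('vpn-alice', True), ('vpn-bob', True)]
```

A defender can run the same style of parse against their own exported configurations to inventory which VPN accounts have stored credentials that would need rotation after a suspected exposure.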

Geographic distribution

The campaign’s targeting appears opportunistic rather than sector-specific, consistent with automated mass scanning for vulnerable appliances. However, certain patterns suggest organizational-level compromise where multiple FortiGate devices belonging to the same entity were accessed. Amazon Threat Intelligence observed clusters where contiguous IP blocks or shared non-standard management ports indicated managed service provider deployments or large organizational networks. Concentrations of compromised devices were observed across South Asia, Latin America, the Caribbean, West Africa, Northern Europe, and Southeast Asia, among other regions.

Custom tooling: AI-generated reconnaissance framework

Following VPN access to victim networks, the threat actor deploys a custom reconnaissance tool, with different versions written in both Go and Python. Analysis of the source code reveals clear indicators of AI-assisted development: redundant comments that merely restate function names, simplistic architecture with disproportionate investment in formatting over functionality, naive JSON parsing via string matching rather than proper deserialization, and compatibility shims for language built-ins with empty documentation stubs. While functional for the threat actor’s specific use case, the tooling lacks robustness and fails under edge cases—characteristics typical of AI-generated code used without significant refinement.

The tool automates the post-VPN reconnaissance workflow:

  1. Ingesting target networks from VPN routing tables
  2. Classifying networks by size
  3. Running service discovery using gogo, an open-source port scanner
  4. Automatically identifying SMB hosts and domain controllers
  5. Integrating vulnerability scanning using Nuclei, an open-source vulnerability scanner, against discovered HTTP services to produce prioritized target lists
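Steps 1 and 2 of this workflow can be sketched with the standard library alone. This is an illustrative reconstruction under assumed thresholds (the actor’s actual size buckets are not documented): ingest CIDR routes from a VPN routing table and classify them by size so dense networks can be scanned first.

```python
import ipaddress

# Example routing-table entries as they might be ingested from a VPN client.
routes = ["10.10.20.0/24", "172.16.0.0/16", "192.168.5.0/27", "10.0.0.0/8"]

def classify(cidrs):
    """Bucket networks by address count (thresholds are illustrative)."""
    buckets = {"small": [], "medium": [], "large": []}
    for cidr in cidrs:
        net = ipaddress.ip_network(cidr)
        if net.num_addresses <= 256:        # /24 and smaller
            buckets["small"].append(str(net))
        elif net.num_addresses <= 65536:    # up to /16
            buckets["medium"].append(str(net))
        else:
            buckets["large"].append(str(net))
    return buckets

print(classify(routes))
```

In the observed tooling, the small-network bucket would then feed the gogo service-discovery stage, with larger ranges sampled or deferred.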

Post-exploitation methodology

Once inside victim networks, the threat actor follows a standard approach leveraging well-known open-source offensive tools.

Domain compromise: The threat actor’s operational documentation details the intended use of Meterpreter, an open-source post-exploitation toolkit, with the mimikatz module to perform DCSync attacks against domain controllers. This allowed the actor to extract NTLM password hashes from Active Directory. In confirmed compromises, the attacker obtained complete domain credential databases. In at least one case, the Domain Administrator account used a plaintext password that was either extracted from the FortiGate configuration through password reuse or was independently weak.

Lateral movement: Following domain compromise, the threat actor attempts to expand access through pass-the-hash/pass-the-ticket attacks against additional infrastructure, NTLM relay attacks using standard poisoning tools, and remote command execution on Windows hosts.

Backup infrastructure targeting: The threat actor specifically targeted Veeam Backup & Replication servers, deploying multiple tools for extracting credentials, including PowerShell scripts, compiled decryption tools, and exploitation attempts leveraging known Veeam vulnerabilities. Backup servers represent high-value targets because they typically store elevated credentials for backup operations, and compromising backup infrastructure positions an attacker to destroy recovery capabilities before deploying ransomware.

Limited exploitation success: The threat actor’s operational notes reference multiple CVEs across various targets (CVE-2019-7192, CVE-2023-27532, and CVE-2024-40711, among others). However, a critical finding from this analysis is that the threat actor largely failed when attempting to exploit anything beyond the most straightforward, automated attack paths. Their own documentation records repeated failures: targeted services were patched, required ports were closed, and vulnerabilities didn’t apply to the target OS versions. Their final operational assessment for one confirmed victim acknowledged that key infrastructure targets were “well-protected” with “no vulnerable exploitation vectors.”

AI as a force multiplier

Amazon Threat Intelligence analysis revealed that the actor uses at least two distinct commercial LLM providers throughout their operations.

AI-generated attack planning: The threat actor used AI to generate comprehensive attack methodologies complete with step-by-step exploitation instructions, expected success rates, time estimates, and prioritized task trees. These plans reference academic research on offensive AI agents, suggesting the actor follows emerging literature on AI-assisted penetration testing. The AI produces technically accurate command sequences, but the actor struggles to adapt when conditions differ from the plan. They cannot compile custom exploits, debug failed exploitation attempts, or creatively pivot when standard approaches fail.

Multi-model operational workflow: Amazon Threat Intelligence identified the actor using multiple AI services in complementary roles. One serves as the primary tool developer, attack planner, and operational assistant. A second is used as a supplementary attack planner when the actor needs help pivoting within a specific compromised network. In one observed instance, the actor submitted the complete internal topology of an active victim—IP addresses, hostnames, confirmed credentials, and identified services—and requested a step-by-step plan to compromise additional systems they could not access with their existing tools.

AI-generated tooling at scale: Beyond the reconnaissance framework, the actor’s infrastructure contains numerous scripts in multiple programming languages bearing hallmarks of AI generation, including configuration parsers, credential extraction tools, VPN connection automation, mass scanning orchestration, and result aggregation dashboards. The volume and variety of custom tooling would typically indicate a well-resourced development team. Instead, a single actor or very small group generated this entire toolkit through AI-assisted development.

Threat actor assessment

Based on comprehensive analysis, Amazon Threat Intelligence assesses this threat actor as follows:

  • Motivation: Suspected financially motivated, based on widespread, indiscriminate targeting and low sophistication
  • Language: Russian-speaking, based on extensive Russian-language operational documentation
  • Skill level: Low-to-medium baseline technical capability, significantly augmented by AI. The actor can run standard offensive tools and automate routine tasks but struggles with exploit compilation, custom development, and creative problem-solving during live operations
  • AI dependency: Extensive reliance across all operational phases. AI is used for tool development, attack planning, command generation, and operational reporting across multiple commercial LLM providers
  • Operational scale: Broad. Compromised devices across dozens of countries, with evidence of sustained operations over an extended period
  • Post-exploitation depth: Shallow. Repeated failures against hardened or non-standard targets, with a pattern of moving on rather than persisting when automated approaches fail
  • Operational security: Inadequate. Detailed operational plans, credentials, and victim data stored without encryption alongside tooling

Amazon’s response

Amazon Threat Intelligence remains committed to helping protect customers and the broader internet ecosystem by actively investigating and disrupting threat actors.

Upon discovering this campaign, Amazon Threat Intelligence took the following actions:

  • Shared actionable intelligence, including indicators of compromise, with relevant partners
  • Collaborated with industry partners to broaden visibility into the campaign and support coordinated defense efforts

Through these efforts, Amazon helped reduce the threat actor’s operational effectiveness and enabled organizations across multiple countries to take steps to disrupt the campaign.

Defending your organization

This campaign succeeded through a combination of exposed management interfaces, weak credentials, and single-factor authentication—all fundamental security gaps that AI helped an unsophisticated actor exploit at scale. This underscores that strong security fundamentals are powerful defenses against AI-augmented threats. Organizations should review and implement the following.

1. FortiGate appliance audit

Organizations running FortiGate appliances should take immediate action:

  • Ensure management interfaces are not exposed to the internet. If remote administration is required, restrict access to known IP ranges and use a bastion host or out-of-band management network
  • Change all default and common credentials on FortiGate appliances, including administrative and VPN user accounts
  • Rotate all SSL-VPN user credentials, particularly for any appliance whose management interface was or may have been internet-accessible
  • Implement multi-factor authentication for all administrative and VPN access
  • Review FortiGate configurations for unauthorized administrative accounts or policy changes
  • Audit VPN connection logs for connections from unexpected geographic locations

2. Credential hygiene

Given the extraction of credentials from FortiGate configurations:

  • Audit for password reuse between FortiGate VPN credentials and Active Directory domain accounts
  • Implement multi-factor authentication for all VPN access
  • Enforce unique, complex passwords for all accounts, particularly Domain Administrator accounts
  • Review and rotate service account credentials, especially those used in backup infrastructure
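The first item above, auditing for password reuse between FortiGate VPN credentials and Active Directory accounts, can be sketched as a set intersection over password fingerprints. This is a minimal illustration with hypothetical sample data; in practice you would use a dedicated credential-audit tool and compare hashes rather than handling plaintext.

```python
import hashlib

def fingerprint(password):
    """Hash a password so sets can be compared without storing plaintext."""
    return hashlib.sha256(password.encode()).hexdigest()

# Hypothetical credential sets, e.g. gathered during a forced reset.
vpn_creds = {"alice": "Winter2026!", "bob": "S3cure-and-unique"}
ad_creds = {"CORP\\alice": "Winter2026!", "CORP\\svc-backup": "Backup123"}

ad_fingerprints = {fingerprint(p) for p in ad_creds.values()}
reused = [user for user, pw in vpn_creds.items()
          if fingerprint(pw) in ad_fingerprints]
print(reused)  # ['alice']
```

Any account flagged this way gives an attacker who recovers a VPN password a direct path into the domain, which is exactly the pivot observed in this campaign.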

3. Post-exploitation detection

Organizations that may have been affected should monitor for:

  • Unexpected DCSync operations (Event ID 4662 with replication-related GUIDs)
  • New scheduled tasks named to mimic legitimate Windows services
  • Unusual remote management connections from VPN address pools
  • LLMNR/NBT-NS poisoning artifacts in network traffic
  • Unauthorized access to backup credential stores
  • New accounts with names designed to blend with legitimate service accounts
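The first detection above can be made concrete. The sketch below filters simplified event records for Event ID 4662 carrying the well-known directory-replication rights GUIDs that DCSync abuses, excluding machine accounts (domain controllers replicate legitimately). Event records are modeled as plain dicts for illustration; in practice these come from your SIEM or the Windows Security event log.

```python
REPLICATION_GUIDS = {
    "1131f6aa-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes
    "1131f6ad-9c07-11d1-f79f-00c04fc2dcd2",  # DS-Replication-Get-Changes-All
}

def flag_dcsync(events):
    """Return 4662 events with a replication GUID from a non-machine account."""
    hits = []
    for e in events:
        if e["event_id"] != 4662:
            continue
        guids = {g.lower() for g in e.get("properties", [])}
        if guids & REPLICATION_GUIDS and not e["subject"].endswith("$"):
            hits.append(e)
    return hits

events = [
    {"event_id": 4662, "subject": "CORP\\jdoe",
     "properties": ["1131F6AA-9C07-11D1-F79F-00C04FC2DCD2"]},
    {"event_id": 4662, "subject": "CORP\\DC01$",  # legitimate DC replication
     "properties": ["1131f6aa-9c07-11d1-f79f-00c04fc2dcd2"]},
    {"event_id": 4624, "subject": "CORP\\jdoe", "properties": []},
]
print([e["subject"] for e in flag_dcsync(events)])  # ['CORP\\jdoe']
```

A non-DC account triggering these access rights is a strong signal of credential-database extraction and warrants immediate investigation.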

4. Backup infrastructure hardening

The threat actor’s focus on backup infrastructure highlights the importance of:

  • Isolating backup servers from general network access
  • Patching backup software against known credential extraction vulnerabilities
  • Monitoring for unauthorized PowerShell module loading on backup servers
  • Implementing immutable backup copies that cannot be modified even with administrative access

AWS-specific recommendations

For organizations using AWS:

  • Enable Amazon GuardDuty for threat detection, including monitoring for unusual API calls and credential usage patterns
  • Use Amazon Inspector to automatically scan for software vulnerabilities and unintended network exposure
  • Use AWS Security Hub to maintain continuous visibility into your security posture
  • Use AWS Systems Manager Patch Manager to maintain patching compliance across EC2 instances running network appliances
  • Review IAM access patterns for signs of credential replay following any suspected network device compromise

Indicators of compromise (IOCs)

This campaign’s reliance on legitimate open-source tools—including Impacket, gogo, Nuclei, and others—means that traditional IOC-based detection has limited effectiveness. These tools are widely used by penetration testers and security professionals, and their presence alone is not indicative of compromise. Organizations should investigate context around matches, prioritizing behavioral detection (anomalous VPN authentication patterns, unexpected Active Directory replication, lateral movement from VPN address pools) over signature-based approaches.

| IOC Value | IOC Type | First Seen | Last Seen | Annotation |
|---|---|---|---|---|
| 212[.]11.64.250 | IPv4 | 1/11/2026 | 2/18/2026 | Threat actor infrastructure used for scanning and exploitation operations |
| 185[.]196.11.225 | IPv4 | 1/11/2026 | 2/18/2026 | Threat actor infrastructure used for threat operations |


If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

CJ Moses

CJ Moses is the CISO of Amazon Integrated Security. In his role, CJ leads security engineering and operations across Amazon. His mission is to enable Amazon businesses by making the benefits of security the path of least resistance. CJ joined Amazon in December 2007 and has held various roles, including Consumer CISO and, most recently, AWS CISO, before becoming CISO of Amazon Integrated Security in September 2023.

Prior to joining Amazon, CJ led the technical analysis of computer and network intrusion efforts at the Federal Bureau of Investigation’s Cyber Division. CJ also served as a Special Agent with the Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the security industry today.

CJ holds degrees in Computer Science and Criminal Justice, and is an active SRO GT America GT2 race car driver.
