With AWS Payment Cryptography, your payment processing applications can use payment hardware security modules (HSMs) that are PCI PIN Transaction Security (PTS) HSM certified and fully managed by AWS, with PCI PIN-compliant key management. This attestation gives you the flexibility to deploy your regulated workloads with reduced compliance overhead.
The PCI PIN compliance report package for AWS Payment Cryptography includes two key components:
PCI PIN Attestation of Compliance (AOC) – demonstrating that AWS Payment Cryptography was successfully validated against the PCI PIN standard with zero findings
PCI PIN Responsibility Summary – providing guidance to help AWS customers understand their responsibilities in developing and operating a highly secure environment for handling PIN-based transactions
AWS was evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). Customers can access the PCI PIN Attestation of Compliance (AOC) and PCI PIN Responsibility Summary reports through AWS Artifact.
To learn more about our PCI programs and other compliance and security programs, visit the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Compliance Support page.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Amazon Web Services (AWS) is pleased to announce the successful completion of the 2025 Cloud Computing Compliance Criteria Catalogue (C5) attestation cycle with 183 services in scope. This alignment with C5 requirements demonstrates our ongoing commitment to adhere to the heightened expectations for cloud service providers. AWS customers in Germany and across Europe can run their applications in the AWS Regions that are in scope of the C5 report with the assurance that AWS aligns with C5 criteria.
The C5 attestation scheme is backed by the German government and was introduced by the Federal Office for Information Security (BSI) in 2016. AWS has adhered to the C5 requirements since their inception. C5 helps organizations demonstrate operational security against common cybersecurity threats when using cloud services.
Independent third-party auditors evaluated AWS for the period of October 1, 2024, through September 30, 2025. The C5 report illustrates the compliance status of AWS for both the basic and additional criteria of C5. Customers can download the C5 report through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console or learn more at Getting Started with AWS Artifact.
AWS has added the following five services to the current C5 scope:
The following AWS Regions are in scope of the 2025 C5 attestation: Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Europe (Spain), Europe (Zurich), and Asia Pacific (Singapore). For up-to-date information, see the C5 page of our AWS Services in Scope by Compliance Program.
Security and compliance is a shared responsibility between AWS and the customer. When customers move their computer systems and data to the cloud, security responsibilities are shared between the customer and the cloud service provider. For more information, see the AWS Shared Security Responsibility Model.
To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.
Reach out to your AWS account team if you have questions or feedback about the C5 report. If you have feedback about this post, submit comments in the Comments section below.
Amazon Web Services (AWS) is pleased to announce the expansion of GSMA Security Accreditation Scheme for Subscription Management (SAS-SM) certification to four new AWS Regions: US West (Oregon), Europe (Frankfurt), Asia Pacific (Tokyo), and Asia Pacific (Singapore). Additionally, the AWS US East (Ohio) and Europe (Paris) Regions have been recertified. All certifications are under the GSM Association (GSMA) SAS-SM with scope Data Centre Operations and Management (DCOM). AWS was evaluated by GSMA-selected independent third-party auditors, and all Region certifications are valid through October 2026. The Certificate of Compliance that shows AWS achieved GSMA compliance status is available on both the GSMA and AWS websites.
The US East (Ohio) Region first obtained GSMA certification in September 2021, and the Europe (Paris) Region first obtained GSMA certification in October 2021. Since then, multiple independent software vendors (ISVs) have inherited the controls of our SAS-SM DCOM certification to build GSMA-compliant subscription management or eSIM (embedded subscriber identity module) services on AWS. For established market leaders, this reduces technical debt while meeting the scalability and performance needs of their customers. Startups innovating with eSIM solutions can accelerate their time to market by many months, compared to on-premises deployments.
Until 2023, the shift from physical subscriber identity modules (SIMs) to eSIMs was primarily driven by automotive applications, cellular-connected wearables, and companion devices such as tablets. GSMA is promoting the SGP.31 and SGP.32 specifications, which standardize protocols and guarantee compatibility and a consistent user experience across all eSIM devices, spanning smartphones, smart home devices, industrial Internet of Things (IoT) applications, and more. As more device manufacturers launch eSIM-only models, our customers are demanding robust, cloud-centered eSIM solutions. Over 400 telecom operators around the world now support eSIM services for their subscribers. Hosting eSIM platforms in the cloud allows them to integrate efficiently with their next-generation cloud-based operations support systems (OSS) and business support systems (BSS).
The AWS expansion to certify four new Regions into scope in November 2025 demonstrates our continuous commitment to adhere to the heightened expectations for cloud service providers and extends our global coverage for GSMA-certified infrastructure. With two GSMA-certified Regions in the US, EU, and Asia respectively, customers can now build geo-redundant eSIM solutions to improve their disaster recovery and resiliency posture.
To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page. If you have feedback about this post, submit comments in the Comments section below.
Amazon Web Services (AWS) is pleased to announce that the Fall 2025 System and Organization Controls (SOC) 1, 2, and 3 reports are now available. The reports cover 185 services over the 12-month period from October 1, 2024, through September 30, 2025, giving customers a full year of assurance. These reports demonstrate our continuous commitment to adhering to the heightened expectations of cloud service providers.
AWS strives to continuously bring services into the scope of its compliance programs to help customers meet their architectural and regulatory needs. You can view the current list of services in scope on our Services in Scope page. As an AWS customer, you can reach out to your AWS account team if you have any questions or feedback about SOC compliance.
To learn more about AWS compliance and security programs, see AWS Compliance Programs. As always, we value feedback and questions; reach out to the AWS Compliance team through the Contact Us page.
If you have feedback about this post, submit comments in the Comments section below.
Amazon Web Services (AWS) is pleased to announce that two additional AWS services and one additional AWS Region have been added to the scope of our Payment Card Industry Data Security Standard (PCI DSS) certification:
This certification allows customers to use these services while maintaining PCI DSS compliance, enabling innovation without compromising security. The full list of services can be found on the AWS Services in Scope by Compliance Program page. The PCI DSS compliance package includes two key components:
Attestation of Compliance (AOC) – demonstrating that AWS was successfully validated against the PCI DSS standard
AWS Responsibility Summary – providing guidance to help AWS customers understand their responsibility in developing and operating a highly secure environment on AWS for handling payment card data
AWS was evaluated by Coalfire, a third-party Qualified Security Assessor (QSA).
This refreshed PCI certification offers customers greater flexibility in deploying regulated workloads while reducing compliance overhead. Customers can access the PCI DSS certification through AWS Artifact. This self-service portal provides on-demand access to AWS compliance reports, streamlining audit processes.
AWS is excited to be the first cloud service provider to offer compliance reports to customers in NIST’s Open Security Controls Assessment Language (OSCAL), an open source, machine-readable (JSON) format for security information. The PCI DSS report package (which includes both the PCI DSS AOC and the AWS Responsibility Summary) in OSCAL format is now available separately in AWS Artifact, marking a milestone towards open, standards-based compliance automation. This machine-readable version of the PCI DSS report package enables workflow automation to reduce manual processing time and modernize security and compliance processes. We want to hear about your innovative use cases for this content; reach out through the contact information found in the OSCAL report package.
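As an illustration of the kind of automation a machine-readable report enables, the sketch below parses a simplified JSON fragment modeled on the OSCAL assessment-results layout. The field names and sample content are assumptions for illustration, not the actual structure of the AWS report package.

```python
import json

# Simplified fragment modeled on NIST OSCAL assessment-results JSON.
# The real AWS report package may differ; treat these field names
# and contents as illustrative only.
sample = """
{
  "assessment-results": {
    "results": [
      {
        "title": "PCI DSS Assessment",
        "findings": [
          {"title": "Requirement 1", "description": "In place"},
          {"title": "Requirement 2", "description": "In place"}
        ]
      }
    ]
  }
}
"""

def list_findings(doc: str) -> list[str]:
    """Extract finding titles from an OSCAL-style assessment-results document."""
    data = json.loads(doc)
    titles = []
    for result in data["assessment-results"]["results"]:
        for finding in result.get("findings", []):
            titles.append(finding["title"])
    return titles

print(list_findings(sample))  # ['Requirement 1', 'Requirement 2']
```

Because the input is structured JSON rather than a PDF, a pipeline like this can feed findings directly into ticketing or GRC tooling.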
To learn more about our PCI programs and other compliance and security programs, see the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Compliance Support page.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
For the third year in a row, Amazon Web Services (AWS) is named as a Leader in the Information Services Group (ISG) Provider Lens™ Quadrant report for Sovereign Cloud Infrastructure Services (EU), published on January 9, 2026. ISG is a leading global technology research, analyst, and advisory firm that serves as a trusted business partner to more than 900 clients. This ISG report evaluates 19 providers of sovereign cloud infrastructure services in the multi-public-cloud environment and examines how they address the key challenges that enterprise clients face in the European Union (EU). ISG defines Leaders as providers who represent innovative strength and competitive stability.
ISG rated AWS ahead of other leading cloud providers on both the competitive strength and portfolio attractiveness axes, with the highest score on portfolio attractiveness. Competitive strength was assessed on multiple factors, including degree of awareness, core competencies, and go-to-market strategy. Portfolio attractiveness was assessed on multiple factors, including scope of portfolio, portfolio quality, strategy and vision, and local characteristics.
According to ISG, “AWS’s infrastructure provides robust resilience and availability, supported by a sovereign-by-design architecture that ensures data residency and regional independence.”
Read the report to:
Discover why AWS was named as a Leader with the highest score on portfolio attractiveness by ISG.
Gain further understanding on how the AWS Cloud is sovereign-by-design and how it continues to offer more control and more choice without compromising on the full power of AWS.
Learn how AWS is delivering on its Digital Sovereignty Pledge and is investing in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience.
AWS’s recognition as a Leader in this report for the third consecutive year underscores our commitment to helping European customers and partners meet their digital sovereignty and resilience requirements. We are building on the strong foundation of security and resilience that has underpinned AWS services, including our long-standing commitment to customer control over data residency, our design principle of strong regional isolation, our deep European engineering roots, and our more than a decade of experience operating multiple independent clouds for the most critical and restricted workloads.
Today, we’re excited to announce expanded service availability for Dedicated Local Zones, giving customers more choice and control without compromise. In addition to the data residency, sovereignty, and data isolation benefits they already enjoy, the expanded service list gives customers additional options for compute, storage, backup, and recovery.
Dedicated Local Zones are AWS infrastructure fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center. They help customers across the public sector and regulated industries meet security and compliance requirements for sensitive data and applications through a private infrastructure solution configured to meet their needs. Dedicated Local Zones can be operated by local AWS personnel and offer the same benefits of AWS Local Zones, such as elasticity, scalability, and pay-as-you-go pricing, with added security and governance features.
Since launch, Dedicated Local Zones have supported a core set of compute, storage, database, containers, and other services and features for local processing. We continue to innovate and expand our offerings based on what we hear from customers to help meet their unique needs.
More choice and control without compromise
The following new services and capabilities deliver greater flexibility for customers to run their most critical workloads while maintaining strict data residency and sovereignty requirements.
New generation instance types
To support complex workloads in AI and high-performance computing, customers can now use newer generation instance types, including Amazon Elastic Compute Cloud (Amazon EC2) generation 7 with accelerated computing capabilities.
AWS storage options
AWS storage options include two storage classes: Amazon Simple Storage Service (Amazon S3) Express One Zone, which offers high-performance storage for customers’ most frequently accessed data, and Amazon S3 One Zone-Infrequent Access, which is designed for data that is accessed less frequently and is ideal for backups.
Advanced block storage capabilities are delivered through Amazon Elastic Block Store (Amazon EBS) gp3 and io1 volumes, which customers can use to store data within a specific perimeter to support critical data isolation and residency requirements. By using the latest AWS general purpose SSD volumes (gp3), customers can provision performance independently of storage capacity with an up to 20% lower price per gigabyte than existing gp2 volumes. For intensive, latency-sensitive transactional workloads, such as enterprise databases, provisioned IOPS SSD (io1) volumes provide the necessary performance and reliability.
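To make the gp3 point concrete, here is a minimal sketch that validates provisioning parameters before a CreateVolume call. The limits below reflect published gp3 ranges at the time of writing; confirm current values in the Amazon EBS documentation, and treat the example sizes as hypothetical.

```python
# Illustrative pre-flight check before provisioning a gp3 volume,
# e.g. via the AWS CLI:
#   aws ec2 create-volume --volume-type gp3 --size 100 \
#       --iops 6000 --throughput 250 --availability-zone <zone>
# Limits are the published gp3 ranges; verify against current EBS docs.

def validate_gp3(size_gib: int, iops: int, throughput_mibps: int) -> bool:
    """Return True if the gp3 parameters fall inside supported ranges."""
    if not 1 <= size_gib <= 16384:            # 1 GiB to 16 TiB
        return False
    if not 3000 <= iops <= 16000:             # 3,000 baseline, 16,000 max
        return False
    if not 125 <= throughput_mibps <= 1000:   # 125 MiB/s baseline, 1,000 max
        return False
    if iops > 500 * size_gib:                 # at most 500 IOPS per GiB
        return False
    return True

print(validate_gp3(100, 6000, 250))   # True
print(validate_gp3(100, 60000, 250))  # False: IOPS above gp3 maximum
```

Because gp3 decouples IOPS and throughput from capacity, a check like this catches requests that gp2 sizing habits would otherwise get wrong.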
Backup and recovery capabilities
We have added backup and recovery capabilities through Amazon EBS Local Snapshots, which provides robust support for disaster recovery, data migration, and compliance. Customers can create backups within the same geographical boundary as EBS volumes, helping meet data isolation requirements. Customers can also create AWS Identity and Access Management (IAM) policies for their accounts to enable storing snapshots within the Dedicated Local Zone. To automate the creation and retention of local snapshots, customers can use Amazon Data Lifecycle Manager (DLM).
Customers can use local Amazon Machine Images (AMIs) to create and register AMIs while maintaining underlying local EBS snapshots within Dedicated Local Zones, helping achieve adherence to data residency requirements. By creating AMIs from EC2 instances or registering AMIs using locally stored snapshots, customers maintain complete control over their data’s geographical location.
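To picture the retention behavior that Amazon Data Lifecycle Manager automates, here is a minimal keep-newest-N pruning sketch. Snapshot records are plain dicts and the retention count is a hypothetical example; in practice a DLM policy and the EC2 snapshot APIs handle this for you.

```python
from datetime import datetime, timedelta

# Illustrative sketch of create-and-retain snapshot lifecycle logic.
# Real deployments would express this as a DLM policy rather than code.

def prune_snapshots(snapshots: list[dict], retain_count: int) -> list[dict]:
    """Keep only the newest `retain_count` snapshots; older ones are pruned."""
    ordered = sorted(snapshots, key=lambda s: s["created"], reverse=True)
    return ordered[:retain_count]

now = datetime(2025, 11, 1)
# Ten daily snapshots, snap-0 being the most recent.
daily = [{"id": f"snap-{i}", "created": now - timedelta(days=i)} for i in range(10)]
kept = prune_snapshots(daily, retain_count=7)
print([s["id"] for s in kept])
# ['snap-0', 'snap-1', 'snap-2', 'snap-3', 'snap-4', 'snap-5', 'snap-6']
```

The same keep-newest-N rule applies whether the snapshots live regionally or stay within a Dedicated Local Zone boundary.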
One of GovTech Singapore’s key focuses is on the nation’s digital government transformation and enhancing the public sector’s engineering capabilities. Our collaboration with GovTech Singapore involved configuring their Dedicated Local Zones with specific services and capabilities to support their workloads and meet stringent regulatory requirements. This architecture addresses data isolation and security requirements and ensures consistency and efficiency across Singapore Government cloud environments.
With the availability of the new AWS services with Dedicated Local Zones, government agencies can simplify operations and meet their digital sovereignty requirements more effectively. For instance, agencies can use Amazon Relational Database Service (Amazon RDS) to create new databases rapidly. Amazon RDS in Dedicated Local Zones helps simplify database management by automating tasks such as provisioning, configuring, backing up, and patching. This collaboration is just one example of how AWS innovates to meet customer needs and configures Dedicated Local Zones based on specific requirements.
Chua Khi Ann, Director of GovTech Singapore’s Government Digital Products division, who oversees the Cloud Programme, shared: “The deployment of Dedicated Local Zones by our Government on Commercial Cloud (GCC) team, in collaboration with AWS, now enables Singapore government agencies to host systems with confidential data in the cloud. By leveraging cloud-native services like advanced storage and compute, we can achieve better availability, resilience, and security of our systems, while reducing operational costs compared to on-premises infrastructure.”
Get started with Dedicated Local Zones
AWS understands that every customer has unique digital sovereignty needs, and we remain committed to offering customers the most advanced set of sovereignty controls and security features available in the cloud. Dedicated Local Zones are designed to be customizable, resilient, and scalable across different regulatory environments, so that customers can drive ongoing innovation while meeting their specific requirements.
Ready to explore how Dedicated Local Zones can support your organization’s digital sovereignty journey? Visit AWS Dedicated Local Zones to learn more.
Amazon Web Services (AWS) is introducing guidelines for network scanning of customer workloads. By following these guidelines, conforming scanners will collect more accurate data, minimize abuse reports, and help improve the security of the internet for everyone.
Network scanning is a practice in modern IT environments that can be used for either legitimate security needs or abused for malicious activity. On the legitimate side, organizations conduct network scans to maintain accurate inventories of their assets, verify security configurations, and identify potential vulnerabilities or outdated software versions that require attention. Security teams, system administrators, and authorized third-party security researchers use scanning in their standard toolkit for collecting security posture data. However, scanning is also performed by threat actors attempting to enumerate systems, discover weaknesses, or gather intelligence for attacks. Distinguishing between legitimate scanning activity and potentially harmful reconnaissance is a constant challenge for security operations.
When software vulnerabilities are found through scanning a given system, it’s particularly important that the scanner is well-intentioned. If a software vulnerability is discovered and attacked by a threat actor, it could allow unauthorized access to an organization’s IT systems. Organizations must effectively manage their software vulnerabilities to protect themselves from ransomware, data theft, operational issues, and regulatory penalties. At the same time, the scale of known vulnerabilities is growing rapidly, at a rate of 21% per year over the past 10 years, as reported in the NIST National Vulnerability Database.
With these factors at play, network scanner operators need to conduct scans and manage the collected security data with care. A variety of parties are interested in security data, and each group uses the data differently. If security data is discovered and abused by threat actors, then system compromises, ransomware, and denial of service can create disruption and costs for system owners. With the exponential growth of data centers and connected software workloads providing critical services across energy, manufacturing, healthcare, government, education, finance, and transportation sectors, the impact of security data in the wrong hands can have significant real-world consequences.
Multiple parties
Multiple parties have vested interests in security data, including at least the following groups:
Organizations want to understand their asset inventories and patch vulnerabilities quickly to protect their assets.
Program auditors want evidence that organizations have robust controls in place to manage their infrastructure.
Cyber insurance providers want risk evaluations of organizational security posture.
Investors performing due diligence want to understand the cyber risk profile of an organization.
Security researchers want to identify risks and notify organizations to take action.
Threat actors want to exploit unpatched vulnerabilities and weaknesses for unauthorized access.
The sensitive nature of security data creates a complex ecosystem of competing interests, where an organization must maintain different levels of data access for different parties.
Motivation for the guidelines
We’ve described both the legitimate and malicious uses of network scanning, and the different parties that have an interest in the resulting data. We’re introducing these guidelines because we need to protect our networks and our customers, and telling the difference between these parties is challenging. There’s no single standard for the identification of network scanners on the internet. As such, system owners and defenders often don’t know who is scanning their systems. Each system owner is independently responsible for managing identification of these different parties. Network scanners might use unique methods to identify themselves, such as reverse DNS, custom user agents, or dedicated network ranges. Malicious actors might attempt to evade identification altogether. This degree of identity variance makes it difficult for system owners to know the motivation of parties performing network scanning.
To address this challenge, we’re introducing behavioral guidelines for network scanning. AWS seeks to provide network security for every customer; our goal is to screen out abusive scanning that doesn’t meet these guidelines. Parties that perform broad network scanning can follow these guidelines to receive more reliable data from AWS IP space, and organizations running on AWS gain a higher degree of assurance in their risk management.
When network scanning is managed according to these guidelines, it helps system owners strengthen their defenses and improve visibility across their digital ecosystem. For example, Amazon Inspector can detect software vulnerabilities and prioritize remediation efforts while conforming to these guidelines. Similarly, partners in AWS Marketplace use these guidelines to collect internet-wide signals and help organizations understand and manage cyber risk.
“When organizations have clear, data-driven visibility into their own security posture and that of their third parties, they can make faster, smarter decisions to reduce cyber risk across the ecosystem.” – Dave Casion, CTO, Bitsight
Of course, security works better together, so AWS customers can report abusive scanning to our Trust & Safety Center as type Network Activity > Port Scanning and Intrusion Attempts. Each report helps improve the collective protection against malicious use of security data.
The guidelines
To help ensure that legitimate network scanners can clearly differentiate themselves from threat actors, AWS offers the following guidance for scanning customer workloads. This guidance on network scanning complements the policies on penetration testing and vulnerability reporting. AWS reserves the right to limit or block traffic that appears non-compliant with these guidelines. A conforming scanner adheres to the following practices:
Observational
Perform no actions that attempt to create, modify, or delete resources or data on discovered endpoints.
Respect the integrity of targeted systems. Scans cause no degradation to system function and cause no change in system configuration.
Examples of non-mutating scanning include:
Initiating and completing a TCP handshake
Retrieving the banner from an SSH service
Identifiable
Provide transparency by publishing sources of scanning activity.
Implement a verifiable process for confirming the authenticity of scanning activities.
Examples of identifiable scanning include:
Supporting reverse DNS lookups to one of your organization’s public DNS zones for scanning IPs.
Publishing scanning IP ranges, organized by types of requests (such as service existence, vulnerability checks).
For HTTP scanning, including meaningful content in user agent strings (such as names from your public DNS zones or a URL for opting out).
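One common way a system owner can check the reverse-DNS example above is forward-confirmed reverse DNS (FCrDNS): the IP must reverse-resolve into the scanner's published zone, and that hostname must forward-resolve back to the same IP. The sketch below uses injected resolver functions so it runs without network access; in practice you would pass wrappers around `socket.gethostbyaddr` and `socket.getaddrinfo`. The zone and addresses are hypothetical.

```python
# Sketch of forward-confirmed reverse DNS (FCrDNS) verification of a
# scanning IP. Resolver functions are injected for testability; the
# example.com zone and 192.0.2.x addresses are documentation examples.

def verify_scanner_ip(ip, claimed_zone, reverse_lookup, forward_lookup) -> bool:
    """True if ip reverse-resolves into claimed_zone AND that hostname
    forward-resolves back to the same ip."""
    hostname = reverse_lookup(ip)
    if hostname is None or not hostname.endswith("." + claimed_zone):
        return False
    return ip in forward_lookup(hostname)

# Stub resolvers standing in for real DNS lookups.
rdns = {"192.0.2.10": "scanner1.example.com"}.get
fdns = lambda host: {"scanner1.example.com": ["192.0.2.10"]}.get(host, [])

print(verify_scanner_ip("192.0.2.10", "example.com", rdns, fdns))  # True
print(verify_scanner_ip("192.0.2.99", "example.com", rdns, fdns))  # False
```

The forward confirmation step matters: reverse DNS alone can be set to any name by whoever controls the IP's PTR record.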
Cooperative
Limit scan rates to minimize impact on target systems.
Provide an opt-out mechanism for verified resource owners to request cessation of scanning activity.
Honor opt-out requests within a reasonable response period.
Examples of cooperative scanning include:
Limit scanning to one service transaction per second per destination service.
Respect site settings as expressed in robots.txt and security.txt and other such industry standards for expressing site owner intent.
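The one-transaction-per-second example above can be sketched as a per-destination limiter. This is an illustrative implementation of the cooperative behavior, not part of the guidelines themselves; a real scanner would wire the check into its connection scheduler.

```python
from collections import defaultdict

# Illustrative cooperative rate limiter: at most one service transaction
# per second per destination service.

class PerDestinationLimiter:
    def __init__(self, min_interval_s: float = 1.0):
        self.min_interval_s = min_interval_s
        self.last_sent = defaultdict(lambda: float("-inf"))

    def allow(self, destination: str, now: float) -> bool:
        """Return True and record the send if enough time has elapsed
        since the last transaction to this destination."""
        if now - self.last_sent[destination] >= self.min_interval_s:
            self.last_sent[destination] = now
            return True
        return False

limiter = PerDestinationLimiter()
print(limiter.allow("198.51.100.7:443", now=0.0))  # True  (first probe)
print(limiter.allow("198.51.100.7:443", now=0.5))  # False (too soon)
print(limiter.allow("198.51.100.7:443", now=1.5))  # True  (interval elapsed)
```

Keying the limiter on the destination service, rather than globally, lets a scanner cover many hosts in parallel while staying gentle on each one.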
Confidential
Maintain secure infrastructure and data handling practices as reflected by industry-standard certifications such as SOC 2.
Ensure no unauthenticated or unauthorized access to collected scan data.
Implement user identification and verification processes.
As more network scanners follow this guidance, system owners will benefit from reduced risk to their confidentiality, integrity, and availability. Legitimate network scanners will send a clear signal of their intent and improve the quality of their visibility. With the constantly changing state of networking, we expect that this guidance will evolve along with technical controls over time. We look forward to input from customers, system owners, network scanners, and others to continue improving security posture across AWS and the internet.
If you have feedback about this post, submit comments in the Comments section below or contact AWS Support.
If you’ve ever spent hours manually digging through AWS CloudTrail logs, checking AWS Identity and Access Management (IAM) permissions, and piecing together the timeline of a security event, you understand the time investment required for incident investigation. Today, we’re excited to announce the addition of AI-powered investigation capabilities to AWS Security Incident Response that automate this evidence gathering and analysis work.
AWS Security Incident Response helps you prepare for, respond to, and recover from security events faster and more effectively. The service combines automated security finding monitoring and triage, containment, and now AI-powered investigation capabilities with 24/7 direct access to the AWS Customer Incident Response Team (CIRT).
When you investigate a suspicious API call or unusual network activity, scoping and validation require querying multiple data sources, correlating timestamps, identifying related events, and building a complete picture of what happened. Security operations center (SOC) analysts devote a significant amount of time to each investigation, with roughly half of that effort spent manually gathering and piecing together evidence from various tools and complex logs. This manual effort can delay your analysis and response.
AWS is introducing an investigative agent to Security Incident Response, changing this paradigm and adding layers of efficiency. The investigative agent helps you reduce the time required to validate and respond to potential security events. When a case for a security concern is created, either by you or proactively by Security Incident Response, the investigative agent asks clarifying questions to make sure it understands the full context of the potential security event. It then automatically gathers evidence from CloudTrail events, IAM configurations, and Amazon Elastic Compute Cloud (Amazon EC2) instance details and even analyzes cost usage patterns. Within minutes, it correlates the evidence, identifies patterns, and presents you with a clear summary.
How it works in practice
Before diving into an example, let’s paint a clear picture of where the investigative agent lives, how it’s accessed, and its purpose and function. The investigative agent is built directly into Security Incident Response and is automatically available when you create a case. Its purpose is to act as your first responder—gathering evidence, correlating data across AWS services, and building a comprehensive timeline of events so you can quickly move from detection to recovery.
For example: you discover that AWS credentials for an IAM user in your account were exposed in a public GitHub repository. You need to understand what actions were taken with those credentials and properly scope the potential security event, including lateral movement and reconnaissance operations. You need to identify persistence mechanisms that might have been created and determine the appropriate containment steps. To get started, you create a case in the Security Incident Response console and describe the situation.
Here’s where the agent’s approach differs from traditional automation: it asks clarifying questions first. When were the credentials first exposed? What’s the IAM user name? Have you already rotated the credentials? Which AWS account is affected?
This interactive step gathers the appropriate details and metadata before the agent starts gathering evidence. As a result, you’re not stuck with generic results; the investigation is tailored to your specific concern.
After the agent has what it needs, it investigates. It looks up CloudTrail events to see what API calls were made using the compromised credentials, pulls IAM user and role details to check what permissions were granted, identifies new IAM users or roles that were created, checks EC2 instance information if compute resources were launched, and analyzes cost and usage patterns for unusual resource consumption. Instead of you querying each AWS service, the agent orchestrates this automatically.
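The evidence-gathering step above can be pictured as filtering CloudTrail-style events by access key and time window. The sketch below uses fabricated event dicts that mirror a few fields from CloudTrail's LookupEvents response (exposed in boto3 as `cloudtrail.lookup_events` with an `AccessKeyId` lookup attribute); it illustrates the idea, not the agent's actual implementation, and all identifiers are hypothetical.

```python
from datetime import datetime

# Illustrative evidence filter over CloudTrail-style event records.
# Field names mirror a subset of the LookupEvents response; the events
# themselves are fabricated examples.

def events_for_access_key(events: list[dict], access_key: str,
                          start: datetime, end: datetime) -> list[dict]:
    """Return events made with the given access key inside the time window."""
    return [
        e for e in events
        if e.get("AccessKeyId") == access_key and start <= e["EventTime"] <= end
    ]

events = [
    {"EventName": "RunInstances", "AccessKeyId": "AKIAEXAMPLE",
     "EventTime": datetime(2025, 11, 1, 12, 0)},
    {"EventName": "CreateUser", "AccessKeyId": "AKIAEXAMPLE",
     "EventTime": datetime(2025, 11, 1, 12, 5)},
    {"EventName": "DescribeInstances", "AccessKeyId": "AKIAOTHER",
     "EventTime": datetime(2025, 11, 1, 12, 6)},
]
hits = events_for_access_key(events, "AKIAEXAMPLE",
                             datetime(2025, 11, 1, 11, 0),
                             datetime(2025, 11, 1, 13, 0))
print([e["EventName"] for e in hits])  # ['RunInstances', 'CreateUser']
```

The agent performs this kind of correlation across several sources at once, which is exactly the part that is slow to do by hand.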
Within minutes, you get a summary, as shown in the following figure. The investigation summary includes a high-level summary and critical findings, which include the credential exposure pattern, observed activity and the timeframe, affected resources, and limiting factors.
This response was generated using AWS Generative AI capabilities. You are responsible for evaluating any recommendations in your specific context and implementing appropriate oversight and safeguards. Learn more about AWS Responsible AI requirements.
Note: The preceding example is representative output. Exact formatting will vary depending on findings.
The investigation summary includes various tabs for detailed information, such as technical findings with an events timeline, as shown in the following figure:
Figure 2 – Security event timeline
When seconds count, this transparency is paramount to a quick, high-fidelity, and accurate response—especially if you need to escalate to the AWS CIRT, a dedicated group of AWS security experts, or explain your findings to leadership, creating a single lens for stakeholders to view the incident.
When the investigation is complete, you have a high-resolution picture of what happened and can make informed decisions about containment, eradication, and recovery. For the preceding exposed credentials scenario, you might need to:
Delete the compromised access keys
Remove the newly created IAM role
Terminate the unauthorized EC2 instances
Review and revert associated IAM policy changes
Check for additional access keys created for other users
When you engage with the CIRT, they can provide additional guidance on containment strategies based on the evidence the agent gathered.
What this means for your security operations
The leaked credentials scenario shows what the agent can do for a single incident. But the bigger impact is on how you operate day-to-day:
You spend less time on evidence collection. The investigative agent automates the most time-consuming part of an investigation: gathering and correlating evidence from multiple sources. Instead of spending an hour on manual log analysis, you can spend that time making containment decisions and preventing recurrence.
You can investigate in plain language. The investigative agent uses natural language processing (NLP), so you can describe what you’re investigating in plain language, such as unusual API calls from IP address X or data access from a terminated employee’s credentials, and the agent translates that into the technical queries needed. You don’t need to be an expert in AWS log formats or know the exact syntax for querying CloudTrail.
You get a foundation for high-fidelity investigations. The investigative agent handles the initial investigation: gathering evidence, identifying patterns, and providing a comprehensive summary. If your case requires deeper analysis or you need guidance on complex scenarios, you can engage the AWS CIRT, who can immediately build on the work the agent has already done, speeding up their response. They see the same evidence and timeline, so they can focus on advanced threat analysis and containment strategies rather than starting from scratch.
Getting started
If you already have Security Incident Response enabled, the AI-powered investigation capabilities are available now—no additional configuration needed. Create your next security case and the agent will start working automatically.
If you’re new to Security Incident Response, here’s how to set it up:
Enable Security Incident Response through your AWS Organizations management account. This takes a few minutes through the AWS Management Console and provides coverage across your accounts.
Create a case. Describe what you’re investigating; you can do this through the Security Incident Response console or an API, or set up automatic case creation from Amazon GuardDuty or AWS Security Hub alerts.
Review the analysis. The agent presents its findings through the Security Incident Response console, or you can access them through your existing ticketing systems such as Jira or ServiceNow.
The investigative agent uses the AWS Support service-linked role to gather information from your AWS resources. This role is automatically created when you set up your AWS account and provides the necessary access for Support tools to query CloudTrail events, IAM configurations, EC2 details, and cost data. Actions taken by the agent are logged in CloudTrail for full auditability.
The investigative agent is included at no additional cost with Security Incident Response, which now offers metered pricing with a free tier covering your first 10,000 findings ingested per month. Beyond that, findings are billed at rates that decrease with volume. With this consumption-based approach, you can scale your security incident response capabilities as your needs grow.
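As a simple illustration of the free tier described above, the billable volume in a month is whatever exceeds the first 10,000 findings ingested. The per-finding rates beyond that tier decrease with volume and aren’t stated in this post, so they’re not modeled here.

```python
# Minimal illustration of the Security Incident Response free tier.
# Only the 10,000-finding free tier from this post is modeled; the
# volume-discounted rates beyond it are not published here.
FREE_TIER_FINDINGS = 10_000  # first 10,000 findings ingested per month are free

def billable_findings(ingested_per_month):
    """Findings that fall outside the monthly free tier."""
    return max(0, ingested_per_month - FREE_TIER_FINDINGS)

print(billable_findings(8_500))   # prints 0: fully inside the free tier
print(billable_findings(25_000))  # prints 15000: billed at discounted rates
```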
How it fits with existing tools
Security Incident Response cases can be created by customers or proactively by the service. The investigative agent is automatically triggered when a new case is created, and cases can be managed through the console, API, or Amazon EventBridge integrations.
You can use EventBridge to build automated workflows that route security events from GuardDuty, Security Hub, and Security Incident Response itself to create cases and initiate response plans, enabling end-to-end detection-to-investigation pipelines. Before the investigative agent begins its work, the service’s auto-triage system monitors and filters security findings from GuardDuty and third-party security tools through Security Hub. It uses customer-specific information, such as known IP addresses and IAM entities, to filter findings based on expected behavior, reducing alert volume while escalating alerts that require immediate attention. This means the investigative agent focuses on alerts that actually need investigation.
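For example, a detection-to-case pipeline might begin with an EventBridge rule that matches only higher-severity GuardDuty findings. The sketch below is a hedged illustration: the rule name and severity threshold are assumptions, and the rule target that actually creates the case is omitted.

```python
# Hedged sketch: an EventBridge rule matching high-severity GuardDuty findings.
# The rule name and severity threshold are assumptions; the target that
# creates the Security Incident Response case is configured separately.
import json

def guardduty_pattern(min_severity=7.0):
    """EventBridge event pattern for GuardDuty findings at/above a severity."""
    return {
        "source": ["aws.guardduty"],
        "detail-type": ["GuardDuty Finding"],
        "detail": {"severity": [{"numeric": [">=", min_severity]}]},
    }

def create_rule(rule_name="route-high-severity-findings", min_severity=7.0):
    """Create or update the rule; add a target to it in a separate step."""
    import boto3
    events = boto3.client("events")
    return events.put_rule(
        Name=rule_name,
        EventPattern=json.dumps(guardduty_pattern(min_severity)),
        State="ENABLED",
    )
```

Lower-severity findings never match the pattern, so they stay with the auto-triage filtering described above instead of fanning out to downstream targets.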
Conclusion
In this post, I showed you how the new investigative agent in AWS Security Incident Response automates evidence gathering and analysis, reducing the time required to investigate security events from hours to minutes. The agent asks clarifying questions to understand your specific concern, automatically queries multiple AWS data sources, correlates evidence, and presents you with a comprehensive timeline and summary while maintaining full transparency and auditability.
With the addition of the investigative agent, Security Incident Response customers now get the speed and efficiency of AI-powered automation, backed by the expertise and oversight of AWS security experts when needed.
The AI-powered investigation capabilities are available today in all commercial AWS Regions where Security Incident Response operates. To learn more about pricing and features, or to get started, visit the AWS Security Incident Response product page.
If you have feedback about this post, submit comments in the Comments section below.