โŒ

Normal view

How the National Cyber Strategy Secures Our Digital Way of Life

6 March 2026 at 21:59

A Pivotal Moment for National Security

As the digital landscape undergoes profound shifts, the recently released National Cyber Strategy provides the essential foundation for enduring American leadership. By prioritizing the disruption of hostile actors, future-proofing networks, accelerating quantum readiness, and securing the AI frontier, the strategy provides the strategic clarity necessary to protect our digital way of life from sophisticated adversaries. Palo Alto Networks commends National Cyber Director Sean Cairncross for his leadership and looks forward to working with the administration to operationalize this strategy.

Each pillar of the strategy galvanizes meaningful action to advance our collective defense:

Shape Adversary Behavior (Pillar 1)

This signals a decisive shift toward the proactive disruption of malicious actors. The Trump Administration has made clear that the U.S. Government should impose real costs on adversaries to change their behavior. While the private sector is already executing discrete disruptions against malicious actors, coordination has historically been fragmented. The strategy identifies that increased collaboration with private sector entities, who possess unique insight into adversary behavior, can in turn enable more impactful deterrence.

Promote Common Sense Regulation (Pillar 2)

The strategy appropriately recognizes that complexity is the enemy of security. A focus on measurable improvements in cyber outcomes (versus check-the-box compliance exercises) collectively makes us all safer. While much attention is rightfully paid to harmonizing incident reporting requirements, which Palo Alto Networks wholeheartedly supports, let’s not stop there. The federal government can lead by example by consolidating and streamlining its software compliance certifications. For example, there should be logical reciprocity between FedRAMP High and DoW IL-5 certifications.

Modernize and Secure Federal Government Networks (Pillar 3)

In addition to the necessary attention to AI-powered cyber defense, cloud security and zero trust network architecture, Palo Alto Networks applauds the discrete focus on quantum-safe security ahead of “Q-Day,” the point at which quantum computing capabilities will compromise the legacy public key encryption that has underpinned cybersecurity for decades. As Federal CISO Mike Duffy recently stated, “Modernization without considering PQC readiness or cryptographic agility is really creating technical debt in the future, something that we don’t want to see ever.”

To address this challenge, Palo Alto Networks provides a structured quantum-safe framework organized into four stages:

  • Continuous Discovery – Automating ecosystem ingestion to identify cryptographic dependencies (see the sketch after this list).
  • Risk Assessment & Prioritization – Evaluating vulnerabilities to establish a data-driven remediation roadmap.
  • Comprehensive Remediation – Executing the transition to post-quantum algorithms across the architecture.
  • Governance & Crypto-Hygiene – Maintaining long-term visibility and management.

The bottom line is that 2035 is too late. Quantum readiness must accelerate today, and this strategy will set a critical North Star to drive the necessary urgency.

Secure Critical Infrastructure (Pillar 4)

Critical infrastructure resilience is central to our homeland security, economic security, public health and safety. Unfortunately, critical infrastructure entities are increasingly under assault from emboldened cyber adversaries.

In fact, Palo Alto Networks research shows some form of operational disruption in up to 86% of major cyber incidents. Our 2026 Global Incident Response Report underscores another sobering reality: These entities are under assault from all angles. In 87% of cyber incidents, attacks targeted multiple attack surfaces, which spanned the network, cloud, endpoints and identity.

Recognizing that you can’t secure what you can’t see, we need a national-level effort to identify, prioritize and harden the critical infrastructure that the American people depend upon. This strategy puts an important marker in the ground to revitalize those efforts.

Sustain Superiority in Critical and Emerging Technologies (Pillar 5)

Palo Alto Networks was pleased to see that the strategy reinforces the core tenets of the AI Action Plan, emphasizing that “secure-by-design” principles for AI technologies are non-negotiable and that AI adoption and AI security can and must be inextricably linked.

Enterprises should be able to deploy AI confidently without fear of data leakage, model tampering or rogue AI agents. However, despite our research showing an 88% success rate of “jailbreaking” techniques against widely deployed AI models, only 6% of organizations currently have an AI security strategy. It’s time to flip this paradigm and put defenders back in the driver’s seat in this AI-first moment.

To support this emerging consensus around the importance of promoting AI security, we developed the Secure AI by Design Policy Roadmap. This framework provides a four-part construct to evaluate the evolving dimensions of threats to AI systems. Palo Alto Networks is also proud to make its comprehensive AI security suite, Prisma® AIRS™, available to all federal agencies at substantial discounts through GSA’s OneGov Initiative.

Build Talent and Capacity (Pillar 6)

Recognizing America’s cyber workforce as a “strategic asset,” the strategy calls for a pragmatic and accessible pipeline for developing talent. The explicit recognition that we should take advantage of existing avenues across government, industry and academia is important. For example, Palo Alto Networks is proud of the impact of its Cybersecurity Academy, which provides free, NIST Framework-aligned curricula covering essential domains such as cybersecurity fundamentals, enterprise and network security, cloud security, security operations and the AI/cybersecurity nexus.

Resources like this, and those from other entities, can form the basis of a renewed focus on cyber talent development.

Turning Strategic Vision Into Action

Palo Alto Networks views itself as more than a cybersecurity vendor. We see ourselves as an integrated national security partner of the federal government at a moment when defending our digital way of life demands all of us working together. To that end, we are ready to do our part to turn strategic vision into action.

This strategy should be applauded. Let’s roll up our sleeves and get to work.


Why Service Providers Must Become Secure AI Factories

The Pivot to Large-Scale Intelligence

For decades, Telecommunications Service Providers have been the central nervous system of the global economy, tasked with a singular, critical mission: connecting people.

The industry spent vast amounts of capital building networks that moved voice, then text and finally high-speed mobile data. We succeeded. According to GSMA's most recent report, there are 5.8 billion unique mobile subscribers. The world is connected.

But the mission is changing fast. We are no longer just moving data; we are now expected to host intelligence.

Today’s enterprises are drowning in data and desperate for AI-led capabilities to analyze and process the information. They are struggling with the immense capital costs, the scarcity of GPUs, and complex data sovereignty regulations that make public cloud options difficult for sensitive workloads.

We are no longer living in the communications age, or the internet age, or the social network era, not even in the generative AI era. We are entering the Agentic Era. In this new era, data is the raw resource, and AI agents and models are the machinery that refines it into value. The infrastructure required to do this – from massive data ingestion to complex training and high-volume real-time inference – is called the “AI Factory.”

And these AI factories are not being designed for human-speed operations, but rather for machine-speed operations.

This creates a generational opportunity for telecommunications service providers (SPs). By building new (or transforming existing) data centers and edge locations into AI factories, SPs can offer hosted AI services that are high-performance, low-latency and compliant with regional requirements.

However, building an AI factory isn't just about racking GPUs. It is about realizing that AI infrastructure presents a fundamentally new threat landscape that legacy security cannot handle. If the SP’s AI factory is compromised (if models are poisoned, identities hijacked, training data exfiltrated), the damage to reputation and national infrastructure is incalculable.

To capture the AI opportunity, service providers need more than computing power; they need a blueprint for a secure AI architecture. At Palo Alto Networks, we view the security of the AI factory as a three-tiered layer cake, requiring holistic, integrated protection from the physical infrastructure up to the AI agents themselves.

The AI Threat Model Is a Structural Shift

For service providers building AI Factories, the challenge is not simply adding another workload to the data center. AI changes the risk equation entirely. It introduces new traffic patterns, new identities and new forms of autonomy that traditional network and core security architectures were never designed to govern.

  • Data Gravity Becomes Attack Surface: AI training and inference environments ingest massive volumes of data from distributed enterprise customers, partners and edge environments. This scale creates a new exposure layer. Malicious payloads, embedded model manipulation, and command-and-control traffic can hide within high-throughput AI data flows. Inspection models built for deterministic traffic patterns struggle when confronted with dynamic, AI-driven pipelines.
  • Non-Human Identities at Scale: An AI Factory is more than just infrastructure; it will be populated by autonomous agents. These agents retrieve data, call APIs, invoke tools and trigger workflows across networks and cloud environments. They require elevated privileges to function. For service providers, this means managing not just subscriber identities, but fleets of machine identities operating with delegated authority.
  • Agentic and Adversarial Threats: Attackers are also operationalizing AI. They probe for weaknesses faster, automate exploitation and increasingly target the AI systems themselves. Prompt injection can redirect an agent’s mission. Data poisoning can subtly degrade model integrity. Rogue agents can be manipulated to access external tools or escalate privileges. These are not traditional perimeter attacks; they are attacks on reasoning, behavior and autonomy.

For service providers offering AI-as-a-Service, the implication is clear: Securing the AI Factory requires more than network defense. It requires real-time governance of models, agents and data flows, ensuring that autonomous systems operate within defined policy boundaries while maintaining performance and scale.


The Foundation – Securing the High-Performance Infrastructure

The base of our cybersecurity stack is the physical and virtual infrastructure of the AI factory itself. This is a high-stakes environment. In a multitenant SP data center, you might have a financial institution fine-tuning a fraud detection model on one rack, and a government agency running inference on satellite imagery on the next. The barriers between these tenants must be absolute.

Foundational cybersecurity has two critical components: perimeter defense and internal segmentation.

The ML-Powered Perimeter

The front door of the AI factory must handle unprecedented throughput while performing deep inspection. Traditional firewalls, relying on static signatures, become bottlenecks and fail to catch novel threats hidden in massive data streams.

Palo Alto Networks addresses this with our flagship ML-Powered Next-Generation Firewalls (NGFWs). We have embedded machine learning directly into the core of the firewall. Instead of waiting for a patient zero to be identified and a signature created, our NGFWs analyze traffic patterns in real time to identify and block unknown threats instantly. For an SP, this means you can provide the massive bandwidth required for AI data ingestion without compromising on security inspection at the edge.

Zero Trust Segmentation Inside the Factory

The perimeter is just the start. Once inside the data center, the biggest risk is lateral movement of threats and malware. If an attacker compromises a low-security tenant or a peripheral IoT device, they must not be able to jump to the sensitive GPU clusters or the model storage arrays.

In an AI factory, workloads are highly dynamic and virtualized. We provide robust segmentation across both hardware and software environments. We can enforce granular policies between virtual instances, containers and different stages of the AI pipeline (e.g., isolating training environments from inference operations). This allows a breach in one segment to be contained instantly, protecting the integrity of the entire factory.

The Engine – Securing AI Agents, Apps and Identities

The middle layer of the security stack is where the actual "work" of AI happens – the models, the LLMs, the agents. This is the newest frontier of cybersecurity and where traditional tools are most deficient.

This layer faces two distinct challenges: Protecting the integrity of the AI interaction and managing the identities of the nonhuman actors.

Securing AI Apps and Agents

As enterprises evolve from standalone LLMs to agentic AI systems that reason, call tools, access data, and take action across workflows, the challenge is no longer just what a model says; it is what an AI agent does.

How do you validate that an LLM powering your AI factory does not expose sensitive information, and that autonomous agents cannot be manipulated through jailbreak prompts, tool injection or malicious instructions? How do you prevent an AI agent from accessing unauthorized systems, escalating privileges, or executing unintended actions?

This is the role of Prisma® AIRS™ – our security and governance platform for AI agents, apps, models and data. Prisma AIRS operates directly in the execution path of AI applications and autonomous agents. It enforces policy in real time, validates agent behavior, and blocks prompt injection, model manipulation and agent hijacking before they can impact the business.

Beyond filtering outputs, Prisma AIRS governs agent communications, tool access and data flows to prevent credential leakage, mission drift and unauthorized actions. For service providers delivering AI-as-a-Service, or enterprises deploying AI agents internally, Prisma AIRS enables integrity, compliance and continuous control as intelligent systems move from experimentation into mission-critical operations.

Built in alignment with emerging standards like the OWASP Agentic Top 10 Survival Guide, Prisma AIRS operationalizes best practices to defend against real-world agentic threats.

Governing Nonhuman Identity

Perhaps the most profound shift in the AI factory is who or what is doing the work. We are rapidly moving toward ecosystems of autonomous AI Agents. These agents need to authenticate to databases, authorize API calls to other services, and access privileged information just like a human employee.

If an attacker steals the credentials of a high-privilege AI agent, they own the factory.

This is why the Palo Alto Networks acquisition of CyberArk, the global leader in Identity Security, is so strategic for the AI era. CyberArk specializes in protecting privileged access and, crucially, managing nonhuman identities. By integrating CyberArk’s capabilities, we can ensure that every AI agent operating within the SP’s factory is robustly authenticated, authorized for minimum necessary access, and continuously monitored. We are securing the new digital workforce.

The Overwatch – Holistic, AI-Driven Threat Management

The top layer of the stack is about visibility and speed. An AI factory generates a deafening amount of telemetry data from networks, endpoints, clouds and identity systems. No human security operations center (SOC) can sift through this noise manually to find a sophisticated attack.

To fight AI-driven threats, you need AI-driven defense.

This is the role of Cortex®, our flagship platform for holistic threat management. Cortex is designed to ingest billions of data points from across the entire Palo Alto Networks product portfolio and hundreds of types of third-party equipment, normalizing it into a single source of truth.

Cortex applies advanced AI and machine learning to this vast data lake to detect anomalies that signal a complex attack spanning different threat vectors. It might correlate an unusual login event from an AI agent (detected by the identity layer) with a subtle change in outbound traffic patterns at the firewall (layer 1), recognizing it as data exfiltration in progress.

For a Service Provider, Cortex provides the "single pane of glass" view over their entire AI factory operations, allowing them to detect, investigate and automatically respond to threats at machine speed, vastly reducing Mean Time to Respond (MTTR).

Building the Trust Foundation for the Agentic Era

The transition to becoming an AI factory is a necessary evolution for service providers seeking growth in the coming decade. Your ability to offer localized, sovereign, high-performance AI services will differentiate you from hyperscale alternatives and cement your role as an indispensable partner to enterprises and governments.

But this opportunity is inextricably linked to trust. Your customers will not move their most sensitive data and IP into your AI factory unless they are certain it is secure against modern threats.

Security cannot be an afterthought bolted onto AI infrastructure. It must be woven into the fabric of the factory, from the silicon to the software agents. By adopting a layered approach (securing the high-performance infrastructure with ML-Powered NGFWs, protecting models and identities with Prisma AIRS and CyberArk, while managing the entire landscape with Cortex), service providers can build the trusted foundations the AI era demands.

This week we’ll be at Mobile World Congress talking about our security platform for AI factories, along with five solutions and ecosystem partners. Come see us in Hall 4, Stand #4D55.


Implementing data governance on AWS: Automation, tagging, and lifecycle strategy โ€“ Part 2

16 January 2026 at 21:26

In Part 1, we explored the foundational strategy, including data classification frameworks and tagging approaches. In this post, we examine the technical implementation approach and key architectural patterns for building a governance framework.

We explore governance controls across four implementation areas, building from foundational monitoring to advanced automation. Each area builds on the previous one, so you can implement incrementally and validate as you go:

  • Monitoring foundation: Begin by establishing your monitoring baseline. Set up AWS Config rules to track tag compliance across your resources, then configure Amazon CloudWatch dashboards to provide real-time visibility into your governance posture. By using this foundation, you can understand your current state before implementing enforcement controls.
  • Preventive controls: Build proactive enforcement by deploying AWS Lambda functions that validate tags at resource creation time. Implement Amazon EventBridge rules to trigger real-time enforcement actions and configure service control policies (SCPs) to establish organization-wide guardrails that prevent non-compliant resource deployment.
  • Automated remediation: Reduce manual intervention by setting up AWS Systems Manager Automation Documents that respond to compliance violations. Configure automated responses that correct common issues like missing tags or improper encryption and implement classification-based security controls that automatically apply appropriate protections based on data sensitivity.
  • Advanced features: Extend your governance framework with sophisticated capabilities. Deploy data sovereignty controls to help ensure regulatory compliance across AWS Regions, implement intelligent lifecycle management to optimize costs while maintaining compliance, and establish comprehensive monitoring and reporting systems that provide stakeholders with clear visibility into your governance effectiveness.

Prerequisites

Before beginning implementation, ensure you have the AWS Command Line Interface (AWS CLI) installed and configured with appropriate credentials for your target accounts. Set AWS Identity and Access Management (IAM) permissions so that you can create roles, Lambda functions, and AWS Config rules. Finally, basic familiarity with AWS CloudFormation or Terraform will be helpful, because we’ll use CloudFormation throughout our examples.

Tag governance controls

Implementing tag governance requires multiple layers of controls working together across AWS services. These controls range from preventive measures that validate resources at creation to detective controls that monitor existing resources. This section describes each control type, starting with preventive controls, which act as the first line of defense.

Preventive controls

Preventive controls help ensure resources are properly tagged at creation time. By implementing Lambda functions triggered by AWS CloudTrail events, you can validate tags before resources are created, preventing non-compliant resources from being deployed:

# AWS Lambda function for preventive tag enforcement
def enforce_resource_tags(event, context):
    required_tags = ['DataClassification', 'DataOwner', 'Environment']

    # Extract resource details from the event
    resource_tags = event['detail']['requestParameters'].get('Tags', {})

    # Validate required tags are present
    missing_tags = [tag for tag in required_tags if tag not in resource_tags]

    if missing_tags:
        # Send alert to security team
        # Log non-compliance for compliance reporting
        raise Exception(f"Missing required tags: {missing_tags}")

    return {'status': 'compliant'}

For a complete, production-ready implementation, see Implementing Tag Policies with AWS Organizations and EventBridge event patterns for resource monitoring.

Organization-wide policy enforcement

AWS Organizations tag policies provide a foundation for consistent tagging across your organization. These policies define standard tag formats and values, helping to ensure consistency across accounts:

{
    "tags": {
        "DataClassification": {
            "tag_key": {
                "@@assign": "DataClassification"
            },
            "tag_value": {
                "@@assign": ["L1", "L2", "L3"]
            },
            "enforced_for": {
                "@@assign": [
                    "s3:bucket",
                    "ec2:instance",
                    "rds:db",
                    "dynamodb:table"
                ]
            }
        }
    }
}

For detailed implementation guidance, see Getting started with tag policies and Best practices for using tag policies.

Tag-based access control

Tag-based access control gives you fine-grained permissions using attribute-based access control (ABAC). By using this approach, you can define permissions based on resource attributes rather than creating individual IAM policies for each use case:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:PutObject"],
            "Resource": "*",
            "Condition": {
                "StringEquals": {
                    "aws:ResourceTag/DataClassification": "L1",
                    "aws:ResourceTag/Environment": "Prod"
                }
            }
        }
    ]
}

Multi-account governance strategy

While implementing tag governance within a single account is straightforward, most organizations operate in a multi-account environment. Implementing consistent governance across your organization requires additional controls:

# This SCP prevents creation of resources without required tags
OrganizationControls:
  SCPPolicy:
    Type: AWS::Organizations::Policy
    Properties:
      Content:
        Version: "2012-10-17"
        Statement:
          - Sid: EnforceTaggingOnResources
            Effect: Deny
            Action:
              - "ec2:RunInstances"
              - "rds:CreateDBInstance"
              - "s3:CreateBucket"
            Resource: "*"
            Condition:
              'Null':
                'aws:RequestTag/DataClassification': true
                'aws:RequestTag/Environment': true

For more information, see implementation guidance for SCPs.

Integration with on-premises governance frameworks

Many organizations maintain existing governance frameworks for their on-premises infrastructure. Extending these frameworks to AWS requires careful integration and applicability analysis. The following example shows how to use AWS Service Catalog to create a portfolio of AWS resources that align with your on-premises governance standards.

# AWS Service Catalog portfolio for on-premises aligned resources
ServiceCatalogIntegration:
  Portfolio:
    Type: AWS::ServiceCatalog::Portfolio
    Properties:
      DisplayName: Enterprise-Aligned Resources
      Description: Resources that comply with existing governance framework
      ProviderName: Enterprise IT

  # Product that maintains on-prem naming conventions and controls
  CompliantProduct:
    Type: AWS::ServiceCatalog::CloudFormationProduct
    Properties:
      Name: Compliant-Resource-Bundle
      Owner: Enterprise Architecture
      Tags:
        - Key: OnPremMapping
          Value: "EntArchFramework-v2"

Automating security controls based on classification

After data is classified, use these classifications to automate security controls. AWS Config can track and validate that resources are properly tagged through rules that assess your resource configurations, including the built-in required-tags managed rule. For non-compliant resources, you can use Systems Manager to automate the remediation process.
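
As a minimal sketch of that pattern, the following Python (boto3) deploys the built-in REQUIRED_TAGS managed rule scoped to S3 buckets; the rule name and tag keys here are illustrative assumptions, not fixed values:

import boto3

config = boto3.client('config')

# Deploy the REQUIRED_TAGS managed rule; name and tag keys are examples
config.put_config_rule(
    ConfigRule={
        'ConfigRuleName': 'required-tags-check',
        'Description': 'Checks that S3 buckets carry governance tags',
        'Source': {'Owner': 'AWS', 'SourceIdentifier': 'REQUIRED_TAGS'},
        'InputParameters': '{"tag1Key": "DataClassification", "tag2Key": "DataOwner"}',
        'Scope': {'ComplianceResourceTypes': ['AWS::S3::Bucket']},
    }
)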

With proper tagging in place, you can implement automated security controls using EventBridge and Lambda. By using this combination, you can create a cost-effective and scalable infrastructure for enforcing security policies based on data classification. For example, when a resource is tagged as high impact, you can use EventBridge to trigger a Lambda function to enable required security measures.

def apply_security_controls(event, context):
    resource_type = event['detail']['resourceType']
    tags = event['detail']['tags']

    # Use .get() so resources missing the tag don't raise a KeyError
    if tags.get('DataClassification') == 'L1':
        # Apply Level 1 security controls
        enable_encryption(resource_type)
        apply_strict_access_controls(resource_type)
        enable_detailed_logging(resource_type)
    elif tags.get('DataClassification') == 'L2':
        # Apply Level 2 security controls
        enable_standard_encryption(resource_type)
        apply_basic_access_controls(resource_type)

This example automation applies security controls consistently, reducing human error and maintaining compliance. Code-based controls ensure policies match your data classification.
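
To wire this function to tagging activity, one option is to subscribe it to the tag-change events that EventBridge emits (source aws.tag, detail-type "Tag Change on Resource"). The following is a hedged sketch; the rule name and Lambda ARN are placeholders:

import json
import boto3

events = boto3.client('events')

# Match tag changes that touch the DataClassification key
events.put_rule(
    Name='tag-change-security-controls',  # placeholder name
    EventPattern=json.dumps({
        'source': ['aws.tag'],
        'detail-type': ['Tag Change on Resource'],
        'detail': {'changed-tag-keys': ['DataClassification']},
    }),
    State='ENABLED',
)

# Route matching events to the remediation Lambda (placeholder ARN)
events.put_targets(
    Rule='tag-change-security-controls',
    Targets=[{
        'Id': 'apply-security-controls',
        'Arn': 'arn:aws:lambda:us-east-1:111122223333:function:apply_security_controls',
    }],
)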


Data sovereignty and residency

Data sovereignty and residency requirements help you comply with regulations like GDPR. Such controls can be implemented to restrict data storage and processing to specific AWS Regions:

# Config rule for region restrictions
AWSConfig:
  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-region-check
      Description: Checks if S3 buckets are in allowed regions
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_REGION
      InputParameters:
        allowedRegions:
          - eu-west-1
          - eu-central-1

Note: This example uses eu-west-1 and eu-central-1 because these Regions are commonly used for GDPR compliance, providing data residency within the European Union. Adjust these Regions based on your specific regulatory requirements and business needs. For more information, see Meeting data residency requirements on AWS and Controls that enhance data residence protection.

Disaster recovery integration with governance controls

While organizations often focus on system availability and data recovery, maintaining governance controls during disaster recovery (DR) scenarios is important for compliance and security. To implement effective governance in your DR strategy, start by using AWS Config rules to check that DR resources maintain the same governance standards as your primary environment:

AWSConfig:
  ConfigRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: dr-governance-check
      Description: Ensures DR resources maintain governance controls
      Source:
        Owner: AWS
        SourceIdentifier: REQUIRED_TAGS
      Scope:
        ComplianceResourceTypes:
          - "AWS::S3::Bucket"
          - "AWS::RDS::DBInstance"
          - "AWS::DynamoDB::Table"
      InputParameters:
        tag1Key: "DataClassification"
        tag1Value: "L1,L2,L3"
        tag2Key: "Environment"
        tag2Value: "DR"

For your most critical data (classified as Level 1 in Part 1 of this series), implement cross-Region replication while maintaining strict governance controls. This helps ensure that sensitive data remains protected even during failover scenarios:

Cross-Region:
  ReplicationRule:
    Type: AWS::S3::Bucket
    Properties:
      ReplicationConfiguration:
        Role: !GetAtt ReplicationRole.Arn
        Rules:
          - Status: Enabled
            TagFilters:
              - Key: "DataClassification"
                Value: "L1"
            Destination:
              Bucket: !Sub "arn:aws:s3:::${DRBucket}"
              EncryptionConfiguration:
                ReplicaKmsKeyID: !Ref DRKMSKey

Automated compliance monitoring

By combining AWS Config for resource compliance, CloudWatch for metrics and alerting, and Amazon Macie for sensitive data discovery, you can create a robust compliance monitoring framework that automatically detects and responds to compliance issues:

Figure 1: Compliance monitoring architecture

This architecture (shown in Figure 1) demonstrates how AWS services work together to provide compliance monitoring:

  • AWS Config, CloudTrail, and Macie monitor AWS resources
  • CloudWatch aggregates monitoring data
  • Alerts and dashboards provide real-time visibility

The following CloudFormation template implements these controls:

Resources:
  EncryptionRule:
    Type: AWS::Config::ConfigRule
    Properties:
      ConfigRuleName: s3-bucket-encryption-enabled
      Source:
        Owner: AWS
        SourceIdentifier: S3_BUCKET_SERVER_SIDE_ENCRYPTION_ENABLED

  MacieJob:
    Type: AWS::Macie::ClassificationJob
    Properties:
      JobType: ONE_TIME
      S3JobDefinition:
        BucketDefinitions:
          - AccountId: !Ref AWS::AccountId
            Buckets:
              - !Ref DataBucket
        ScoreFilter:
          Minimum: 75

  SecurityAlarm:
    Type: AWS::CloudWatch::Alarm
    Properties:
      AlarmName: UnauthorizedAccessAttempts
      MetricName: UnauthorizedAPICount
      Namespace: SecurityMetrics
      Statistic: Sum
      Period: 300
      EvaluationPeriods: 1
      Threshold: 3
      AlarmActions:
        - !Ref SecurityNotificationTopic
      ComparisonOperator: GreaterThanThreshold

These controls provide real-time visibility into your security posture, automate responses to potential security events, and use Macie for sensitive data discovery and classification. For a complete monitoring setup, review List of AWS Config Managed Rules and Using Amazon CloudWatch dashboards.

Using AWS data lakes for governance

Modern data governance strategies often use data lakes to provide centralized control and visibility. AWS provides a comprehensive solution through the Modern Data Architecture Accelerator (MDAA), which you can use to help you rapidly deploy and manage data platform architectures with built-in security and governance controls. Figure 2 shows an MDAA reference architecture.

Figure 2: MDAA reference architecture

For detailed implementation guidance and source code, see Accelerate the Deployment of Secure and Compliant Modern Data Architectures for Advanced Analytics and AI.

Access patterns and data discovery

Understanding and managing access patterns is important for effective governance. Use CloudTrail and Amazon Athena to analyze access patterns:

SELECT
    useridentity.arn,
    eventname,
    requestparameters.bucketname,
    requestparameters.key,
    COUNT(*) AS access_count
FROM cloudtrail_logs
WHERE eventname IN ('GetObject', 'PutObject')
GROUP BY 1, 2, 3, 4
ORDER BY access_count DESC
LIMIT 100;

This query helps identify frequently accessed data and unusual patterns in access behavior. These insights help you to:

  • Optimize storage tiers based on access frequency
  • Refine DR strategies for frequently accessed data
  • Identify potential security risks through unusual access patterns
  • Fine-tune data lifecycle policies based on usage patterns

For sensitive data discovery, consider integrating Macie to automatically identify and protect PII across your data estate.
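
As a hedged illustration of that integration, the following Python (boto3) starts a one-time Macie classification job against a single bucket; the job name, account ID, and bucket name are placeholders, not values from this post:

import boto3

macie = boto3.client('macie2')

# One-time sensitive data discovery job; identifiers are placeholders
macie.create_classification_job(
    name='pii-discovery-example',
    jobType='ONE_TIME',
    s3JobDefinition={
        'bucketDefinitions': [
            {'accountId': '111122223333', 'buckets': ['example-data-bucket']}
        ]
    },
)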

Machine learning model governance with SageMaker

As organizations advance in their data governance journey, many are deploying machine learning models in production, necessitating governance frameworks that extend to machine learning (ML) operations. Amazon SageMaker offers advanced tools that you can use to maintain governance over ML assets without impeding innovation.

SageMaker governance tools work together to provide comprehensive ML oversight:

  • Role Manager provides fine-grained access control for ML roles
  • Model Cards centralize documentation and lineage information
  • Model Dashboard offers organization-wide visibility into deployed models
  • Model Monitor automates drift detection and quality control

The following example configures SageMaker governance controls:

# Basic/high-level ML governance setup with role and monitoring
SageMakerRole:
  Type: AWS::IAM::Role
  Properties:
    # Allow SageMaker to use this role
    AssumeRolePolicyDocument:
      Statement:
        - Effect: Allow
          Principal:
            Service: sagemaker.amazonaws.com
          Action: sts:AssumeRole
    # Attach necessary permissions
    ManagedPolicyArns:
      - arn:aws:iam::aws:policy/AmazonSageMakerFullAccess

ModelMonitor:
  Type: AWS::SageMaker::MonitoringSchedule
  Properties:
    MonitoringScheduleName: hourly-model-monitor
    MonitoringScheduleConfig:
      # Set up hourly model monitoring (job definition omitted for brevity)
      ScheduleConfig:
        ScheduleExpression: 'cron(0 * * * ? *)'  # Run hourly

This example demonstrates two essential governance controls: role-based access management for secure service interactions and automated hourly monitoring for ongoing model oversight. While these technical implementations are important, remember that successful ML governance requires integration with your broader data governance framework, helping to ensure consistent controls and visibility across your entire data and analytics ecosystem. For more information, see Model governance to manage permissions and track model performance.
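
Model Cards, mentioned in the list above, can also be created programmatically. This is a minimal sketch, assuming a hypothetical model name, owner, and risk rating, of recording model documentation with boto3:

import json
import boto3

sagemaker = boto3.client('sagemaker')

# Minimal model card; model name, owner, and risk rating are placeholders
card_content = {
    'model_overview': {
        'model_description': 'Fraud detection model for transaction scoring',
        'model_owner': 'risk-analytics-team',
    },
    'intended_uses': {'risk_rating': 'High'},
}

sagemaker.create_model_card(
    ModelCardName='fraud-detection-v1-card',
    Content=json.dumps(card_content),
    ModelCardStatus='Draft',
)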

Cost optimization through automated lifecycle management

Effective data governance isn’t just about security; it’s also about managing cost efficiently. Implement intelligent data lifecycle management based on classification and usage patterns, as shown in Figure 3:

Figure 3: Tag-based lifecycle management in Amazon S3

Figure 3 illustrates how tags drive automated lifecycle management:

  • New data enters Amazon Simple Storage Service (Amazon S3) with the tag DataClassification: L2
  • Based on classification, the data starts in Standard/INTELLIGENT_TIERING
  • After 90 days, the data transitions to Amazon S3 Glacier storage for cost-effective archival
  • The RetentionPeriod tag (84 months) determines final expiration

Here’s the implementation of the preceding lifecycle rules:

LifecycleConfiguration:
  Rules:
    - ID: IntelligentArchive
      Status: Enabled
      Transitions:
        - StorageClass: INTELLIGENT_TIERING
          TransitionInDays: 0
        - StorageClass: GLACIER
          TransitionInDays: 90
      Prefix: /data/
      TagFilters:
        - Key: DataClassification
          Value: L2
    - ID: RetentionPolicy
      Status: Enabled
      ExpirationInDays: 2555  # 7 years
      TagFilters:
        - Key: RetentionPeriod
          Value: "84"  # 7 years in months

S3 Lifecycle automatically optimizes storage costs while maintaining compliance with retention requirements. For example, data initially stored in Amazon S3 Intelligent-Tiering automatically moves to S3 Glacier after 90 days, significantly reducing storage costs while helping to ensure data remains available when needed. For more information, see Managing the lifecycle of objects and Managing storage costs with Amazon S3 Intelligent-Tiering.

Conclusion

Successfully implementing data governance on AWS requires both a structured approach and adherence to key best practices. As you progress through your implementation journey, keep these fundamental principles in mind:

  • Start with a focused scope and gradually expand. Begin with a pilot project that addresses high-impact, low-complexity use cases. By using this approach, you can demonstrate quick wins while building experience and confidence in your governance framework.
  • Make automation your foundation. Apply AWS services such as Amazon EventBridge for event-driven responses, implement automated remediation for common issues, and create self-service capabilities that balance efficiency with compliance. This automation-first approach helps ensure scalability and consistency in your governance framework.
  • Maintain continuous visibility and improvement. Regular monitoring, compliance checks, and framework updates are essential for long-term success. Use feedback from your operations team to refine policies and adjust controls as your organization’s needs evolve.

Common challenges to be aware of:

  • Initial resistance to change from teams used to manual processes
  • Complexity in handling legacy systems and data
  • Balancing security controls with operational efficiency
  • Maintaining consistent governance across multiple AWS accounts and Regions

By following this approach and remaining mindful of potential challenges, you can build a robust, scalable data governance framework that grows with your organization while maintaining security, compliance, and efficient data operations.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Omar Ahmed
Omar Ahmed is an Auto and Manufacturing Solutions Architect who specializes in analytics. Omar’s journey in cloud computing began as an AWS data center operations technician, where he developed hands-on infrastructure expertise. Outside of work, he enjoys motorsports, gaming, and swimming.
Omar Mahmoud
Omar Mahmoud is a Solutions Architect helping small and medium businesses with their cloud journey. He specializes in Amazon Connect and next-gen developer services like Kiro. Omar began at AWS as a data center operations technician, gaining hands-on cloud infrastructure experience. Outside work, Omar enjoys gaming, hiking, and soccer.
Changil Jeong
Changil Jeong is a Solutions Architect at Amazon Web Services (AWS) partnering with independent software vendor customers on their cloud transformation journey, with strong interests in security. He joined AWS as an SDE apprentice before transitioning to SA. He previously served in the U.S. Army as a financial and budgeting analyst and worked at a large IT consulting firm as a SaaS security analyst.
Paige Broderick
Paige Broderick is a Solutions Architect at Amazon Web Services (AWS) who works with enterprise customers to help them achieve their AWS objectives. She specializes in cloud operations, focusing on governance and using AWS to develop smart manufacturing solutions. Outside of work, Paige is an avid runner and is likely training for her next marathon.

Implementing data governance on AWS: Automation, tagging, and lifecycle strategy โ€“ Part 1

16 January 2026 at 21:26

Generative AI and machine learning workloads create massive amounts of data. Organizations need data governance to manage this growth and stay compliant. While data governance isn’t a new concept, recent studies highlight a concerning gap: a Gartner study of 300 IT executives revealed that only 60% of organizations have implemented a data governance strategy, with 40% still in planning stages or uncertain where to begin. Furthermore, a 2024 MIT CDOIQ survey of 250 chief data officers (CDOs) found that only 45% identify data governance as a top priority.

Although most businesses recognize the importance of data governance strategies, regular evaluation is needed to ensure these strategies evolve with changing business needs, industry requirements, and emerging technologies. In this post, we show you a practical, automation-first approach to implementing data governance on Amazon Web Services (AWS) through a strategic and architectural guide, whether you’re starting at the beginning or improving an existing framework.

In this two-part series, we explore how to build a data governance framework on AWS thatโ€™s both practical and scalable. Our approach aligns with what AWS has identified as the core benefits of data governance:

  • Classify data consistently and automate controls to improve quality
  • Give teams secure access to the data they need
  • Monitor compliance automatically and catch issues early

In this post, we cover strategy, classification framework, and tagging governance: the foundation you need to get started. If you don’t already have a governance strategy, we provide a high-level overview of AWS tools and services to help you get started. If you have a data governance strategy, the information in this post can assist you in evaluating its effectiveness and understanding how data governance is evolving with new technologies.

In Part 2, we explore the technical architecture and implementation patterns with conceptual code examples, and throughout both parts, you’ll find links to production-ready AWS resources for detailed implementation.

Prerequisites

Before implementing data governance on AWS, you need the right AWS setup and buy-in from your teams.

Technical foundation

Start with a well-structured AWS Organizations setup for centralized management. Make sure AWS CloudTrail and AWS Config are enabled across accounts; you’ll need these for monitoring and auditing. Your AWS Identity and Access Management (IAM) framework should already define roles and permissions clearly.

Beyond these services, you’ll use several AWS tools for automation and enforcement. The AWS service quick reference table that follows lists everything used throughout this guide.

Organizational readiness

Successful implementation of data governance requires clear organizational alignment and preparation across multiple dimensions.

  • Define roles and responsibilities. Data owners classify data and approve access requests. Your platform team handles AWS infrastructure and builds automation, while security teams set controls and monitor compliance. Application teams then implement these standards in their daily workflows.
  • Document your compliance requirements. List the regulations you must follow: GDPR, PCI-DSS, SOX, HIPAA, or others. Create a data classification framework that aligns with your business risk. Document your tagging standards and naming conventions so everyone follows the same approach.
  • Plan for change management. Get executive support from leaders who understand why governance matters. Start with pilot projects to demonstrate value before rolling out organization-wide. Provide role-based training and maintain up-to-date governance playbooks. Establish feedback mechanisms so teams can report issues and suggest improvements.

Key performance indicators (KPIs) to monitor

To measure the effectiveness of your data governance implementation, track the following essential metrics and their target objectives.

  • Resource tagging compliance: Aim for 95%, measured through AWS Config rules with weekly monitoring, focusing on critical resources and sensitive data classifications (a measurement sketch follows this list).
  • Mean time to respond to compliance issues: Target less than 24 hours for critical issues, tracked using CloudWatch metrics with automated alerting for high-priority non-compliance events.
  • Reduction in manual governance tasks: Target reduction of 40% in the first year. Measured through automated workflow adoption and remediation success rates.
  • Storage cost optimization based on data classification: Target 15โ€“20% reduction through intelligent tiering and lifecycle policies, monitored monthly by classification level.
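
To ground the first KPI, here is a minimal, hypothetical Python (boto3) sketch that computes tagging compliance from the evaluation results of a required-tags AWS Config rule; the rule name is a placeholder:

import boto3

config = boto3.client('config')

# Placeholder rule name; point this at your deployed required-tags rule
RULE_NAME = 'required-tags-check'

paginator = config.get_paginator('get_compliance_details_by_config_rule')
results = [
    result
    for page in paginator.paginate(
        ConfigRuleName=RULE_NAME,
        ComplianceTypes=['COMPLIANT', 'NON_COMPLIANT'],
    )
    for result in page['EvaluationResults']
]

compliant = sum(1 for r in results if r['ComplianceType'] == 'COMPLIANT')
if results:
    print(f"Tagging compliance: {100 * compliant / len(results):.1f}% "
          f"({compliant}/{len(results)} resources)")
else:
    print("No evaluation results yet")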

With these technical and organizational foundations in place, youโ€™re ready to implement a sustainable data governance framework.

AWS services used in this guide – Quick reference

This implementation uses the following AWS services. Some are prerequisites, while others are introduced throughout the guide.

Foundation
  • AWS Organizations – Multi-account management structure that enables centralized policy enforcement and governance across your entire AWS environment.
  • AWS Identity and Access Management (IAM) – Controls who can access what resources through roles, policies, and permissions, the foundation of your security model.

Monitoring and auditing
  • AWS CloudTrail – Records every API call made in your AWS accounts, creating a complete audit trail of who did what, when, and from where.
  • AWS Config – Continuously monitors resource configurations and evaluates them against rules you define (such as requiring that all S3 buckets must be encrypted). When it finds resources that don’t meet your rules, it flags them as non-compliant so you can fix them manually or automatically.
  • Amazon CloudWatch – Aggregates metrics, logs, and events from across AWS for real-time monitoring, dashboards, and automated alerting on governance non-compliance.

Automation and enforcement
  • Amazon EventBridge – Acts as a central notification system that watches for specific events in your AWS environment (such as when an S3 bucket has been created) and automatically triggers actions in response (such as running a Lambda function to check whether it has the required tags). Think of it as an “if this happens, then do that” automation engine.
  • AWS Lambda – Runs your governance code (tag validation, security controls, remediation) in response to events without managing servers.
  • AWS Systems Manager – Automates operational tasks across your AWS resources. In governance, it’s primarily used to automatically fix non-compliant resources; for example, if AWS Config detects an unencrypted database, Systems Manager can run a pre-defined script to enable encryption without manual intervention.

Data protection
  • Amazon Macie – Uses machine learning to automatically discover, classify, and protect sensitive data like personally identifiable information (PII) across your S3 buckets.
  • AWS Key Management Service (AWS KMS) – Manages encryption keys for protecting data at rest, essential for high-impact data classifications.

Analytics and insights
  • Amazon Athena – Serverless query service that analyzes data in Amazon S3 using SQL, perfect for querying CloudTrail logs to understand access patterns.

Standardization
  • AWS Service Catalog – Creates catalogs of pre-approved, governance-compliant resources that teams can deploy through self-service.

ML governance
  • Amazon SageMaker – Provides specialized tools for governing machine learning operations including model monitoring, documentation, and access control.

Understanding the data governance challenge

Organizations face complex data management challenges, from maintaining consistent data classification to ensuring regulatory compliance across their environments. Your strategy should maintain security, ensure compliance, and enable business agility through automation. While this journey can be complex, breaking it down into manageable components makes it achievable.

The foundation: Data classification framework

Data classification is a foundational step in cybersecurity risk management and data governance strategies. Organizations should use data classification to determine appropriate safeguards for sensitive or critical data based on their protection requirements. Following the NIST (National Institute of Standards and Technology) framework, data can be categorized based on the potential impact to confidentiality, integrity, and availability of information systems:

  • High impact: Severe or catastrophic adverse effect on organizational operations, assets, or individuals
  • Moderate impact: Serious adverse effect on organizational operations, assets, or individuals
  • Low impact: Limited adverse effect on organizational operations, assets, or individuals

Before implementing controls, establishing a clear data classification framework is essential. This framework serves as the backbone of your security controls, access policies, and automation strategies. The following is an example of how a company subject to the Payment Card Industry Data Security Standard (PCI-DSS) might classify data:

  • Level 1 – Most sensitive data:
    • Examples: Financial transaction records, customer PCI data, intellectual property
    • Security controls: Encryption at rest and in transit, strict access controls, comprehensive audit logging
  • Level 2 – Internal use data:
    • Examples: Internal documentation, proprietary business information, development code
    • Security controls: Standard encryption, role-based access control
  • Level 3 – Public data:
    • Examples: Marketing materials, public documentation, press releases
    • Security controls: Integrity checks, version control

To help with data classification and tagging, AWS created AWS Resource Groups, a service that you can use to organize AWS resources into groups using criteria that you define as tags. If you’re using multiple AWS accounts across your organization, AWS Organizations supports tag policies, which you can use to standardize the tags attached to the AWS resources in an organization’s accounts. The workflow for using tagging is shown in Figure 1, and a small grouping example follows the figure. For more information, see Guidance for Tagging on AWS.

Figure 1: Workflow for tagging on AWS for a multi-account environment
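
As a hedged example of the Resource Groups side of this workflow, the following Python (boto3) sketch creates a tag-based group that collects every supported resource tagged DataClassification=L1; the group name is a placeholder:

import json
import boto3

rg = boto3.client('resource-groups')

# Tag-based group of all Level 1 (most sensitive) resources
rg.create_group(
    Name='level1-sensitive-resources',  # placeholder name
    Description='All resources tagged as most sensitive data',
    ResourceQuery={
        'Type': 'TAG_FILTERS_1_0',
        'Query': json.dumps({
            'ResourceTypeFilters': ['AWS::AllSupported'],
            'TagFilters': [
                {'Key': 'DataClassification', 'Values': ['L1']}
            ],
        }),
    },
)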

Your tag governance strategy

A well-designed tagging strategy is fundamental to automated governance. Tags not only help organize resources but also enable automated security controls, cost allocation, and compliance monitoring.

Figure 2: Tag governance workflow

As shown in Figure 2, tag policies use the following process:

  1. AWS validates tags when you create resources.
  2. Non-compliant resources trigger automatic remediation, while compliant resources deploy normally.
  3. Continuous monitoring catches drift from your policies.

The following tagging strategy enables automation:

{
    "MandatoryTags": {
        "DataClassification": ["L1", "L2", "L3"],
        "DataOwner": "<Department/Team Name>",
        "Compliance": ["PCI", "SOX", "GDPR", "None"],
        "Environment": ["Prod", "Dev", "Test", "Stage"],
        "CostCenter": "<Business Unit Code>"
    },
    "OptionalTags": {
        "BackupFrequency": ["Daily", "Weekly", "Monthly"],
        "RetentionPeriod": "<Time in Months>",
        "ProjectCode": "<Project Identifier>",
        "DataResidency": "<Region/Country>"
    }
}

While AWS Organizations tag policies provide a foundation for consistent tagging, comprehensive tag governance requires additional enforcement mechanisms, which we explore in detail in Part 2.

Conclusion

This first part of the two-part series established the foundational elements of implementing data governance on AWS, covering data classification frameworks, effective tagging strategies, and organizational alignment requirements. These fundamentals serve as building blocks for scalable and automated governance approaches. Part 2 focuses on technical implementation and architectural patterns, including monitoring foundations, preventive controls, and automated remediation. The discussion extends to tag-based security controls, compliance monitoring automation, and governance integration with disaster recovery strategies. Additional topics include data sovereignty controls and machine learning model governance with Amazon SageMaker, supported by AWS implementation examples.

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Tushar Jain Omar Ahmed
Omar Ahmed is an Auto and Manufacturing Solutions Architect who specializes in analytics. Omarโ€™s journey in cloud computing began as an AWS data center operations technician, where he developed hands on infrastructure expertise. Outside of work, he enjoys motorsports, gaming, and swimming.
Will Black Omar Mahmoud
Omar is a Solutions Architect helping small-medium businesses with their cloud journey. He specializes in Amazon Connect and next-gen developer services like Kiro. Omar began at AWS as a data center operations technician, gaining hands-on cloud infrastructure experience. Outside work, Omar enjoys gaming, hiking, and soccer.
Changil Jeong
Changil Jeong is a Solutions Architect at Amazon Web Services (AWS) who partners with independent software vendor (ISV) customers on their cloud transformation journey, with a strong interest in security. He joined AWS as an SDE apprentice before transitioning to the SA role. He previously served in the U.S. Army as a financial and budgeting analyst and worked at a large IT consulting firm as a SaaS security analyst.
Paige Broderick
Paige Broderick is a Solutions Architect at Amazon Web Services (AWS) who works with enterprise customers to help them achieve their AWS objectives. She specializes in cloud operations, focusing on governance and using AWS to develop smart manufacturing solutions. Outside of work, Paige is an avid runner and is likely training for her next marathon.

Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory

6 January 2026 at 00:01

Artificial intelligence has become the primary engine of market leadership. To compete, enterprises are shifting from general-purpose computing to AI factories: specialized infrastructure designed to manage the entire AI lifecycle. This transition, however, requires robust security without sacrificing performance or efficiency.

We are proud to announce that Palo Alto Networks Prisma® AIRS™, accelerated on the NVIDIA BlueField data processing unit (DPU), is now part of the NVIDIA Enterprise AI Factory validated design.

The integrated solution embeds zero trust security directly into the AI infrastructure, providing comprehensive protection without impacting AI performance. By deploying Palo Alto Networks Prisma® AIRS™ Network Intercept directly onto the NVIDIA BlueField and extending to the cloud, Prisma AIRS establishes an essential zero trust governance fabric for the AI factory, enabling enterprises to accelerate innovation while maintaining control.

This critical architectural shift enables optimal AI performance and infrastructure efficiency by offloading security processing to an isolated domain, while leveraging the DPU's hardware acceleration via NVIDIA DOCA to enforce security policies at line speed. The implementation also uses real-time workload information captured by DOCA Argus, which is passed to Cortex XSIAM®, where it drives AI-powered response through the Cortex XSOAR® orchestration platform.

Rich Campagna, SVP of Product Management at Palo Alto Networks, said:

The AI Factory is the new engine for value creation, and securing it is a board-level imperative. The validation of Palo Alto Networks Prisma AIRS accelerated with NVIDIA BlueField within the NVIDIA Enterprise AI Factory enables a new security architecture for the AI era. We are embedding trust directly into the infrastructure, giving leaders the confidence to safeguard their proprietary intelligence and deploy AI bravely.

Kevin Deierling, senior vice president of Networking at NVIDIA, said:

AI is transforming every industry and security must evolve to protect AI factories. To be scalable, security must be distributed and embedded within the AI infrastructure. This is achieved with NVIDIA BlueField running Palo Alto Networks Prisma AIRS to deliver robust, runtime security for the AI factory, with optimal AI performance and efficiency.

Deploy AI Bravely with a Future-Proof Foundation

The Future of Secure AI Factories

Figure: NVIDIA AI Factory with Prisma AIRS and Strata

In addition to deploying Palo Alto Networks Prisma AIRS on NVIDIA BlueField in a distributed model, it's essential to maintain a centralized Hyperscale Security Firewall (HSF) cluster at the ingress and egress points of the AI factory to enforce a defense-in-depth strategy. Beyond network segmentation, individual workloads can selectively route traffic through hyperscale clusters to detect advanced application-layer threats and prevent lateral movement. These hyperscale firewall clusters scale elastically with demand, delivering session resiliency and the high availability required for critical AI operations.

This architecture fundamentally improves the Total Cost of Ownership (TCO) for AI infrastructure. By isolating security functions on BlueField, enterprises enable 100% of host computing resources to be dedicated to AI applications. This elimination of resource contention allows the AI Factory to maximize token throughput and capital efficiency.

This validated design is the blueprint for immediate efficiency. It provides a seamless path for enterprises to shift from general-purpose clusters to secure AI factory infrastructure without costly overhauls. More importantly, this collaboration establishes an unparalleled roadmap for future-proofing your investment. By securing operations with the high-performance NVIDIA BlueField-3 today, the architecture is inherently ready for the next generation, NVIDIA BlueField-4. This forward compatibility prepares AI factories to handle gigascale demands, scaling up to 6X the compute power and doubling the bandwidth when BlueField-4 becomes available.

The inclusion of the Palo Alto Networks Prisma AIRS platform in the NVIDIA Enterprise AI Factory Validated Design bolsters enterprise AI security. By establishing the zero trust governance fabric of Prisma AIRS runtime security on NVIDIA BlueField, organizations gain a comprehensive defense. Proprietary and sensitive data is secured throughout the entire stack, and models are protected from adversarial threats, such as prompt injection attacks. With Prisma AIRS, the world's most comprehensive AI security platform, leaders gain the confidence to innovate and deploy AI bravely. This validated design is the essential blueprint for accelerating your market leadership without compromising security.

Join our "How to Secure the AI Factory" breakout session at NVIDIA GTC 2026, March 16-19, in San Jose, CA, to hear more about this transformative solution and accelerate your AI innovation securely.

The post Palo Alto Networks Announces Support for NVIDIA Enterprise AI Factory appeared first on Palo Alto Networks Blog.

From the Hill: The AI-Cybersecurity Imperative in Financial Services

18 December 2025 at 15:00

The transformative potential of artificial intelligence (AI) across industries is undeniable. But realizing AI's true value hinges on three cybersecurity imperatives: Understanding the AI-cybersecurity nexus, harnessing AI to supercharge cyber defense, and embedding security into AI tools from the ground up through Secure AI by Design.

Nowhere is this convergence more urgent than in financial services. Sitting at the center of our global economy, financial institutions face a dual mandate: Embrace AI for cybersecurity and cybersecurity for AI.

I was honored to cover these key principles in my testimony before the House Committee on Financial Services, led by Chairman French Hill. The hearing, entitled "From Principles to Policy: Enabling 21st Century AI Innovation in Financial Services," convened witnesses from Palo Alto Networks, Google, NASDAQ, Zillow and Public Citizen. Together, we examined AI use cases in the financial services and housing sectors, including those specific to cybersecurity, and assessed how existing laws and frameworks apply in the age of AI.

The Defense Advantage Is AI-Powered Security Operations

Attacks have become dramatically faster: the time from compromise to data exfiltration is now 100 times shorter than it was four years ago. The financial sector bears disproportionate risk, given the value of its data and interconnected systems, while firms contend with evolving regulatory expectations, talent shortages and the persistent tendency to elevate cybersecurity only after an incident.

Generative and agentic AI intensify these pressures by accelerating every phase of the attack chain, from deepfake-driven fraud to tailored spear phishing campaigns. Our researchers at Unit 42® have found that agentic AI, autonomous systems that can reason and act without human intervention, can compress what was once a multiday ransomware campaign into roughly 25 minutes.

To keep pace, financial institutions must pivot to AI-driven defenses that operate at machine speed.

Security operations centers (SOCs) have long been overwhelmed by alert volume and fragmented data. Security teams, forced into manual triage across dozens of disparate tools, face an inefficient model that leaves vulnerabilities exposed, burns out analysts and makes it impossible to operate at the speed necessary to outpace modern attacks.

The average enterprise SOC ingests data from 83 security solutions across 29 vendors. In 75% of breaches, logging existed that should have flagged anomalous behavior, but critical signals were buried. With 90% of SOCs still relying on manual processes, adversaries have the clear advantage.

AI-driven SOCs flip this paradigm, acting as a force multiplier to substantially reduce detection and response times. To illustrate the scale of the challenge, consider our own security operations. The Palo Alto Networks SOC analyzes over 90 billion events daily. Without AI, this would be an impossible task for human analysts. But by applying AI, we distill that volume down to a single actionable incident.

Financial institutions migrating to AI-driven SOC platforms are seeing transformative results:

  • One customer reduced the Mean Time to Respond (MTTR) from one day to 14 minutes.
  • Another prevented 22,831 threats and processed 113,271 threat indicators in less than 5 seconds.
  • A large bank saved 180 hours per year by automating security information and event management reporting; 500 hours through automated data collection; 360 hours by automating four Chief Technology Officer playbooks; and 240 hours with automated threat intelligence enrichment.

These improvements are critical to stopping threat actors. But none of this would be possible without AI.

Securing the New AI Attack Surface

As AI adoption grows, it will further expand the attack surface, creating new vectors targeting training data and model environments. AI's rapid growth is outpacing the adoption of security measures designed to protect it. Nearly three-quarters of S&P 500 companies now flag AI as a material risk in their public disclosures, up from just 12% in 2023.

Traditional security tools rely on static rules that miss advanced attacks, like multistep prompt injections or adversarial manipulations. Autonomous AI agents can take unpredictable actions that are difficult to monitor with legacy methods.

Rapid AI adoption has exposed organizations' infrastructure, data, models, applications and agents to unique threats. Unlike traditional cyber exploits that target software vulnerabilities, AI-specific attacks can manipulate the foundation of how an AI system learns and operates.

Secure AI by Design

Even with an understanding of the risks, many organizations lack clarity on what effective AI security looks like in practice. Recognizing the gap between intent and execution, Palo Alto Networks developed Secure AI by Design, a policy roadmap that gives organizations a comprehensive framework for integrating security throughout the entire AI lifecycle.

A proactive stance makes security a feature, not an afterthought; this is crucial for building trust, maintaining compliance and mitigating risks. The approach addresses the four imperatives organizations most pressingly face in AI adoption:

1. Secure the use of external AI tools.

2. Secure the underlying AI infrastructure and data.

3. Safely build and deploy AI applications.

4. Monitor and control AI agents.

The Path Forward

For financial institutions, Secure AI by Design must be anchored in enterprise governance. Institutions should maintain risk-tiered AI inventories, enforce strict access controls and implement testing commensurate with risk. Governance structures should enable board oversight and align with established model risk practices.

Policymakers also have a critical role to play in promoting AI-driven security operations, championing voluntary Secure AI by Design frameworks, ensuring policies safeguard innovation, enabling controlled experimentation and strengthening public-private collaboration.

Ultimately, the financial institutions that will thrive will recognize cybersecurity as the foundation that makes innovation possible. By embracing AI-driven defenses and securing AI systems from the ground up, the sector can confidently unlock AI's transformative potential while safeguarding the trust and stability that underpin the global economy.

Read the full testimony to learn more about how cybersecurity can enable AI innovation in financial services.

The post From the Hill: The AI-Cybersecurity Imperative in Financial Services appeared first on Palo Alto Networks Blog.

GRC for Security Managers: From Checklists to Influence

By: BHIS
27 January 2025 at 17:00

This webcast was originally aired on January 16, 2025. In this video, Kelli K. Tarala and CJ Cox discuss the challenges and strategies for improving governance, risk, and compliance (GRC) […]

The post GRC for Security Managers: From Checklists to Influence appeared first on Black Hills Information Security, Inc..

Cyber Risk Lessons We Can Learn From Hurricane Preparedness

By: BHIS
14 November 2024 at 16:00

Risk is real. To better understand cybersecurity risk, let's compare cyber risks to risks in the natural world from hurricanes. We can learn lessons from hurricanes and unnamed storms in […]

The post Cyber Risk Lessons We Can Learn From Hurricane Preparedness appeared first on Black Hills Information Security, Inc..

โŒ