There’s a clear trend emerging with many organizations transitioning from legacy SIEMs to Google SecOps. While the Google SIEM platform is powerful, in our experience working with enterprise clients, that power only reveals itself when security leaders make three early decisions correctly:
Detection strategy: Whether to migrate existing rules or start fresh with a green-field approach.
Data onboarding: How to scale ingestion across multi-cloud environments without breaking pipelines.
Operating model: Building workflows that prevent “alert debt” from piling up on day one.
The strategic message is clear. Treat SIEM detection management with the same diligence you treat core security architecture, and augment your analysts with AI-powered triage so your humans can focus on higher-order investigations.
Here’s a practical checklist for discovery, migration, and operational success, designed for CISOs and SOC leaders evaluating a move to Google SecOps.
The tl;dr version of the Google SIEM migration checklist
| Phase | Key focus |
| --- | --- |
| Pre-migration | Inventory, pain-point assessment, business justification |
| Migration | Tool selection, data ingestion, rule/dashboard migration, integration, governance & risk |
| Post-migration | Measurement of success, continuous improvement, cost optimization, governance & reporting |
Full Google SecOps migration checklist
Let’s dive into the details for each phase of the migration process.
Pre-migration checklist: Establishing the baseline
Inventory current environment
Catalogue all data sources feeding Splunk: log types, volumes (GB/day), retention policies, on-prem vs cloud vs multi-cloud.
Map all current detections, dashboards, reports, playbooks, SOAR workflows.
Identify any compliance/regulatory retention obligations (audit logs, legal hold).
Establish current licensing costs, infrastructure (forwarders, indexers), staffing.
Assess SIEM performance & pain points
Are you seeing cost escalation vs benefit (slower detection, high false positives, low automation)?
Is the SIEM struggling with data volume growth, scalability, multi-cloud telemetry?
Are SOC analysts spending more time on infrastructure/configuration than investigations?
Are you able to integrate newer requirements (cloud workloads, containers, IoT/OT, multi-cloud) effectively? This 451 Research report indicates many orgs run multiple SIEMs due to tool sprawl.
Define business & security objectives
What do you hope to achieve? E.g., faster detection/response, lower cost, improved coverage, cloud alignment.
What are the key metrics: mean time to detect (MTTD), mean time to respond (MTTR), cost-per-alert, false positive rate, regulatory coverage, etc.
What is your target SOC maturity in, say, 12-24 months? Are you planning a cloud-first strategy, heavier automation/AI, less on-prem infrastructure?
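To make these metrics concrete, here is a minimal sketch of how MTTD could be computed from exported alert timestamps. The CSV format is hypothetical, an assumption for illustration, not a Google SecOps or Splunk export format:

```shell
# Hypothetical export format: alert_id,occurred_epoch,detected_epoch
cat > alerts.csv <<'EOF'
a1,1700000000,1700000600
a2,1700000900,1700001200
EOF

# MTTD = average (detected - occurred) across alerts, in seconds
awk -F, '{ sum += $3 - $2; n++ } END { printf "MTTD: %d seconds\n", sum / n }' alerts.csv
```

The same pattern works for MTTR if you export resolution timestamps instead, and the per-alert deltas can also feed a false-positive-rate or cost-per-alert calculation.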
Build the migration justification
Prepare a comparative TCO/ROI analysis: legacy SIEM vs cloud-native. Google SecOps materials claim, for example, that you can “ingest and analyse your data at Google speed and scale,” and highlight the cost benefit.
Understand what it will cost to migrate: re-write detections, dashboards, data flows, training, potential downtime.
Present risk assessment: What happens if you don’t migrate (risk of obsolete tool, scaling failure, cost spirals)? The “Great SIEM Migration” guide argues that legacy tools may become “dinosaurs”.
Migration-phase checklist: Executing the transition
Select migration path & vendor/partner support
Decide: full rip & replace vs phased migration vs augmentation (run new platform in parallel).
Evaluate tooling for data-migration, rule conversion, playbook migration.
Data ingestion, normalization & compatibility
Ensure all of your Splunk log types/sources are supported by the new platform. Google SecOps supports ingestion of Splunk CIM logs.
Plan for data mapping: Splunk field names, dashboards, custom fields → new schema.
Address historic data: Will you migrate archives? Will you keep Splunk as store-only? Community posts warn that mapping old archives can be complex.
Validate performance: test ingestion, query latency, retention policies on the new platform.
Detection rules, dashboards, SOAR workflows
Catalogue existing detection rules, dashboards, SOAR playbooks in Splunk.
Determine which can be reused and which need rewriting. Ensure parity: detection coverage, mapping to MITRE ATT&CK, business use-cases. Splunk claims a strong out-of-the-box detection library.
Build and test new rules/playbooks in Google SecOps; validate they meet or exceed current performance (MTTD, MTTR, false positives).
Ensure analyst training and new workflows are adopted: new UI, new query language, new incident-investigation flows (Google SecOps offers “Gemini in security operations” natural-language assistant).
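As a sketch of what a rebuilt detection might look like in YARA-L 2.0, Google SecOps's detection language: the rule name, severity, and regex below are illustrative, not a drop-in conversion of any particular Splunk search.

```
rule suspicious_encoded_powershell {
  meta:
    author = "example"
    severity = "Medium"

  events:
    // Match process launches whose command line suggests
    // Base64-encoded PowerShell, a common obfuscation technique.
    $proc.metadata.event_type = "PROCESS_LAUNCH"
    $proc.target.process.command_line = /powershell.*-enc/ nocase

  condition:
    $proc
}
```

Note that YARA-L matches against normalized UDM fields, so rule parity depends on your parsers mapping the underlying Splunk fields to the same UDM paths.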
Integration & ecosystem fit
Ensure that Google SecOps integrates with your existing tool-stack (EDR, identity, network, cloud logs, SOAR, threat intel). Google advertises 300+ SOAR integrations.
Confirm multi-cloud/on-prem data ingestion: check vendor statements.
Validate APIs, custom connectors, forwarder architecture. Splunk vs Google SecOps comparison note: Splunk emphasizes hybrid flexibility.
Governance, compliance & retention
Check how historic data will be retained, archived, accessed, both for compliance (audits/regulators) and investigations.
Communicate to stakeholders: SOC analysts, business units, auditors. Ensure training and change-management.
Set benchmarks and metrics: Time to detect/resolve in new platform vs old; cost per alert; staff utilisation; alert volumes; false positives.
Post-migration checklist: Optimizing & sustaining value
Validate outcomes & measure success
Measure MTTD, MTTR, alert volumes, analyst productivity pre- and post-migration.
Compare actual cost savings vs business case.
Assess detection coverage: Are all critical use-cases still covered? Are any gaps emerging?
Run periodic health checks (some vendors like CardinalOps offer detection-rule health monitoring with MITRE ATT&CK coverage for Google SecOps).
Continuous improvement & SOC maturity evolution
SOC maturity doesn’t stop at migration. Use freed-up resources to focus on advanced use-cases (threat hunting, proactive detection, automation, investigations).
Ensure audit/compliance reports reflect the new tooling (document changes, validate controls).
Set up periodic reviews of tool performance, vendor roadmap, SOC maturity.
Final thoughts
Migrating to Google SecOps isn’t a simple platform swap; it’s a redesign of how your SOC operates. The upside: cost efficiency, scale, and automation gains can be immediate. The risks: migration complexity, content gaps, and operational disruption are real and must be managed deliberately.
As a CISO or SOC leader, treat this as a transformation program. Use the table and/or the full checklist above to drive decisions; follow a strategic landing plan to sequence work; and anchor on the three non-negotiables outlined above:
A clear detection strategy (migrate only if the value is there; rebuild the rest in YARA-L),
Data onboarding at scale with a parser matrix and cost guardrails, and
An operating model that prevents alert debt from day one through automation and measurable KPIs.
If you want help getting there faster, we can provide a SIEM jumpstart (curated and bespoke YARA-L rules, MITRE gap analysis and coverage, detection reviews, and continuous improvement with Intezer engineers), a parser/ingestion plan for multi-cloud, and, of course, triage from Intezer's Forensic AI SOC, delivering 100% alert coverage from day one with full auditability, so your analysts focus on the few cases that truly need their context and expertise.
At re:Invent 2025, AWS unveiled its latest AI- and automation-enabled innovations to strengthen cloud security so customers can grow their businesses. Organizations are likely to increase security spending from $213 billion in 2025 to $377 billion by 2028 as they adopt generative AI. This 77% increase highlights the importance organizations place on securing their AI investments as they expand their digital footprints.
AWS uses artificial intelligence, machine learning, and automation to help you secure your environments proactively. These advancements include AI security agents, machine-learning and automation-driven threat detection, and agent-centric identity and access management. Together, they unify defense-in-depth across the application, infrastructure, network, and data layers to protect organizations from a wide spectrum of threats, vulnerabilities, and misconfigurations that could disrupt business operations.
AI security agents
AWS is embedding AI agents directly into security workflows to perform code reviews, collate incident response signals, and secure agentic access.
AWS Security Agent is a frontier agent that proactively secures applications throughout the development lifecycle. It conducts automated security reviews tailored to organizational requirements and delivers context-aware penetration testing on demand. By continuously validating security from design to deployment, it helps prevent vulnerabilities early in development.
AWS Security Incident Response delivers agentic AI-powered investigation capabilities designed to help enhance and accelerate security event response and recovery.
AgentCore Identity now offers authentication that provides enhanced access controls for AI agents, which restricts their interactions to authorized services and data based on specific user permissions and attributes. Enabling granular boundaries for how AI agents interact with enterprise applications reduces the risk of unauthorized access or data exposure.
ML and automation-driven threat detection
Machine learning models and automation now accelerate threat detection across more AWS environments, surfacing otherwise hard-to-see correlations, such as sophisticated multistage attacks, at scale. These latest advancements save time by automatically correlating signals into consolidated sequences.
GuardDuty extended threat detection for EC2 and ECS uses advanced AI and ML algorithms to identify sophisticated, multi-stage attacks targeting AWS accounts, workloads, and data in virtual machine, container, and serverless workloads.
GuardDuty malware protection for AWS Backup automatically scans EC2, EBS, and S3 backups for malware. It helps you identify your last known clean backup to minimize business disruption during recovery and supports incremental scanning of net new data in between backups.
Intelligent access controls are redefining how organizations manage identities and permissions. These controls automate policy generation and improve your zero trust maturity level, making it easier for you to use AWS services.
IAM policy autopilot helps AI coding assistants quickly create baseline IAM policies that teams can refine as the application evolves, so organizations can build faster.
Outbound identity federation helps IAM customers securely federate their AWS identities to external services, making it easy to authenticate AWS workloads with cloud providers, SaaS platforms, and self-hosted applications.
Private access sign-in routes 100% of console traffic through VPC endpoints instead of the public internet, using intelligent routing to maintain security without compromising performance.
These AI and ML advancements transform security from reactive manual processes to proactive, scalable protection. You can use them to operationalize threat hunting and advance your security posture, even as you grow your digital real estate.
The confidence organizations place in cloud-native security validates this approach. An AWS-sponsored survey of 2,800 IT and security decision-makers and practitioners revealed that 81% agree that their primary cloud provider’s native security and compliance capabilities exceed what their team could deliver independently. Additionally, 56% said the public cloud is better positioned to deliver security versus 37% who selected on-premises, and 51% believe the public cloud is better positioned to meet regulations versus 41% who chose on-premises.
Cloud is the foundation on which customers build their businesses, and AWS continues to deliver security innovations that reinforce that foundation.
In this post, we introduce the AWS provider for the Secrets Store CSI Driver, a new AWS Secrets Manager add-on for Amazon Elastic Kubernetes Service (Amazon EKS) that you can use to fetch secrets from Secrets Manager and parameters from AWS Systems Manager Parameter Store and mount them as files in Kubernetes pods. The add-on is straightforward to install and configure, works on Amazon Elastic Compute Cloud (Amazon EC2) instances and hybrid nodes, and includes the latest security updates and bugfixes. It provides a secure and reliable way to retrieve your secrets in Kubernetes workloads.
Amazon EKS add-ons provide installation and management of a curated set of add-ons for EKS clusters. You can use these add-ons to help ensure that your EKS clusters are secure and stable and reduce the number of steps required to install, configure, and update add-ons.
Secrets Manager helps you manage, retrieve, and rotate database credentials, application credentials, OAuth tokens, API keys, and other secrets throughout their lifecycles. By using Secrets Manager to store credentials, you can avoid using hard-coded credentials in application source code, helping to avoid unintended or inadvertent access.
New EKS add-on: AWS provider for the Secrets Store CSI Driver
We recommend installing the provider as an Amazon EKS add-on instead of the legacy installation methods (Helm, kubectl) to reduce the amount of time it takes to install and configure the provider. The add-on can be installed in several ways: using eksctl—which you will use in this post—the AWS Management Console, the Amazon EKS API, AWS CloudFormation, or the AWS Command Line Interface (AWS CLI).
Security considerations
The open-source Secrets Store CSI Driver maintained by the Kubernetes community enables mounting secrets as files in Kubernetes clusters. The AWS provider relies on the CSI driver and mounts secrets as files in your EKS clusters. Security best practice recommends caching secrets in memory where possible. If you prefer the native Kubernetes experience, follow the steps in this blog post. If you prefer to cache secrets in memory, we recommend using the AWS Secrets Manager Agent.
IAM principals require Secrets Manager permissions to get and describe secrets. If using Systems Manager Parameter Store, principals also require Parameter Store permissions to get parameters. Resource policies on secrets serve as another access control mechanism, and AWS principals must be explicitly granted permissions to access individual secrets if they’re accessing secrets from a different AWS account (see Access AWS Secrets Manager secrets from a different account). The Amazon EKS add-on provides security features including support for using FIPS endpoints. AWS provides a managed IAM policy, AWSSecretsManagerClientReadOnlyAccess, which we recommend using with the EKS add-on.
Solution walkthrough
In the following sections, you’ll create an EKS cluster, create a test secret in Secrets Manager, install the Amazon EKS add-on, and use it to retrieve the test secret and mount it as a file in your cluster.
Prerequisites
AWS credentials, which must be configured in your environment to allow AWS API calls and are required to allow access to Secrets Manager. The walkthrough also assumes the AWS CLI, eksctl, and kubectl are installed.
With the prerequisites in place, you’re ready to run the commands in the following steps in your terminal:
Create an EKS cluster
Create a shell variable in your terminal with the name of your cluster:
CLUSTER_NAME="my-test-cluster"
Create an EKS cluster:
eksctl create cluster --name "$CLUSTER_NAME"
eksctl will automatically use a recent version of Kubernetes and create the resources needed for the cluster to function. This command typically takes about 15 minutes to finish setting up the cluster.
Create a test secret
Create a secret named addon_secret in Secrets Manager:
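The exact command is elided here; a minimal sketch follows, with an illustrative placeholder value (substitute real credentials in practice):

```shell
# Illustrative secret value; replace with your real credentials.
SECRET_JSON='{"username":"appuser","password":"example-password"}'

# Sanity-check the JSON locally before storing it:
echo "$SECRET_JSON" | python3 -m json.tool > /dev/null && echo "secret JSON valid"

# With AWS credentials configured, store it in Secrets Manager:
#   aws secretsmanager create-secret \
#     --name addon_secret \
#     --secret-string "$SECRET_JSON"
```

Secrets Manager accepts any string as the secret value; JSON key-value pairs are the common convention for database and application credentials.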
Create an AWS Identity and Access Management (IAM) role that the EKS Pod Identity service principal can assume and save it in a shell variable (replace <region> with the AWS Region configured in your environment):
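The elided commands likely resemble the following sketch. The trust policy lets the EKS Pod Identity service principal (`pods.eks.amazonaws.com`) assume the role; the role name `nginx-deployment-role` matches the next step:

```shell
# Trust policy allowing EKS Pod Identity to assume this role:
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "pods.eks.amazonaws.com" },
      "Action": ["sts:AssumeRole", "sts:TagSession"]
    }
  ]
}
EOF
echo "trust policy written"

# With AWS credentials configured, create the role and capture its ARN
# in a shell variable:
#   ROLE_ARN=$(aws iam create-role \
#     --role-name nginx-deployment-role \
#     --assume-role-policy-document file://trust-policy.json \
#     --query 'Role.Arn' --output text)
```

Pod Identity requires both `sts:AssumeRole` and `sts:TagSession`, because EKS tags the role session with cluster and service-account attributes.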
Note: AWS provides a managed policy for client-side consumption of secrets through Secrets Manager: AWSSecretsManagerClientReadOnlyAccess. This policy grants access to get and describe secrets for the secrets in your account. If you want to further follow the principle of least privilege, create a custom policy scoped down to only the secrets you want to retrieve.
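A scoped-down custom policy might look like the following sketch. The ARN wildcard suffix is needed because Secrets Manager appends six random characters to each secret's ARN; `<region>` and `<account-id>` are placeholders for your own values:

```
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:addon_secret-*"
    }
  ]
}
```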
Attach the managed policy to the IAM role that you just created:
aws iam attach-role-policy \
--role-name nginx-deployment-role \
--policy-arn arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess
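The spc.yaml referenced in the next step is elided in this post; a minimal sketch for the AWS provider might look like the following (the metadata name is illustrative; `objectName` matches the secret created earlier):

```
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: addon-secret-spc
spec:
  provider: aws
  parameters:
    objects: |
      - objectName: "addon_secret"
        objectType: "secretsmanager"
```

Use `objectType: "ssmparameter"` instead to fetch a Systems Manager Parameter Store parameter.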
Deploy your SecretProviderClass (make sure you’re in the same directory as the spc.yaml you just created):
kubectl apply -f spc.yaml
To learn more about the SecretProviderClass, see the GitHub readme for the provider.
Deploy your pod to your EKS cluster
For brevity, we’ve omitted the content of the Kubernetes deployment file. An example deployment file for Pod Identity is available in the GitHub repository for the provider; use that file to deploy your pod.
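As a rough sketch of the key pieces (names here are illustrative; the authoritative example lives in the provider's repository), the deployment needs a CSI volume referencing your SecretProviderClass and a corresponding mount in the container:

```
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      # Service account associated with the IAM role via Pod Identity
      serviceAccountName: nginx-deployment-sa
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - name: secrets-store
              mountPath: /mnt/secrets-store
              readOnly: true
      volumes:
        - name: secrets-store
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: addon-secret-spc
```

Once the pod is running, the secret appears as a file under /mnt/secrets-store named after the objectName in the SecretProviderClass.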
In this post, you learned how to use the new Amazon EKS add-on for the AWS provider for the Secrets Store CSI Driver to securely retrieve your secrets and parameters and mount them as files in your Kubernetes clusters. The new EKS add-on provides benefits such as the latest security patches and bug fixes, tighter integration with Amazon EKS, and reduced time to install and configure the provider. The add-on is validated by EKS to work with EC2 instances and hybrid nodes.