โŒ


Comprehensive Google SecOps migration checklist for CISOs and SOC leaders

10 December 2025 at 13:49

There's a clear trend emerging: many organizations are transitioning from legacy SIEMs to Google SecOps. While the Google SIEM platform is powerful, in our experience working with enterprise clients, that power only reveals itself when security leaders make three early decisions correctly:

  • Detection strategy: Whether to migrate existing rules or start fresh with a green-field approach.
  • Data onboarding: How to scale ingestion across multi-cloud environments without breaking pipelines.
  • Operating model: Building workflows that prevent "alert debt" from piling up on day one.

The strategic message is clear. Treat SIEM detection management with the same diligence you treat core security architecture, and augment your analysts with AI-powered triage so your humans can focus on higher-order investigations.

Here's a practical checklist for discovery, migration, and operational success, designed for CISOs and SOC leaders evaluating a move to Google SecOps.

NOTE: This blog post is relevant to anyone considering a Chronicle SIEM migration, as Google SecOps is Google's new branding for Chronicle.

The TL;DR version of the Google SIEM migration checklist

Phase | Key focus
----- | ---------
Pre-migration | Inventory, pain-point assessment, business justification
Migration | Tool selection, data ingestion, rule/dashboard migration, integration, governance & risk
Post-migration | Measurement of success, continuous improvement, cost optimisation, governance & reporting

Full Google SecOps migration checklist

Let's dive into the details for each phase of the migration process.

Pre-migration checklist: Establishing the baseline

  1. Inventory current environment
    • Catalogue all data sources feeding Splunk: log types, volumes (GB/day), retention policies, on-prem vs cloud vs multi-cloud.
    • Map all current detections, dashboards, reports, playbooks, SOAR workflows.
    • Identify any compliance/regulatory retention obligations (audit logs, legal hold).
    • Establish current licensing costs, infrastructure (forwarders, indexers), staffing.
  2. Assess SIEM performance & pain points
    • Are you seeing cost escalation vs benefit (slower detection, high false positives, low automation)?
    • Is the SIEM struggling with data volume growth, scalability, multi-cloud telemetry?
    • Are SOC analysts spending more time on infrastructure/configuration than investigations?
    • Are you able to integrate newer requirements (cloud workloads, containers, IoT/OT, multi-cloud) effectively? This 451 Research report indicates many orgs run multiple SIEMs due to tool sprawl.
  3. Define business & security objectives
    • What do you hope to achieve? E.g., faster detection/response, lower cost, improved coverage, cloud alignment.
    • What are the key metrics: mean time to detect (MTTD), mean time to respond (MTTR), cost-per-alert, false positive rate, regulatory coverage, etc.
    • What is your target SOC maturity in, say, 12-24 months? Are you planning a cloud-first strategy, heavier automation/AI, less on-prem infrastructure?
  4. Build the migration justification
    • Prepare a comparative TCO/ROI: legacy SIEM vs cloud-native. Google SecOps materials claim, for example, that you can "ingest and analyse your data at Google speed and scale" and highlight the cost benefit.
    • Understand what it will cost to migrate: re-write detections, dashboards, data flows, training, potential downtime.
    • Present a risk assessment: What happens if you don't migrate (risk of an obsolete tool, scaling failure, cost spirals)? The "Great SIEM Migration" guide argues that legacy tools may become "dinosaurs".
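The metrics in step 3 only become actionable once you baseline them on your current SIEM. Here is a minimal sketch of how MTTD and MTTR could be computed from exported incident timestamps; the record format, field names, and timestamps are illustrative assumptions, not any vendor's export schema:

```python
from datetime import datetime
from statistics import mean

# Illustrative incident records: when malicious activity started, when it
# was detected, and when it was resolved. Field names are assumptions.
incidents = [
    {"occurred": "2025-01-01T00:00", "detected": "2025-01-01T02:00", "resolved": "2025-01-01T06:00"},
    {"occurred": "2025-01-02T00:00", "detected": "2025-01-02T01:00", "resolved": "2025-01-02T03:00"},
]

FMT = "%Y-%m-%dT%H:%M"

def hours_between(start: str, end: str) -> float:
    """Elapsed hours between two timestamps."""
    return (datetime.strptime(end, FMT) - datetime.strptime(start, FMT)).total_seconds() / 3600

# Mean time to detect: activity start -> detection.
mttd = mean(hours_between(i["occurred"], i["detected"]) for i in incidents)
# Mean time to respond: detection -> resolution.
mttr = mean(hours_between(i["detected"], i["resolved"]) for i in incidents)

print(f"MTTD: {mttd:.1f}h, MTTR: {mttr:.1f}h")
```

Run the same calculation before and after migration so the comparison in the business case rests on identical definitions.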

Migration-phase checklist: Executing the transition

  1. Select migration path & vendor/partner support
  2. Data ingestion, normalization & compatibility
    • Ensure all of your log types/sources in Splunk are supported by the new platform. Google SecOps supports ingestion of Splunk CIM logs.
    • Plan for data mapping: Splunk field names, dashboards, custom fields → new schema.
    • Address historic data: Will you migrate archives? Will you keep Splunk as store-only? Community posts warn that mapping old archives can be complex.
    • Validate performance: test ingestion, query latency, retention policies on the new platform.
  3. Detection rules, dashboards, SOAR workflows
    • Catalogue existing detection rules, dashboards, SOAR playbooks in Splunk.
    • Determine which can be reused, which need rewriting. Ensure parity: detection coverage, mapping to MITRE ATT&CK, business use-cases. Splunk claims a strong out-of-the-box detection library.
    • Build and test new rules/playbooks in Google SecOps; validate they meet or exceed current performance (MTTD, MTTR, false positives).
    • Ensure analyst training and new workflows are adopted: new UI, new query language, new incident-investigation flows (Google SecOps offers a "Gemini in security operations" natural-language assistant).
  4. Integration & ecosystem fit
    • Ensure that Google SecOps integrates with your existing tool-stack (EDR, identity, network, cloud logs, SOAR, threat intel). Google advertises 300+ SOAR integrations.
    • Confirm multi-cloud/on-prem data ingestion: check vendor statements.
    • Validate APIs, custom connectors, forwarder architecture. Splunk vs Google SecOps comparison note: Splunk emphasizes hybrid flexibility.
  5. Governance, compliance & retention
    • Check how historic data will be retained, archived, accessed, both for compliance (audits/regulators) and investigations.
    • Confirm where the data resides (region/residency rules), encryption, access controls. Google SecOps claims to treat all data as first-party.
    • Align on SLAs, incident response metrics, roles & responsibilities.
    • Define cut-over strategy: Will Splunk be decommissioned or kept in read-only mode? Define freeze date, dual-runs, parallel operations.
  6. Risk management & business continuity
    • Define fallback/rollback plans: If the new platform fails, do you have the old SIEM in warm standby?
    • Monitor for data loss/misalignment during migration (NXLog warns of risks).
    • Communicate to stakeholders: SOC analysts, business units, auditors. Ensure training and change-management.
    • Set benchmarks and metrics: Time to detect/resolve in new platform vs old; cost per alert; staff utilisation; alert volumes; false positives.
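The field-mapping work in step 2 can start as a simple translation table. Below is a hedged sketch of mapping flat Splunk CIM-style fields onto a nested UDM-style structure; the field names and paths are illustrative assumptions, not an official Google SecOps mapping:

```python
# Minimal sketch: translate a flat Splunk CIM-style event dict into a
# nested UDM-style dict. Both sides of the map are illustrative
# assumptions -- build the real table from your parser documentation.
CIM_TO_UDM = {
    "src_ip": ("principal", "ip"),
    "dest_ip": ("target", "ip"),
    "user": ("principal", "user", "userid"),
    "process_name": ("principal", "process", "file", "full_path"),
}

def cim_to_udm(event: dict) -> dict:
    """Map flat CIM fields onto a nested UDM-style structure."""
    udm: dict = {}
    for cim_field, path in CIM_TO_UDM.items():
        if cim_field not in event:
            continue
        node = udm
        for key in path[:-1]:
            node = node.setdefault(key, {})  # walk/create nested levels
        node[path[-1]] = event[cim_field]
    return udm

if __name__ == "__main__":
    print(cim_to_udm({"src_ip": "10.0.0.5", "user": "alice"}))
```

Even a throwaway script like this forces the team to enumerate every custom field before cut-over, which is where most mapping surprises hide.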

Post-migration checklist: Optimizing & sustaining value

  1. Validate outcomes & measure success
    • Measure MTTD, MTTR, alert volumes, analyst productivity pre- and post-migration.
    • Compare actual cost savings vs business case.
    • Assess detection coverage: Are all critical use-cases still covered? Are any gaps emerging?
    • Run periodic health checks (some vendors like CardinalOps offer detection-rule health monitoring with MITRE ATT&CK coverage for Google SecOps).
  2. Continuous improvement & SOC maturity evolution
    • SOC maturity doesn't stop at migration. Use freed-up resources to focus on advanced use-cases (threat hunting, proactive detection, automation, investigations).
    • Tune detection rules, remove noise, refine playbooks.
    • Leverage AI/natural-language features (Google SecOps touts "Gemini in security operations").
    • Plan for future: hybrid/multi-cloud expansions, new telemetry sources, OT/IoT, supply-chain threats.
  3. Decommission legacy infrastructure & optimise cost
    • If the migration path included decommissioning the old SIEM (or reducing its role), ensure you turn off unneeded licences/infra.
    • Monitor the cost model of the new platform: ingestion volumes, retention policies; ensure you don't inadvertently pay for excess.
    • Re-allocate resources: freed licences, server hardware, staff time; invest in SOC capability rather than maintenance.
  4. Governance, audit and stakeholder reporting
    • Update your SOC governance frameworks: incident-response playbooks, escalation paths, KPIs aligned with the new platform.
    • Communicate to board/executive leadership key outcomes: improved detection/response, cost rationalization, strategic alignment.
    • Ensure audit/compliance reports reflect the new tooling (document changes, validate controls).
    • Set up periodic reviews of tool performance, vendor roadmap, SOC maturity.
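For the cost-monitoring point above, even a back-of-the-envelope projection keeps the conversation grounded. A sketch with assumed, illustrative figures; substitute your measured volumes and contract rates:

```python
# Back-of-the-envelope ingestion cost projection. The volume and per-GB
# rate below are placeholder assumptions, not vendor pricing --
# substitute the terms from your own contract.
daily_gb = 500        # measured ingestion volume (GB/day)
rate_per_gb = 0.10    # assumed blended $/GB ingested (illustrative)

monthly_ingest_cost = daily_gb * 30 * rate_per_gb
print(f"Projected monthly ingestion cost: ${monthly_ingest_cost:,.2f}")
```

Tracking this projection against the actual bill each month surfaces silent ingestion growth before it becomes a budget problem.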

Final thoughts

Migrating to Google SecOps isn't a simple platform swap; it's a redesign of how your SOC operates. The upside: cost efficiency, scale, and automation can be immediate. The risks: migration complexity, content gaps, and operational disruption are real and must be managed deliberately.

As a CISO or SOC leader, treat this as a transformation program. Use the table and/or the full checklist above to drive decisions; follow a strategic landing plan to sequence work; and anchor on the three non-negotiables outlined above:

  1. A clear detection strategy (migrate only if the value is there; rebuild the rest in YARA-L),
  2. Data onboarding at scale with a parser matrix and cost guardrails, and
  3. An operating model that prevents alert debt from day one through automation and measurable KPIs.
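To make the "rebuild the rest in YARA-L" point concrete, here is a minimal YARA-L 2.0-style detection sketch. The rule name, field paths, and patterns are illustrative assumptions for a generic encoded-PowerShell detection, not a production-ready rule:

```
rule suspicious_encoded_powershell {

  meta:
    author = "Example"
    description = "Illustrative sketch: PowerShell launched with an encoded command"
    severity = "Medium"

  events:
    $proc.metadata.event_type = "PROCESS_LAUNCH"
    $proc.principal.process.command_line = /powershell/ nocase
    $proc.principal.process.command_line = /-enc/ nocase

  condition:
    $proc
}
```

In practice, each rebuilt rule should be mapped to its MITRE ATT&CK technique and validated against the parity benchmarks defined during migration.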

If you want help getting there faster, we can provide a SIEM jumpstart (curated and bespoke YARA-L rules, MITRE gap analysis and coverage, detection reviews, continuous improvement with Intezer engineers), a parser/ingestion plan for multi-cloud, and, of course, Intezer Forensic AI SOC's triage to deliver day-one, 100% alert coverage with full auditability, so your analysts can focus on the few cases that truly need their context and expertise.

Learn more about how Intezer can help you with your SecOps migration.

The post Comprehensive Google SecOps migration checklist for CISOs and SOC leaders appeared first on Intezer.

AWS launches AI-enhanced security innovations at re:Invent 2025

8 December 2025 at 19:41

At re:Invent 2025, AWS unveiled its latest AI- and automation-enabled innovations to strengthen cloud security so customers can grow their business. Organizations are likely to increase security spending from $213 billion in 2025 to $377 billion by 2028 as they adopt generative AI. This 77% increase highlights the importance organizations place on securing their AI investments as they expand their digital footprints.

AWS uses artificial intelligence, machine learning, and automation to help you secure your environments proactively. These advancements include AI security agents, machine-learning and automation-driven threat detection, and agent-centric identity and access management. Together, they unify defense-in-depth across the application, infrastructure, network, and data layers to protect organizations from a wide spectrum of threats, vulnerabilities, and misconfigurations that could disrupt business operations.

AI security agents

AWS is embedding AI agents directly into security workflows to perform code reviews, collate incident response signals, and secure agentic access.

  • AWS Security Agent is a frontier agent that proactively secures applications throughout the development lifecycle. It conducts automated security reviews tailored to organizational requirements and delivers context-aware penetration testing on demand. By continuously validating security from design to deployment, it helps prevent vulnerabilities early in development.
  • AWS Security Incident Response delivers agentic AI-powered investigation capabilities designed to help enhance and accelerate security event response and recovery.
  • AgentCore Identity now offers authentication that provides enhanced access controls for AI agents, which restricts their interactions to authorized services and data based on specific user permissions and attributes. Enabling granular boundaries for how AI agents interact with enterprise applications reduces the risk of unauthorized access or data exposure.

ML and automation-driven threat detection

Machine learning models and automation now accelerate threat detection across more AWS environments, surfacing otherwise hard-to-see correlations, such as those behind sophisticated multistage attacks, at scale. These latest advancements save time by automatically correlating signals into consolidated sequences.

Agent-centric identity and access management

Intelligent access controls are redefining how organizations manage identities and permissions. These controls automate policy generation and improve your zero trust maturity level, making it easier for you to use AWS services.

  • IAM policy autopilot helps AI coding assistants quickly create baseline IAM policies that teams can refine as the application evolves, so organizations can build faster.
  • Outbound identity federation helps IAM customers securely federate their AWS identities to external services, making it easy to authenticate AWS workloads with cloud providers, SaaS platforms, and self-hosted applications.
  • Private access sign-in routes 100% of console traffic through VPC endpoints instead of public internet, using intelligent routing to maintain security without compromising performance.
  • Login for AWS local development lets developers use their existing console credentials to programmatically access AWS.

Transforming security through AI

These AI and ML advancements transform security from reactive manual processes to proactive, scalable protection. You can use them to operationalize threat hunting and advance your security posture, even as you grow your digital real estate.

The confidence organizations place in cloud-native security validates this approach. An AWS-sponsored survey of 2,800 IT and security decision makers and practitioners revealed that 81% agree that their primary cloud provider's native security and compliance capabilities exceed what their team could deliver independently. Additionally, 56% responded that the public cloud was better positioned to deliver security, versus 37% who selected on-premises, and 51% believe the public cloud is better positioned to meet regulations, versus 41% who chose on-premises.

Cloud is the foundation on which customers build their businesses, and AWS continues to deliver security innovations that reinforce that foundation.

If you have feedback about this post, submit comments in the Comments section below.

Lise Feng


Lise is a Seattle-based PR Manager focused on AWS security services and customers. Outside of work, she enjoys cooking and watching most contact sports.

The Browser Defense Playbook: Stopping the Attacks That Start on Your Screen

3 December 2025 at 01:00

85% of daily work occurs in the browser. Unit 42 outlines key security controls and strategies to make sure yours is secure.

The post The Browser Defense Playbook: Stopping the Attacks That Start on Your Screen appeared first on Unit 42.

How to use the Secrets Store CSI Driver provider Amazon EKS add-on with Secrets Manager

26 November 2025 at 19:54

In this post, we introduce the AWS provider for the Secrets Store CSI Driver, a new AWS Secrets Manager add-on for Amazon Elastic Kubernetes Service (Amazon EKS) that you can use to fetch secrets from Secrets Manager and parameters from AWS Systems Manager Parameter Store and mount them as files in Kubernetes pods. The add-on is straightforward to install and configure, works on Amazon Elastic Compute Cloud (Amazon EC2) instances and hybrid nodes, and includes the latest security updates and bugfixes. It provides a secure and reliable way to retrieve your secrets in Kubernetes workloads.

The AWS provider for the Secrets Store CSI Driver is an open source Kubernetes DaemonSet.

Amazon EKS add-ons provide installation and management of a curated set of add-ons for EKS clusters. You can use these add-ons to help ensure that your EKS clusters are secure and stable and reduce the number of steps required to install, configure, and update add-ons.

Secrets Manager helps you manage, retrieve, and rotate database credentials, application credentials, OAuth tokens, API keys, and other secrets throughout their lifecycles. By using Secrets Manager to store credentials, you can avoid using hard-coded credentials in application source code, helping to avoid unintended or inadvertent access.

New EKS add-on: AWS provider for the Secrets Store CSI Driver

We recommend installing the provider as an Amazon EKS add-on instead of the legacy installation methods (Helm, kubectl) to reduce the amount of time it takes to install and configure the provider. The add-on can be installed in several ways: using eksctl (which you will use in this post), the AWS Management Console, the Amazon EKS API, AWS CloudFormation, or the AWS Command Line Interface (AWS CLI).

Security considerations

The open-source Secrets Store CSI Driver maintained by the Kubernetes community enables mounting secrets as files in Kubernetes clusters. The AWS provider relies on the CSI driver and mounts secrets as files in your EKS clusters. Security best practice recommends caching secrets in memory where possible. If you prefer to adopt the native Kubernetes experience, follow the steps in this blog post. If you prefer to cache secrets in memory, we recommend using the AWS Secrets Manager Agent.

IAM principals require Secrets Manager permissions to get and describe secrets. If using Systems Manager Parameter Store, principals also require Parameter Store permissions to get parameters. Resource policies on secrets serve as another access control mechanism, and AWS principals must be explicitly granted permissions to access individual secrets if they're accessing secrets from a different AWS account (see Access AWS Secrets Manager secrets from a different account). The Amazon EKS add-on provides security features including support for using FIPS endpoints. AWS provides a managed IAM policy, AWSSecretsManagerClientReadOnlyAccess, which we recommend using with the EKS add-on.

Solution walkthrough

In the following sections, you'll create an EKS cluster, create a test secret in Secrets Manager, install the Amazon EKS add-on, and use it to retrieve the test secret and mount it as a file in your cluster.

Prerequisites

  1. AWS credentials, which must be configured in your environment to allow AWS API calls and are required to allow access to Secrets Manager
  2. AWS CLI v2 or higher
  3. Your preferred AWS Region must be configured in your environment. Use the following command to set it:
    aws configure set default.region <preferred_region>
    
  4. The kubectl and eksctl command-line tools
  5. A Kubernetes deployment file hosted in the GitHub repo for the provider

With the prerequisites in place, you're ready to run the commands in the following steps in your terminal:

Create an EKS cluster

  1. Create a shell variable in your terminal with the name of your cluster:
    CLUSTER_NAME="my-test-cluster"
    
  2. Create an EKS cluster:
    eksctl create cluster $CLUSTER_NAME 
    

eksctl will automatically use a recent version of Kubernetes and create the resources needed for the cluster to function. This command typically takes about 15 minutes to finish setting up the cluster.

Create a test secret

Create a secret named addon_secret in Secrets Manager:

aws secretsmanager create-secret \
  --name addon_secret \
  --secret-string "super secret!"

Set up the Secrets Store CSI Driver provider EKS add-on

Install the Amazon EKS add-on:

eksctl create addon \
  --cluster $CLUSTER_NAME \
  --name aws-secrets-store-csi-driver-provider

Create an IAM role

Create an AWS Identity and Access Management (IAM) role that the EKS Pod Identity service principal can assume, and save its ARN in a shell variable (replace <region> with the AWS Region configured in your environment):

ROLE_ARN=$(aws --region <region> --query Role.Arn --output text iam create-role --role-name nginx-deployment-role --assume-role-policy-document '{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "pods.eks.amazonaws.com"
            },
            "Action": [
                "sts:AssumeRole",
                "sts:TagSession"
            ]
        }
    ]
}')

Attach a managed policy to the IAM role

Note: AWS provides a managed policy for client-side consumption of secrets through Secrets Manager: AWSSecretsManagerClientReadOnlyAccess. This policy grants access to get and describe secrets for the secrets in your account. If you want to further follow the principle of least privilege, create a custom policy scoped down to only the secrets you want to retrieve.
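If you do scope down, a custom policy limited to this walkthrough's single test secret might look like the following; the region and account ID are placeholders, and the -* suffix accommodates the random characters Secrets Manager appends to secret ARNs:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "ReadSingleSecret",
      "Effect": "Allow",
      "Action": [
        "secretsmanager:GetSecretValue",
        "secretsmanager:DescribeSecret"
      ],
      "Resource": "arn:aws:secretsmanager:<region>:<account-id>:secret:addon_secret-*"
    }
  ]
}
```

You would attach this custom policy to nginx-deployment-role in place of the managed policy in the next step.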

Attach the managed policy to the IAM role that you just created:

aws iam attach-role-policy \
  --role-name nginx-deployment-role \
  --policy-arn arn:aws:iam::aws:policy/AWSSecretsManagerClientReadOnlyAccess

Set up the EKS Pod Identity Agent

Note: The add-on provides two methods of authentication: IAM roles for service accounts (IRSA) and EKS Pod Identity. In this solution, you'll use EKS Pod Identity.

  1. After you've installed the add-on in your cluster, install the EKS Pod Identity Agent add-on for authentication:
    eksctl create addon \
      --cluster $CLUSTER_NAME \
      --name eks-pod-identity-agent
    
  2. Create an EKS Pod Identity association for the cluster:
    eksctl create podidentityassociation \
        --cluster $CLUSTER_NAME \
        --namespace default \
        --region <region> \
        --service-account-name nginx-pod-identity-deployment-sa \
        --role-arn $ROLE_ARN \
        --create-service-account true
    

Set up your SecretProviderClass

The SecretProviderClass is a YAML file that defines which secrets and parameters to mount as files in your cluster.

  1. Create a minimal SecretProviderClass called spc.yaml for the test secret with the following content:
    apiVersion: secrets-store.csi.x-k8s.io/v1
    kind: SecretProviderClass
    metadata:
      name: nginx-pod-identity-deployment-aws-secrets
    spec:
      provider: aws
      parameters:
        objects: |
          - objectName: "addon_secret"
            objectType: "secretsmanager"
        usePodIdentity: "true"
    
  2. Deploy your SecretProviderClass (make sure you're in the same directory as the spc.yaml you just created):
    kubectl apply -f spc.yaml
    

To learn more about the SecretProviderClass, see the GitHub readme for the provider.

Deploy your pod to your EKS cluster

For brevity, we've omitted the content of the Kubernetes deployment file. The following command applies an example deployment file for Pod Identity from the GitHub repository for the provider; use it to deploy your pod:

kubectl apply -f https://raw.githubusercontent.com/aws/secrets-store-csi-driver-provider-aws/main/examples/ExampleDeployment-PodIdentity.yaml

This will mount addon_secret at /mnt/secrets-store in your cluster.

Retrieve your secret

  1. Print the value of addon_secret to confirm that the secret was mounted successfully:
    kubectl exec -it $(kubectl get pods | awk '/nginx-pod-identity-deployment/{print $1}' | head -1) -- cat /mnt/secrets-store/addon_secret
    
  2. You should see the following output:
    super secret!
    

You've successfully fetched your test secret from Secrets Manager using the new Amazon EKS add-on and mounted it as a file in your Kubernetes cluster.

Clean up

Run the following commands to clean up the resources that you created in this tutorial:

aws secretsmanager delete-secret \
  --secret-id addon_secret \
  --force-delete-without-recovery

aws iam delete-role --role-name nginx-deployment-role

eksctl delete cluster $CLUSTER_NAME

Conclusion

In this post, you learned how to use the new Amazon EKS add-on for the AWS Secrets Store CSI Driver provider to securely retrieve your secrets and parameters and mount them as files in your Kubernetes clusters. The new EKS add-on provides the latest security patches and bug fixes, tighter integration with Amazon EKS, and reduced time to install and configure the AWS Secrets Store CSI Driver provider. The add-on is validated by EKS to work with EC2 instances and hybrid nodes.

Further reading

If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.

Angad Misra


Angad is a Software Engineer on the AWS Secrets Manager team. When he isn't building secure, reliable, and scalable software from first principles, he enjoys a good latte, live music, playing guitar, exploring the great outdoors, cooking, and lazing around with his cat, Freyja.

โŒ