Normal view

Received — 29 January 2026 Microsoft Security Blog

New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data

29 January 2026 at 18:00

Generative AI and agentic AI are redefining how organizations innovate and operate, unlocking new levels of productivity, creativity and collaboration across industry teams. From accelerating content creation to streamlining workflows, AI offers transformative benefits that empower organizations to work smarter and faster. These capabilities, however, also introduce new dimensions of data risk—as AI adoption grows, so does the urgency for effective data security that keeps pace with AI innovation. In the 2026 Microsoft Data Security Index report, we explored one of the most pressing questions facing today’s organizations: How can we harness the power of AI while safeguarding sensitive data?

47% of surveyed organizations are​ implementing controls focused on generative AI workloads

To fully realize the potential of AI, organizations must pair innovation with responsibility and robust data security. This year, the Data Security Index report builds upon the responses of more than 1,700 security leaders to highlight three critical priorities for protecting organizational data and securing AI adoption:

  1. Moving from fragmented tools to unified data security.
  2. Managing AI-powered productivity securely.
  3. Strengthening data security with generative AI itself.

By consolidating solutions for better visibility and governance controls, implementing robust controls processes to protect data in AI-powered workflows, and using generative AI agents and automation to enhance security programs, organizations can build a resilient foundation for their next wave of generative AI-powered productivity and innovation. The result is a future where AI both drives efficiency and acts as a powerful ally in defending against data risk, unlocking growth without compromising protection.

In this article we will delve into some of the Data Security Index report’s key findings that relate to generative AI and how they are being operationalized at Microsoft. The report itself has a much broader focus and depth of insight.

1. From fragmented tools to unified data security

Many organizations still rely on disjointed tools and siloed controls, creating blind spots that hinder the efficacy of security teams. According to the 2026 Data Security Index, decision-makers cite poor integration, lack of a unified view across environments, and disparate dashboards as their top challenges in maintaining proper visibility and governance. These gaps make it harder to connect insights and respond quickly to risks—especially as data volumes and data environment complexity surge. Security leaders simply aren’t getting the oversight they need.

Why it matters
Consolidating tools into integrated platforms improves visibility, governance, and proactive risk management.

To address these challenges, organizations are consolidating tools, investing in unified platforms like Microsoft Purview that bring operations together while improving holistic visibility and control. These integrated solutions frequently outperform fragmented toolsets, enabling better detection and response, streamlined management, and stronger governance.

As organizations adopt new AI-powered technologies, many are also leaning into emerging disciplines like Microsoft Purview Data Security Posture Management (DSPM) to keep pace with evolving risks. Effective DSPM programs help teams identify and prioritize data‑exposure risks, detect access to sensitive information, and enforce consistent controls while reducing complexity through unified visibility. When DSPM provides proactive, continuous oversight, it becomes a critical safeguard—especially as AI‑powered data flows grow more dynamic across core operations.

More than 80% of surveyed organizations are implementing or developing DSPM strategies

We’re trying to use fewer vendors. If we need 15 tools, we’d rather not manage 15 vendor solutions. We’d prefer to get that down to five, with each vendor handling three tools.”

—Global information security director in the hospitality and travel industry

2. Managing AI-powered productivity securely

Generative AI is already influencing data security incident patterns: 32% of surveyed organizations’ data security incidents involve the use of generative AI tools. Understandably, surveyed security leaders have responded to this trend rapidly. Nearly half (47%) the security leaders surveyed in the 2026 Data Security Index are implementing generative AI-specific controls—an increase of 8% since the 2025 report. This helps enable innovation through the confident adoption of generative AI apps and agents while maintaining security.

A banner chart that says "32% of surveyed organizations' data security incidents involve use of AI tools."

Why it matters
Generative AI boosts productivity and innovation, but both unsanctioned and sanctioned AI tools must be managed. It’s essential to control tool use and monitor how data is accessed and shared with AI.

In the full report, we explore more deeply how AI-powered productivity is changing the risk profile of enterprises. We also explore several mechanisms, both technical and cultural, already helping maintain trust and reduce risk without sacrificing productivity gains or compliance.

3. Strengthening data security with generative AI

The 2026 Data Security Index indicates that 82% of organizations have developed plans to embed generative AI into their data security operations, up from 64% the previous year. From discovering sensitive data and detecting critical risks to investigating and triaging incidents, as well as refining policies, generative AI is being deployed for both proactive and reactive use cases at scale. The report explores how AI is changing the day-to-day operations across security teams, including the emergence of AI-assisted automation and agents.

alt text

Why it matters
Generative AI automates risk detection, scales protection, and accelerates response—amplifying human expertise while maintaining oversight.

Our generative AI systems are constantly observing, learning, and making recommendations for modifications with far more data than would be possible with any kind of manual or quasi-manual process.”

—Director of IT in the energy industry

Turning recommendations into action

As organizations confront the challenges of data security in the age of AI, the 2026 Data Security Index report offers three clear imperatives: unifying data security, increasing generative AI oversight, and using AI solutions to improve data security effectiveness.

  1. Unified data security requires continuous oversight and coordinated enforcement across your data estate. Achieving this scenario demands mechanisms that can discover, classify, and protect sensitive information at scale while extending safeguards to endpoints and workloads. Microsoft Purview DSPM operationalizes this principle through continuous discovery, classification, and protection of sensitive data across cloud, software as a service (SaaS), and on-premises assets.
  2. Responsible AI adoption depends on strict (but dynamic) controls and proactive data risk management. Organizations must enforce automated mechanisms that prevent unauthorized data exposure, monitor for anomalous usage, and guide employees toward sanctioned tools and responsible practices. Microsoft enforces these principles through governance policies supported by Microsoft Purview Data Loss Prevention and Microsoft Defender for Cloud Apps. These solutions detect, prevent, and respond to risky generative AI behaviors that increase the likelihood of data exposure, policy violations, or unsafe outputs, ensuring innovation aligns with security and compliance requirements.
  3. Modern security operations benefit from automation that accelerate detection and response alongside strong oversight. AI-powered agents can streamline threat investigation, recommend policies, and reduce manual workload while maintaining human oversight for accountability. We deliver this capability through Microsoft Security Copilot, embedded across Microsoft Sentinel, Microsoft Entra, Microsoft Intune, Microsoft Purview, and Microsoft Defender. These agents automate threat detection, incident investigation, and policy recommendations, enabling faster response and continuous improvement of security posture.

Stay informed, stay productive, stay protected

The insights we’ve covered here only scratch the surface of what the Microsoft Data Security Index reveals.The full report dives deeper into global trends, detailed metrics, and real-world perspectives from security leaders across industries and the globe. It provides specificity and context to help you shape your generative AI strategy with confidence.

If you want to explore the data behind these findings, see how priorities vary by region, and uncover actionable recommendations for secure AI adoption, read the full 2026 Microsoft Data Security Index to access comprehensive research, expert commentary, and practical guidance for building a security-first foundation for innovation.

Learn more

Learn more about the Microsoft Purview unified data security solutions.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post New Microsoft Data Security Index report explores secure AI adoption to protect sensitive data appeared first on Microsoft Security Blog.

Received — 27 January 2026 Microsoft Security Blog

Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms

As organizations rapidly embrace generative and agentic AI, ensuring robust, unified governance has never been more critical. That’s why Microsoft is honored to be named a Leader in the 2025-2026 IDC MarketScape for Worldwide Unified AI Governance Platforms (Vendor Assessment (#US53514825, December 2025). We believe this recognition highlights our commitment to making AI innovation safe, responsible, and enterprise-ready—so you can move fast without compromising trust or compliance.

A graphic showing Microsoft's position in the Leaders section of the IDC report.
Figure 1. IDC MarketScape vendor analysis model is designed to provide an overview of the competitive fitness of technology and suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each supplier’s position within a given market. The Capabilities score measures supplier product, go-to-market and business execution in the short term. The Strategy score measures alignment of supplier strategies with customer requirements in a three- to five-year timeframe. Supplier market share is represented by the size of the icons.

The urgency for a unified AI governance strategy is being driven by stricter regulatory demands, the sheer complexity of managing AI systems across multiple AI platforms and multicloud and hybrid environments, and leadership concerns for risk related to negative brand impact. Centralized, end-to-end governance platforms help organizations reduce compliance bottlenecks, lower operational risks, and turn governance into a strategic driver for responsible AI innovation. In today’s landscape, unified AI governance is not just a compliance obligation—it is critical infrastructure for trust, transparency, and sustainable business transformation.

Our own approach to AI is anchored to Microsoft’s Responsible AI standard, backed by a dedicated Office of Responsible AI. Drawing from our internal experience in building, securing, and governing AI systems, we translate these learnings directly into our AI management tools and security platform. As a result, customers benefit from features such as transparency notes, fairness analysis, explainability tools, safety guardrails, regulatory compliance assessments, agent identity, data security, vulnerability identification, and protection against cyberthreats like prompt-injection attacks. These tools enable them to develop, secure, and govern AI that aligns with ethical principles and is built to help support compliance with regulatory requirements. By integrating these capabilities, we empower organizations to make ethical decisions and safeguard their business processes throughout the entire AI lifecycle.

Microsoft’s AI Governance capabilities aim to provide integrated and centralized control for observability, management, and security across IT, developer, and security teams, ensuring integrated governance within their existing tools. Microsoft Foundry acts as our main control point for model development, evaluation, deployment, and monitoring, featuring a curated model catalog, machine learning oeprations, robust evaluation, and embedded content safety guardrails. Microsoft Agent 365, which was not yet available at the time of the IDC publication, provides a centralized control plane for IT, helping teams confidently deploy, manage, and secure their agentic AI published through Microsoft 365 Copilot, Microsoft Copilot Studio, and Microsoft Foundry.

Deeply embedded security systems are integral to Microsoft’s AI governance solution. Integrations with Microsoft Purview provide real-time data security, compliance, and governance tools, while Microsoft Entra provides agent identity and controls to manage agent sprawl and prevent unauthorized access to confidential resources. Microsoft Defender offers AI-specific posture management, threat detection, and runtime protection. Microsoft Purview Compliance Manager automates adherence to more than 100 regulatory frameworks. Granular audit logging and automated documentation bolster regulatory and forensic capabilities, enabling organizations in regulated industries to innovate with AI while maintaining oversight, secure collaboration, and consistent policy enforcement.

Guidance for security and governance leaders and CISOs

To empower organizations in advancing their AI transformation initiatives, it is crucial to focus on the following priorities for establishing a secure, well-governed, and scalable AI framework. The guidance below provides Microsoft’s recommendations for fulfilling these best practices:

CISO guidanceWhat it meansHow Microsoft delivers
Adopt a unified, end‑to‑end governance platformEstablish a comprehensive, integrated governance system covering traditional machine learning, generative AI, and agentic AI. Ensure unified oversight from development through deployment and monitoring.Microsoft enables observability and governance at every layer across IT, developer, and security teams to provide an integrated and cohesive governance platform that enables teams to play their part from within the tools they use. Microsoft Foundry acts as the developer control plane, connecting model development, evaluation, security controls, and continuous monitoring. Microsoft Agent 365 is the control plane for IT, enabling discovery, security, deployment, and observability for agentic AI in the enterprise. Microsoft Purview, Entra, and Defender integrate to deliver consistent full-stack governance across data, identity, threat protection, and compliance.
Industry‑leading responsible AI infrastructureImplement responsible AI practices as a foundational part of engineering and operations, with transparency and fairness built in.Microsoft embeds its Responsible AI Standards into our engineering processes, supported by the Office of Responsible AI. Automatic generation of model cards and built-in fairness mechanisms set Microsoft apart as a strategic differentiator, pairing technical controls with mature governance processes. Microsoft’s Responsible AI Transparency Report provides visibility to how we develop and deploy AI models and systems responsibility and provides a model for customers to emulate our best practices.
Advanced security and real‑time protectionProvide robust, real-time defense against emerging AI security threats, especially for regulated industries.Microsoft’s platform features real-time jailbreak detection, encrypted agent-to-agent communication, tamper-evident audit logs for model and agent actions, and deep integration with Defender to provide AI-specific threat detection, security posture management, and automated incident response capabilities. These capabilities are especially critical for regulated sectors.
Automated compliance at scaleAutomate compliance processes, enable policy enforcement throughout the AI lifecycle, and support audit readiness across hybrid and multicloud environments.Microsoft Purview streamlines compliance adherence for regulatory requirements and provides comprehensive support for hybrid and multicloud deployments—giving customers repeatable and auditable governance processes.

We believe we are differentiated in the AI governance space by delivering a unified, end-to-end platform that embeds responsible AI principles and robust security at every layer—from agents and applications to underlying infrastructure. Through native integration of Microsoft Foundry, Microsoft Agent 365, Purview, Entra, and Defender, organizations benefit from centralized oversight and observability across the layers of the organization with consistent protection and operationalized compliance across the AI lifecycle. Our comprehensive approach removes disparate and disconnected tooling, enabling organizations to build trustworthy, transparent, and secure AI solutions that can start secure and stay secure. We believe this approach uniquely differentiates Microsoft as a leader in operationalizing responsible, secure, and auditable AI at scale.

Strengthen your security strategy with Microsoft AI governance solutions

Agentic and generative AI are reshaping business processes, creating a new frontier for security and governance. Organizations that act early and prioritize governance best practices—unified governance platforms, build-in responsible AI tooling, and integrated security—will be best positioned to innovate confidently and maintain trust.

Microsoft approaches AI governance with a commitment to embedding responsible practices and robust security at every layer of the AI ecosystem. Our AI governance and security solutions empower customers with built-in transparency, fairness, and compliance tools throughout engineering and operations. We believe this approach allows organizations to benefit from centralized oversight, enforce policies consistently across the entire AI lifecycle, and achieve audit readiness—even in the rapidly changing landscape of generative and agentic AI.

Explore more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms appeared first on Microsoft Security Blog.

Received — 16 January 2026 Microsoft Security Blog

Microsoft named an overall leader in KuppingerCole Leadership Compass for Generative AI Defense

15 December 2025 at 19:05

Today, we are proud to share that Microsoft has been recognized as an overall leader in the KuppingerCole Leadership Compass for Generative AI Defense (GAD), an independent report from a leading European analyst firm. This recognition reinforces the work we’ve been doing to deliver enterprise-ready Security and Governance capabilities for AI, and reflects our commitment to helping customers secure AI at scale.

Figure 1: KuppingerCole Generative AI Defense Leadership Compass chart highlighting Microsoft as the top Overall Leader, with other vendors including Palo Alto Networks, Cisco, F5, NeuralTrust, IBM, and others positioned as challengers or followers.

At Microsoft, our approach to Generative AI Defense is grounded in a simple principle: security is a core primitive which must be embedded everywhere – across AI apps, agents, platforms, and infrastructure. Microsoft delivers this through a comprehensive and integrated approach that provides visibility, protection, and governance across the full AI stack.

Our capabilities and controls help organizations address the most pressing challenges CISOs and security leaders face as AI adoption accelerates. We protect against agent sprawl and resource access with identity-first controls like Entra Agent ID and lifecycle governance, alongside network-layer controls that surface hidden shadow AI risks.  We prevent sensitive data leaks with Microsoft Purview’s real-time data loss prevention, classification, and inference safeguards. We defend against new AI threats and vulnerabilities with Microsoft Defender’s runtime protection, posture management, and AI-driven red teaming. Finally, we help organizations stay in compliance with evolving AI regulations with built-in support for frameworks like the EU AI Act, NIST AI RMF, and ISO 42001, so teams can confidently innovate while meeting governance requirements. Foundational security is also built into Microsoft 365 Copilot and Microsoft Foundry, with identity controls, data safeguards, threat protection, and compliance integrated from the start.

Guidance for Security Leaders and CISOs

For CISOs enabling their organizations to accelerate their AI transformation journeys, the following priorities are essential to building a secure, governed, and scalable AI foundation.  This guidance reflects a combination of key recommendations from KuppingerCole and Microsoft’s perspective on how we deliver on those recommendations:

CISO GuidanceWhat It MeansHow Microsoft Delivers
Map AI usage across the enterpriseEstablish full visibility into every AI tool, agent, and model in use to understand risk exposure and security requirements.Agent365 provides a unified registry for AI agents with full lifecycle governance. Foundry Control Plane gives developers full observability and governance of their entire AI fleet across clouds. And with integrated security signals and controls from signals from Microsoft Entra, Purview, and Defender, Security Dashboard for AI brings posture, configuration, and risk insights together into a single, comprehensive view of your AI estate.
Adopt identity-first controlsManage agents and other identities with the same rigor as privileged accounts, enforcing strong authentication, least privilege, and continuous monitoring.Microsoft Entra Agent ID assigns secure, unique identities to agents, applies conditional access policies, and enforces lifecycle controls to prevent agent sprawl and eliminate over-permissioned access.
Enforce data governance and DLP for AI interactionsProtect sensitive information to both inputs and outputs, applying consistent policies that align with evolving regulatory and compliance requirements.Microsoft Purview delivers real-time DLP for AI prompts and outputs, preserves sensitivity label, applies insider risk controls for agents, and provides compliance templates aligned with the EU AI Act, NIST AI RMF, ISO 42001, and more.
Build a layered GAD architectureCombine prompt security, model integrity monitoring, output filtering, and runtime protection instead of relying on any single control.Microsoft Defender provides runtime protection for agents, correlates threat signals, including those from Microsoft Foundry’s Prompt Shields, with threat intelligence, and strengthens security through posture management and attack path analysis for AI workloads.
Prioritize integrated, enterprise-ready solutionsChoose platforms that unify policy enforcement, monitoring, and compliance across environments to reduce operational complexity and improve security outcomes.Microsoft Security integrates capabilities across Microsoft Entra, Purview, and Defender, deeply integrated with Microsoft 365, Copilot Studio, and Foundry, providing centralized governance, consistent policy enforcement, and operationalized oversight across your AI ecosystem.

What differentiates Microsoft is the comprehensive set of security capabilities woven into the Microsoft AI agents, apps, and platform. Shared capabilities across Microsoft Entra, Purview, and Defender deliver consistent protection for IT, developers, and security teams, while tools such as Microsoft Agent 365, Foundry Control Plane, and Security Dashboard for AI integrate security and observability directly where AI applications and agents are built, deployed, and governed. Together, these capabilities, including our latest capabilities from Ignite, help organizations deploy AI securely, reduce operational complexity, and strengthen trust across their environment.

Closing Thoughts

Agentic AI is transforming how organizations work, and with that shift comes a new security frontier. As AI becomes embedded across business processes, taking a proactive approach to defense-in-depth, governance, and integrated AI security is essential. Organizations that act early will be better positioned to innovate confidently and maintain trust.

At Microsoft, we recognize that securing AI requires purpose-built, enterprise-ready protection. With Microsoft Security for AI, organizations can safeguard sensitive data, protect against emerging AI threats, detect and remediate vulnerabilities, maintain compliance with evolving regulations, and strengthen trust as AI adoption accelerates. In this rapidly evolving landscape, AI defense is not optional, it is foundational to protecting innovation and ensuring enterprise readiness.

Explore more

Updated Dec 15, 2025

Version 1.0

The post Microsoft named an overall leader in KuppingerCole Leadership Compass for Generative AI Defense appeared first on Microsoft Security Blog.

Received — 14 January 2026 Microsoft Security Blog

Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms

As organizations rapidly embrace generative and agentic AI, ensuring robust, unified governance has never been more critical. That’s why Microsoft is honored to be named a Leader in the 2025-2026 IDC MarketScape for Worldwide Unified AI Governance Platforms (Vendor Assessment (#US53514825, December 2025). We believe this recognition highlights our commitment to making AI innovation safe, responsible, and enterprise-ready—so you can move fast without compromising trust or compliance.

A graphic showing Microsoft's position in the Leaders section of the IDC report.
Figure 1. IDC MarketScape vendor analysis model is designed to provide an overview of the competitive fitness of technology and suppliers in a given market. The research methodology utilizes a rigorous scoring methodology based on both qualitative and quantitative criteria that results in a single graphical illustration of each supplier’s position within a given market. The Capabilities score measures supplier product, go-to-market and business execution in the short term. The Strategy score measures alignment of supplier strategies with customer requirements in a three- to five-year timeframe. Supplier market share is represented by the size of the icons.

The urgency for a unified AI governance strategy is being driven by stricter regulatory demands, the sheer complexity of managing AI systems across multiple AI platforms and multicloud and hybrid environments, and leadership concerns for risk related to negative brand impact. Centralized, end-to-end governance platforms help organizations reduce compliance bottlenecks, lower operational risks, and turn governance into a strategic driver for responsible AI innovation. In today’s landscape, unified AI governance is not just a compliance obligation—it is critical infrastructure for trust, transparency, and sustainable business transformation.

Our own approach to AI is anchored to Microsoft’s Responsible AI standard, backed by a dedicated Office of Responsible AI. Drawing from our internal experience in building, securing, and governing AI systems, we translate these learnings directly into our AI management tools and security platform. As a result, customers benefit from features such as transparency notes, fairness analysis, explainability tools, safety guardrails, regulatory compliance assessments, agent identity, data security, vulnerability identification, and protection against cyberthreats like prompt-injection attacks. These tools enable them to develop, secure, and govern AI that aligns with ethical principles and is built to help support compliance with regulatory requirements. By integrating these capabilities, we empower organizations to make ethical decisions and safeguard their business processes throughout the entire AI lifecycle.

Microsoft’s AI Governance capabilities aim to provide integrated and centralized control for observability, management, and security across IT, developer, and security teams, ensuring integrated governance within their existing tools. Microsoft Foundry acts as our main control point for model development, evaluation, deployment, and monitoring, featuring a curated model catalog, machine learning oeprations, robust evaluation, and embedded content safety guardrails. Microsoft Agent 365, which was not yet available at the time of the IDC publication, provides a centralized control plane for IT, helping teams confidently deploy, manage, and secure their agentic AI published through Microsoft 365 Copilot, Microsoft Copilot Studio, and Microsoft Foundry.

Deeply embedded security systems are integral to Microsoft’s AI governance solution. Integrations with Microsoft Purview provide real-time data security, compliance, and governance tools, while Microsoft Entra provides agent identity and controls to manage agent sprawl and prevent unauthorized access to confidential resources. Microsoft Defender offers AI-specific posture management, threat detection, and runtime protection. Microsoft Purview Compliance Manager automates adherence to more than 100 regulatory frameworks. Granular audit logging and automated documentation bolster regulatory and forensic capabilities, enabling organizations in regulated industries to innovate with AI while maintaining oversight, secure collaboration, and consistent policy enforcement.

Guidance for security and governance leaders and CISOs

To empower organizations in advancing their AI transformation initiatives, it is crucial to focus on the following priorities for establishing a secure, well-governed, and scalable AI framework. The guidance below provides Microsoft’s recommendations for fulfilling these best practices:

CISO guidanceWhat it meansHow Microsoft delivers
Adopt a unified, end‑to‑end governance platformEstablish a comprehensive, integrated governance system covering traditional machine learning, generative AI, and agentic AI. Ensure unified oversight from development through deployment and monitoring.Microsoft enables observability and governance at every layer across IT, developer, and security teams to provide an integrated and cohesive governance platform that enables teams to play their part from within the tools they use. Microsoft Foundry acts as the developer control plane, connecting model development, evaluation, security controls, and continuous monitoring. Microsoft Agent 365 is the control plane for IT, enabling discovery, security, deployment, and observability for agentic AI in the enterprise. Microsoft Purview, Entra, and Defender integrate to deliver consistent full-stack governance across data, identity, threat protection, and compliance.
Industry‑leading responsible AI infrastructureImplement responsible AI practices as a foundational part of engineering and operations, with transparency and fairness built in.Microsoft embeds its Responsible AI Standards into our engineering processes, supported by the Office of Responsible AI. Automatic generation of model cards and built-in fairness mechanisms set Microsoft apart as a strategic differentiator, pairing technical controls with mature governance processes. Microsoft’s Responsible AI Transparency Report provides visibility to how we develop and deploy AI models and systems responsibility and provides a model for customers to emulate our best practices.
Advanced security and real‑time protectionProvide robust, real-time defense against emerging AI security threats, especially for regulated industries.Microsoft’s platform features real-time jailbreak detection, encrypted agent-to-agent communication, tamper-evident audit logs for model and agent actions, and deep integration with Defender to provide AI-specific threat detection, security posture management, and automated incident response capabilities. These capabilities are especially critical for regulated sectors.
Automated compliance at scaleAutomate compliance processes, enable policy enforcement throughout the AI lifecycle, and support audit readiness across hybrid and multicloud environments.Microsoft Purview streamlines compliance adherence for regulatory requirements and provides comprehensive support for hybrid and multicloud deployments—giving customers repeatable and auditable governance processes.

We believe we are differentiated in the AI governance space by delivering a unified, end-to-end platform that embeds responsible AI principles and robust security at every layer—from agents and applications to underlying infrastructure. Through native integration of Microsoft Foundry, Microsoft Agent 365, Purview, Entra, and Defender, organizations benefit from centralized oversight and observability across the layers of the organization with consistent protection and operationalized compliance across the AI lifecycle. Our comprehensive approach removes disparate and disconnected tooling, enabling organizations to build trustworthy, transparent, and secure AI solutions that can start secure and stay secure. We believe this approach uniquely differentiates Microsoft as a leader in operationalizing responsible, secure, and auditable AI at scale.

Strengthen your security strategy with Microsoft AI governance solutions

Agentic and generative AI are reshaping business processes, creating a new frontier for security and governance. Organizations that act early and prioritize governance best practices—unified governance platforms, build-in responsible AI tooling, and integrated security—will be best positioned to innovate confidently and maintain trust.

Microsoft approaches AI governance with a commitment to embedding responsible practices and robust security at every layer of the AI ecosystem. Our AI governance and security solutions empower customers with built-in transparency, fairness, and compliance tools throughout engineering and operations. We believe this approach allows organizations to benefit from centralized oversight, enforce policies consistently across the entire AI lifecycle, and achieve audit readiness—even in the rapidly changing landscape of generative and agentic AI.

Explore more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

The post Microsoft named a Leader in IDC MarketScape for Unified AI Governance Platforms appeared first on Microsoft Security Blog.

Received — 11 January 2026 Microsoft Security Blog

Microsoft named an overall leader in KuppingerCole Leadership Compass for Generative AI Defense

15 December 2025 at 19:05

Today, we are proud to share that Microsoft has been recognized as an overall leader in the KuppingerCole Leadership Compass for Generative AI Defense (GAD), an independent report from a leading European analyst firm. This recognition reinforces the work we’ve been doing to deliver enterprise-ready Security and Governance capabilities for AI, and reflects our commitment to helping customers secure AI at scale.

Figure 1: KuppingerCole Generative AI Defense Leadership Compass chart highlighting Microsoft as the top Overall Leader, with other vendors including Palo Alto Networks, Cisco, F5, NeuralTrust, IBM, and others positioned as challengers or followers.

At Microsoft, our approach to Generative AI Defense is grounded in a simple principle: security is a core primitive which must be embedded everywhere – across AI apps, agents, platforms, and infrastructure. Microsoft delivers this through a comprehensive and integrated approach that provides visibility, protection, and governance across the full AI stack.

Our capabilities and controls help organizations address the most pressing challenges CISOs and security leaders face as AI adoption accelerates. We protect against agent sprawl and resource access with identity-first controls like Entra Agent ID and lifecycle governance, alongside network-layer controls that surface hidden shadow AI risks.  We prevent sensitive data leaks with Microsoft Purview’s real-time data loss prevention, classification, and inference safeguards. We defend against new AI threats and vulnerabilities with Microsoft Defender’s runtime protection, posture management, and AI-driven red teaming. Finally, we help organizations stay in compliance with evolving AI regulations with built-in support for frameworks like the EU AI Act, NIST AI RMF, and ISO 42001, so teams can confidently innovate while meeting governance requirements. Foundational security is also built into Microsoft 365 Copilot and Microsoft Foundry, with identity controls, data safeguards, threat protection, and compliance integrated from the start.

Guidance for Security Leaders and CISOs

For CISOs enabling their organizations to accelerate their AI transformation journeys, the following priorities are essential to building a secure, governed, and scalable AI foundation.  This guidance reflects a combination of key recommendations from KuppingerCole and Microsoft’s perspective on how we deliver on those recommendations:

CISO GuidanceWhat It MeansHow Microsoft Delivers
Map AI usage across the enterpriseEstablish full visibility into every AI tool, agent, and model in use to understand risk exposure and security requirements.Agent365 provides a unified registry for AI agents with full lifecycle governance. Foundry Control Plane gives developers full observability and governance of their entire AI fleet across clouds. And with integrated security signals and controls from signals from Microsoft Entra, Purview, and Defender, Security Dashboard for AI brings posture, configuration, and risk insights together into a single, comprehensive view of your AI estate.
Adopt identity-first controlsManage agents and other identities with the same rigor as privileged accounts, enforcing strong authentication, least privilege, and continuous monitoring.Microsoft Entra Agent ID assigns secure, unique identities to agents, applies conditional access policies, and enforces lifecycle controls to prevent agent sprawl and eliminate over-permissioned access.
Enforce data governance and DLP for AI interactionsProtect sensitive information to both inputs and outputs, applying consistent policies that align with evolving regulatory and compliance requirements.Microsoft Purview delivers real-time DLP for AI prompts and outputs, preserves sensitivity label, applies insider risk controls for agents, and provides compliance templates aligned with the EU AI Act, NIST AI RMF, ISO 42001, and more.
Build a layered GAD architectureCombine prompt security, model integrity monitoring, output filtering, and runtime protection instead of relying on any single control.Microsoft Defender provides runtime protection for agents, correlates threat signals, including those from Microsoft Foundry’s Prompt Shields, with threat intelligence, and strengthens security through posture management and attack path analysis for AI workloads.
Prioritize integrated, enterprise-ready solutionsChoose platforms that unify policy enforcement, monitoring, and compliance across environments to reduce operational complexity and improve security outcomes.Microsoft Security integrates capabilities across Microsoft Entra, Purview, and Defender, deeply integrated with Microsoft 365, Copilot Studio, and Foundry, providing centralized governance, consistent policy enforcement, and operationalized oversight across your AI ecosystem.

What differentiates Microsoft is the comprehensive set of security capabilities woven into the Microsoft AI agents, apps, and platform. Shared capabilities across Microsoft Entra, Purview, and Defender deliver consistent protection for IT, developers, and security teams, while tools such as Microsoft Agent 365, Foundry Control Plane, and Security Dashboard for AI integrate security and observability directly where AI applications and agents are built, deployed, and governed. Together, these capabilities, including our latest capabilities from Ignite, help organizations deploy AI securely, reduce operational complexity, and strengthen trust across their environment.

Closing Thoughts

Agentic AI is transforming how organizations work, and with that shift comes a new security frontier. As AI becomes embedded across business processes, taking a proactive approach to defense-in-depth, governance, and integrated AI security is essential. Organizations that act early will be better positioned to innovate confidently and maintain trust.

At Microsoft, we recognize that securing AI requires purpose-built, enterprise-ready protection. With Microsoft Security for AI, organizations can safeguard sensitive data, protect against emerging AI threats, detect and remediate vulnerabilities, maintain compliance with evolving regulations, and strengthen trust as AI adoption accelerates. In this rapidly evolving landscape, AI defense is not optional, it is foundational to protecting innovation and ensuring enterprise readiness.

Explore more

Updated Dec 15, 2025

Version 1.0

The post Microsoft named an overall leader in KuppingerCole Leadership Compass for Generative AI Defense appeared first on Microsoft Security Blog.

❌