We need to act with urgency to address the growing AI divide

Microsoft announces at the India AI Impact Summit it is on pace to invest $50 billion by the end of the decade to help bring AI to countries across the Global South

Artificial intelligence is diffusing at an impressive speed, but its adoption around the world remains profoundly uneven. As Microsoft’s latest AI Diffusion Report shows, AI usage in the Global North is roughly twice that of the Global South. And this divide continues to widen. This disparity impacts not only national and regional economic growth, but whether AI can deliver on its broader promise of expanding opportunity and prosperity around the world.

The India AI Impact Summit rightly has placed this challenge at the center of its agenda. For more than a century, unequal access to electricity exacerbated a growing economic gap between the Global North and South. Unless we act with urgency, a growing AI divide will perpetuate this disparity in the century ahead.

Solutions will not come easily. The needs are multifaceted, and will require substantial investments and hard work by governments, the private sector, and nonprofit organizations. But the opportunity is clear. If AI is deployed broadly and used well by a young and growing population, it offers a real prospect for catch-up economic growth for the Global South. It might even provide the biggest such opportunity of the 21st century.

As a company, we are committed to playing an ambitious and constructive role in supporting this opportunity. This week in Delhi, we’re sharing that Microsoft is on pace to invest $50 billion by the end of the decade to help bring AI to countries across the Global South. This is based on a five-part program to drive AI impact, consisting of the following:

  • Building the infrastructure needed for AI diffusion
  • Empowering people through technology and skills for schools and nonprofits
  • Strengthening multilingual and multicultural AI capabilities
  • Enabling local AI innovations that address community needs
  • Measuring AI diffusion to guide future AI policies and investments

One thing that is clear this week at the summit in India is that success will require many deep partnerships. These must span borders and bring people and organizations together across the public, private, and nonprofit sectors.

1. Building the infrastructure needed for AI diffusion

Infrastructure is a prerequisite for AI diffusion, requiring reliable electricity, connectivity, and compute capacity. To help address infrastructure gaps and support the growing needs of the Global South, Microsoft has steadily increased its investments in AI-enabling infrastructure across these regions. In our last fiscal year alone, Microsoft invested more than $8 billion in datacenter infrastructure serving the Global South. This includes new infrastructure in India, Mexico, and countries in Africa, South America, Southeast Asia, and the Middle East.

We’re coupling our investments in datacenters with an ambitious effort to help close the Global South’s connectivity divide. We’ve been aggressively pursuing a global goal to extend internet access to 250 million people in unserved and underserved communities in the Global South, including 100 million people in Africa.

As we announced in November, we’ve already reached 117 million people across Africa through partnerships with organizations such as Cassava Technologies, Mawingu, and others that are building last‑mile networks across rural and urban communities alike. We’re closing in on our global goal of reaching 250 million people and will share an update on that progress soon.

We’re investing in AI infrastructure with sensitivity to digital sovereignty needs. We recognize that in a fragmented world, we must offer customers attractive choices for the use of our offerings. This includes sovereign controls in the public cloud, private sovereign offerings, and close collaboration with national partners.

We pursue all this with commitments to protect cybersecurity, privacy, and resilience. In the age of AI, we ensure that our customers’ AI-based innovations and intellectual property remain in their hands and under their control, rather than being transferred to AI providers.

Critically, we balance our focus on national sovereignty with our efforts to support digital trust and stability across borders. The Global South requires enormous investments to fund infrastructure for datacenters, connectivity, and electricity. It is difficult to imagine meeting all these needs without foreign direct investment, including from international technology firms.

This need is part of what informed our announcement last week at the Munich Security Conference of the new Trusted Tech Alliance. This new partnership brings together 16 leading technology companies from 11 countries and four continents. We’ve agreed together that we will adhere to five core principles designed to ensure trust in technology. Ultimately, we believe the Global South—as well as the rest of the world—needs both to protect its digital sovereignty and benefit from new investments and the best digital innovations the world has to offer.

2. Empowering people through technology and skills for schools and nonprofits

Ultimately, datacenters, connectivity, and electricity provide only part of the digital infrastructure a nation needs. History shows that providing access to technology and technology skills is equally important for economic development.

As a company, we’re focused on this in multiple ways. One critical aspect of our work is based on programs to provide cloud, AI, and other digital technologies to schools and nonprofits across the Global South. Another is our work to advance broad access to AI skills. In our last fiscal year, Microsoft invested more than $2 billion in these programs in the Global South. This includes direct financial grants, technology donations, skilling programs, and below-market product discounts.

AI skills are foundational to ensuring that AI expands opportunity and enables people to pursue more impactful real-world applications. With the launch of Microsoft Elevate in July, we committed to helping 20 million people in and beyond the Global South earn in-demand AI skilling credentials by 2028. After training 5.6 million people across India in 2025, we advanced this work by setting a goal last December to equip 20 million people in India with essential AI skills by 2030.

As part of that commitment, today we are announcing the launch of Elevate for Educators in India to strengthen the capacity of two million teachers across more than 200,000 schools, vocational institutes, and higher education settings. Our goal is to help the country’s teaching workforce lead confidently in an AI‑driven future. The program will be delivered in partnership with India’s national education and workforce training authorities, expanding equitable AI opportunities for eight million students.

Through Microsoft Elevate, we’re also working to introduce new educator credentials and a global professional learning community that enables teachers to share best practices with peers worldwide. This effort will involve large-scale capacity building initiatives, including AI Ambassadors, Educator Academies, AI Productivity Labs, and Centers of Excellence. It will equip 25,000 institutions with inclusive AI infrastructure while integrating AI learning pathways into major government platforms.

3. Strengthening multilingual and multicultural AI capabilities

Language is another major barrier to AI diffusion across the Global South, particularly in regions where digitally underrepresented languages prevail and access to essential services depends on local-language communication. For billions of people worldwide, AI systems perform less consistently in the languages they rely on most than in English.

That’s why we’re announcing this week new steps to increase our investments across the AI lifecycle, from data and models to evaluation and deployment, to strengthen multilingual and multicultural capabilities and support more inclusive AI systems that will better serve the Global South.

First, we’re investing upstream in language data and model capability. This includes support for LINGUA Africa, which builds on what we learned through LINGUA Europe: that investing in language data and model capability in partnership with local communities can materially improve AI performance for underrepresented languages.

Through LINGUA Africa—a $5.5 million open call led by the Masakhane African Languages Hub, Microsoft’s AI for Good Lab, and the Gates Foundation, with additional support from the UK government—we are prioritizing open, responsibly sourced data across text, speech, and vision as well as use-case-driven AI model development. By enabling African languages in high-impact sectors like education, food security, health, and government services, LINGUA Africa aims to ensure AI advances translate into tangible improvements in people’s daily lives.

Second, we’re advancing multilingual and multicultural evaluation tools. We’re helping expand the MLCommons AILuminate benchmark to include major Indic and Asian languages, enabling more reliable measurement of AI safety and security beyond English.

Today, even when automated evaluation tools expand language coverage, they too often rely on machine translation or English-first model behavior, with predictable failures when local expressions shift meaning. Partnering with academic and government institutions in India, Japan, Korea, and Singapore, and with industry, Microsoft is co-leading AILuminate’s multilingual, multicultural, and multimodal expansion that builds from the ground up. With a pilot dataset of 7,000 high-quality text-and-image prompts for Hindi, Tamil, Malay, Japanese, and Korean, we’re developing tools that reflect how risks manifest in local linguistic and cultural contexts, not just how they appear after translation.

Microsoft Research is also advancing Samiksha, a community-centered method for evaluating AI behavior in real-world contexts, in collaboration with Karya and The Collective Intelligence Project in India. Samiksha encodes local language use, culturally specific communication norms, and locally relevant use cases directly into core testing artifacts, surfacing failure modes that English-first evaluations routinely miss.

Finally, we’re working to scale content provenance for linguistic diversity. For trusted AI deployment, the ecosystem benefits from tools to identify the provenance of digital content like images, audio, or video, distinguishing whether it’s AI-generated. With partners in the Coalition for Content Provenance and Authenticity (C2PA), Microsoft is helping extend content provenance standards beyond an English-ready baseline. This includes forthcoming support for multiple Indic languages across metadata, specifications, and UX guidance, alongside efforts to support mobile-first deployment. With these investments, hundreds of millions more people in India will be better equipped to identify synthetic media in their primary language.

4. Enabling local AI innovations that address community needs

As India’s guiding sutras for the AI Impact Summit recognize, AI must be applied to address pressing challenges in collaboration with people and organizations in the Global South. Microsoft’s increasing investments prioritize locally defined problems, locally grounded expertise, and real-world impact. Our goal is straightforward: to ensure that AI solutions are not only technically sound, but socially relevant and sustainable.

Today, Microsoft is announcing a new AI initiative to strengthen food security across Sub-Saharan Africa, starting in Kenya and designed to scale across the region. Across Global South communities, food security and sustainable agriculture are critical to resilience and progress. In collaboration with NASA Harvest, the government of Kenya, the East Africa Grain Council, UNDP AI Hub for Sustainable Development, and FAO, our AI for Good Lab will use AI on top of satellite data to provide critical, timely food security insights. This builds on what we’ve learned in helping to address rice farming challenges in India, where severe groundwater depletion prompted 150,000 farmers in Punjab to adopt water-saving methods. In collaboration with The Nature Conservancy, Microsoft’s AI for Good Lab developed a classification system with satellite imagery to empower policymakers to track adoption of sustainable rice farming practices, target interventions, and measure water management impacts at scale.

Through Project Gecko, Microsoft Research is also co-designing AI technologies with local communities in East Africa and South Asia to support agriculture. This work includes the Paza family of automatic speech recognition models that can operate on mobile devices across six Kenyan languages, multilingual Copilots, and a Multimodal Critical Thinking (MMCT) Agent that can reason over community-generated video, voice, and text. Microsoft also launched PazaBench—the first automatic speech recognition leaderboard, with initial coverage of 39 African languages—and developed two playbooks for multilingual and multicultural capabilities, Paza and Vibhasha. Likewise, our AI for Good Lab developed a reproducible pipeline for adapting open-weight large language models to low-resource languages, demonstrating measurable gains for languages such as Chichewa, Inuktitut, and Māori.

5. Measuring AI diffusion to guide future AI policies and investments

Finally, accelerating diffusion requires a firm understanding of where AI is being used, how it is being adopted, and where gaps persist. Building on our AI Diffusion Reports and Microsoft GitHub’s long track record of contributing to the OECD AI Policy Observatory, the WIPO Global Innovation Index, and other cross‑country analyses, we’re increasing our investments in research and data sharing to track AI diffusion.

We’re advancing new methods for sharing AI adoption metrics, drawing, for example, on models used in public code repositories hosted on Microsoft GitHub and on privacy-preserving aggregated usage signals from Azure Foundry. We’re scaling this work through contributions to the forthcoming Global AI Adoption Index developed by the World Bank.

Signals from the global developer community that builds, adapts, and deploys AI-enabled software round out adoption research. At 24 million, the Indian developer community is the second largest national community on GitHub, where developers learn about and collaborate with the world on AI. The Indian community is also the fastest growing among the top 30 largest economies, with growth at more than 26 percent each year since 2020 and a recent surge of over 36 percent in annual growth as of Q4 2025. Indian developers rank second globally in open-source contributions, second in GitHub Education users, and second in contributions to public generative AI projects; their readiness to use tools like GitHub Copilot across academic, enterprise, and public interest settings is enabling AI diffusion.

Insights from this evidence base help inform investments in infrastructure, language capabilities, skilling, or beyond, supporting more targeted and effective interventions to expand AI’s benefits. They also create a common empirical baseline to track progress over time—so AI diffusion becomes something we can measure and shape, not just observe.

Sustaining impact at scale through coordinated global action

For AI to diffuse broadly and deliver meaningful impact across regions, several conditions matter. As a company, we are focused on the need for accessible AI infrastructure, systems that work reliably in real-world contexts, and technologies that can be applied toward local challenges and opportunities. Microsoft is committed to working with partners to advance this work, including sharing data to track progress.

The post We need to act with urgency to address the growing AI divide appeared first on Microsoft On the Issues.


Password Managers Vulnerable to Vault Compromise Under Malicious Server

Researchers at ETH Zurich have tested the security of Bitwarden, LastPass, Dashlane, and 1Password password managers.

The post Password Managers Vulnerable to Vault Compromise Under Malicious Server appeared first on SecurityWeek.


Dior, Louis Vuitton, Tiffany Fined $25 Million in South Korea After Data Breaches

Luxury brands were among the dozens of major companies whose Salesforce instances were targeted by Scattered LAPSUS$ Hunters.

The post Dior, Louis Vuitton, Tiffany Fined $25 Million in South Korea After Data Breaches appeared first on SecurityWeek.


The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs

In this post, we explore how the psychological traps of operational security can unmask even the most sophisticated actors.

February 13, 2026

The threat intelligence landscape is often dominated by talk of sophisticated TTPs (tactics, techniques, and procedures), zero-day vulnerabilities, and ransomware. While these technical threats are formidable, they are still managed by human beings, and it is the human element that often provides the most critical breakthroughs in attributing these attacks and de-anonymizing the threat actors behind them.

In our latest webinar, “OPSEC Fails: The Secret Weapon for People-Centric OSINT”, Flashpoint was joined by Joshua Richards, founder of OSINT Praxis. Josh shared an intriguing case study where an attacker’s digital breadcrumbs led to a life-saving intervention.

Here is how OSINT techniques, combined with Flashpoint’s expansive data capabilities, can dismantle illegal threat actor campaigns by turning a technical investigation into a human one.

Leveraging OPSEC as a Mindset

In a technical context, OPSEC is a risk management process that identifies seemingly innocuous pieces of information that, when gathered by an adversary, could be pieced together to reveal a larger, sensitive picture.

In the webinar, we break down the OPSEC mindset into three core pillars that every practitioner, and threat actor, must navigate. When these pillars fail, the investigation begins.

  • Analyzing the Signature: Every human has a digital signature, such as the way they type (stylometry), the times they are active, and the tools they prefer.
  • Identity Masking & Persona Management: This involves ensuring that your investigative identity has zero overlap with your real life. A common failure includes using the same browser for personal use and investigative research, which allows cookies to bridge the two identities.
  • Traffic Obfuscation: Even with a VPN, certain behaviors such as posting on a dark web forum and then using that same connection to check personal banking can expose an IP address, linking it to a practitioner or threat actor.
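As a toy illustration of the first pillar, a coarse stylometric signature can be built from simple writing-habit features. This sketch is a reader-side illustration, not Flashpoint or OSINT Praxis tooling; the feature set and function-word list are illustrative assumptions.

```python
from collections import Counter

FUNCTION_WORDS = {"the", "a", "of", "to", "and", "is", "that"}  # illustrative subset

def stylometric_signature(text: str) -> dict:
    """Extract a coarse writing-habit fingerprint from a text sample."""
    words = text.split()
    sentences = [s for s in text.replace("!", ".").replace("?", ".").split(".") if s.strip()]
    return {
        "avg_word_len": sum(len(w) for w in words) / max(len(words), 1),
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # Habitual function-word frequencies are hard to disguise across personas.
        "top_function_words": Counter(
            w.lower() for w in words if w.lower() in FUNCTION_WORDS
        ).most_common(3),
    }

print(stylometric_signature("The cat sat. The cat ran."))
```

Comparing such fingerprints across a threat actor's personas is one way seemingly innocuous posts get pieced together into an attribution.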

“Effective OPSEC isn’t about the tools you use; it’s about what breadcrumbs you are leaving behind that hackers, investigation subjects, or literally anyone could find about you.”

Joshua Richards, founder of OSINT Praxis

Leveraging the Mindset for CTI

Understanding the OPSEC mindset allows security teams to think like the target. When we know the psychological traps attackers fall into, we know exactly where to look for their mistakes.

  • Insignificant. The mindset trap: “I’m not a high-value target; no one is looking for me.” The investigative reality: Automated Aggression. Hackers use scripts to scan millions of accounts. You aren’t “chosen”; you are “discovered” via automation.
  • Invisible. The mindset trap: “I don’t have a LinkedIn or X account, so I don’t have a footprint.” The investigative reality: Shadow Data. Public birth records, property taxes, and historical data breaches create a footprint you didn’t even build yourself.
  • Invincible. The mindset trap: “I have 2FA and complex passwords; I’m unhackable.” The investigative reality: Session Hijacking. Infostealer malware steals “session tokens” (cookies). This allows an actor to be you in a browser without ever needing your 2FA code.

During the webinar, Joshua shares a masterclass in how leveraging these concepts can turn a vague dark web threat into a real-world arrest. Check out the on-demand webinar to see exactly how the investigation started on Torum, a dark web forum, and ended with an arrest that saved the lives of two individuals.

Turn the Tables Using Flashpoint

The insights shared in this session powerfully illustrate that even the most dangerous threat actors are rarely as anonymous as they believe. Their downfall isn’t usually a failure of their technical prowess, but a failure of their mindset. By understanding these OSINT techniques, intelligence practitioners can transform a sea of digital noise into a clear path toward attribution.

The most effective way to dismantle threats is to bridge the gap between technical indicators and human behavior. Whether your teams are conducting high-stakes OSINT or protecting your own organization’s digital footprint, every breadcrumb counts. By leveraging Flashpoint’s expansive threat intelligence collections and real-time data, you can stay one step ahead of adversaries. Request a demo to learn more.

Request a demo today.

The post The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs appeared first on Flashpoint.


N-Day Vulnerability Trends: The Shrinking Window of Exposure and the Rise of “Turn-Key” Exploitation


In this post we explore the data-driven shrinkage of the Time to Exploit (TTE) window from 745 days to just 44, and examine why N-day vulnerabilities have become the “turn-key” weapon of choice for modern threat actors.

February 11, 2026

The race between defenders and threat actors has entered a new, more volatile phase: the rapidly accelerating exploitation of N-day vulnerabilities. Different from zero-days, N-day vulnerabilities are known security flaws that have been publicly disclosed but remain unpatched or unmitigated on an organization’s systems.

Historically, enterprises operated under the assumption of a “patching grace period,” the designated window of time allowed to test and deploy a vendor’s fix before a system is considered non-compliant or at high risk. However, this window is effectively collapsing, with Flashpoint finding that N-days now represent over 80% of all Known Exploited Vulnerabilities (KEVs) tracked over the past four years.

The Collapse of the Time to Exploit (TTE) Window

The most sobering trend for security operations (SecOps) and exposure management teams is the dramatic reduction in Time to Exploit (TTE). In 2020, the average TTE, the time between a vulnerability’s disclosure and its first observed exploitation, was 745 days. By 2025, Flashpoint found that this window has now plummeted to an average of just 44 days.

Year                  2020   2021   2022   2023   2024   2025
Average TTE (days)     745    518    405    296    115     44
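Using the figures above, the contraction can be quantified in a few lines. This is a reader-side calculation from the published averages, not part of the Flashpoint analysis:

```python
# Average Time to Exploit (days) per year, taken from the table above.
tte = {2020: 745, 2021: 518, 2022: 405, 2023: 296, 2024: 115, 2025: 44}

years = sorted(tte)
for prev, curr in zip(years, years[1:]):
    drop = (tte[prev] - tte[curr]) / tte[prev] * 100
    print(f"{prev} -> {curr}: {tte[curr]:>3} days ({drop:.0f}% drop)")

# Total shrinkage of the exposure window over the five-year span.
overall = (tte[2020] - tte[2025]) / tte[2020] * 100
print(f"2020 -> 2025: {overall:.0f}% overall reduction")  # roughly 94%
```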

This contraction represents a strategic shift in adversary tempo. Attackers are no longer waiting for complex, bespoke exploits; they are moving at breakneck speeds to weaponize public disclosures.

N-Days Provide a “Turn-Key” Exploit Advantage

Adversaries have gained a significant advantage through the rapid weaponization of researcher-published Proof-of-Concept (PoC) code. When a fully functional exploit is released alongside a vulnerability disclosure, it becomes a “turn-key” solution for attackers. By combining these ready-made exploits with internet-wide scanning tools like Shodan or FOFA, even unsophisticated threat actors can conduct mass exploitation across large segments of the internet in hours.

A prime example of this path of least resistance approach was observed in the leaked internal chat logs of the BlackBasta ransomware group. Analysis revealed that of the 65 CVEs discussed by the group, 54 were already known KEVs. Rather than spending resources on original zero-day research, threat actors are simply leveraging known, yet unpatched and exploitable vulnerabilities for their campaigns.

Defensive Software is a Primary Target for N-Days

The very software designed to protect the enterprise (firewalls, VPN gateways, and edge networking devices) is consistently the most targeted category for both N-day and zero-day exploitation.

Because cybersecurity devices must be internet-facing to function, they provide a constant, unauthenticated attack surface. In 2025 alone, Flashpoint observed 37 N-days and 52 zero-days specifically targeting security and perimeter software. The requirement for these systems to remain open to external traffic means they will continue to be disproportionately targeted by advanced persistent threat (APT) groups and cybercriminals alike.

Attributing N-Day Attacks

While tracking the “how” of an attack is critical, tracking who is responsible remains a fragmented challenge for the industry. Attribution is often hampered by naming fatigue, where different vendors assign their own monikers to the same actor. For instance, the widely known threat actor group Lazarus has over 40 distinct designations across the industry, including “Diamond Sleet,” “NICKEL ACADEMY,” and “Guardians of Peace.”

Despite these naming complexities, global activity patterns remain clear. China remains the most active nation-state actor in the vulnerability exploitation space, consistently outpacing Russia, Iran, and North Korea in both the volume and scope of its campaigns.

Obstacles for Enterprise Security: Asset Blindness and the CVE Dependency Trap

Why are organizations struggling to keep pace? The primary factor isn’t a lack of effort, but a lack of visibility.

1. The Asset Inventory Gap

The single greatest breakthrough an enterprise can achieve is not a new AI tool, but a complete asset inventory. Most large organizations are lucky to have an accurate inventory of even 25% of their total assets. Without knowing what you own, vulnerability scans can take days or weeks to return results covering exposures the adversary is already probing.
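At its core, closing this gap is a set-difference problem: reconcile what you believe you own against what is actually live. A minimal sketch, with hypothetical data sources (a CMDB export and a network scan, not any specific product's API):

```python
# Hosts the organization believes it owns (e.g., exported from a CMDB).
inventory = {"10.0.0.5", "10.0.0.8", "10.0.1.12"}

# Hosts actually observed responding on the network (e.g., from a scan).
observed = {"10.0.0.5", "10.0.1.12", "10.0.2.40", "10.0.2.41"}

unknown_assets = observed - inventory   # live but uninventoried: unscanned exposure
stale_records  = inventory - observed   # inventoried but unseen: stale entries

print("Unknown assets:", sorted(unknown_assets))
print("Stale records: ", sorted(stale_records))
```

Run continuously rather than quarterly, this reconciliation is what turns a partial inventory into a map of what the adversary can actually reach.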

2. The CVE Blindspot

Most traditional security tools are CVE-dependent. However, thousands of vulnerabilities are disclosed every year that never receive an official CVE ID. These “missing” vulnerabilities represent a massive blindspot for standard scanners. Intelligence-led exposure management requires looking beyond the CVE ecosystem into proprietary databases like Flashpoint’s VulnDB™, which tracks over 105,000 vulnerabilities that public sources miss.

Move Towards Intelligence-Led Exposure Management Using Flashpoint

To survive in an era where weaponization can happen in under 24 hours, organizations must shift from reactive patching to a threat-informed and proactive security approach. This means:

  • Prioritizing by Exploitability and Threat Actor Activity: Focus on vulnerabilities that are remotely exploitable and have known public exploits, rather than just high CVSS scores.
  • Adopting an Asset-Inventory Approach: Moving away from slow, periodic scans in favor of continuous asset mapping that allows for immediate triage.
  • Operationalizing Intelligence: Embedding real-time threat data directly into SOC and IR workflows to reduce the “mean time to action.”
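The first point, ranking by exploitability and observed actor activity rather than raw CVSS, can be sketched as a simple sort key. The field names and weights below are illustrative assumptions, not a Flashpoint scoring model:

```python
def priority(vuln: dict) -> float:
    """Threat-informed priority: exploit availability and actor chatter
    outweigh the raw CVSS base score (weights are illustrative)."""
    score = vuln["cvss"]                      # baseline severity, 0-10
    if vuln.get("public_exploit"):
        score += 5                            # turn-key weaponization risk
    if vuln.get("remotely_exploitable"):
        score += 3                            # reachable attack surface
    if vuln.get("actor_discussion"):
        score += 4                            # observed adversary interest
    return score

vulns = [
    {"id": "VULN-A", "cvss": 9.8, "public_exploit": False,
     "remotely_exploitable": False, "actor_discussion": False},
    {"id": "VULN-B", "cvss": 7.2, "public_exploit": True,
     "remotely_exploitable": True, "actor_discussion": True},
]
ranked = sorted(vulns, key=priority, reverse=True)
print([v["id"] for v in ranked])
```

Note how the lower-CVSS flaw with a public exploit and actor chatter outranks the higher-CVSS one with neither, which is the point of a threat-informed queue.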

The goal of exposure management is to look at your organization through the adversary’s lens. By understanding which N-days threat actors are actually discussing and weaponizing in the wild, defenders can finally start to close the window of exposure before a potential compromise can occur.

Flashpoint’s vulnerability threat intelligence can help your organization go from reactive to proactive. Request a demo today and gain access to quality vulnerability intelligence that enables intelligence-led exposure management.

Request a demo today.

The post N-Day Vulnerability Trends: The Shrinking Window of Exposure and the Rise of “Turn-Key” Exploitation appeared first on Flashpoint.


Building a safer digital future, together

As we mark Safer Internet Day 2026, we’re reflecting on a simple but enduring principle: safety must be designed into online services, not bolted on. Microsoft’s work in this space spans more than two decades—from technology solutions like PhotoDNA to our investments in responsible gaming, public-private partnerships, and empowering users through education. This foundation guides our approach as we help individuals and families navigate a rapidly evolving landscape shaped by new technologies and new risks and as we innovate with next-generation AI offerings. At a moment when 91% of people tell us they worry about harms introduced by AI, our commitment to responsible innovation has never been more important—especially for our youngest users.

Read on for more about our longstanding efforts to create a safer digital environment, plus key findings from our Global Online Safety Survey and new examples of our work to empower families and communities through tools, research, and educational resources, including the latest release in Minecraft Education’s CyberSafe series.

Ten years of safety research 

2026 marks the tenth year of our annual Global Online Safety Survey research. For a decade, we have invested in surveying teens and adults around the world about their experiences and perceptions of life online, aiming to provide fresh insights to support our collective work. That’s 130,000+ interviews across 37 countries, with the results available on our website. Ten years later, respondents tell us that they feel more connected and more productive, but less safe online.

This year’s Global Online Safety Survey also highlights the complexity of the digital environment young people now inhabit. Teens’ exposure to risk rose again, with hate speech (35%), scams (29%), and cyberbullying (23%) among the most commonly experienced harms. At the same time, teens demonstrated striking resilience: 72% talked to someone after experiencing a risk, and reporting behavior increased for the second consecutive year. But worries about the misuse of AI continue, underscoring again why safety-by-design for AI is essential, not optional. Find the full results and country-level summaries here.

Year on year, the research has told a story of evolving online safety risks and of the real-world impact. In 2026, the call to action is more urgent than ever: unless industry can deliver safe and age-appropriate experiences, young people risk losing access to technology. At Microsoft, across teams from Windows to Xbox, we have sought to continuously evolve our approach and to lead industry in advancing tailored and thoughtful safety solutions.

Evolving to meet the moment 

Looking ahead, we know we need to continue to build strong guardrails to tackle acute risks and to leverage our experience while being informed by new research, new perspectives, and new technologies. The application process closed yesterday for our first AI Futures Youth Council, to be composed of teens from across the US and EU. We’re looking forward to bringing those teens together soon for a first meeting to get their direct feedback on the role they want emerging technology to play in their lives and how we can best support their safety.

Microsoft has partnered with Cyberlite on a second youth-centered initiative to understand how teens aged 13–17 are engaging with AI companions. Through co-design workshops with students in India and Singapore, we’re capturing young people’s own perspectives on the benefits, risks, and emotional dimensions of AI use—insights that will directly inform educational resources for teens, parents, and educators. Early findings from the first workshop in December 2025 show that young people value AI as a judgment-free space while also recognizing the tradeoffs: privacy risks, overreliance, and erosion of critical thinking loom larger for them than bad advice.

We’re also thinking about how we define safety in the next era of Windows, leveraging the Family Safety controls that have been integrated for over a decade. As many countries have raised the local age for digital consent, more parents will have the option to enable parental controls for teens up to the age of 18—leveraging these tools as part of a holistic approach to digital parenting. And to help parents set up and understand Family Safety, we’ve developed a short new guide. 

Safety is also about transparency, empowerment, and education. At Xbox, bringing the joy of gaming to everyone means remaining transparent about the many ways we innovate so players, parents, and caregivers can feel confident that Xbox continues to be a place for positive play. You can read more about our recently published Xbox Transparency Report and the tools and resources available to players on the Xbox Wire blog.

We’re also excited to announce the latest release in Minecraft Education’s CyberSafe series: CyberSafe: Bad Connection? This series of immersive Minecraft worlds and educational resources is free and helps translate complex risks into fun learning experiences that meet young people in their favorite blocky world. Bad Connection?—the fifth in the series—reflects our commitment to evolving to meet new and challenging risks, with a focus on tackling serious risks related to online recruitment and radicalization. Learn more about how to access this new Minecraft world here.  

The CyberSafe series has reached more than 80 million downloads since 2022 through a partnership between Minecraft Education, Xbox, and Microsoft, helping a generation of young players build the agency, resilience, and digital citizenship they need to navigate an increasingly online world. As part of our commitment to ensure people have the knowledge and skills they need to benefit from technology and stay safe, Microsoft Elevate is empowering educators and students with tools and guidance to build safer, more responsible digital habits, recognizing that AI is transforming how people learn, work, and connect. Our commitment to helping young people access technology safely is also why we’ve partnered with organizations, like the National 4-H Council to prepare young people for an AI-powered world through AI literacy and digital safety curriculum and game-based learning with Minecraft Education. 

As we look ahead, our goal is clear: build technology that is safe by design, guided by evidence, and informed through partnership. The internet has changed profoundly over the past decade, and so too have the expectations of the people who use it. Safer Internet Day is a reminder that progress requires sustained collaboration across industry, civil society, researchers, and families.

—  

Global Online Safety Survey Methodology 

Microsoft has published annual research since 2016 that surveys how people of varying ages use and view online technology. This latest consumer-based report is based on a survey of nearly 15,000 teens (13–17) and adults that was conducted this past summer in 15 countries, examining people’s attitudes and perceptions about online safety tools and interactions. Responses to online safety differ depending on the country. Full results can be accessed here.

 

The post Building a safer digital future, together appeared first on Microsoft On the Issues.
