Building a safer digital future, together

10 February 2026 at 06:01

As we mark Safer Internet Day 2026, we’re reflecting on a simple but enduring principle: safety must be designed into online services, not bolted on. Microsoft’s work in this space spans more than two decades—from technology solutions like PhotoDNA to our investments in responsible gaming, public-private partnerships, and empowering users through education. This foundation guides our approach as we help individuals and families navigate a rapidly evolving landscape shaped by new technologies and new risks and as we innovate with next-generation AI offerings. At a moment when 91% of people tell us they worry about harms introduced by AI, our commitment to responsible innovation has never been more important—especially for our youngest users.

Read on for more about our longstanding efforts to create a safer digital environment, plus key findings from our Global Online Safety Survey and new examples of our work to empower families and communities through tools, research, and educational resources, including the latest release in Minecraft Education’s CyberSafe series.

Ten years of safety research 

2026 marks the tenth year of our annual Global Online Safety Survey research. For a decade, we have invested in surveying teens and adults around the world about their experiences and perceptions of life online, aiming to provide fresh insights to support our collective work. That’s 130,000+ interviews across 37 countries, with the results available on our website. Ten years later, respondents tell us that they feel more connected and more productive, but less safe online.

This year’s Global Online Safety Survey also highlights the complexity of the digital environment young people now inhabit. Teens’ exposure to risk rose again, with hate speech (35%), scams (29%), and cyberbullying (23%) among the most commonly experienced harms. At the same time, teens demonstrated striking resilience: 72% talked to someone after experiencing a risk, and reporting behavior increased for the second consecutive year. But worries about the misuse of AI continue, underscoring again why safety-by-design for AI is essential, not optional. Find the full results and country-level summaries here.

Year on year, the research has told a story of evolving online safety risks and of their real-world impact. In 2026, the call to action is more urgent than ever: unless industry can deliver safe and age-appropriate experiences, young people risk losing access to technology. At Microsoft, across our teams from Windows to Xbox, we have sought to continuously evolve our approach and to lead industry in advancing tailored and thoughtful safety solutions.

Evolving to meet the moment 

Looking ahead, we know we need to continue to build strong guardrails to tackle acute risks and to leverage our experience while being informed by new research, new perspectives, and new technologies. The application process closed yesterday for our first AI Futures Youth Council, which will be composed of teens from across the US and EU. We’re looking forward to bringing those teens together soon for a first meeting to get their direct feedback on the role they want emerging technology to play in their lives and how we can best support their safety.

Microsoft has partnered with Cyberlite on a second youth-centered initiative to understand how teens aged 13–17 are engaging with AI companions. Through codesign workshops with students in India and Singapore, we’re capturing young people’s own perspectives on the benefits, risks, and emotional dimensions of AI use—insights that will directly inform educational resources for teens, parents, and educators. Early findings from the first workshop in December 2025 show that young people value AI as a judgment-free space while also recognizing the tradeoffs: privacy risks, overreliance, and erosion of critical thinking loom larger for them than bad advice.

We’re also thinking about how we define safety in the next era of Windows, leveraging the Family Safety controls that have been integrated for over a decade. As many countries have raised the local age for digital consent, more parents will have the option to enable parental controls for teens up to the age of 18—leveraging these tools as part of a holistic approach to digital parenting. And to help parents set up and understand Family Safety, we’ve developed a short new guide. 

Safety is also about transparency, empowerment, and education. At Xbox, bringing the joy of gaming to everyone means remaining transparent about the many ways we innovate so players, parents, and caregivers can feel confident that Xbox continues to be a place for positive play. You can read more about our recently published Xbox Transparency Report and the tools and resources available to players on the Xbox Wire blog.

We’re also excited to announce the latest release in Minecraft Education’s CyberSafe series: CyberSafe: Bad Connection? This series of immersive Minecraft worlds and educational resources is free and helps translate complex risks into fun learning experiences that meet young people in their favorite blocky world. Bad Connection?—the fifth in the series—reflects our commitment to evolving to meet new and challenging risks, with a focus on tackling serious risks related to online recruitment and radicalization. Learn more about how to access this new Minecraft world here.  

The CyberSafe series has reached more than 80 million downloads since 2022 through a partnership between Minecraft Education, Xbox, and Microsoft, helping a generation of young players build the agency, resilience, and digital citizenship they need to navigate an increasingly online world. As part of our commitment to ensure people have the knowledge and skills they need to benefit from technology and stay safe, Microsoft Elevate is empowering educators and students with tools and guidance to build safer, more responsible digital habits, recognizing that AI is transforming how people learn, work, and connect. Our commitment to helping young people access technology safely is also why we’ve partnered with organizations like the National 4-H Council to prepare young people for an AI-powered world through AI literacy and digital safety curriculum and game-based learning with Minecraft Education.

As we look ahead, our goal is clear: build technology that is safe by design, guided by evidence, and informed through partnership. The internet has changed profoundly over the past decade, and so too have the expectations of the people who use it. Safer Internet Day is a reminder that progress requires sustained collaboration across industry, civil society, researchers, and families.

—  

Global Online Safety Survey Methodology 

Microsoft has published annual research since 2016 that surveys how people of varying ages use and view online technology. This latest consumer-based report is based on a survey of nearly 15,000 teens (13–17) and adults conducted this past summer in 15 countries, examining people’s attitudes and perceptions about online safety tools and interactions. Responses to online safety differ depending on the country. Full results can be accessed here.

 

The post Building a safer digital future, together appeared first on Microsoft On the Issues.

Uplifting and empowering young people for an AI future

12 November 2025 at 18:02

Today, I had the pleasure of joining a range of leaders for timely, impactful discussions on child well-being in the age of AI at the Vatican, building on thoughtful conversations held during the United Nations General Assembly. These issues are top of mind globally, from parents to policymakers to physicians.

At Microsoft, we remain focused on our goal of empowering young people to use technology safely, mindfully, and in pursuit of social, educational, and economic opportunities. That means taking new steps spurred by regulation, such as new age verification measures for our UK Xbox users, as well as adapting our longstanding commitments to responsible AI and child online safety and privacy to build trust in the AI era. Today, we’re sharing new research on youth perspectives, announcing the AI Futures Youth Council to amplify teen voices, and offering policy recommendations to help families navigate the digital world with confidence.

Centering young people’s voices: Announcing the AI Futures Youth Council and new age assurance research

In 2017, Microsoft led the industry with our first Council for Digital Good—a forum where we could hear directly from young people about their experiences and perceptions of online risk. In 2025, with AI reshaping our world—and their future—we again need to center the voices of young people as we think about responsible design for AI and how we set students up for the future. We are actively working with teens from the Asia-Pacific region to develop our first “for teens, by teens” guide to AI chatbots. Today, I’m pleased to announce the upcoming launch of our first “AI Futures Youth Council,” bringing together teens from the US and Europe to have their say on their future. We’ll share more about the application process soon.

We know that a critical precursor to providing young people positive and productive online experiences is understanding which users are young people. Around the globe, the debate over how to achieve age assurance online continues unabated. We have been grateful to work with CIPL and the WeProtect Global Alliance over the last year to explore how to achieve improved age assurance that is consistent with fundamental rights of privacy and access to information. As with any other safety intervention, our goal is to be proportionate and thoughtful where we take new steps, which is why we have focused on gaming in the first instance—reflecting the responsibilities we have to our youngest users and our ongoing commitment to player safety.

To inform our strategy and the broader policy conversation, we partnered with Praesidio Safeguarding to better understand youth perspectives on age assurance approaches across the UK, Ghana, and Indonesia. We are pleased to share that research today. The findings reinforce the importance of transparency, choice, and trust: teens want clear explanations of how their data is used, express concerns about exclusion where formal proof of age is lacking, and show varying comfort levels with the use of biometric and behavioral data. Notably, young people value parental involvement but also highlight the need for independence and privacy as they mature. The results also highlight some of the important differences across geographies. For example, teenagers in Ghana often not only share devices with their families but may also share an account—underscoring a need for nuanced global approaches at multiple layers of the technology stack.

These insights underscore our belief that proportionality—matching safeguards to actual risks—is essential to building trust and empowering youth online. They also highlight the need for age assurance models that are inclusive, flexible, and respectful of youth autonomy—especially in global contexts where device and account sharing are common. We remain committed to ongoing dialogue and innovation, ensuring that our solutions evolve alongside the needs and expectations of children, families, and society at large.

Our policy recommendations: Empower young people to use technology safely

We believe technology should empower young people, not put them at risk. Given the diverse range of online services, it is important to remember there is no single “digital seatbelt” to protect and empower young people online.

We therefore offer the following recommendations as policymakers, regulators, and experts continue to discuss these issues, building on our 2024 blog:

  • Avoid blanket access restrictions. Age assurance requirements that block full access to a service—except in limited cases like sites dedicated to age-restricted content (e.g., pornography)—can unintentionally limit child rights, such as access to information. Instead, age assurance should be applied at the service level, target specific design features that pose heightened risks, and enable tailored experiences for children.
  • Focus on the highest risks for impact, such as content and features associated with documented harms to children, and as determined through democratic processes. Providers should take steps to assess and mitigate risks to children on their services, while ensuring documentation requirements or compliance obligations do not inadvertently undermine safety. A risk-based and proportionate approach—grounded in clear criteria and supported by interoperable standards—can also help ensure that age assurance is applied where most needed, without introducing unnecessary friction. Providers of high-risk services should bear the responsibility of age assurance.
  • Strengthen safeguards for AI companions. Recent tragic events have highlighted the need for continued care in developing AI companions, especially where these may be used by young people. At Microsoft, we are building AI services for empowerment and want the right guardrails in place to protect all users but welcome new, commonsense measures such as those enacted in California and Australia to reduce the potential harms related to suicide and self-injury risks, as well as to sexualized or violent content. We will continue to work closely with researchers and experts to understand and mitigate potential risks to young people in this fast-evolving field.
  • Incentivize age-appropriate design. Banning kids from online services isn’t the answer, but what constitutes an “age-appropriate” experience will vary. We have supported a duty of care approach to child safety where the duty can be implemented flexibly, guided by thoughtful and evidence-based regulatory guidance. Ongoing research and expert engagement are needed to understand how to advance child safety and rights on diverse services—not just social media.
  • Protect the privacy and security of all users. Tailoring age assurance requirements will help enable proportionate approaches to data processing. Current proposals for age verification by app stores risk creating significant privacy risks by collecting sensitive information and sharing unnecessary age data with a wide variety of services while also not solving the challenges lawmakers want to address. We continue to support federal privacy legislation in the US and encourage global efforts to develop standards and certifications for age assurance providers. Trusted credential sharing can also increasingly be enabled by emerging digital identity ecosystems—including government-issued IDs and wallet-based models—that preserve mutual privacy between issuers and relying parties.
  • Support, not overwhelm. Our Global Online Safety Survey results show that while parents might underestimate the risks teens face online, teens are most likely to turn to a parent for help. Parents should not face a deluge of notifications nor bear the sole responsibility for safety, but should have access, awareness, and education on family safety tools that can help them make informed choices appropriate for their family and their values.
  • Foster multistakeholder collaboration. We believe it’s essential to elevate the voices and perspectives of young people, as well as for regulators and industry to engage with civil society and partner to advance practical solutions. As child safety regulations come into force, it will also be important to get feedback from affected communities on where regulation may have adverse rights impacts, as well as to understand where harm may have been averted. Public education will be needed to help all users understand why their online experiences might be changing.

We will continue learning, listening, and collaborating, especially with our new Council, and look forward to sharing our insights.

The post Uplifting and empowering young people for an AI future appeared first on Microsoft On the Issues.