Today, Microsoft is announcing a coordinated legal action in the United States and, for the first time, the United Kingdom to disrupt RedVDS, a global cybercrime subscription service fueling millions in fraud losses. These efforts are part of a broader joint operation with international law enforcement, including German authorities and Europol, which has allowed Microsoft and its partners to seize key malicious infrastructure and take the RedVDS marketplace offline, a major step toward dismantling the networks behind AI-enabled fraud, such as real estate scams.
For as little as US$24 a month, RedVDS provides criminals with access to disposable virtual computers that make fraud cheap, scalable, and difficult to trace. Services like these have quietly become a driving force behind today’s surge in cyber‑enabled crime, powering attacks that harm individuals, businesses, and communities worldwide. Since March 2025, RedVDS‑enabled activity has driven roughly US$40 million in reported fraud losses in the United States alone. Among the victims is H2-Pharma, an Alabama‑based pharmaceutical company that lost more than $7.3 million—money intended to sustain lifesaving cancer treatments, mental health medications, and children’s allergy drugs for patients across the country. In a separate case, the Gatehouse Dock Condominium Association in Florida was defrauded of nearly $500,000—funds contributed by residents and property owners for essential repairs. Both organizations are joining Microsoft as co‑plaintiffs in this civil action.
But these cases represent only a fraction of the harm. Fraud and scams frequently go unreported, victims are global, and cybercriminals routinely pivot across platforms and service providers. For the individual, fraud has lasting effects that extend beyond financial loss to emotional wellbeing, health, relationships, and long-term stability. As a result, the true toll of RedVDS‑enabled activity is far higher than the roughly US$40 million Microsoft can directly observe.
What RedVDS is—and why it matters
RedVDS is an online subscription service that is part of the growing cybercrime-as-a-service ecosystem where cybercriminals buy and sell services and tools to launch attacks at scale. It provides access to cheap, effective, and disposable virtual computers running unlicensed software, including Windows, allowing criminals to operate quickly, anonymously, and across borders.
A screenshot of RedVDS’s user dashboard, including a loyalty program and referral bonuses for customers.
Cybercriminals use RedVDS for a wide range of activities, including sending high‑volume phishing emails, hosting scam infrastructure, and facilitating fraud schemes. RedVDS is frequently paired with generative AI tools that help identify high‑value targets faster and generate more realistic, multimedia-rich email threads that mimic legitimate correspondence. In hundreds of cases, Microsoft has observed attackers further augment their deception by leveraging face-swapping, video-manipulation, and voice-cloning AI tools to impersonate individuals and deceive victims.
In just one month, more than 2,600 distinct RedVDS virtual machines sent an average of one million phishing messages per day to Microsoft customers alone. While most were blocked or flagged as part of the 600 million cyberattacks Microsoft blocks per day, the sheer volume meant a small percentage may have succeeded in reaching the targets’ inboxes. Since September 2025, RedVDS‑enabled attacks have led to the compromise or fraudulent access of more than 191,000 organizations worldwide. These figures represent only a subset of the impacted accounts across all technology providers, illustrating how quickly this infrastructure increases the scale of cyberattacks.
Global density of compromised Microsoft email accounts using RedVDS from September 2025 through December 2025. The top five impacted countries are the United States, Canada, the United Kingdom, France, and India.
How RedVDS enables fraud
One of the most common ways RedVDS‑enabled attacks result in financial loss is through payment diversion fraud, also known as business email compromise, or “BEC.” In these schemes, attackers gain unauthorized access to email accounts, quietly monitor ongoing conversations, and wait for the right moment, such as an upcoming payment or wire transfer. At that point, they impersonate a trusted party and redirect funds, often moving the money within seconds. Both H2-Pharma and the Gatehouse Dock Condominium Association were targeted through sophisticated BEC schemes that exploited trust and timing.
BEC attack chain powered by RedVDS.
Sample impersonation email with fraudulent payment instructions.
RedVDS has also been heavily used to facilitate real estate payment diversion scams, one of the fastest‑growing forms of cyber‑enabled fraud. In these cases, attackers compromise the accounts of realtors, escrow agents, or title companies and send strategically timed emails with fraudulent payment instructions designed to divert closing funds, escrow payments, and other sizeable transactions. For families and first‑time homebuyers, a single diverted payment can wipe out savings or derail a home purchase altogether. Microsoft has observed RedVDS‑enabled activity affecting more than 9,000 customers in the real estate sector alone, with particularly severe impact in countries such as Canada and Australia.
And the threat goes far beyond real estate. RedVDS‑enabled scams have hit construction, manufacturing, healthcare, logistics, education, legal services, and many other sectors—disrupting everything from production lines to patient care.
A global response to a global threat
Cybercrime today is powered by shared infrastructure, which means disrupting individual attackers is not enough. Through this coordinated action, Microsoft has disrupted RedVDS’s operations, including seizing two domains that host the RedVDS marketplace and customer portal, while also laying the groundwork to identify the individuals behind them.
Microsoft’s legal actions are reinforced by close collaboration with law enforcement partners around the world, further disrupting the malicious operation. Germany’s Public Prosecutor’s Office Frankfurt am Main – Central Office for Combating Internet Crime (ZIT) and the German State Criminal Police Office Brandenburg have seized a critical server used to power RedVDS, effectively taking its central marketplace offline. As part of this ongoing disruption, Microsoft is also working closely with international law enforcement, including Europol’s European Cybercrime Centre (EC3), to dismantle the broader network of servers and payment channels that supported RedVDS customers.

What people and organizations can do
We are deeply grateful to H2-Pharma and the Gatehouse Dock Condominium Association for their willingness to come forward and share their experiences. Their cooperation, combined with Microsoft’s threat intelligence, made this action possible and will help protect future victims. Falling victim to a scam should never carry stigma. These attacks are executed by organized, professional criminal groups that intercept and manipulate legitimate communications between trusted parties.
Simple steps can significantly reduce risk, including slowing down and questioning urgency, calling points of contact back using numbers that are already known to you, verifying payment requests using additional contact information, enabling multifactor authentication, watching carefully for subtle changes in email addresses, keeping software up to date, and reporting suspicious activity to law enforcement. Every report helps dismantle networks like RedVDS and brings us closer to stopping cybercrime at scale.
Continuing a collective effort to disrupt cybercrime
This action against RedVDS builds on Microsoft’s ongoing efforts to disrupt fraud and scam infrastructure through legal and technical action, collaboration with law enforcement, and participation in global initiatives such as the National Cyber-Forensics and Training Alliance (NCFTA) and the Global Anti-Scam Alliance (GASA). It marks the 35th civil action targeting cybercrime infrastructure by Microsoft’s Digital Crimes Unit, underscoring a sustained strategy to go beyond individual takedowns and dismantle the services that criminals rely on to operate and scale.
As services like RedVDS continue to emerge, Microsoft will keep working with partners across sectors and borders to identify and disrupt the infrastructure behind cyber-enabled fraud, making it harder for criminals to profit and easier for people and organizations to stay safe online.
Microsoft’s 5-point plan to partner with local communities across the United States
This year marks America’s 250th year of independence. One of the trends that has repeatedly shaped the nation’s history is again in the news. As we’re experiencing at Microsoft, AI is the latest in a long line of new technologies to require large-scale infrastructure development.
Microsoft today is launching a new initiative to build what we call Community-First AI Infrastructure—a commitment to do this work differently than some others and to do it responsibly. This commits us to the concrete steps needed to be a good neighbor in the communities where we build, own, and operate our datacenters. It reflects our sense of civic responsibility as well as a broad and long-term view of what it will take to run a successful AI infrastructure business. In short, we will set a high bar.
As we launch this initiative, we think about it in the context of both the headlines of the day and the lessons from the past. Beginning in the 1770s, the country has advanced through successive eras built on huge infrastructure development based on canals, railroads, power plants, and the electrical grid, followed by the telephone system, highways, and airports. AI infrastructure has become the next chapter in this story.
Like major buildouts of the past, AI infrastructure is expensive and complex. Investments are advancing at a rapid pace. Today, these require large-scale spending by the private sector in land, construction, electricity, liquid cooling, high-bandwidth connectivity, and operations. This revives a longstanding question: how can our nation build transformative infrastructure in a way that strengthens, rather than strains, the local communities where it takes root?
Large AI investments are accelerating just as datacenter concerns are growing in local communities. The pattern is familiar. Whether it was canals, railroads, the electrical grid, or the interstate highway system, each era produced its own conflicts over who bore the burdens of progress. One enduring lesson is that successful infrastructure buildouts will only progress when communities feel that the gains outweigh the costs. Long-term success requires a commitment to address public needs, including by the private companies making these investments.
This must start by understanding local concerns. Residential electricity rates have recently risen in dozens of states, driven in part by several years of inflation, supply chain constraints, and long-overdue grid upgrades. Communities value new jobs and property tax revenue, but not if they come with higher power bills or tighter water supplies. Without addressing these issues directly, even supportive communities will question the role of datacenters in their backyard.
As a company, we believe in the many positive advances AI will bring to America’s future. From stronger economic growth to better medical advances and more affordable products, we believe AI will make a difference in everyday lives. But we also recognize that AI, like other fundamental technological shifts, will create new challenges as well. And we believe that tech companies like Microsoft have both a unique opportunity to help contribute to these advances and a heightened responsibility to address these challenges head-on.
This Community-First AI Infrastructure Initiative provides a framework for doing exactly that. It is anchored in five commitments, each a clear promise to the communities where we build, own, and operate Microsoft datacenters. These are:
We’ll pay our way to ensure our datacenters don’t increase your electricity prices.
We’ll minimize our water use and replenish more of your water than we use.
We’ll create jobs for your residents.
We’ll add to the tax base for your local hospitals, schools, parks, and libraries.
We’ll strengthen your community by investing in local AI training and nonprofits.
We describe our plans in detail below. We recognize that these will evolve and improve, based most importantly on what we learn from ongoing engagement with local communities across the country. We’ll also follow this plan for Community-First AI Infrastructure with similar plans for other countries, shaped to reflect their local needs and traditions.
But we are choosing the beginning of 2026 in Washington, DC to launch this effort in the United States. Our goal is to move quickly, partner with local communities, and bring these commitments to life in the first half of this year.
1. Electricity: We’ll pay our way to ensure our datacenters don’t increase your electricity prices.
There’s no denying that AI consumes large amounts of electricity. While advances in technology may someday change this, today, this is the reality.
The United States will retain its AI leadership role only if AI infrastructure can tap into a rapidly growing supply of electricity. The International Energy Agency (IEA) estimates that US datacenter electricity demand will more than triple by 2035, growing from 200 terawatt-hours to 640 terawatt-hours per year. This growth is taking place alongside rapid electrification of manufacturing and other sectors of the economy.
Our nation is addressing this reality at a demanding time. Even in the absence of datacenter construction, the United States is facing major electricity challenges. Much of the country’s electricity transmission infrastructure is more than 40 years old, and it’s under strain. Supply chain constraints on transformers and high-voltage equipment are delaying upgrades that would enable existing lines to deliver more electricity. New transmission lines can take 7 to 10 years or more to build due to permitting and siting delays. This creates a mismatch with growing electricity demand.
Some have suggested that AI will be so beneficial that the public should help pay for the added electricity the country needs for it. We believe in the benefits AI will create, but we disagree with this approach. Especially when tech companies are so profitable, we believe that it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI. Instead, we believe the long-term success of AI infrastructure requires that tech companies pay their own way for the electricity costs they create.
This will require that we take four steps, and we’re committed to each:
First, we’ll ask utilities and public commissions to set our rates high enough to cover the electricity costs for our datacenters. This includes the costs of adding and using the electricity infrastructure needed for the datacenters we build, own, and operate. We will work closely with utility companies that set electricity prices and state commissions that approve these prices. Our goal is straightforward: to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.
In some areas, communities are already starting to benefit from this approach. In Wyoming, for example, Microsoft and Black Hills Energy have developed an innovative utility partnership that ensures our datacenter growth strengthens—rather than burdens—the local community. And as part of our datacenter investment in Wisconsin, we are supporting a new rate structure that would charge “Very Large Customers,” including datacenters, the cost of the electricity required to serve them. This protects residents by preventing those costs from being passed on. But we recognize the need to ensure that datacenter communities benefit everywhere. We believe this approach can and should be a model for other states.
Second, we’ll collaborate early, closely, and transparently with local utilities to add electricity and the supporting infrastructure to the grid when needed for our datacenters. Addressing electricity costs is critical, but it is an incomplete solution for local communities unless we expand electricity supply. This expansion typically requires a complex effort that includes the expansion of electrical generation capacity and improvements in transmission and substation systems.
We’re committed to collaborating with local utilities. We will sit down and plan together, providing early transparency around our projected power requirements and contracting in advance for the electricity we will use. When our datacenter expansion requires improvements in transmission and substation capabilities, we will continue our existing practices by paying for these improvements.
This work will build on a spirit of partnership with utilities we’ve worked to foster across the country. For example, in the wholesale energy market that covers much of the Midwest called the Midcontinent Independent System Operator (MISO), we have contracted to add 7.9 GW of new electricity generation to the grid, which is more than double our current consumption.
Third, we’ll pursue innovation to make our datacenters more efficient. We are also using AI to reduce energy use and improve the performance of our software and hardware in the design and management of our datacenters. And we are collaborating closely with utilities to leverage tools like AI to improve planning, get more electricity from existing lines and equipment, improve system resilience and durability, and speed the development of new infrastructure, including nuclear energy technologies.
By embedding these innovations into datacenters and by collaborating directly with local utilities, communities gain access to systems that are more efficient, more reliable, and better prepared to support growth without increasing costs for households.
Fourth, we’ll advocate for the state and national public policies needed to support our neighboring communities with affordable, reliable, and sustainable power. Public policy plays an essential role in supporting communities with affordable, reliable, and sustainable access to electricity. In 2022, Microsoft established priorities for electricity policy advocacy: expanding clean electricity generation, modernizing the grid, and engaging local communities. Over the past three years, we have advocated across all three areas and engaged with government leaders at the federal, state, and local levels to do so. To date, however, progress has been uneven. This needs to change.
We will advocate for policies across these areas with an urgent focus on accelerating project permitting and interconnection of electricity projects, expediting the planning and expansion of the electricity grid, and designing new electricity rates for large electricity users.
2. Water: We’ll minimize our water use and replenish more of your water than we use.
Across the country, communities are asking pointed questions about how datacenters use water. These are arising in places already facing water stress, like Phoenix and Atlanta, as well as regions with more abundant supply, like Wisconsin. These concerns are often amplified by aging municipal water systems and infrastructure gaps. Local communities want and deserve reassurance that new AI infrastructure won’t strain their water resources.
Our commitment ensures that our presence will strengthen local water systems rather than burden them. We’ll do this by reducing the amount of water we use and by investing in local water systems and water replenishment projects.
First, we’re committed to reducing the amount of water our datacenters use. The chips that power datacenters produce heat. To manage that heat, datacenters historically relied upon evaporative cooling systems that drew on large volumes of water for cooling in hot weather. As AI workloads have increased, the demand for cooling has increased. The GPU chips that power AI workloads run at very high temperatures; without proper cooling, these chips would burn out within minutes.
The good news is that the tech sector has invested in new innovations to address these cooling needs. Now is the time when we need to step up, use these new technologies, and take added steps to address water use concerns.
Across our entire owned fleet of datacenters, we are committed as a company to a 40 percent improvement in datacenter water-use intensity by 2030. We are optimizing water usage for cooling, improving our ability to balance between water-based cooling and air cooling based on environmental conditions. We have also launched a new AI datacenter design that uses a closed-loop system. By constantly recirculating a cooling liquid, we can dramatically cut our water usage. In this next-generation design, already deployed in locations such as Wisconsin and Georgia, potable water is no longer needed for cooling, reducing pressure on local freshwater systems.
For communities where water infrastructure constraints pose challenges, we will collaborate with local utilities to understand whether current systems can support the additional demand associated with datacenter growth. If sufficient capacity does not exist, we work with our engineering teams to identify solutions that avoid burdening the community.
This approach will build on what we’ve learned from the recent work at our datacenters in Quincy, Washington, an arid region where the local groundwater supply was already under pressure. To avoid drawing from the community’s potable water, we partnered with the city to construct the Quincy Water Reuse Utility, which treats and recirculates datacenter cooling water rather than relying on local groundwater. This approach protects limited drinking-water supplies while ensuring that high-quality, recycled water can be used for datacenter cooling needs. Where future system improvements are required, Microsoft funds those upgrades in full, ensuring that the community doesn’t have to shoulder the cost of supporting our operations.
We also partner with utilities from day one to map out water, wastewater, and pressure needs, and we fully fund the infrastructure required for growth, ensuring local water systems are resilient. Beyond our own footprint, we invest directly in community water infrastructure, modernizing water systems, expanding access, increasing water reliability, and helping utilities maintain stable rates and pressure. For example, near our datacenter in Leesburg, Virginia, Microsoft is funding more than $25 million of water and sewer improvements to ensure the cost of serving our facilities does not fall on local ratepayers.
Second, we will ensure that we replenish more water than we withdraw. This means restoring measurable amounts of water to the same water districts where our datacenters draw water, so the total water returned exceeds the total water used. This standard provides greater transparency and precision in tracking and reporting, aligned with emerging industry standards.
We will pursue projects that make the most important water contribution to each local community. For example, in the greater Phoenix area and nearby Nevada communities, our leak detection partnerships with local utilities identify and repair hidden breaks in aging water systems, preventing water losses and keeping municipal water in circulation for community use. These projects both add to the total usable water supply and improve the reliability of service for residents.
Across the Midwest, we are restoring historic oxbow wetlands. These are crescent-shaped water bodies that naturally recharge groundwater, reduce flood risk, and enhance habitats for native species. These wetlands act as nature’s reservoirs, capturing and slowly returning water to local aquifers throughout both wet seasons and droughts, creating year-round value for farms, ecosystems, and nearby communities.
Overall, we approach replenishment the same way a household might think about a bank account: our operations make water withdrawals, and our replenishment projects make deposits. Some deposits, like our leak detection projects, go straight into the checking account—depositing water into the municipal supply for immediate community use. Others, like wetland restoration, go into a savings account—investing in the watershed’s long-term capacity to store and supply the region. These projects are evaluated using recognized methods that convert on-the-ground improvements into measurable gallons (or cubic meters) of water restored to local ecosystems, ensuring that commitments reflect tangible local benefits, not abstract promises.
Third, we will support this work with greater local transparency. People deserve to know how much water our datacenters use, and we are committed to making that information accessible, clear, and easy to understand. Aligned with this goal, we will begin publishing water-use data for each datacenter region in the country, as well as our progress on replenishment. This approach will ensure that communities can understand both our operational footprint and the progress we are making against our water-positive goals.
Fourth, we will advocate for public policies to help minimize water use and strengthen resilience. This means championing policies that enable sustainable growth while safeguarding community resources. We will support state and federal efforts to make reclaimed and industrial recycled water the default supply for datacenters wherever feasible. We will advocate for harmonized transparency standards that allow communities to clearly understand water use and stewardship practices. And we will work to reduce permitting delays by promoting predictable pathways for water-efficient datacenter projects.
These actions reflect our belief that technology and environmental responsibility must advance together, ensuring that AI-driven progress aligns with long-term water resilience for people, places, and ecosystems. Our policy activities are rooted in protecting local communities. By prioritizing recycled water and efficiency, we will help reduce pressure on aging municipal systems and ensure reliable water access for people and businesses.
3. Jobs: We’ll create jobs for your residents.
New datacenters create jobs—typically thousands during construction and hundreds during operations. For example, in Washington state, more than 1,300 skilled trades workers are building Microsoft datacenters, and by the end of next year more than 650 full-time employees and contractors will work across all our operational facilities there.
One of our goals is to help ensure that workers from the local community benefit from these opportunities. To achieve this, we will invest in new partnerships to help give local residents the skills and opportunities to fill these jobs in both the construction and operational phases.
The AI infrastructure construction boom is driving large-scale physical development, creating a huge demand for skilled tradespeople nationwide. As datacenters and the energy projects that support them grow quickly, firms are vying for a limited workforce. At one level, this is good news for people who already have the qualifications these jobs require. But at another level, there is a risk these jobs will not go to local residents who want them unless those residents can acquire the required skills.
We will take a multifaceted approach.
First, we will invest in partnerships to help train local workers to support the construction and maintenance of datacenters. This includes a new and first-of-its-kind partnership between Microsoft and North America’s Building Trades Unions (NABTU) to strengthen apprenticeship and training programs in the skilled trades where datacenters are being built. We are launching today a new agreement that establishes a cooperative framework to focus on building a pipeline of skilled workers in regions where we are building datacenters. This will also help enable NABTU to identify qualified contractor partners to bid on our infrastructure projects.
Second, we will expand our Datacenter Academy program to train individuals to fill ongoing datacenter operations roles. This program works in partnership with local community colleges and vocational schools to train students for critical roles in datacenter operations and related careers, once construction is complete.
A good example of this work is our Datacenter Academy partnerships in Boydton, Virginia, where we have a large datacenter campus. The Academy works with Southside Virginia Community College and the Southern Virginia Higher Education Center, which have helped hundreds of students and adult learners earn industry-recognized certifications in information technology and critical facilities operations.
In 2024, this work expanded with the opening of a new Critical Environment Training Lab (SoVA) in South Hill. This provides hands-on training with electrical, mechanical, and cooling systems using decommissioned datacenter equipment donated by Microsoft. Graduates of these programs have gone on to pursue careers supporting datacenter operations in Southern Virginia, including roles with Microsoft and the broader ecosystem of companies that help operate and maintain digital infrastructure. We will pursue similar partnerships in other states, and we are committed to making this an ongoing part of our work in the communities where we build new datacenters.
Third, we will use our voice to encourage policymakers to support these new job opportunities. While this work is of heightened importance in communities with datacenters, the broader need for this type of skilled labor is national in scope. According to LinkedIn data, job postings for datacenter occupations or requiring at least one core datacenter skill, such as datacenter operations, grew by 23 percent globally and 13.5 percent in the US year-over-year in 2025. This is likely to represent an ongoing trend. Over the next decade, trillions in private investment will offer steady employment opportunities for American workers—including electricians, pipefitters, HVAC techs, welders, and construction crews—alongside manufacturing technicians for related components, like chips, power generation, and cooling systems.
However, this rapid demand for skilled labor is set to outpace the available pipeline of workers. Today, the Associated Builders and Contractors estimates that the construction industry is short roughly 439,000 workers, mostly among skilled workers who do things like lay pipe and wire electrical panels.[1] Manufacturers report shortages as well, with the CEO of Ford Motor Company recently highlighting 5,000 open mechanic jobs that pay more than $100,000 per year. And for datacenter operations, employers face shortages in hands-on infrastructure skills such as cabling, racking, and network hardware.
This problem is exacerbated by the demographics of an aging workforce and a decades-old policy trend of deprioritizing vocational education for young Americans. A generation of skilled workers, vocationally trained in high schools and apprenticeships in the 20th century, are retiring from the trades. In the first quarter-century of the 21st century, high schools pivoted towards preparing young people for higher education and advanced degrees, often at the expense of traditional shop classes and training in skilled craftsmanship.
The increased demand for skilled trades, paired with an aging workforce, requires an enhanced public-private workforce partnership. Secondary schools in the US can be incentivized to do more to educate young people about the trades through vocational schools and pre-apprenticeship programs. Registered apprenticeship programs offered nationally provide a fulfilling career path with long-term wages and benefits.
In partnership with labor, the federal government can champion a national apprenticeship and workforce development initiative that connects young and aspiring American workers to AI infrastructure projects, especially in rural and post-industrial regions. President Trump’s AI Action Plan rightly identifies this opportunity, and we will work closely with the Department of Labor to help scale this effort. The federal government can also help by streamlining the process by which businesses establish and maintain a registered apprenticeship program, and by maximizing the use of existing federal dollars that directly support registered apprenticeship programs. This could entail modernizing the regulations for the National Apprenticeship Act or updating the statutory language itself.
4. We will add to the tax base for your local hospitals, schools, parks, and libraries.
One of the most tangible benefits from datacenter development is invisible to an individual driving nearby. It’s the property taxes paid by datacenters to the local municipality, which are substantial. But this too requires that the private sector take a responsible approach, as described below.
We won’t ask local municipalities to reduce their local property tax rates when we buy land or propose a datacenter presence. Instead, we’ll pay our full and fair share of local property taxes, adding revenue to local towns and cities. This is obviously critical to supporting the growth a local community often experiences when datacenters are built or expanded. And most importantly, at a time when many communities are facing revenue shortages that threaten vital public assets like hospitals, schools, parks, and libraries, we know from experience that this can make a big difference.
The benefits of this approach are nowhere more apparent than in Quincy, Washington, a small agricultural community about 150 miles east of Seattle where Microsoft built its first datacenter in 2008. Since then, we have built more than twenty datacenters in the area, providing ongoing employment to thousands of construction workers for almost two decades. Hundreds of technicians enjoy permanent jobs in those datacenters, earning salaries well above the median income for Quincy. And we estimate that for every direct construction job created, another one is created in related sectors, including security services, maintenance and repair, retail, restaurants, and more. Altogether, our datacenters drive more than $200 million in regional economic activity each year.
As a result, the share of Quincy residents living below the poverty line has been cut in half, dropping from 29.4 percent in 2013 to 13.1 percent in 2023. And county property tax revenues have more than tripled over the past two decades, from roughly $60 million to more than $180 million. This has enabled the city to invest in public services and amenities. Last year, as rural hospitals around the country cut back on critical care offerings and shuttered their doors, Quincy opened a new 54,000-square-foot medical center. The city has also made substantial renovations to its high school, adding state-of-the-art athletic facilities, an auditorium, and a career and technical training department.
We want to make sure that the other communities where our datacenters are located benefit from our presence in the same way. In all the regions where we build, own, and operate datacenters, we’re devoted to taking a civically responsible approach. This means recognizing the importance of civic services, including public safety, local healthcare, schools, libraries, and parks. As we become an important local employer, local communities can count on us to be a constructive contributor to local business and civic efforts.
5. We’ll strengthen your community by investing in local AI training and nonprofits.
We believe the datacenter communities that power AI should be among the first to benefit from it. As these communities help drive innovation and economic growth for the nation, it’s essential that they share in the economic, educational, and community benefits AI is creating. Especially as jobs evolve and require more AI skills, this requires local investments in AI education and training. To support this goal, we will provide free, age-appropriate, best-in-class AI training and education in these communities in partnership with trusted, local community-based organizations.
For years, we have been helping people gain essential digital skills in communities in and around our datacenters, such as Quincy in Eastern Washington, Boydton in Southern Virginia, and Mt. Pleasant in Southeast Wisconsin. One thing we’ve learned is that these communities have vibrant anchor institutions—schools, libraries, and local chambers of commerce—that form the backbone of local learning, workforce development, and economic growth. That’s why, going forward, we will partner with and support these anchor institutions in our datacenter communities so that every community member can leverage the power of AI in how they live, work, and learn.
First, we will partner with local K-12 schools, community colleges, and universities to provide age-appropriate, responsible AI literacy training and learning experiences for students and teachers in our datacenter communities. This will build on some of our most recent experiences. For example, in Quincy, Washington, we partnered with Quincy High School and the local FFA chapter to teach students the critical AI and data skills needed for careers in precision agriculture. And in our datacenter region in Mt. Pleasant, Wisconsin, we recently launched an AI bootcamp for students and faculty with Gateway Technical College to cultivate a new generation of developers and creators of AI tools and technology across Wisconsin technical colleges.
Our commitment is to build on this work so that students and teachers can responsibly and effectively engage with AI, create with AI, manage AI, and design with AI. To that end, we will bring free, locally relevant, responsible AI training, aligned with AI literacy standards, to students in every K-12 school, community college, and university in our datacenter markets.
Second, we will support adults in our datacenter communities with AI tools and skills by creating neighborhood AI learning hubs in partnership with local libraries in our key datacenter markets. This approach will build upon our previous digital skilling partnerships with local libraries. For example, during COVID, we partnered with libraries in rural communities across the country, and more recently, we helped train library staff in our Quincy and Mt. Pleasant datacenter markets on AI so that they could help their patrons learn AI skills. Building on this work, we will invest in AI literacy skills development for librarians and provide access to free AI literacy training and certifications to local library patrons, including by equipping public terminals at local libraries in our datacenter regions with AI tools and services.
Third, we will support AI skills training for small businesses. We recognize that AI training will be critical for small businesses as they navigate the transition to the AI economy. These businesses are the backbone of local economies, and their success directly impacts job creation, workforce stability, and community vitality. Through a new workforce transformation initiative, we will deliver AI training, tools, and insights to local chambers of commerce that support these small businesses. We will also provide flexible grants for AI training and upskilling to local chambers of commerce and a variety of workforce organizations to help local businesses upskill employees, adopt AI responsibly, and prepare their workforce for ongoing transformation—ensuring that economic opportunity stays rooted in the communities where we build and operate datacenters.
Finally, we will invest in your local nonprofit community. A defining aspect of Microsoft’s own history and culture has long been a commitment to support the many nonprofit organizations that are vital to every community the company calls home. As we expand our datacenters in new communities, we’re committed to bringing this role to these new regions.
This starts with support for our employees in the local community. We provide two key benefits to all our full-time employees. First, we match every hour they spend volunteering for a nonprofit with a $25 donation to that organization. Second, we match each dollar they donate to a nonprofit with an equal donation from Microsoft. Together, these give every employee, including those in our datacenters, a total potential match of $15,000 each year.
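The matching math above is simple but worth making concrete. A minimal sketch, using only the terms stated in this post ($25 per volunteer hour, a 1:1 donation match, and a $15,000 combined annual cap per employee); the function name and example figures are illustrative, not an actual Microsoft tool:

```python
def annual_match(volunteer_hours: float, donations: float,
                 cap: float = 15_000.0) -> float:
    """Total Microsoft match for one employee in one year.

    volunteer_hours: hours volunteered at nonprofits ($25 matched per hour)
    donations: dollars the employee donated (matched 1:1)
    cap: combined annual match ceiling per employee
    """
    uncapped = volunteer_hours * 25.0 + donations
    return min(uncapped, cap)

# 100 hours and $2,000 donated: 100 * $25 + $2,000 = $4,500 matched.
annual_match(100, 2_000)
# 400 hours and $10,000 donated: $20,000 uncapped, limited to $15,000.
annual_match(400, 10_000)
```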
This approach to community engagement is an important part of Microsoft’s culture, and it has become the largest nonprofit charitable matching program in the history of business. In 2024 in the United States, it raised $229.1 million in donations for 29,000 nonprofits, plus 964,000 volunteer hours contributed by our employees. It’s a part of Microsoft we’re excited to bring to the communities that have our datacenters.
We recognize that our support for the local community also needs to go beyond this type of program. Our broader contribution must start with listening. You know best what your town needs, which nonprofits are making a difference, and which organizations are best positioned to do more. We will provide locally based Microsoft liaisons in major US datacenter communities to work side by side with local leaders and nonprofits. These local staff will connect communities to our various Microsoft teams and resources. Working together, we will shape how we direct our support for local nonprofits.
Conclusion
Many lessons emerge from the nation’s 250-year history relating to technology and infrastructure. The first is that large-scale infrastructure expansion is vital to economic growth and everyday improvements in people’s lives. Our lives today rely on electrical appliances, automobiles, phones, airplanes, and much more that would be impossible without modern infrastructure.
But a second lesson illustrates an important tension. Major infrastructure expansion is always difficult. It’s expensive. It inevitably raises questions, concerns, and even controversies. This has been true for more than 200 years, and we should assume it will be true well into the future. It always requires that important decisions be made by government leaders, from village presidents and town councils to the American President and Congress.
Third, the most important decisions are often made at the local level. This reflects the outsized impact—both positive and negative—of infrastructure expansion at the local level. It also reflects the American political tradition and our zoning and permitting laws, which rightly put decision-making authority closest to those elected to serve local communities.
There’s a final lesson that speaks most directly to us. Private companies can help by stepping up and acting in a responsible way. We cannot surmount inevitable community challenges by ourselves. But we can make everything easier by embracing a long-term vision. By recognizing our responsibility. By playing a constructive role. And by supporting the entire community.
As we look to the future, we are committing to taking this final lesson to heart. And making it a fundamental part of our efforts every day.
Here we examine the CISO Outlook for 2026, with the purpose of evaluating what is happening now and preparing leaders for what lies ahead in 2026 and beyond.
Global adoption of artificial intelligence continued to rise in the second half of 2025, increasing by 1.2 percentage points compared to the first half of the year. Roughly one in six people worldwide now use generative AI tools, remarkable progress for a technology that only recently entered mainstream use.
To track this trend, we measure AI diffusion as the share of people worldwide who have used a generative AI product during the reported period. This measure is derived from aggregated and anonymized Microsoft telemetry and then adjusted to reflect differences in OS and device-market share, internet penetration, and country population. Additional details on the methodology are available in our AI Diffusion technical paper.[1]
No single metric is perfect, and this one is no exception. Through the Microsoft AI Economy Institute, we continue to refine how we measure AI diffusion globally, including how adoption varies across countries in ways that best advance priorities such as scientific discovery and productivity gains. For this report, we rely on the strongest cross-country measure available today, and we expect to complement it over time with additional indicators as they emerge and mature.
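To make the adjustment described above concrete, here is a minimal sketch of how telemetry-observed usage might be scaled to a population-level diffusion share. This is an illustrative assumption only; the function name, inputs, and scaling logic are ours, and the actual methodology is defined in the AI Diffusion technical paper cited above:

```python
def adjusted_diffusion(observed_users: int,
                       observed_device_share: float,
                       internet_penetration: float,
                       population: int) -> float:
    """Estimate the share of a country's population using generative AI.

    observed_users: unique generative-AI users seen in telemetry
    observed_device_share: fraction of the country's devices visible to
        that telemetry (OS / device market share)
    internet_penetration: fraction of the population that is online
    population: total country population
    """
    # Scale telemetry up to all devices, assuming similar usage rates
    # on devices the telemetry cannot see.
    estimated_users = observed_users / observed_device_share
    # Cap at the online population; offline residents cannot be users.
    online_population = population * internet_penetration
    estimated_users = min(estimated_users, online_population)
    return estimated_users / population

# Hypothetical country: 2M observed users, telemetry covers 25% of
# devices, 90% internet penetration, 50M people -> 16% diffusion.
share = adjusted_diffusion(observed_users=2_000_000,
                           observed_device_share=0.25,
                           internet_penetration=0.9,
                           population=50_000_000)
```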
Despite progress in AI adoption, the data shows a widening divide: adoption in the Global North grew nearly twice as fast as in the Global South. As a result, 24.7 percent of the working-age population in the Global North now uses these tools, compared with only 14.1 percent in the Global South.
Countries that invested early in digital infrastructure, AI skilling, and government adoption, such as the United Arab Emirates, Singapore, Norway, Ireland, France, and Spain, continue to lead. The UAE extended its lead as the #1 ranked country, with 64.0 percent of the working-age population using AI at the end of 2025, up from 59.4 percent earlier in the year. The UAE has opened a lead of more than three percentage points over Singapore, which remains in second place with 60.9 percent adoption.
The second half of the year in the United States shows that leadership in innovation and infrastructure, while critical, does not by itself lead to broad AI adoption. The U.S. leads in both AI infrastructure and frontier model development, but it fell from 23rd to 24th place in AI usage among the working-age population, with a 28.3 percent usage rate. It lags far behind smaller, more highly digitized and AI-focused economies.
South Korea stands out as the clearest end-of-year success story. It surged seven spots in the global rankings, climbing from 25th to 18th, driven by government policies, improved frontier model capabilities in the Korean language, and consumer-facing features that resonated with the population. Generative AI is now used in schools, workplaces, and public services, and South Korea has become one of ChatGPT’s fastest-growing markets, leading OpenAI to open an office in Seoul.[2]
A parallel development reshaping the global landscape in 2025 was the rapid rise of DeepSeek, an open-source AI platform that has gained significant traction in markets long underserved by traditional providers. By releasing its model under an open-source MIT license and offering a completely free chatbot, DeepSeek removed both financial and technical barriers that limit access to advanced AI. Its strongest adoption, not surprisingly, has emerged across China, Russia, Iran, Cuba, and Belarus. But perhaps even more notable is DeepSeek’s surging popularity across Africa, where it is aided by strategic promotion and partnerships with firms such as Huawei.[3]
This rapid evolution underscores an increasingly important dimension of AI competition between the United States and China, involving a race to promote adoption of their respective national models. DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026. DeepSeek’s ascent also underscores a broader truth: the global diffusion of AI is influenced by accessibility factors, and the next wave of users may come from communities that have historically had limited access to technological progress. The challenge ahead is ensuring that innovation spreads in ways that help narrow divides rather than deepen them.
Why Effective CTEM Must Be an Intelligence-Led Program
Continuous Threat Exposure Management (CTEM) is a continuous program and operational framework, not a single pre-boxed platform. Flashpoint believes that effective CTEM must be intelligence-led, using curated threat intelligence as the operational core to prioritize risk and turn exposure data into defensible decisions.
Continuous Threat Exposure Management (CTEM) is Not a Product
Since Gartner’s introduction of CTEM as a framework in 2022, cybersecurity vendors have engaged in a rapid “productization” race. This has led to inconsistent market definitions, with a variety of vendors from vulnerability scanners to Attack Surface Management (ASM) providers now claiming to be an “exposure management” solution.
The current approach to productizing CTEM is flawed. There is no such thing as a single “exposure management platform.” The reality is that most enterprises buy three or more products just to approximate what CTEM promises in theory. Even with these technologies, organizations still need significant investment in people, process, and custom integrations to make it all work.
The Exposure Stack: When One Platform Becomes Three (or More)
A functional CTEM approach typically requires multiple platforms or tools, including:
Continuous Penetration/Exploitation Testing & Attack Path Analysis for continuous pentesting, attack path validation, and hands-on exposure validation.
Vulnerability and Exposure Management for vulnerability scanning, exposure scoring, and asset risk views.
Intelligence for deep, curated vulnerability, compromised-credential, card-fraud, and other forms of intelligence that go far beyond the scope of technology-based “management platforms”.
In some cases, organizations may also use an ASM vendor for shadow IT discovery, a CMDB for asset context, and ticketing integrations to drive remediation. This multi-platform model is the rule, not the exception. And that raises a hard truth: if you need three or more products, plus a dedicated team to implement CTEM, you need an intelligence-led CTEM program.
CTEM is an Operational Discipline, Not a Single Product
The narrative that CTEM can be packaged into a single product breaks down for three critical reasons:
1. CTEM is a Program, Not a Platform
You cannot buy a capability that requires full-stack asset visibility, contextualized threat actor data, real-world validation, and remediation orchestration from one tool. Each component spans a different domain of expertise and data. A vulnerability scanner alone cannot validate exploitability; a pentest service struggles to scale to daily monitoring; and generic threat intelligence feeds cannot provide critical business context.
However, CTEM requires orchestration of all these components in one operational loop. No single product delivers this comprehensively out of the box; this is why CTEM must be viewed as a continuous program, not a one-size-fits-all product.
2. Human Expertise is Irreplaceable
Vendors often advertise automation; however, key intelligence functions are still powered by and reliant on human analysis. Even with best-in-class AI tools in place, security teams depend on human insights for:
Triaging noisy CVE lists
Cross-referencing exposure data with asset inventories
Manually validating whether risks are real
Prioritizing based on threat intelligence and internal context
Writing custom logic and integrations to bridge platforms together
In other words, exposure management today still relies on human insights and expertise. So while vendors advertise “automation and intelligence,” what they’re really delivering is a starting point. Ultimately, AI is a force multiplier for threat analysts, not a replacement.
3. Risk Without Intelligence Is Just Data
Most platforms treat exposure like a math problem. But real risk isn’t just CVSS (Common Vulnerability Scoring System) scores or asset counts; it requires answering critical, intelligence-based questions:
How likely is this vulnerability to be exploited, and what’s the impact if it is?
How likely is this misconfiguration to be exploited, and what is its impact?
How likely is this compromised credential to be used by a threat actor, and what is the potential impact?
These answers require intelligence, not just data. Best-in-class intelligence provides security teams with confirmed exploit activity in the wild, context around attacker usage in APT (Advanced Persistent Threat) campaigns, and detailed metadata for prioritization where CVSS fails. That is why Flashpoint intelligence is leveraged by over 800 organizations as the operational core of exposure management, turning exposure data into defensible decisions.
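To illustrate the difference between raw scores and intelligence-led prioritization, here is a toy sketch that blends a CVSS base score with the kinds of signals described above (confirmed in-the-wild exploitation, APT campaign usage, asset exposure). The field names and weights are our own assumptions for illustration, not Flashpoint’s actual scoring model:

```python
from dataclasses import dataclass

@dataclass
class Exposure:
    cve_id: str
    cvss: float                    # 0.0-10.0 CVSS base score
    exploited_in_wild: bool        # confirmed exploit activity
    used_in_apt_campaign: bool     # observed in APT campaigns
    asset_is_internet_facing: bool # internal asset context

def priority_score(e: Exposure) -> float:
    """Boost the base score with intelligence and business context."""
    score = e.cvss
    if e.exploited_in_wild:
        score += 4.0
    if e.used_in_apt_campaign:
        score += 2.0
    if e.asset_is_internet_facing:
        score += 1.5
    return score

backlog = [
    Exposure("CVE-A", cvss=9.8, exploited_in_wild=False,
             used_in_apt_campaign=False, asset_is_internet_facing=False),
    Exposure("CVE-B", cvss=7.2, exploited_in_wild=True,
             used_in_apt_campaign=True, asset_is_internet_facing=True),
]
backlog.sort(key=priority_score, reverse=True)
# A lower-CVSS flaw with active exploitation outranks a "critical" one.
```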
CTEM Productization vs. CTEM Reality
If your risk strategy requires continuous penetration and exploit testing, vulnerability management, threat intelligence, and manual prioritization and validation, you’re not buying CTEM; you’re building it. At Flashpoint, we’re helping organizations build CTEM the right way: driven by intelligence, and powered by integrations and AI.
The Intelligence-Led Future of Exposure Management
Flashpoint treats CTEM as what it really is: a program that must be constructed intelligently, iteratively, and contextually.
That means:
Using threat and vulnerability intelligence to drive what actually gets prioritized
Treating scanners, ASM platforms, and pentesting as inputs, not outcomes
Building processes where intelligence, context, and validation inform exposure decisions, not just ticket creation
Investing in platform interconnectivity, not just feature checklists
Using Flashpoint’s intelligence collections, organizations can achieve intelligence-led exposure management, with threat and vulnerability intelligence working together to provide context and actionable insights in a continuous, prioritized loop. This empowers security teams to build and scale their own CTEM programs, which is the only realistic approach in a cybersecurity landscape where no single platform can do it all.
Achieve Elite Operational Control Over Your CTEM Program Using Flashpoint
If you’re evaluating exposure management tools, ask yourself:
What happens when we find a critical vulnerability, and how do we know it matters?
Can this platform correlate attacker behavior with our asset landscape?
Does it validate risk or just report it?
How many other tools will we need to buy just to complete the picture?
The answers may surprise you. At Flashpoint, we’re helping organizations build CTEM the right way, driven by intelligence, powered by integration, and grounded in reality. Request a demo today and see how best-in-class intelligence is the key to achieving an effective CTEM program.
Surfacing Threats Before They Scale: Why Primary Source Collection Changes Intelligence
This blog explores how Primary Source Collection (PSC) enables intelligence teams to surface emerging fraud and threat activity before it reaches scale.
Spend enough time investigating fraud and threat activity, and a familiar pattern emerges. Before a tactic shows up at scale—before credential stuffing floods login pages or counterfeit checks hit customers—there is almost always a quieter formation phase. Threat actors test ideas, trade techniques, and refine playbooks in small, often closed communities before launching coordinated campaigns.
The signals are there. The challenge is that most organizations never see them.
For years, intelligence programs have leaned heavily on static feeds: prepackaged streams of indicators, alerts, and reports delivered on a fixed cadence. These feeds validate what is already known, but they rarely surface what is still taking shape. They are designed to summarize activity after it has matured, not to discover it while it is still evolving.
Meanwhile, the real innovation in fraud and threat ecosystems happens elsewhere: in invite-only Telegram channels, dark web marketplaces, and regional-language forums that update in real time. By the time a static feed flags a new technique, it is often already widespread.
This disconnect has consequences. When intelligence arrives too late, teams are left responding to impact rather than shaping outcomes.
How Threats Actually Evolve
Fraudsters and threat actors do not work in isolation; they collaborate. In closed forums and encrypted channels, one actor experiments with a new login bypass, another tests two-factor authentication evasion, and a third packages those ideas into a tool or service. What begins as a handful of screenshots or code snippets quickly becomes a repeatable process.
These shared processes often take the form of playbooks: step-by-step guides that document how to execute a fraud scheme or exploit a weakness. Once a playbook begins circulating, scale is inevitable. Techniques that started as limited tests turn into thousands of coordinated attempts almost overnight.
Every intelligence or fraud analyst has experienced the moment when an unfamiliar tactic suddenly overwhelms detection systems. The frustrating reality is that the warning signs were often visible weeks earlier; they simply never made it into the static feeds teams were relying on.
Why Static Collection Falls Short
Static collection creates a sense of coverage, but that coverage is often shallow. Sources are fixed. Cadence is slow. Context is stripped away.
A feed might tell you that a domain, handle, or email address is associated with a known tactic, but not how that tactic was developed, who is promoting it, or whether it has any relevance to your organization’s specific exposure. You are seeing the exhaust, not the engine.
This lag matters. The window between a tactic being tested in a small community and being deployed at scale is often the most valuable moment for intervention. Miss that window, and response becomes exponentially more expensive.
As threats accelerate and collaboration among adversaries increases, intelligence programs that depend solely on static inputs struggle to keep pace.
A Different Model: Primary Source Collection
Primary Source Collection (PSC) changes how intelligence is gathered by starting with the questions that matter most and collecting directly from the original environments where those answers exist.
Rather than relying on a predefined list of sources or vendor-determined priorities, PSC begins with a defined intelligence requirement. Collection is then shaped around that requirement, directing analysts to the forums, marketplaces, and channels where relevant activity is actively unfolding.
This means monitoring closed communities advertising check alteration services. It means observing invite-only groups trading identity fraud tutorials. It means collecting original posts, screenshots, files, and discussions while they are still part of an active conversation instead of weeks later in summarized form. When actors begin discussing a new bypass technique or sharing proof-of-concept screenshots, that is the moment to act, not weeks later when the same method is being resold across marketplaces.
Primary Source Collection provides that window. It surfaces the conversations, artifacts, and early indicators that reveal what is coming next and gives teams the time they need to intervene before campaigns scale.
This does not replace analytics, automation, or baseline monitoring. It strengthens them by feeding earlier, richer insight into downstream systems. It ensures that detection and response are informed by how threats are actually developing, not just how they appear after the fact.
In one case, a financial institution using this approach identified counterfeit checks featuring its brand being advertised in underground marketplaces weeks before customers began reporting losses. By collecting directly from those spaces, analysts flagged the images, traced sellers, and alerted internal teams early enough to prevent further exploitation.
That is what early warning looks like when collection is aligned with purpose.
Making Intelligence Taskable
One of the most important shifts enabled by Primary Source Collection is tasking.
Traditional intelligence programs operate like autopilot. They deliver a steady stream of data, but that stream reflects the provider’s priorities rather than the organization’s evolving needs. Analysts spend valuable time triaging irrelevant information while emerging risks go unnoticed.
In classified intelligence environments, this problem has long been addressed through tasking. Every collection effort begins with a clearly defined requirement, and priorities drive collection, not the other way around.
PSC applies that same discipline to open-source and commercial intelligence. Teams define Priority Intelligence Requirements (PIRs), such as identifying actors testing bypass methods for specific login flows, and immediately direct collection toward those needs. As priorities change, tasking changes with them.
This transforms intelligence from a passive stream into an operational capability. Analysts are no longer waiting for someone else’s update cycle. They are shaping visibility in real time, testing hypotheses, validating concerns, and uncovering tactics before they mature.
For leadership, this provides something more valuable than indicators: confidence that critical developments are not happening just out of sight.
How Taskable Collection Works in Practice
A taskable Primary Source Collection framework is dynamic by design. As stakeholder priorities shift due to a new campaign, incident, or geopolitical development, collection pivots immediately.
In practice, this approach includes:
Source discovery: Identifying new, relevant sources as they emerge, using a combination of analyst expertise and automated tooling.
Secure access: Entering closed or restricted spaces safely and ethically through controlled environments and vetted identities.
Direct collection: Capturing original content directly from threat actor environments, including posts, images, and files.
Processing and enrichment: Applying techniques such as optical character recognition, entity extraction, and metadata tagging to transform raw material into usable intelligence.
Delivery and collaboration: Routing outputs into investigative workflows or directly to stakeholders to accelerate response.
Intelligence can then mirror the agility of modern threats instead of lagging behind them.
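As a concrete illustration of the “processing and enrichment” step above, here is a minimal sketch that extracts entities from a raw collected post and attaches collection metadata. Real pipelines would add OCR for images and far richer extraction; the function, regexes, and field names here are illustrative assumptions, not a Flashpoint product API:

```python
import re
from datetime import datetime, timezone

# Simple illustrative patterns; production extraction is far broader.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
DOMAIN_RE = re.compile(r"\b(?:[\w-]+\.)+(?:com|net|org|io)\b")

def enrich(raw_post: str, source: str) -> dict:
    """Turn a raw collected post into a structured, tagged record."""
    return {
        "source": source,  # where the post was collected
        "collected_at": datetime.now(timezone.utc).isoformat(),
        "emails": sorted(set(EMAIL_RE.findall(raw_post))),
        "domains": sorted(set(DOMAIN_RE.findall(raw_post))),
        "raw": raw_post,   # preserve the original artifact
    }

# Hypothetical post from a closed forum.
record = enrich(
    "Selling checks, contact seller@example.com via proxy-shop.net",
    source="closed-forum-17",
)
```

Structured records like this can then be routed into investigative workflows or stakeholder alerts, the delivery step described above.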
Why This Shift Matters Now
Threat and fraud operations are moving faster than ever. Barriers to entry are lower. Tooling is more accessible. Adversary collaboration now rivals legitimate software development cycles.
Defenders cannot afford to move slower than the adversaries they are trying to stop.
Primary Source Collection is how intelligence teams keep pace. It aligns collection with mission needs, enables real-time tasking, and delivers insight early enough to change outcomes instead of just documenting them.
The signals have always been there. What has changed is the ability to surface them while they still matter.
Beyond the Malware: Inside the Digital Empire of a North Korean Threat Actor
In this post, Flashpoint reveals how an infostealer infection on a North Korean threat actor’s machine exposed their digital operational security failures and reliance on AI. Leveraging Flashpoint intelligence, we pivot from a single persona to a network of fake identities and companies targeting the Web3 and crypto industry.
Last week, Hudson Rock published a blog on “Trevor Greer,” a persona tied to a North Korean IT Worker. Flashpoint shared additional insights with our clients back in July, and we’re now making those findings public.
Trevor Greer, a North Korean operative, was identified via an infostealer infection on their own machine. Information-stealing malware, also known as infostealers or stealers, is designed to scrape passwords and cookies from unsuspecting victims. Stealers (like LummaC2 or RedLine) are typically used by cybercriminals to steal login credentials from everyday users and sell them on the dark web. It is rare to see them infect the machines of a state-sponsored advanced persistent threat (APT) group.
However, when adversaries unknowingly infect themselves, they can expose valuable insights into the inner workings of their campaigns. Leveraging Flashpoint intelligence sourced from the leaked logs of “Trevor Greer,” our analysts uncovered a myriad of fake identities and companies used by DPRK APTs.
Finding Trevor Greer
Flashpoint analysts have been tracking the Trevor Greer email address since December 2024 in relation to the “Contagious Interview” campaign, in which threat actors operated as LinkedIn recruiters to target Web3 developers, resulting in the deployment of multiple stealers compromising developer Web3 wallets. Flashpoint also identified the specific persona’s involvement in a campaign in which North Korean threat actors posed as IT freelance workers and applied for jobs at legitimate companies before compromising the organizations internally.
Bybit Compromise
The Bybit compromise in late February 2025 further fueled Flashpoint’s investigations into the Trevor Greer email address. Bybit, a cryptocurrency exchange, suffered a critical incident in which North Korean actors stole US$1.5 billion worth of cryptocurrency. In the aftermath, Silent Push researchers identified the persona “Trevor Greer” associated with the email address trevorgreer9312@gmail[.]com, which registered the domain “Bybit-assessment[.]com” prior to the compromise.
A later report claimed that the domain “getstockprice[.]com” was involved in the compromise. Despite these domain discrepancies, both investigations attributed the attack to North Korean advanced persistent threat (APT) nexus groups.
Tracing the Infection
Using Flashpoint’s vast intelligence collections, we performed a full investigation of compromised virtual private servers (VPS), revealing the actor’s potential involvement in several other operations, including remote IT work, several self-made blockchain and cryptocurrency exchange companies, and a potential crypto scam dating back to 2022.
Flashpoint analysts also discovered that the Trevor Greer email address was linked to domains infected with information-stealing malware.
What the Logs Revealed
Analysts extracted information about the associated infected host from Trevor Greer, revealing possible tradecraft and tools used. Analysts further identified specific indicators of compromise (IOCs) used in the campaigns mentioned above, as well as email addresses used by the actor for remote work.
The data painted a vivid picture of how these threat actors operate:
Preparation for “Contagious Interviews”
The browser history revealed the actor logging into Willo, a legitimate video interview platform. This suggests the actor was conducting reconnaissance to clone the site for the “Contagious Interview” campaign, where they lured Web3 developers into fake job interviews to deploy malware.
Reliance on AI Tools
The logs exposed the actor’s reliance on AI to bridge the language gap. The operator frequently accessed ChatGPT and Quillbot, likely using them to write convincing emails, build resumes, and generate code for their malware.
Pivoting: One Node to a Network
By analyzing the “Trevor Greer” logs, we were able to pivot to other personas and campaigns involved in the operation.
Fake Employment: The logs contained credentials for freelance platforms, such as Upwork and Freelancer, associated with other aliases, including “Kenneth Debolt” and “Fabian Klein.” This confirmed the actor was part of a broader scheme to infiltrate Western companies as remote IT workers.
Fake Companies: The data linked the actor to fake corporate entities, such as Block Bounce (blockbounce[.]xyz), a sham crypto trading firm set up to appear legitimate to potential victims.
Developer Personas: The infection data linked the actor to the GitHub account svillalobosdev, which had been active in open source projects to build credibility before the attack.
Legitimate Platforms & Tools: Analysts observed the actor using job boards such as Dice and HRapply[.]com, freelance platforms such as Upwork and Freelancer, and direct applications through company Workday sites. To improve their resume, the actor used resumeworded[.]com and cakeresume[.]com. To sound human in correspondence, the threat actor likely relied on a mix of ChatGPT and Quillbot, as found in infected host logins. During interviews, analysts determined that they potentially used Speechify.
Deep & Dark Web Resources: The actor also likely purchased Social Security numbers (SSNs) from SSNDOB24[.]com, a site for acquiring Social Security data.
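As a hedged sketch of this kind of pivot, the snippet below groups simplified stealer-log records by credential email and surfaces personas that share platforms with a seed identity. The records and helper names are illustrative; real stealer logs carry far more fields (host information, cookies, autofill data).

```python
from collections import defaultdict

# Hypothetical, simplified stealer-log records: (credential_email, site_logged_into).
log_records = [
    ("trevorgreer9312@gmail.com", "willo.video"),
    ("trevorgreer9312@gmail.com", "upwork.com"),
    ("kdebolt.dev@example.com", "upwork.com"),
    ("kdebolt.dev@example.com", "freelancer.com"),
    ("fklein.it@example.com", "freelancer.com"),
]

seed = "trevorgreer9312@gmail.com"

# Index every identity by the platforms it touched.
sites_by_email = defaultdict(set)
for email, site in log_records:
    sites_by_email[email].add(site)

# Pivot: any other identity sharing a platform with the seed is a lead.
linked = {
    email for email, sites in sites_by_email.items()
    if email != seed and sites & sites_by_email[seed]
}
print(sorted(linked))
```

Here only the alias that overlaps with the seed's platforms surfaces; in practice analysts would pivot again from each lead, expanding one node into a network exactly as described above.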
Disrupt Threat Actors Using Flashpoint
The “Trevor Greer” case study illustrates a critical shift in modern threat intelligence. We are no longer limited to analyzing the malware adversaries deploy; sometimes, we can analyze the adversaries themselves.
Using their own tools against them, Flashpoint transformed a faceless state-sponsored entity into a tangible user with bad habits, sloppy OPSEC, and a trail of digital breadcrumbs. Behind every sophisticated APT campaign is a human operator, and sometimes, they click the wrong link too.
Request a demo today to delve deeper into the tactics, techniques, and procedures of advanced persistent threats and learn how Flashpoint’s intelligence strengthens your defenses.
Since opening our first Canadian office in Toronto in 1985, Microsoft has played an important role in every chapter of Canada’s digital story, long before cloud and AI were household words. That history matters. Over four decades, our company and our thousands of employees have grown alongside Canada. We’ve developed a deep appreciation for this nation’s culture, values, needs, and important role in the world.
Today we are announcing the most important commitment in Microsoft Canada’s history. We’re adding to our investments—with a total of $19 billion CAD between 2023 and 2027, including more than $7.5 billion CAD in the next two years. We’re building new digital and AI infrastructure needed for the nation’s growth and prosperity, with new capacity beginning to come online in the second half of 2026. Equally important, we’re launching a new five-point plan to promote and protect Canada’s digital sovereignty. And we’re combining this with ongoing and new work to invest in Canada’s people, ensuring they have access to the skills needed to succeed in an AI era.
This builds upon Microsoft’s longstanding and deep relationship with the Canadian people. With more than 5,300 employees across 11 cities nationwide, including Toronto, Vancouver, Montreal, Calgary, Edmonton, Ottawa, and Quebec City, we have employees in every region to bring talent closer to the communities we serve.
Beyond our own team, third-party estimates indicate that more than 17,000 Microsoft partner companies in Canada generate between $33B CAD and $41B CAD in annual revenue. Through this partner ecosystem, Microsoft helps support 426,000 jobs across Canada, including close to 300,000 people who build solutions on Microsoft platforms or provide goods and services for these efforts. As we expand our AI and cloud footprint, these partnerships are helping Canadian organizations modernize and compete globally.
Our commitment also extends beyond business. In 2024 alone, we donated $219M CAD in grants, employee giving, and technology services to Canadian non-profits and charities.
At its core, our commitment to Canada centres on three things: technology, trust, and talent.
Technology: Building the Backbone of Canada’s Digital Future
Canada’s AI transformation is accelerating. According to Microsoft’s AI Diffusion Leaderboard, Canada ranks 14th globally in AI adoption, with usage now topping a third of the population. Developer contributions are growing too, with Canada ranking 14th worldwide in GitHub AI contributors.
This momentum is clear. Canada is a leader not just in AI research, but in putting AI to good use. But sustaining this momentum requires more than enthusiasm. It demands advanced AI infrastructure, sovereign safeguards, world-class cybersecurity, and a skilled workforce to keep pace with innovation. That’s why Microsoft is investing to create a secure, sustainable, and scalable backbone for AI adoption, empowering Canada to lead confidently in the AI era.
Our investment expands our Azure Canada Central and Canada East datacentre regions, delivering sustainable, secure, and scalable cloud and AI capabilities. These datacentres will power everything from modernized public services to advanced AI innovation—responsibly and within Canadian borders.
Every facility and datacentre we build in Canada reflects Microsoft’s global commitment to sustainability. We’re designing our facilities to be energy-efficient, powered increasingly by renewable energy, and optimized for water conservation through advanced cooling technologies. These steps align with our pledge to be carbon negative, water positive, and zero waste by 2030, ensuring that as we expand our AI and cloud footprint, we do so responsibly—minimizing environmental impact while supporting Canada’s clean energy goals.
Since early 2023, these investments have launched major infrastructure projects, created thousands of jobs, and fostered partnerships with Canadian innovators that drive sustainability and economic growth. These datacentres also translate into thousands of construction and permanent engineering and technology jobs, partnerships with Canadian digital innovators, and a surge in local economic opportunity.
Our infrastructure expansion has helped transform and develop new industries—from retail and finance to cleantech and quantum computing. Firms like Canadian Tire, Manulife, BMO, and Gay Lea Foods are embracing AI to transform their businesses, and their stories are a testament to Canada’s leadership in digital adoption.
To help achieve our 2030 sustainability goals, Microsoft is also investing in Canadian cleantech innovation. Canada is recognized as a global leader in cleantech and carbon removal technologies, and we are proud to collaborate with outstanding Canadian companies like Eavor, Cyclic Materials, Arca, Deep Sky, and Carbon Engineering (via 1PointFive).
Trust: A Five-Point Plan to Protect Canada’s Digital Sovereignty
As important as our investment in AI infrastructure is the new company-wide initiative we are launching to protect Canada’s digital sovereignty. This builds on technology and expertise across Microsoft and is based on a five-part plan to defend Canada’s cybersecurity, keep Canadian data on Canadian soil, strengthen privacy protection, support leading local AI developers, and ensure the continuity of cloud and AI services.
Defending Canada’s cybersecurity
As we enter the second quarter of the 21st century, the protection of digital sovereignty starts with the protection of cybersecurity. Reflecting Microsoft’s long-term presence in Canada, we appreciate how much has changed since the century began. During the first quarter of the century, Canada’s population grew by more than 28 percent and its GDP in real terms grew by more than 55 percent. Changing geopolitics and navigation in the Arctic Ocean have put Canada in a more important global position than ever.
Canada’s growth and importance have made the country a bigger cybersecurity target.
Microsoft has long prioritized the protection of Canadian cybersecurity. With unmatched threat intelligence capabilities based on 100 trillion signals from around the world every day, we’ve seen increasing international targeting of Canadian digital assets, especially from China, Russia, North Korea, and countries across south Asia and the Middle East. This has included influence operations in advance of elections and digital espionage focused on government agencies.
Even more significant, Canada’s diverse and robust economy has become a target of sophisticated international ransomware attacks. Organized criminal groups—some with nation state sponsorship—are targeting every sector of the economy and the public, and they are starting to rely on even more sophisticated technology and techniques, including AI. Our assessment is that in 2025 more than half of cyberattacks against Canada with known motives have been based on financial objectives, and 80 percent of them have involved efforts to exfiltrate data. Almost 20 percent have targeted the healthcare and education sectors, which creates more widespread threats to the public.
To strengthen our protection of Canada’s cybersecurity, we are launching today in Ottawa a dedicated Threat Intelligence Hub. This Hub will house Microsoft subject matter experts in threat intelligence, threat protection research, and applied AI security research. They will have access to Microsoft threat intelligence data and assets from around the world, so they can work closely with the Government of Canada and law enforcement partners to track and interdict nation state actors and organized crime.
In recent months, our team in Canada has been working to thwart China-based threat actors and has been sharing intelligence related to North Korean IT workers using stolen or fake identities to secure jobs with technology companies in Canada. We are dedicated to making this cybersecurity protection even stronger going forward.
Keeping Canadian Data on Canadian Soil
We also recognize the importance of ensuring that our Canadian customers can keep their local data on Canadian soil. This is why we embarked a decade ago, in close consultation with national leaders, to build and open our first two Canadian datacentres to provide local data residency in Toronto and Quebec City. We have steadily expanded our local services each year since. In 2026, we will take three new steps to keep Canadian data on Canadian soil.
First, we will strengthen sovereign controls and expand our data residency commitments by offering in-country data processing for Copilot interactions.
Second, we will expand our Azure Local offering in Canada to enable the extension of Azure capabilities to customer-owned environments such as private cloud and on-premises infrastructure.
And third, we will launch Sovereign AI Landing Zone (SAIL) in Canada. This is an open-source AI Landing Zone whose code will be hosted publicly on GitHub, and which will provide a secure foundation for deploying AI solutions within Canada’s borders, so organizations can build, scale, and innovate while maintaining the highest standards of privacy and compliance.
Protecting Canadian privacy
We recognize that privacy is a cornerstone of digital trust. We have long protected the digital privacy of people across Canada. As we look to 2026, we will build on this strong foundation with new technical capabilities and legal measures.
Next year, Microsoft will bring the latest confidential computing capabilities to our Canadian datacentre regions. Confidential computing in Azure enables organizations to keep data encrypted and isolated, even while in use, helping meet stringent digital sovereignty requirements. Azure Key Vault will also be available to Canadian customers, supporting external key management and allowing encryption keys to remain under customer control, whether stored on-premises or with a trusted third-party Hardware Security Module (HSM).
We will couple these technical measures with expanded contractual protection. We are codifying our promise to protect our Canadian customers’ data with a contractual commitment, in which we agree to challenge any government demand for Canadian government or commercial customer data where we have a legal basis for doing so.
Supporting Canada’s AI developers
Canada’s growing AI and digital ecosystem also requires protection and support for the nation’s leading AI developers. We have expanded this work in 2025 and will continue to prioritize these efforts in the year ahead.
Our work with Cohere exemplifies this commitment: we are welcoming Cohere into the Microsoft Foundry’s first-party model lineup, making their advanced language models—Command A, Embed 4, and Rerank—accessible on Azure. This will amplify Canadian innovation on a global stage. This partnership is built on more than technology; it is grounded in trust and shared values, with initiatives to help Cohere scale across Canada and worldwide.
We will explore new ways to integrate Cohere’s sovereign, made-in-Canada AI models into Microsoft services, helping to ensure Canadian enterprises and the public sector benefit from secure, locally developed solutions that embody responsibility and integrity. Together with Canada’s leading innovators, we are building relationships that deliver opportunity and impact while reinforcing the trusted foundation of Canadian digital sovereignty.
Defending the continuity of Canadian cloud services
Finally, in the face of geopolitical uncertainty, continuity is essential. Microsoft pledges to rigorously defend the uninterrupted operation of cloud services for Canadian government customers. If ever confronted with an order to suspend or halt operations in Canada, we will pursue every available legal and diplomatic avenue—including litigation—to protect access to critical infrastructure. Our track record demonstrates our resolve to stand up for customer rights. We remain ready to reinforce this commitment through robust contractual agreements, confident in our ability to ensure the ongoing operation of Canadian datacentres. Ultimately, these efforts aim to deepen trust between people, institutions, and nations, grounded in mutual respect and a shared commitment to advancing Canada’s digital future.
Microsoft’s digital infrastructure in Canada is not built on wheels. It is permanent infrastructure, and fully subject to Canadian laws and regulations. We recognize and respect that our operations in Canada are governed by Canadian law, just as we adhere to local laws in every country where we operate.
Talent: Investing in the Future for Every Canadian
At its core, every datacentre we build and every AI capability we deploy is an investment in Canadians and their future. Because technology alone doesn’t drive transformation, people do. That’s why it’s imperative to ensure that every Canadian can develop the skills needed to succeed in an AI era.
The need is clear. By 2030, nearly 60 percent of workers worldwide will require new digital skills, yet today only 24 percent of Canadians have received AI training, compared to a global average of 39 percent. Closing this gap is critical for Canada’s competitiveness.
Our new Microsoft Elevate business unit is designed to put people first, making AI opportunities accessible across the country. Since July 2024, Microsoft Canada has engaged 5.7 million learners through free skilling programs, with more than 546,000 individuals completing an AI training course. And we’re not stopping there. By 2026, Microsoft Elevate will help 250,000 Canadians earn in-demand AI credentials, ensuring the workforce is ready for the next decade of innovation.
Our partnerships amplify this impact. The Nonprofit AI Impact Hub, developed with the Canadian Centre for Nonprofit Digital Resilience (CCNDR) and Imagine Canada, strengthens the digital resilience of Canada’s 170,000 charities and nonprofits, which collectively employ 2.7 million people. Through role-based AI training and micro-credentials, we’re equipping this sector with tools to serve communities better.
We’re also investing in the next generation. Today, we are proud to announce a new partnership with Actua, a national leader that brings STEM education to youth throughout Canada, including those in remote, rural, and Indigenous communities. Microsoft Canada and Actua are committed to working with Indigenous communities across Canada to support AI skills development, so that the benefits of AI are felt widely. This partnership will support Actua’s AI Ready and InSTEM (Indigenous Youth in STEM) programs, to equip 20,000 young Canadians with essential AI skills. The InSTEM program will add AI learning for Indigenous youth, blending technology with cultural heritage and knowledge. For instance, students learn how AI tools can help preserve Indigenous languages and support cultural identity.
Canada Can Count on Us
Few American companies have benefitted more than Microsoft from such longstanding ties to Canada. Living so close to the border, we have long appreciated the many attributes that make Canada so special. We share more than geography. We share priorities like security, sustainability, and inclusive growth.
Today, we’re taking this partnership to the next level. We believe Canada has what it takes to help lead the world in responsible AI innovation and adoption, and we’re committed to being a partner every step of the way.
Digital Supply Chain Risk: Critical Vulnerability Affecting React Allows for Unauthorized Remote Code Execution
CVE-2025-55182 (VulnDB ID: 428930) is a severe, unauthenticated remote code execution (RCE) vulnerability impacting a major component of React and its ecosystem, putting applications worldwide at immediate risk.
Flashpoint’s vulnerability research team assesses significant enterprise and supply chain risk given React’s ubiquity: the impacted JavaScript library underpins modern UIs, with 168,640 dependents and more than 51 million weekly downloads.
How CVE-2025-55182 Works
CVE-2025-55182 (VulnDB ID: 428930) impacts all React versions since 19.0.0, meaning that this issue has been potentially exploitable since November 14, 2024. This vulnerability stems from how React handles payloads sent to React Server Function endpoints and deserializes them.
Flashpoint’s VulnDB entry for CVE-2025-55182
Depending on the implementation of this library, a remote, unauthenticated threat actor could send a crafted payload that would be deserialized in a way that causes remote code execution. This would lead to a total compromise of the system hosting the application, allowing for malware such as infostealers, ransomware, or cryptojackers (cryptocurrency mining) to be downloaded.
A working exploit for CVE-2025-55182 has already been published that is effective against some installations. In addition, Amazon has reported that two threat actors, attributed to Chinese advanced persistent threat (APT) groups, have begun to exploit this vulnerability.
Understanding the Impact and Scope of CVE-2025-55182
It is critical that security teams understand the potential downstream scope and impact so that they can focus on mitigation rather than time-consuming research. While the vendor has provided a full disclosure, there are several important caveats to understand about CVE-2025-55182:
Applications not implementing any React Server Function endpoints may still be vulnerable as long as they support React Server Components.
If an application’s React code does not use a server, it is not affected by this vulnerability.
Applications that do not use a framework, bundler, or bundler plugins that support React Server Components are unaffected by this vulnerability.
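The caveats above can be condensed into a rough triage check. This is a sketch of the disclosed conditions for prioritization purposes, not a substitute for the vendor advisory:

```python
def is_potentially_affected(uses_server: bool,
                            supports_server_components: bool) -> bool:
    """Rough triage of the CVE-2025-55182 caveats (a sketch, not the advisory)."""
    if not uses_server:
        # Client-only React code is not affected.
        return False
    if not supports_server_components:
        # No RSC-capable framework, bundler, or bundler plugin => unaffected.
        return False
    # With RSC support, an app may be vulnerable even without
    # explicit React Server Function endpoints.
    return True
```

Note the asymmetry the text calls out: declaring no Server Function endpoints does not take an app out of scope, but lacking React Server Components support does.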
Additionally, several React frameworks and bundlers have been discovered to leverage vulnerable React packages in various ways. The following frameworks and bundlers are known to be affected:
next
react-router
waku
@parcel/rsc
@vitejs/plugin-rsc
rwsdk
NPMJS.com currently shows that the react-dom package, which is effectively part of React, has 168,640 dependents. This means an enormous number of enterprise applications are likely affected. Nearly every commercial application is built on hundreds, sometimes thousands, of components and dependencies. Furthermore, applications produced via vibe coding and similar AI-assisted approaches are also likely to leverage React, potentially amplifying the downstream risk this vulnerability poses.
How to Mitigate CVE-2025-55182
For mitigation, the React team has released versions 19.0.1, 19.1.2, and 19.2.1, which resolve the issue. Flashpoint advises organizations to upgrade their respective libraries urgently. Security teams leveraging dynamic SBOMs (Software Bills of Materials) can drastically accelerate risk mapping and triage for deployed React versions.
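To illustrate that SBOM-style triage, here is a minimal sketch that flags vulnerable `react`/`react-dom` pins in a parsed npm (v2/v3) lockfile. It assumes plain `x.y.z` version strings; prerelease tags and future 19.x lines would need extra handling, and unknown 19.x lines are conservatively flagged for review.

```python
# Minimum patched release per affected 19.x line, per the advisory above.
FIXED = {(19, 0): (19, 0, 1), (19, 1): (19, 1, 2), (19, 2): (19, 2, 1)}

def parse(version: str) -> tuple:
    # Assumes plain x.y.z strings (no prerelease tags).
    return tuple(int(p) for p in version.split(".")[:3])

def vulnerable(version: str) -> bool:
    v = parse(version)
    if v < (19, 0, 0):
        return False
    fix = FIXED.get(v[:2])
    # Conservative: 19.x lines not in the table are flagged for manual review.
    return fix is None or v < fix

def scan_lock(lock: dict) -> list:
    """Flag vulnerable react/react-dom pins in a parsed npm v2/v3 lockfile."""
    hits = []
    for path, meta in lock.get("packages", {}).items():
        pkg = path.rsplit("node_modules/", 1)[-1]
        if pkg in ("react", "react-dom") and vulnerable(meta.get("version", "0.0.0")):
            hits.append((path, meta["version"]))
    return hits

demo = {"packages": {
    "node_modules/react": {"version": "19.1.0"},
    "node_modules/react-dom": {"version": "19.1.2"},
}}
print(scan_lock(demo))  # only the 19.1.0 pin falls below its fixed release
```

In practice the `demo` dict would come from `json.load()` on each repository's `package-lock.json`, giving a quick first-pass inventory before deeper SBOM tooling takes over.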
To avoid confusion, security teams should ignore CVE-2025-66478. It has been rejected for being a duplicate of the preferred CVE-2025-55182.
Mitigate Critical Vulnerabilities Using Flashpoint
Flashpoint strongly recommends security teams treat this vulnerability with utmost urgency. Our vulnerability research team will continue to monitor this vulnerability and its downstream impacts. All updates will be provided via Flashpoint’s VulnDB.
Request a demo today and gain access to quality vulnerability intelligence that helps address critical threats in a timely manner.
Flashpoint’s Top 5 Predictions for the 2026 Threat Landscape
Flashpoint’s forward-looking threat insights for security and executive teams provide the strategic foresight needed to prepare for the convergence of AI, identity, and physical security threats in 2026.
As the global threat landscape accelerates its transformation, 2026 marks an inflection point requiring defensive strategies to fundamentally shift. The volatility observed in 2025 has paved the way for an era soon to be defined by AI-weaponized autonomy, information-stealing malware, systemic instability of public vulnerability systems, and the complete convergence of digital and physical risk.
Flashpoint offers a unique window into these complexities, providing organizations with the foresight needed to navigate what lies ahead. Drawing from Flashpoint’s leading intelligence and primary source collections, we highlight five key trends shaping the 2026 threat landscape. These insights aim to help organizations not only understand what’s next but also build the resilience needed to withstand and adapt to emerging challenges.
Prediction 1: Agentic AI Threats Will Weaponize Autonomy, Forcing a New Defensive Standard
2026 will see continued evolution of AI threats, with future attacks centering on autonomy and integration. Across the deep and dark web, Flashpoint is observing threat actors move past experimentation and into operational use of illegal AI.
As attackers train custom fraud-tuned LLMs (Large Language Models) and multilingual phishing tools directly on illicit data, these AI models will become more capable. The criminal intent shaping their misuse will also become more sophisticated. Additionally, 2026 will see a greater marketplace for paid jailbreaking communities and synthetic media kits for KYC (Know Your Customer) bypass.
These advancements are enabling criminals to move beyond simple tools and engage in scaled, autonomous fraud operations, leading to two major shifts:
Agentic AI is becoming the true flashpoint: Threat actors will be using agentic systems to automate reconnaissance, generate synthetic identities, and iterate on fraud playbooks in near real-time. In this SaaS ecosystem, AI will help attackers leverage subscription tiers and customer feedback loops at scale.
The attack surface will shift to focus on AI Integrations: Organizations are increasingly plugging LLMs into live data streams, internal tools, identity systems, and autonomous agents. This practice often lacks the same security vetting, access controls, and monitoring applied to other enterprise systems. As such, attackers will heavily target these integrations, such as APIs, plugins, and system connections, rather than the models themselves.
“The ubiquity of automation has dramatically increased attack tempo, leaving many security teams behind the curve. While automation can replace repetitive tasks across the enterprise, organizations must not make the critical mistake of substituting human judgement for AI at the intelligence level.
This is paramount because a critical threat in 2026 is Agentic AI autonomy weaponized against soft targets—API integrations and identity systems. The only winning defense will be human-led and AI-scaled, prioritizing purposeful use to keep organizations ahead of this exponential risk.”
Josh Lefkowitz, CEO at Flashpoint
These evolving AI threats will force a fundamental shift in defensive strategies. Defenders will have to deploy controls and monitoring around AI systems rather than trusting them on their own.
Prediction 2: Identity Compromise via Infostealers Will Become the Foundation of Every Attack
Infostealers will become the entry point, the data broker, the reconnaissance layer, and the fuel for everything that follows in a cyberattack. This shift is already in motion and is accelerating rapidly: in just the first half of 2025, infostealers were responsible for 1.8 billion stolen credentials, an 800% spike from the start of the year. However, 2026 will redefine the malware’s role, making its most valuable output access rather than disruption.
Infostealers will become the upstream event that powers the rest of the attack chain. Identity and session data will be increasingly targeted, since it gives attackers immediate access into victim environments. Ransomware, fraud, data theft, and extortion will simply be downstream ways to monetize.
This upstream approach defines the new reality of the attack chain, which is already operational: nearly every major stealer strain Flashpoint observes exfiltrates identity and session data alongside credentials.
An organization’s attack surface is no longer just composed of their own networks. It is the entire digital identity of their employees and partners. This new reality requires security teams to take a new approach. Instead of attempting to block attacks, they must proactively detect compromised credentials before they are weaponized. This will be the difference between reacting to a data breach and preventing one.
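A minimal sketch of that proactive approach: filter leaked-credential records down to corporate identities and prioritize those carrying session material, which can bypass MFA entirely. The record fields and domain here are illustrative, not a real feed schema.

```python
# Hypothetical leaked-credential records, as a stealer-log feed might surface them.
leaked = [
    {"email": "j.doe@acme-corp.com", "source": "stealer_log", "has_session_cookie": True},
    {"email": "user@gmail.com", "source": "stealer_log", "has_session_cookie": False},
    {"email": "ops@acme-corp.com", "source": "combo_list", "has_session_cookie": False},
]

CORPORATE_DOMAINS = {"acme-corp.com"}  # the organization's identity footprint

def triage(records):
    """Surface corporate identities found in leak data, worst first."""
    hits = [r for r in records if r["email"].split("@")[-1] in CORPORATE_DOMAINS]
    # Stolen session cookies can bypass MFA, so those records jump the queue.
    return sorted(hits, key=lambda r: not r["has_session_cookie"])

for r in triage(leaked):
    action = "revoke sessions NOW" if r["has_session_cookie"] else "force password reset"
    print(r["email"], "->", action)
```

Running a check like this continuously against fresh leak collections is the difference the text describes: detecting a compromised credential before it is weaponized rather than after.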
“The infostealer economy has fully industrialized the attack chain, making initial compromise a low-cost commodity. Multiple security incidents in 2025 tie back to credentials found in infostealer logs. This reality has underscored the critical importance of digital trust—specifically, verifying who can access what resources. For 2026, identity is the perimeter to watch, and security teams must proactively hunt for compromised credentials before they’re weaponized.”
Ian Gray, Vice President of Intelligence at Flashpoint
Prediction 3: CVE Volatility Will Force Redundancy in Vulnerability Intelligence
The temporary funding crisis at CVE in April 2025 and the subsequent CISA stopgap extension through March 2026 exposed the systemic fragility of a centralized vulnerability intelligence model. With the future of the CVE/NVD system hanging in the balance, 2026 will be defined by the urgent need for redundancy and diversification in vulnerability intelligence.
In today’s vulnerability intelligence ecosystem, nearly every organization’s vulnerability management framework relies on CVE and NVD—including its “alternatives” such as the EUVD (European Union Vulnerability Database). The CVE system has grown into a critical global cybersecurity utility, relied upon by nearly all vulnerability scanners, SIEM platforms, patch management tools, threat intelligence feeds, and compliance reports. A complete shutdown of CVE would result in a widespread loss of institutional infrastructure.
The next generation of security needs to be built on practices that are resilient, diversified, and intelligence-driven, focused on actionable insights such as threat actor behavior, likelihood of exploitation in the wild, relevance to ransomware campaigns, and business context. Security teams will need to leverage a comprehensive source of vulnerability intelligence such as Flashpoint’s VulnDB, which provides full coverage of CVE while also cataloging more than 100,000 vulnerabilities missed by CVE and NVD.
Prediction 4: Executive Protection Will Remain a Critical Challenge as Cyber-Physical Threats Converge
The continued blurring of lines between cyber, physical, and geopolitical threats will elevate the risk to organizational leadership, turning executive protection into a holistic intelligence function in 2026. The rise of information warfare combined with physical world convergence means the threat to key personnel is no longer purely digital.
In the aftermath of the tragic December 2024 assassination of United Healthcare’s CEO, Flashpoint has seen the continued circulation and glorification of “wanted-style posters” of executives in extremist communities. Additionally, Flashpoint has seen nation-state actors participate, using espionage and influence to target high-value individuals. Organizations must adopt an integrated approach that connects insights from threat actor chatter with a wealth of other OSINT sources. This fusion of intelligence is essential to safeguarding leadership and key personnel.
Prediction 5: Extortion Shifts to Identity-Based Supply Chain Risk
2025 was marked by several large-scale extortion campaigns, demonstrating how the threat landscape is rapidly evolving. Ransomware operations have shifted into a straight extortion play. Flashpoint has observed a surge in new entrants to the ransomware market, accompanied by a decline in the quality and decorum of ransomware groups.
Furthermore, vishing campaigns attributed to “Scattered Spider” have highlighted weaknesses in identity, trust, and verification. Campaigns from “Scattered LAPSUS$ Hunters” have also exposed vulnerabilities in third-party integrations. These attacks culminated in extortion, showing that modern attacks will target trusted users and trusted applications for initial access, and will forgo ransomware in favor of data theft and extortion.
As this shift continues into 2026, threat actors will increasingly focus their efforts on exploiting human behavior and identity systems. Instead of spending resources on breaking network perimeters, attackers will socially engineer employees to gain access to corporate systems at scale. This change in TTPs will sharply increase supply chain risk, especially via third parties.
Charting a Path Through an Evolving Threat Landscape with Flashpoint Intelligence
These five predictions highlight the transformative trends shaping the future of cybersecurity and threat intelligence. Staying ahead of these challenges demands more than just reactive measures—it requires actionable intelligence, strategic foresight, and cross-sector collaboration. By embracing these principles and investing in proactive security strategies, organizations can not only mitigate risks but also seize opportunities to enhance their resilience.
As the threat landscape continues to rapidly evolve, staying informed and prepared are critical components of risk mitigation. With the right tools, insights, and partnerships, security teams can navigate the complexities ahead and safeguard what matters most.
Around the world, the dangers of extreme weather are a daily reality. In 2024, extreme weather displaced or disrupted the lives of more than 800,000 people worldwide—a reminder that accurate, timely forecasts aren’t just about data; they’re about people. From farmers deciding when to plant to coastal communities preparing for hurricanes, better forecasting can save lives, protect infrastructure, and support economies.
That is why Microsoft remains deeply committed to Aurora, an AI model designed to help scientists understand Earth systems in new ways. Trained on vast amounts of data, Aurora has already shown promise across multiple scenarios, including predicting the weather, tracking hurricanes and air quality, and modeling ocean waves and energy flows.
Today, we are reaffirming our commitment to keeping Aurora open, collaborative, and impactful so researchers can innovate faster and deliver solutions that help communities prepare, adapt, and thrive. Scientific progress depends on openness and a strong global community, which is why Aurora will progress as an open-source platform, enabling scientists everywhere to contribute and apply it to new climate and weather challenges.
The next phase: Fueling innovation through research partnerships
We’re collaborating with Professor Rich Turner, a leader in machine learning research, and his lab at the University of Cambridge through a Microsoft AI for Good grant, alongside our research scientists, to continue development of Aurora. Aurora was originally developed by Microsoft Research AI for Science in collaboration with Professor Turner, and we believe it has the potential to change the way scientists around the world use AI for weather and climate science.
Building on our SPARROW initiative, we’re also investing in research on open-source weather stations that can expand access to high-quality environmental data. These affordable, community-deployable systems are designed to help fill critical observation gaps and strengthen the dependability of weather predictions where they matter most.
Making Aurora available to scientists everywhere
Aurora’s source code and model weights are already open—but we’re going further. Together with Turner and Cambridge, our AI for Good team will open-source future releases of Aurora and new models that are built upon it, including training pipelines. By making Aurora open and free to build upon, we’re enabling researchers and developers everywhere to collaborate, contribute, and drive innovation together.
Empowering national meteorological services
As with any technology, the measure of success for tools like Aurora is to have a positive impact on the lives of people. Empowering national meteorological services across the Global South, along with the Global North, is a priority. We’re particularly focused on applying Aurora to help meteorological services develop and strengthen forecasting systems tailored to their local environments. This will allow them to adapt, extend, and innovate on top of Aurora, improving the accuracy, reliability, and reach of their forecasts.
Enabling a cross-industry ecosystem
Aurora is trained on one of the largest collections of atmospheric data ever assembled to develop an AI forecasting model. It’s then fine-tuned to perform a variety of specific tasks, like predicting wave height or air quality, using modest amounts of additional data.
The application of such a model could unlock innovation across all kinds of other industries. For example, energy companies and commodity traders have expressed interest, particularly in seeing how Aurora can be adapted to better predict renewable power generation, anticipate extreme weather events, and help protect energy grids.
We are excited to see our work on Aurora graduate from a research project into a truly collaborative, open-source effort. By opening Aurora to the global community, we’re enabling breakthroughs in scientific understanding that we hope will transform humanitarian aid, optimize energy systems, advance sustainability, and even reshape financial services.
Today, I had the pleasure of joining a range of leaders for timely, impactful discussions on child well-being in the age of AI at the Vatican, building on thoughtful conversations held during the United Nations General Assembly. These issues are top of mind globally, from parents to policymakers to physicians.
At Microsoft, we remain focused on our goal of empowering young people to use technology safely, mindfully, and in pursuit of social, educational, and economic opportunities. That means taking new steps spurred by regulation, such as new age verification measures for our UK Xbox users, as well as adapting our longstanding commitments to responsible AI and child online safety and privacy to build trust in the AI era. Today, we’re sharing new research on youth perspectives, announcing the AI Futures Youth Council to amplify teen voices, and offering policy recommendations to help families navigate the digital world with confidence.
Centering young people’s voices: Announcing the AI Futures Youth Council and new age assurance research
In 2017, Microsoft led the industry with our first Council for Digital Good—a forum where we could hear directly from young people about their experiences and perceptions of online risk. In 2025, with AI reshaping our world—and their future—we again need to center the voices of young people as we think about responsible design for AI and how we set students up for the future. We are actively working with teens from the Asia-Pacific region to develop our first “for teens, by teens” guide to AI chatbots. Today, I’m pleased to announce the upcoming launch of our first “AI Futures Youth Council,” bringing together teens from the US and Europe to have their say on their future. We’ll share more about the application process soon.
We know that a critical precursor to providing young people positive and productive online experiences is understanding which users are young people. Around the globe, the debate over how to achieve age assurance online continues unabated. We have been grateful to work with CIPL and the WeProtect Global Alliance over the last year to explore how to achieve improved age assurance that is consistent with fundamental rights of privacy and access to information. As with any other safety intervention, our goal is to be proportionate and thoughtful where we take new steps, which is why we have focused on gaming in the first instance—reflecting the responsibilities we have to our youngest users and our ongoing commitment to player safety.
To inform our strategy and the broader policy conversation, we partnered with Praesidio Safeguarding to better understand youth perspectives on age assurance approaches across the UK, Ghana, and Indonesia. We are pleased to share that research today. The findings reinforce the importance of transparency, choice, and trust: teens want clear explanations of how their data is used, express concerns about exclusion where formal proof of age is lacking, and show varying comfort levels with the use of biometric and behavioral data. Notably, young people value parental involvement but also highlight the need for independence and privacy as they mature. The results also highlight some of the important differences across geographies. For example, teenagers in Ghana often not only share devices with their families but may also share an account—underscoring a need for nuanced global approaches at multiple layers of the technology stack.
These insights underscore our belief that proportionality—matching safeguards to actual risks—is essential to building trust and empowering youth online. They also highlight the need for age assurance models that are inclusive, flexible, and respectful of youth autonomy—especially in global contexts where device and account sharing are common. We remain committed to ongoing dialogue and innovation, ensuring that our solutions evolve alongside the needs and expectations of children, families, and society at large.
Our policy recommendations: Empower young people to use technology safely
We believe technology should empower young people, not put them at risk. Given the diverse range of online services, it is important to remember there is no single “digital seatbelt” to protect and empower young people online.
We therefore offer the following recommendations as policymakers, regulators, and experts continue to discuss these issues, building on our 2024 blog:
Avoid blanket access restrictions. Age assurance requirements that block full access to a service—except in limited cases like sites dedicated to age-restricted content (e.g., pornography)—can unintentionally limit child rights, such as access to information. Instead, age assurance should be applied at the service level, target specific design features that pose heightened risks, and enable tailored experiences for children.
Focus on the highest risks for impact, such as content and features associated with documented harms to children, and as determined through democratic processes. Providers should take steps to assess and mitigate risks to children on their services, while ensuring documentation requirements or compliance obligations do not inadvertently undermine safety. A risk-based and proportionate approach—grounded in clear criteria and supported by interoperable standards—can also help ensure that age assurance is applied where most needed, without introducing unnecessary friction. Providers of high-risk services should bear the responsibility of age assurance.
Strengthen safeguards for AI companions. Recent tragic events have highlighted the need for continued care in developing AI companions, especially where these may be used by young people. At Microsoft, we are building AI services for empowerment and want the right guardrails in place to protect all users, and we welcome new, commonsense measures such as those enacted in California and Australia to reduce the potential harms related to suicide and self-injury risks, as well as to sexualized or violent content. We will continue to work closely with researchers and experts to understand and mitigate potential risks to young people in this fast-evolving field.
Incentivize age-appropriate design. Banning kids from online services isn’t the answer, but what constitutes an “age-appropriate” experience will vary. We have supported a duty of care approach to child safety where the duty can be implemented flexibly, guided by thoughtful and evidence-based regulatory guidance. Ongoing research and expert engagement are needed to understand how to advance child safety and rights on diverse services—not just social media.
Protect the privacy and security of all users. Tailoring age assurance requirements will help enable proportionate approaches to data processing. Current proposals for age verification by app stores risk creating significant privacy risks by collecting sensitive information and sharing unnecessary age data with a wide variety of services while also not solving the challenges lawmakers want to address. We continue to support federal privacy legislation in the US and encourage global efforts to develop standards and certifications for age assurance providers. Trusted credential sharing can also increasingly be enabled by emerging digital identity ecosystems—including government-issued IDs and wallet-based models—that preserve mutual privacy between issuers and relying parties.
Support, not overwhelm. Our Global Online Safety Survey results show that while parents might underestimate the risks teens face online, teens are most likely to turn to a parent for help. Parents should not face a deluge of notifications nor bear the sole responsibility for safety, but should have access to, awareness of, and education on family safety tools that can help them make informed choices appropriate for their family and their values.
Foster multistakeholder collaboration. We believe it’s essential to elevate the voices and perspectives of young people, as well as for regulators and industry to engage with civil society and partner to advance practical solutions. As child safety regulations come into force, it will also be important to get feedback from affected communities on where regulation may have adverse rights impacts, as well as to understand where harm may have been averted. Public education will be needed to help all users understand why their online experiences might be changing.
We will continue learning, listening, and collaborating, especially with our new Council, and look forward to sharing our insights.