Cyber Insights 2026: Information Sharing
Information sharing is necessary for efficient cybersecurity and is widespread, but it is never quite perfect in practice.
The post Cyber Insights 2026: Information Sharing appeared first on SecurityWeek.
US officials told The New York Times that cyberattacks were used to turn off the lights in Caracas and disrupt air defense radars.
The post New Reports Reinforce Cyberattack’s Role in Maduro Capture Blackout appeared first on SecurityWeek.
We've known that social engineering would get AI wings. Now, at the beginning of 2026, we are learning just how high those wings can soar.
The post Cyber Insights 2026: Social Engineering appeared first on SecurityWeek.
China has more than 5,000 cybersecurity companies and all the top 20 firms are working with the government.
The post Cybersecurity Firms React to China’s Reported Software Ban appeared first on SecurityWeek.
Easterly will be leading the world-renowned cybersecurity conference and other RSAC programs.
The post Former CISA Director Jen Easterly Appointed CEO of RSAC appeared first on SecurityWeek.
Researchers have disclosed technical details on a new AMD processor attack that allows remote code execution inside confidential VMs.
The post New ‘StackWarp’ Attack Threatens Confidential VMs on AMD Processors appeared first on SecurityWeek.
Designed for long-term access, the framework targets cloud and container environments with loaders, implants, and rootkits.
The post VoidLink Linux Malware Framework Targets Cloud Environments appeared first on SecurityWeek.

Today, Microsoft is announcing a coordinated legal action in the United States and, for the first time, the United Kingdom to disrupt RedVDS, a global cybercrime subscription service fueling millions in fraud losses. These efforts are part of a broader joint operation with international law enforcement, including German authorities and Europol, which has allowed Microsoft and its partners to seize key malicious infrastructure and take the RedVDS marketplace offline, a major step toward dismantling the networks behind AI-enabled fraud, such as real estate scams.
For as little as US $24 a month, RedVDS provides criminals with access to disposable virtual computers that make fraud cheap, scalable, and difficult to trace. Services like these have quietly become a driving force behind today’s surge in cyber‑enabled crime, powering attacks that harm individuals, businesses, and communities worldwide. Since March 2025, RedVDS‑enabled activity has driven roughly US $40 million in reported fraud losses in the United States alone. Among the victims is H2-Pharma, an Alabama‑based pharmaceutical company that lost more than $7.3 million — money intended to sustain lifesaving cancer treatments, mental health medications, and children’s allergy drugs for patients across the country. In a separate case, the Gatehouse Dock Condominium Association in Florida was tricked out of nearly $500,000 — funds contributed by residents and property owners for essential repairs. Both organizations are joining Microsoft as co‑plaintiffs in this civil action.
But these cases represent only a fraction of the harm. Fraud and scams frequently go unreported, victims are global, and cybercriminals routinely pivot across platforms and service providers. For the individual, fraud has lasting effects that extend beyond financial loss to emotional wellbeing, health, relationships, and long-term stability. As a result, the true toll of RedVDS‑enabled activity is far higher than the roughly US $40 million Microsoft can directly observe.
RedVDS is an online subscription service that is part of the growing cybercrime-as-a-service ecosystem where cybercriminals buy and sell services and tools to launch attacks at scale. It provides access to cheap, effective, and disposable virtual computers running unlicensed software, including Windows, allowing criminals to operate quickly, anonymously, and across borders.

Cybercriminals use RedVDS for a wide range of activities, including sending high‑volume phishing emails, hosting scam infrastructure, and facilitating fraud schemes. RedVDS is frequently paired with generative AI tools that help identify high‑value targets faster and generate more realistic, multimedia email threads that mimic legitimate correspondence. In hundreds of cases, Microsoft observed attackers further augment their deception by leveraging face-swapping, video manipulation, and voice cloning AI tools to impersonate individuals and deceive victims.
In just one month, more than 2,600 distinct RedVDS virtual machines sent an average of one million phishing messages per day to Microsoft customers alone. While most were blocked or flagged as part of the 600 million cyberattacks Microsoft blocks per day, the sheer volume meant a small percentage may still have reached targets’ inboxes. Since September 2025, RedVDS‑enabled attacks have led to the compromise of, or fraudulent access to, more than 191,000 organizations worldwide. These figures represent only a subset of the impacted accounts across all technology providers, illustrating how quickly this infrastructure increases the scale of cyberattacks.
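The scale implied by these figures can be sketched with simple arithmetic. The VM count and daily message volume come from the text above; the 1 percent miss rate is a purely hypothetical illustration, not a reported statistic:

```python
# Back-of-envelope arithmetic for the phishing volumes described above.
# The figures (2,600 VMs, ~1M messages/day) come from the text; the per-VM
# rate and the miss-rate scenario are derived illustrations, not reported data.
vms = 2_600
messages_per_day = 1_000_000

per_vm_daily = messages_per_day / vms  # roughly 385 messages per VM per day
print(f"~{per_vm_daily:.0f} phishing messages per VM per day")

# Even a small miss rate on a million daily messages is a large absolute number.
assumed_miss_rate = 0.01  # hypothetical 1% of messages evading filters
reached = messages_per_day * assumed_miss_rate
print(f"~{reached:,.0f} messages per day could reach inboxes at that rate")
```

The point of the sketch is that "most were blocked" still leaves a meaningful attack surface at this volume.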

One of the most common ways RedVDS‑enabled attacks result in financial loss is through payment diversion fraud, also known as business email compromise, or “BEC.” In these schemes, attackers gain unauthorized access to email accounts, quietly monitor ongoing conversations, and wait for the right moment, such as an upcoming payment or wire transfer. At that point, they impersonate a trusted party and redirect funds, often moving the money within seconds. Both H2-Pharma and the Gatehouse Dock Condominium Association were targeted through sophisticated BEC schemes that exploited trust and timing.


RedVDS has also been heavily used to facilitate real estate payment diversion scams, one of the fastest‑growing forms of cyber‑enabled fraud. In these cases, attackers compromise the accounts of realtors, escrow agents, or title companies and send strategically timed emails with fraudulent payment instructions designed to divert closing funds, escrow payments, and other sizeable transactions. For families and first-time homebuyers, these scams can mean losing their savings or the home purchase altogether. Microsoft has observed RedVDS‑enabled activity affecting more than 9,000 customers in the real estate sector alone, with particularly severe impact in countries such as Canada and Australia.
And the threat goes far beyond real estate. RedVDS‑enabled scams have hit construction, manufacturing, healthcare, logistics, education, legal services, and many other sectors—disrupting everything from production lines to patient care.
Cybercrime today is powered by shared infrastructure, which means disrupting individual attackers is not enough. Through this coordinated action, Microsoft has disrupted RedVDS’s operations, including seizing two domains that host the RedVDS marketplace and customer portal, while also laying the groundwork to identify the individuals behind them.
Microsoft’s legal actions are reinforced by close collaboration with law enforcement partners around the world, further disrupting the malicious operation. Germany’s Public Prosecutor’s Office Frankfurt am Main – Central Office for Combating Internet Crime (ZIT) and the German State Criminal Police Office Brandenburg have seized a critical server used to power RedVDS, effectively taking its central marketplace offline. As part of this ongoing disruption, Microsoft is also working closely with international law enforcement, including Europol’s European Cybercrime Centre (EC3), to dismantle the broader network of servers and payment systems that supported RedVDS customers.
What people and organizations can do
We are deeply grateful to H2-Pharma and the Gatehouse Dock Condominium Association for their willingness to come forward and share their experiences. Their cooperation, combined with Microsoft’s threat intelligence, made this action possible and will help protect future victims. Falling victim to a scam should never carry stigma. These attacks are executed by organized, professional criminal groups that intercept and manipulate legitimate communications between trusted parties.
Simple steps can significantly reduce risk, including slowing down and questioning urgency, calling points of contact back using numbers that are already known to you, verifying payment requests using additional contact information, enabling multifactor authentication, watching carefully for subtle changes in email addresses, keeping software up to date, and reporting suspicious activity to law enforcement. Every report helps dismantle networks like RedVDS and brings us closer to stopping cybercrime at scale.
This action against RedVDS builds on Microsoft’s ongoing efforts to disrupt fraud and scam infrastructure through legal and technical action, collaboration with law enforcement, and participation in global initiatives such as the National Cyber-Forensics and Training Alliance (NCFTA) and the Global Anti-Scam Alliance (GASA). It marks the 35th civil action targeting cybercrime infrastructure by Microsoft’s Digital Crimes Unit, underscoring a sustained strategy to go beyond individual takedowns and dismantle the services that criminals rely on to operate and scale.
As services like RedVDS continue to emerge, Microsoft will keep working with partners across sectors and borders to identify and disrupt the infrastructure behind cyber-enabled fraud, making it harder for criminals to profit and easier for people and organizations to stay safe online.
The post Microsoft disrupts global cybercrime subscription service responsible for millions in fraud losses appeared first on Microsoft On the Issues.
RedVDS enables threat actors to set up servers that can be used for phishing, BEC attacks, account takeover, and fraud.
The post RedVDS Cybercrime Service Disrupted by Microsoft and Law Enforcement appeared first on SecurityWeek.
The Predator spyware is more sophisticated and dangerous than previously realized.
The post Predator Spyware Turns Failed Attacks Into Intelligence for Future Exploits appeared first on SecurityWeek.
News of the move to acquire Seraphic comes less than a week after CrowdStrike announced an agreement to acquire identity security startup SGNL for $740 million.
The post CrowdStrike to Acquire Browser Security Firm Seraphic for $420 Million appeared first on SecurityWeek.
Two vulnerabilities patched this month by Microsoft were disclosed publicly before fixes were released.
The post Microsoft Patches Exploited Windows Zero-Day, 111 Other Vulnerabilities appeared first on SecurityWeek.
AI will assist companies in finding their external attack surface, but it will also assist bad actors in locating and attacking the weak points.
The post Cyber Insights 2026: External Attack Surface Management appeared first on SecurityWeek.
Microsoft’s 5-point plan to partner with local communities across the United States
This year marks America’s 250th year of independence. One of the trends that has repeatedly shaped the nation’s history is again in the news. As we’re experiencing at Microsoft, AI is the latest in a long line of new technologies to require large-scale infrastructure development.
Microsoft today is launching a new initiative to build what we call Community-First AI Infrastructure—a commitment to do this work differently than some others and to do it responsibly. This commits us to the concrete steps needed to be a good neighbor in the communities where we build, own, and operate our datacenters. It reflects our sense of civic responsibility as well as a broad and long-term view of what it will take to run a successful AI infrastructure business. In short, we will set a high bar.
As we launch this initiative, we think about it in the context of both the headlines of the day and the lessons from the past. Since the 1770s, the country has advanced through successive eras built on huge infrastructure development: canals, railroads, power plants, and the electrical grid, followed by the telephone system, highways, and airports. AI infrastructure has become the next chapter in this story.
Like major buildouts of the past, AI infrastructure is expensive and complex. Investments are advancing at a rapid pace. Today, these require large-scale spending by the private sector in land, construction, electricity, liquid cooling, high-bandwidth connectivity, and operations. This revives a longstanding question: how can our nation build transformative infrastructure in a way that strengthens, rather than strains, the local communities where it takes root?
Large AI investments are accelerating just as datacenter concerns are growing in local communities. The pattern is familiar. Whether it was canals, railroads, the electrical grid, or the interstate highway system, each era produced its own conflicts over who bore the burdens of progress. One enduring lesson is that successful infrastructure buildouts will only progress when communities feel that the gains outweigh the costs. Long-term success requires a commitment to address public needs, including by the private companies making these investments.
This must start by understanding local concerns. Residential electricity rates have recently risen in dozens of states, driven in part by several years of inflation, supply chain constraints, and long-overdue grid upgrades. Communities value new jobs and property tax revenue, but not if they come with higher power bills or tighter water supplies. Without addressing these issues directly, even supportive communities will question the role of datacenters in their backyard.
As a company, we believe in the many positive advances AI will bring to America’s future. From stronger economic growth to better medical advances and more affordable products, we believe AI will make a difference in everyday lives. But we also recognize that AI, like other fundamental technological shifts, will create new challenges as well. And we believe that tech companies like Microsoft have both a unique opportunity to help contribute to these advances and a heightened responsibility to address these challenges head-on.
This Community-First AI Infrastructure Initiative provides a framework for doing exactly that. It is anchored in five commitments, each a clear promise to the communities where we build, own, and operate Microsoft datacenters.
We describe our plans in detail below. We recognize that these will evolve and improve, based most importantly on what we learn from ongoing engagement with local communities across the country. We’ll also follow this plan for Community-First AI Infrastructure with similar plans for other countries, shaped to reflect their local needs and traditions.
But we are choosing the beginning of 2026 in Washington, DC to launch this effort in the United States. Our goal is to move quickly, partner with local communities, and bring these commitments to life in the first half of this year.
There’s no denying that AI consumes large amounts of electricity. While advances in technology may someday change this, today, this is the reality.
The United States will retain its AI leadership role only if AI infrastructure can tap into a rapidly growing supply of electricity. The International Energy Agency (IEA) estimates that US datacenter electricity demand will more than triple by 2035, growing from 200 terawatt-hours to 640 terawatt-hours per year. This growth is taking place alongside rapid electrification of manufacturing and other sectors of the economy.
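A quick check of the IEA projection cited above: the two endpoints (200 and 640 TWh) come from the text, while the ten-year horizon and the compound-growth framing are illustrative assumptions:

```python
# Implied growth in US datacenter electricity demand, per the IEA figures above.
# Endpoints are from the text; the 10-year horizon (~2025 -> 2035) and the
# CAGR framing are assumptions for illustration.
start_twh, end_twh = 200, 640
years = 10

growth_factor = end_twh / start_twh  # 3.2x, i.e. "more than triple"
cagr = growth_factor ** (1 / years) - 1  # roughly 12% per year
print(f"{growth_factor:.1f}x total growth, ~{cagr:.1%} compound annual rate")
```

Framed this way, "more than triple by 2035" means demand compounding at roughly 12 percent a year, which is what makes the grid-buildout timelines in the following paragraph so pressing.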
Our nation is addressing this reality at a demanding time. Even in the absence of datacenter construction, the United States is facing major electricity challenges. Much of the country’s electricity transmission infrastructure is more than 40 years old, and it’s under strain. Supply chain constraints on transformers and high-voltage equipment are delaying upgrades that would enable existing lines to deliver more electricity. New transmission lines can take 7 to 10 years or more to build due to permitting and siting delays. This creates a mismatch with growing electricity demand.
Some have suggested that AI will be so beneficial that the public should help pay for the added electricity the country needs for it. We believe in the benefits AI will create, but we disagree with this approach. Especially when tech companies are so profitable, we believe that it’s both unfair and politically unrealistic for our industry to ask the public to shoulder added electricity costs for AI. Instead, we believe the long-term success of AI infrastructure requires that tech companies pay their own way for the electricity costs they create.
This will require that we take four steps, and we’re committed to each:
First, we’ll ask utilities and public commissions to set our rates high enough to cover the electricity costs for our datacenters. This includes the costs of adding and using the electricity infrastructure needed for the datacenters we build, own, and operate. We will work closely with utility companies that set electricity prices and state commissions that approve these prices. Our goal is straightforward: to ensure that the electricity cost of serving our datacenters is not passed on to residential customers.
In some areas, communities are already starting to benefit from this approach. In Wyoming, for example, Microsoft and Black Hills Energy have developed an innovative utility partnership that ensures our datacenter growth strengthens—rather than burdens—the local community. And as part of our datacenter investment in Wisconsin, we are supporting a new rate structure that would charge “Very Large Customers,” including datacenters, the cost of the electricity required to serve them. This protects residents by preventing those costs from being passed on. But we recognize the need to ensure that datacenter communities benefit everywhere. We believe this approach can and should be a model for other states.
Second, we’ll collaborate early, closely, and transparently with local utilities to add electricity and the supporting infrastructure to the grid when needed for our datacenters. Addressing electricity costs is critical, but it is an incomplete solution for local communities unless we expand electricity supply. This expansion typically requires a complex effort that includes the expansion of electrical generation capacity and improvements in transmission and substation systems.
We’re committed to collaborating with local utilities. We will sit down and plan together, providing early transparency around our projected power requirements and contracting in advance for the electricity we will use. When our datacenter expansion requires improvements in transmission and substation capabilities, we will continue our existing practices by paying for these improvements.
This work will build on a spirit of partnership with utilities we’ve worked to foster across the country. For example, in the wholesale energy market that covers much of the Midwest called the Midcontinent Independent System Operator (MISO), we have contracted to add 7.9 GW of new electricity generation to the grid, which is more than double our current consumption.
Third, we’ll pursue innovation to make our datacenters more efficient. We are also using AI to reduce energy use and improve the performance of our software and hardware in the design and management of our datacenters. And we are collaborating closely with utilities to leverage tools like AI to improve planning, get more electricity from existing lines and equipment, improve system resilience and durability, and speed the development of new infrastructure, including nuclear energy technologies.
By embedding these innovations into datacenters and by collaborating directly with local utilities, communities gain access to systems that are more efficient, more reliable, and better prepared to support growth without increasing costs for households.
Fourth, we’ll advocate for the state and national public policies needed to support our neighboring communities with affordable, reliable, and sustainable power. Public policy plays an essential role in supporting communities with affordable, reliable, and sustainable access to electricity. In 2022, Microsoft established priorities for electricity policy advocacy: expanding clean electricity generation, modernizing the grid, and engaging local communities. Over the past three years, we have advocated across all three areas and engaged with government leaders at the federal, state, and local levels to do so. To date, however, progress has been uneven. This needs to change.
We will advocate for policies across these areas with an urgent focus on accelerating project permitting and interconnection of electricity projects, expediting the planning and expansion of the electricity grid, and designing new electricity rates for large electricity users.
Across the country, communities are asking pointed questions about how datacenters use water. These are arising in places already facing water stress, like Phoenix and Atlanta, as well as regions with more abundant supply, like Wisconsin. These concerns are often amplified by aging municipal water systems and infrastructure gaps. Local communities want and deserve reassurance that new AI infrastructure won’t strain their water resources.
Our commitment ensures that our presence will strengthen local water systems rather than burden them. We’ll do this by reducing the amount of water we use and by investing in local water systems and water replenishment projects.
First, we’re committed to reducing the amount of water our datacenters use. The chips that power datacenters produce heat. To manage that heat, datacenters historically relied upon evaporative cooling systems that drew on large volumes of water for cooling in hot weather. As AI workloads have grown, so has the demand for cooling. The GPU chips that power AI workloads run at very high temperatures; without proper cooling, these chips would burn out within minutes.
The good news is that the tech sector has invested in new innovations to address these cooling needs. Now is the time when we need to step up, use these new technologies, and take added steps to address water use concerns.
Across our entire owned fleet of datacenters, we are committed as a company to a 40 percent improvement in datacenter water-use intensity by 2030. We are optimizing water usage for cooling, improving our ability to balance between water-based cooling and air cooling based on environmental conditions. We have also launched a new AI datacenter design that uses a closed-loop system. By constantly recirculating a cooling liquid, we can dramatically cut our water usage. In this next-generation design, already deployed in locations such as Wisconsin and Georgia, potable water is no longer needed for cooling, reducing pressure on local freshwater systems.
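As a rough illustration of what a 40 percent intensity improvement means, the sketch below applies it to a hypothetical baseline water-use-effectiveness (WUE, liters of water per kWh of IT load) value; both numbers are assumptions for illustration, not reported Microsoft figures:

```python
# Illustrating the 40% water-use-intensity commitment described above.
# The baseline WUE value is a hypothetical placeholder, not a reported figure.
baseline_wue_l_per_kwh = 0.49  # hypothetical fleet-wide baseline (L/kWh)
committed_improvement = 0.40   # 40% reduction in intensity by 2030, per the text

target_wue = baseline_wue_l_per_kwh * (1 - committed_improvement)
print(f"target intensity: {target_wue:.3f} L/kWh (from {baseline_wue_l_per_kwh} L/kWh)")
```

Because the commitment is stated as an intensity (water per unit of compute), absolute water use can still grow with the fleet; the closed-loop designs mentioned above are what remove potable water from cooling entirely.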
For communities where water infrastructure constraints pose challenges, we will collaborate with local utilities to understand whether current systems can support the additional demand associated with datacenter growth. If sufficient capacity does not exist, we will work with our engineering teams to identify solutions that avoid burdening the community.
This approach will build on what we’ve learned from the recent work at our datacenters in Quincy, Washington, an arid region where the local groundwater supply was already under pressure. To avoid drawing from the community’s potable water, we partnered with the city to construct the Quincy Water Reuse Utility, which treats and recirculates datacenter cooling water rather than relying on local groundwater. This approach protects limited drinking-water supplies while ensuring that high-quality, recycled water can be used for datacenter cooling needs. Where future system improvements are required, Microsoft funds those upgrades in full, ensuring that the community doesn’t have to shoulder the cost of supporting our operations.
We also partner with utilities from day one to map out water, wastewater, and pressure needs, and we fully fund the infrastructure required for growth, ensuring local water systems are resilient. Beyond our own footprint, we invest directly in community water infrastructure, modernizing water systems, expanding access, increasing water reliability, and helping utilities maintain stable rates and pressure. For example, near our datacenter in Leesburg, Virginia, Microsoft is funding more than $25 million of water and sewer improvements to ensure the cost of serving our facilities does not fall on local ratepayers.
Second, we will ensure that we replenish more water than we withdraw. This means restoring measurable amounts of water to the same water districts where our datacenter’s water is used, so the total water returned exceeds total water used. This standard provides greater transparency and precision in tracking and reporting, aligned with emerging industry standards.
We will pursue projects that make the most important water contribution to each local community. For example, in the greater Phoenix area and nearby Nevada communities, our leak detection partnerships with local utilities identify and repair hidden breaks in aging water systems, preventing water losses and keeping municipal water in circulation for community use. These projects both add to the total usable water supply and improve the reliability of service for residents.
Across the Midwest, we are restoring historic oxbow wetlands. These are crescent-shaped water bodies that naturally recharge groundwater, reduce flood risk, and enhance habitats for native species. These wetlands act as nature’s reservoirs, capturing and slowly returning water to local aquifers throughout both wet seasons and droughts, creating year-round value for farms, ecosystems, and nearby communities.
Overall, we approach replenishment the same way a household might think about a bank account: our operations make water withdrawals, and our replenishment projects make deposits. Some deposits, like our leak detection projects, go straight into the checking account—depositing water into the municipal supply for immediate community use. Others, like wetland restoration, go into a savings account—investing in the watershed’s long-term capacity to store and supply the region. These projects are evaluated using recognized methods that convert on-the-ground improvements into measurable gallons (or cubic meters) of water restored to local ecosystems, ensuring that commitments reflect tangible local benefits, not abstract promises.
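The bank-account analogy can be sketched as a simple ledger: operations make withdrawals, replenishment projects make deposits, and water-positive means deposits exceed withdrawals in the same district. Every volume below is a hypothetical placeholder, not a Microsoft figure:

```python
# A minimal sketch of the "bank account" framing above. All volumes are
# made-up placeholders for illustration only.
withdrawals_m3 = 120_000  # hypothetical annual datacenter water use in one district

replenishment_projects_m3 = {
    "leak detection repairs": 80_000,  # "checking account": immediate municipal supply
    "wetland restoration": 60_000,     # "savings account": long-term aquifer recharge
}

deposits_m3 = sum(replenishment_projects_m3.values())
net_m3 = deposits_m3 - withdrawals_m3

# Water-positive in a district means total deposits exceed total withdrawals there.
status = "water positive" if net_m3 > 0 else "not yet water positive"
print(f"net balance: {net_m3:+,} m^3 -> {status}")
```

The key design point the text makes is locality: the accounting is done per water district, so a deposit in one watershed cannot offset a withdrawal in another.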
Third, we will support this work with greater local transparency. People deserve to know how much water our datacenters use, and we are committed to making that information accessible, clear, and easy to understand. Aligned with this goal, we will begin publishing water-use data for each datacenter region in the country, as well as our progress on replenishment. This approach will ensure that communities can understand both our operational footprint and the progress we are making against our water-positive goals.
Fourth, we will advocate for public policies to help minimize water use and strengthen resilience. This means championing policies that enable sustainable growth while safeguarding community resources. We will support state and federal efforts to make reclaimed and industrial recycled water the default supply for datacenters wherever feasible. We will advocate for harmonized transparency standards that allow communities to clearly understand water use and stewardship practices. And we will work to reduce permitting delays by promoting predictable pathways for water-efficient datacenter projects.
These actions reflect our belief that technology and environmental responsibility must advance together, ensuring that AI-driven progress aligns with long-term water resilience for people, places, and ecosystems. Our policy activities are rooted in protecting local communities. By prioritizing recycled water and efficiency, we will help reduce pressure on aging municipal systems and ensure reliable water access for people and businesses.
New datacenters create jobs—typically thousands during construction and hundreds during operations. For example, in Washington state, more than 1,300 skilled trades workers are building Microsoft datacenters, and by the end of next year more than 650 full-time employees and contractors will work across all our operational facilities there.
One of our goals is to help ensure that workers from the local community benefit from these opportunities. To achieve this, we will invest in new partnerships to help give local residents the skills and opportunities to fill these jobs in both the construction and operational phases.
The AI infrastructure construction boom is driving large-scale physical development, creating a huge demand for skilled tradespeople nationwide. As datacenters and the energy projects that support them grow quickly, firms are vying for a limited workforce. At one level, this is good news for people who already have the qualifications these jobs require. But at another level, there is a risk the jobs will not go to local residents who want to pursue these jobs unless they can acquire the skills required.
We will take a multifaceted approach.
First, we will invest in partnerships to help train local workers to support the construction and maintenance of datacenters. This includes a new and first-of-its-kind partnership between Microsoft and North America’s Building Trades Unions (NABTU) to strengthen apprenticeship and training programs in the skilled trades where datacenters are being built. We are launching today a new agreement that establishes a cooperative framework to focus on building a pipeline of skilled workers in regions where we are building datacenters. This will also help enable NABTU to identify qualified contractor partners to bid on our infrastructure projects.
Second, we will expand our Datacenter Academy program to train individuals to fill ongoing datacenter operations roles. This program works in partnership with local community colleges and vocational schools to train students for critical roles in datacenter operations and related careers, once construction is complete.
A good example of this work is our Datacenter Academy partnerships in Boydton, Virginia, where we have a large datacenter campus. The Academy works with Southside Virginia Community College and the Southern Virginia Higher Education Center, which have helped hundreds of students and adult learners earn industry-recognized certifications in information technology and critical facilities operations.
In 2024, this work expanded with the opening of a new Critical Environment Training Lab (SoVA) in South Hill. This provides hands-on training with electrical, mechanical, and cooling systems using decommissioned datacenter equipment donated by Microsoft. Graduates of these programs have gone on to pursue careers supporting datacenter operations in Southern Virginia, including roles with Microsoft and the broader ecosystem of companies that help operate and maintain digital infrastructure. We will pursue similar partnerships in other states, and we are committed to making this an ongoing part of our work in the communities where we build new datacenters.
Third, we will use our voice to encourage policymakers to support these new job opportunities. While this work is of heightened importance in communities with datacenters, the broader need for this type of skilled labor is national in scope. According to LinkedIn data, job postings for data center occupations, or for roles requiring at least one core data center skill such as data center operations, grew by 23 percent globally and 13.5 percent in the US year over year in 2025. This is likely to represent an ongoing trend. Over the next decade, trillions in private investment will offer steady employment opportunities for American workers—including electricians, pipefitters, HVAC techs, welders, and construction crews—alongside manufacturing technicians for related components, like chips, power generation, and cooling systems.
However, this rapidly growing demand for skilled labor is set to outpace the available pipeline of workers. Today, the Associated Builders and Contractors estimates that the construction industry is short roughly 439,000 workers, mostly among skilled workers who do things like lay pipe and wire electrical panels.[1] Manufacturers report shortages as well, with the CEO of Ford Motor Company recently highlighting 5,000 open mechanic jobs that pay more than $100,000 per year. And for datacenter operations, employers face shortages in hands-on infrastructure skills such as cabling, racking, and network hardware.
This problem is exacerbated by the demographics of an aging workforce and a decades-old policy trend of deprioritizing vocational education for young Americans. A generation of skilled workers, vocationally trained in high schools and apprenticeships in the 20th century, is retiring from the trades. In the first quarter-century of the 21st century, high schools pivoted towards preparing young people for higher education and advanced degrees, often at the expense of traditional shop classes and training in skilled craftsmanship.
The increased demand for skilled trades, paired with an aging workforce, requires an enhanced public-private workforce partnership. Secondary schools in the US can be incentivized to do more to educate young people about the trades through vocational schools and pre-apprenticeship programs. Registered apprenticeship programs offered nationally provide a fulfilling career path with long-term wages and benefits.
In partnership with labor, the federal government can champion a national apprenticeship and workforce development initiative that helps young and aspiring American workers find opportunities near AI infrastructure projects, especially in rural and post-industrial regions. President Trump’s AI Action Plan rightly identifies this opportunity, and we will work closely with the Department of Labor to help scale this effort. The federal government can also help by streamlining the process by which businesses can establish and maintain a registered apprenticeship program. It can also maximize the use of existing federal dollars that directly support registered apprenticeship programs. This could entail modernizing the regulations for the National Apprenticeship Act or updating the statutory language itself.
One of the most tangible benefits from datacenter development is invisible to an individual driving nearby. It’s the property taxes paid by datacenters to the local municipality, which are substantial. But this too requires that the private sector take a responsible approach, as described below.
We won’t ask local municipalities to reduce their local property tax rates when we buy land or propose a datacenter presence. Instead, we’ll pay our full and fair share of local property taxes, adding revenue to local towns and cities. This is obviously critical to supporting the growth a local community often experiences when datacenters are built or expanded. And most importantly, at a time when many communities are facing revenue shortages that threaten vital public assets like hospitals, schools, parks, and libraries, we know from experience that this can make a big difference.
The benefits of this approach are nowhere more apparent than in Quincy, Washington, a small agricultural community about 150 miles east of Seattle where Microsoft built its first datacenter in 2008. Since then, we have built more than twenty datacenters in the area, providing ongoing employment to thousands of construction workers for almost two decades. Hundreds of technicians enjoy permanent jobs in those datacenters, earning salaries well above the median income for Quincy. And we estimate that for every direct construction job created, another one is created in related sectors, including security services, maintenance and repair, retail, restaurants, and more. Altogether, our datacenters drive more than $200 million in regional economic activity each year.
As a result, the share of Quincy residents living below the poverty line has been cut in half, dropping from 29.4 percent in 2013 to 13.1 percent in 2023. And county property tax revenues have more than tripled over the past two decades, from roughly $60 million to more than $180 million. This has enabled the city to invest in public services and amenities. Last year, as rural hospitals around the country cut back on critical care offerings and shuttered their doors, Quincy opened a new 54,000-square-foot medical center. The city has also made substantial renovations to its high school, adding state-of-the-art athletic facilities, an auditorium, and a career and technical training department.
We want to make sure that the other communities where our datacenters are located benefit from our presence in the same way. In all the regions where we build, own, and operate datacenters, we’re devoted to taking a civically responsible approach. This means recognizing the importance of civic services, including public safety, local healthcare, schools, libraries, and parks. As we become an important local employer, local communities can count on us to be a constructive contributor to local business and civic efforts.
We believe the datacenter communities that power AI should be among the first to benefit from it. As these communities help drive innovation and economic growth for the nation, it’s essential that they share in the economic, educational, and community benefits AI is creating. Especially as jobs evolve and require more AI skills, this requires local investments in AI education and training. To support this goal, we will provide free, age-appropriate, best-in-class AI training and education in these communities in partnership with trusted, local community-based organizations.
For years, we have been helping people gain essential digital skills in communities in and around our datacenters, such as Quincy in Eastern Washington, Boydton in Southern Virginia, and Mt. Pleasant in Southeast Wisconsin. One thing we’ve learned is that these communities have vibrant anchor institutions—schools, libraries, and local chambers of commerce—that form the backbone of local learning, workforce development, and economic growth. That’s why, going forward, we will invest in our datacenter communities by partnering with and supporting these anchor institutions, so that every community member can leverage the power of AI in how they live, work, and learn.
First, we will partner with local K-12 schools, community colleges, and universities to provide age-appropriate, responsible AI literacy training and learning experiences for students and teachers in our datacenter communities. This will build on some of our most recent experiences. For example, in Quincy, Washington, we partnered with Quincy High School and the local FFA chapter to teach students the critical AI and data skills needed for careers in precision agriculture. And in our datacenter region in Mt. Pleasant, Wisconsin, we recently launched an AI bootcamp for students and faculty with Gateway Technical College to cultivate a new generation of developers and creators of AI tools and technology across Wisconsin technical colleges.
Our commitment is to build on this work, helping students and teachers responsibly and effectively engage with AI, create with AI, manage AI, and design with AI. We will bring free, locally relevant, responsible AI training, aligned with AI literacy standards, to students in every K-12 school, community college, and university in our datacenter markets.
Second, we will support adults in our datacenter communities with AI tools and skills by creating neighborhood AI learning hubs in partnership with local libraries in our key datacenter markets. This approach will build upon our previous digital skilling partnerships with local libraries. For example, during COVID, we partnered with libraries in rural communities across the country, and more recently, we helped train libraries in our Quincy and Mt. Pleasant datacenter markets on AI so that they could help their patrons learn AI skills. Building on this work, we will invest in AI literacy skills development for librarians and provide access to free AI literacy training and certifications to local library patrons, including by equipping public terminals at local libraries in our datacenter regions with AI tools and services.
Third, we will support AI skills training for small businesses. We recognize that AI training will be critical for small businesses as they navigate the transition to the AI economy. These businesses are the backbone of local economies, and their success directly impacts job creation, workforce stability, and community vitality. Through a new workforce transformation initiative, we will deliver AI training, tools, and insights to local chambers of commerce that support these small businesses. We will also provide flexible grants for AI training and upskilling to local chambers of commerce and a variety of workforce organizations to help local businesses upskill employees, adopt AI responsibly, and prepare their workforce for ongoing transformation—ensuring that economic opportunity stays rooted in the communities where we build and operate datacenters.
Finally, we will invest in your local nonprofit community. A defining aspect of Microsoft’s own history and culture has long been a commitment to support the many nonprofit organizations that are vital to every community the company calls home. As we expand our datacenters in new communities, we’re committed to bringing this role to these new regions.
This starts with support for our employees in the local community. We provide two key benefits to all our full-time employees. First, we will match every hour they spend volunteering for a nonprofit with a donation to that group of $25. Second, we’ll match each dollar they donate to a nonprofit with an equal donation by Microsoft. Together, these give all our employees, including those in our datacenters, a total potential match of $15,000 each year.
This approach to community engagement is an important part of Microsoft’s culture, and it has become the largest nonprofit charitable matching program in the history of business. In 2024 in the United States, it raised $229.1 million in donations for 29,000 nonprofits, plus 964,000 volunteer hours contributed by our employees. It’s a part of Microsoft we’re excited to bring to the communities that have our datacenters.
We recognize that our support for the local community also needs to go beyond this type of program. Our broader contribution must start with listening. You know best what your town needs, what nonprofits are making a difference, and which organizations are best positioned to do more. We will provide locally based Microsoft liaisons in major US datacenter communities to work side by side with local leaders and nonprofits. Our local staff will provide a community connection to our various Microsoft teams and resources. Working together, we will shape how we direct our support for local nonprofits.
Many lessons emerge from the nation’s 250-year history relating to technology and infrastructure. The first is that large-scale infrastructure expansion is vital to economic growth and everyday improvements in people’s lives. Our lives today rely on electrical appliances, automobiles, phones, airplanes, and much more that would be impossible without modern infrastructure.
But a second lesson illustrates an important tension. Major infrastructure expansion is always difficult. It’s expensive. It inevitably raises questions, concerns, and even controversies. This has been true for more than 200 years, and we should assume it will be true well into the future. This always requires that important decisions be made by government leaders from village presidents and town councils to the American President and Congress.
Third, the most important decisions are often made at the local level. This reflects the outsized impact—both positive and negative—of infrastructure expansion at the local level. It also reflects the American political tradition and our zoning and permitting laws, which rightly put decision-making authority closest to those elected to serve local communities.
There’s a final lesson that speaks most directly to us. Private companies can help by stepping up and acting in a responsible way. We cannot surmount inevitable community challenges by ourselves. But we can make everything easier by embracing a long-term vision. By recognizing our responsibility. By playing a constructive role. And by supporting the entire community.
As we look to the future, we are committing to taking this final lesson to heart. And making it a fundamental part of our efforts every day.
[1] News Releases | ABC: Construction Industry Must Attract 439,000 W
The post Building Community-First AI Infrastructure appeared first on Microsoft On the Issues.
Ransomware remains the biggest concern for CISOs in 2026, according to WEF’s Global Cybersecurity Outlook 2026 report.
The post Cyber Fraud Overtakes Ransomware as Top CEO Concern: WEF appeared first on SecurityWeek.
Here we examine the CISO Outlook for 2026, with the purpose of evaluating what is happening now and preparing leaders for what lies ahead in 2026 and beyond.
The post Cyber Insights 2026: What CISOs Can Expect in 2026 and Beyond appeared first on SecurityWeek.
The record-breaking deal has already received a green light from the US government.
The post EU Sets February Deadline for Verdict on Google’s $32B Wiz Acquisition appeared first on SecurityWeek.
Read the full Global AI Adoption Report.
Global adoption of artificial intelligence continued to rise in the second half of 2025, increasing by 1.2 percentage points compared to the first half of the year, with roughly one in six people worldwide now using generative AI tools, remarkable progress for a technology that only recently entered mainstream use.
To track this trend, we measure AI diffusion as the share of people worldwide who have used a generative AI product during the reported period. This measure is derived from aggregated and anonymized Microsoft telemetry and then adjusted to reflect differences in OS and device-market share, internet penetration, and country population. Additional details on the methodology are available in our AI Diffusion technical paper.[1]
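To make the adjustment concrete, here is a minimal, hypothetical sketch of a population-normalized diffusion estimate in the spirit described above. The function name, the cap against the online population, and all numbers are illustrative assumptions, not the report's actual methodology.

```python
# Hypothetical sketch of a population-normalized AI diffusion metric:
# telemetry counts scaled up for device-market coverage, bounded by the
# internet-connected population. All values are illustrative, not from
# the report or its technical paper.

def diffusion_rate(observed_users, device_share, internet_penetration, population):
    """Estimate the share of a country's population using generative AI.

    observed_users: unique generative AI users seen in telemetry
    device_share: fraction of the country's devices covered by that telemetry
    internet_penetration: fraction of the population that is online
    population: total population
    """
    # Scale the telemetry sample up to the full device base...
    estimated_users = observed_users / device_share
    # ...but only people with internet access can be AI users.
    online_population = population * internet_penetration
    estimated_users = min(estimated_users, online_population)
    return estimated_users / population

# Illustrative example: 2M users observed with 40% device coverage,
# 90% internet penetration, 25M people.
rate = diffusion_rate(2_000_000, 0.40, 0.90, 25_000_000)
print(f"{rate:.1%}")  # 20.0%
```

The cap against the online population is one simple way to keep the scaled estimate plausible for low-penetration countries; the actual adjustment in the technical paper is more sophisticated.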
No single metric is perfect, and this one is no exception. Through the Microsoft AI Economy Institute, we continue to refine how we measure AI diffusion globally, including how adoption varies across countries in ways that best advance priorities such as scientific discovery and productivity gains. For this report, we rely on the strongest cross-country measure available today, and we expect to complement it over time with additional indicators as they emerge and mature.
Despite progress in AI adoption, the data shows a widening divide: adoption in the Global North grew nearly twice as fast as in the Global South. As a result, 24.7 percent of the working age population in the Global North is now using these tools, compared to only 14.1 percent in the Global South.
Countries that have invested early in digital infrastructure, AI skilling, and government adoption, such as the United Arab Emirates, Singapore, Norway, Ireland, France, and Spain, continue to lead. The UAE extended its lead as the #1 ranked country, with 64.0 percent of the working age population using AI at the end of 2025, compared to 59.4 percent earlier in the year. The UAE has opened a lead of more than three percentage points over Singapore, which continues in second place with 60.9 percent adoption.
Data from the second half of the year in the United States shows that leadership in innovation and infrastructure, while critical, does not by itself lead to broad AI adoption. The U.S. leads in both AI infrastructure and frontier model development, but it fell from 23rd to 24th place in AI usage among the working age population, with a 28.3 percent usage rate. It lags far behind smaller, more highly digitized and AI-focused economies.
South Korea stands out as the clearest end-of-year success story. It surged seven spots in the global rankings, climbing from 25th to 18th, driven by government policies, improved frontier model capabilities in the Korean language, and consumer-facing features that resonated with the population. Generative AI is now used in schools, workplaces, and public services, and South Korea has become one of ChatGPT’s fastest-growing markets, leading OpenAI to open an office in Seoul.[2]
A parallel development reshaping the global landscape in 2025 was the rapid rise of DeepSeek, an open-source AI platform that has gained significant traction in markets long underserved by traditional providers. By releasing its model under an open-source MIT license and offering a completely free chatbot, DeepSeek removed both financial and technical barriers that limit access to advanced AI. Its strongest adoption, not surprisingly, has emerged across China, Russia, Iran, Cuba, and Belarus. But perhaps even more notable is DeepSeek’s surging popularity across Africa, where it is aided by strategic promotion and partnerships with firms such as Huawei.[3]
This rapid evolution underscores an increasingly important dimension of AI competition between the United States and China, involving a race to promote adoption of their respective national models. DeepSeek’s success reflects growing Chinese momentum across Africa, a trend that may continue to accelerate in 2026. DeepSeek’s ascent also underscores a broader truth: the global diffusion of AI is influenced by accessibility factors, and the next wave of users may come from communities that have historically had limited access to technological progress. The challenge ahead is ensuring that innovation spreads in ways that help narrow divides rather than deepen them.
[1] A. Misra, J. Wang, S. McCullers, K. White, and J. L. Ferres, “Measuring AI Diffusion: A Population-Normalized Metric for Tracking Global AI Usage,” Nov. 4, 2025, arXiv:2511.02781. doi: 10.48550/arXiv.2511.02781.
[2] “OpenAI Korea set to launch next month,” The Korea Times. https://www.koreatimes.co.kr/business/companies/20250828/openai-korea-set-to-launch-next-month
[3] S. Rai, L. Prinsloo, and H. Nyambura, “China’s DeepSeek Is Beating Out OpenAI and Google in Africa (1),” Bloomberg News.
The post Global AI adoption in 2025 — A widening digital divide appeared first on Microsoft On the Issues.
Continuous Threat Exposure Management (CTEM) is a continuous program and operational framework, not a single pre-boxed platform. Flashpoint believes that effective CTEM must be intelligence-led, using curated threat intelligence as the operational core to prioritize risk and turn exposure data into defensible decisions.

Since Gartner’s introduction of CTEM as a framework in 2022, cybersecurity vendors have engaged in a rapid “productization” race. This has led to inconsistent market definitions, with a variety of vendors from vulnerability scanners to Attack Surface Management (ASM) providers now claiming to be an “exposure management” solution.
The current approach to productizing CTEM is flawed. There is no such thing as a single “exposure management platform.” The reality is that most enterprises buy three or more products just to approximate what CTEM promises in theory. Even with these technologies, organizations still face heavy lifting across people, process, and custom integrations to actually make it work.
A functional CTEM approach typically requires multiple platforms or tools, including:
In some cases, organizations may also use an ASM vendor for shadow IT discovery, a CMDB for asset context, and ticketing integrations to drive remediation. This multi-platform model is the rule, not the exception. And that raises a hard truth: if you need three or more products, plus a dedicated team to implement CTEM, you need an intelligence-led CTEM program.
The narrative that CTEM can be packaged into a single product breaks down for three critical reasons:
You cannot buy a capability that requires full-stack asset visibility, contextualized threat actor data, real-world validation, and remediation orchestration from one tool. Each component spans a different domain of expertise and data. A vulnerability scanner alone cannot validate exploitability; a pentest service struggles to scale to daily monitoring; and generic threat intelligence feeds cannot provide critical business context.
However, CTEM requires orchestration of all these components in one operational loop. No single product delivers this comprehensively out of the box; this is why CTEM must be viewed as a continuous program, not a one-size-fits-all product.
Vendors often advertise automation; however, key intelligence functions are still powered by and reliant on human analysis. Even with best-in-class AI tools in place, security teams still depend on human insights for:
In other words, exposure management today still relies on human insights and expertise. So while vendors advertise “automation and intelligence,” what they’re really delivering is a starting point. Ultimately, AI is a force multiplier for threat analysts, not a replacement.
Most platforms treat exposure like a math problem. But real risk isn’t just CVSS (Common Vulnerability Scoring System) scores or asset counts; it requires answering critical, intelligence-based questions:
These answers require intelligence, not just data. Best-in-class intelligence provides security teams with confirmed exploit activity in the wild, context around attacker usage in APT (Advanced Persistent Threat) campaigns, and detailed metadata for prioritization where CVSS fails. That is why Flashpoint intelligence is leveraged by over 800 organizations as the operational core of exposure management, turning exposure data into defensible decisions.
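As a rough illustration of why intelligence context changes the ordering, consider a sketch in which CVSS provides only a base score that exploitation evidence and business context then adjust. The field names, weights, and CVE labels below are entirely hypothetical, not Flashpoint's actual scoring model.

```python
# Hypothetical sketch of intelligence-led prioritization: CVSS supplies a
# base severity, which confirmed exploitation, campaign usage, and asset
# exposure then raise. Weights and fields are illustrative only.
from dataclasses import dataclass

@dataclass
class Exposure:
    cve_id: str
    cvss: float                     # 0.0-10.0 base score
    exploited_in_wild: bool         # confirmed exploit activity observed
    used_in_apt_campaign: bool      # seen in targeted APT campaigns
    asset_is_internet_facing: bool  # business/asset context

def priority(e: Exposure) -> float:
    """Blend severity with intelligence context into a 0-100 priority."""
    score = e.cvss * 5              # severity contributes up to 50 points
    if e.exploited_in_wild:
        score += 25                 # confirmed exploitation outweighs theory
    if e.used_in_apt_campaign:
        score += 15
    if e.asset_is_internet_facing:
        score += 10
    return min(score, 100.0)

exposures = [
    Exposure("CVE-A", 9.8, False, False, False),  # critical CVSS, no evidence
    Exposure("CVE-B", 7.5, True, True, True),     # lower CVSS, active threat
]
ranked = sorted(exposures, key=priority, reverse=True)
print([e.cve_id for e in ranked])  # the actively exploited CVE-B outranks CVE-A
```

The point of the sketch is the inversion: a 7.5 vulnerability with confirmed exploitation on an internet-facing asset outranks a 9.8 with no supporting evidence, which is exactly the kind of judgment CVSS alone cannot make.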
If your risk strategy requires continuous penetration and exploit testing, vulnerability management, threat intelligence, and manual prioritization and validation, you’re not buying CTEM; you’re building it. At Flashpoint, we’re helping organizations build CTEM the right way: driven by intelligence, and powered by integrations and AI.
Flashpoint treats CTEM for what it really is: a program that must be constructed intelligently, iteratively, and contextually.
That means:
Using Flashpoint’s intelligence collections, organizations can achieve intelligence-led exposure management, with threat and vulnerability intelligence working together to provide context and actionable insights in a continuous, prioritized loop. This empowers security teams to build and scale their own CTEM programs, which is the only realistic approach in a cybersecurity landscape where no single platform can do it all.
If you’re evaluating exposure management tools, ask yourself:
The answers may surprise you. At Flashpoint, we’re helping organizations build CTEM the right way, driven by intelligence, powered by integration, and grounded in reality. Request a demo today and see how best-in-class intelligence is the key to achieving an effective CTEM program.
The post Why Effective CTEM Must be an Intelligence-Led Program appeared first on Flashpoint.

This blog explores how Primary Source Collection (PSC) enables intelligence teams to surface emerging fraud and threat activity before it reaches scale.

Spend enough time investigating fraud and threat activity, and a familiar pattern emerges. Before a tactic shows up at scale—before credential stuffing floods login pages or counterfeit checks hit customers—there is almost always a quieter formation phase. Threat actors test ideas, trade techniques, and refine playbooks in small, often closed communities before launching coordinated campaigns.
The signals are there. The challenge is that most organizations never see them.
For years, intelligence programs have leaned heavily on static feeds: prepackaged streams of indicators, alerts, and reports delivered on a fixed cadence. These feeds validate what is already known, but they rarely surface what is still taking shape. They are designed to summarize activity after it has matured, not to discover it while it is still evolving.
Meanwhile, the real innovation in fraud and threat ecosystems happens elsewhere: in invite-only Telegram channels, dark web marketplaces, and regional-language forums that update in real time. By the time a static feed flags a new technique, it is often already widespread.
This disconnect has consequences. When intelligence arrives too late, teams are left responding to impact rather than shaping outcomes.
Fraudsters and threat actors do not work in isolation; they collaborate. In closed forums and encrypted channels, one actor experiments with a new login bypass, another tests two-factor authentication evasion, and a third packages those ideas into a tool or service. What begins as a handful of screenshots or code snippets quickly becomes a repeatable process.
These shared processes often take the form of playbooks: step-by-step guides that document how to execute a fraud scheme or exploit a weakness. Once a playbook begins circulating, scale is inevitable. Techniques that started as limited tests turn into thousands of coordinated attempts almost overnight.
Every intelligence or fraud analyst has experienced the moment when an unfamiliar tactic suddenly overwhelms detection systems. The frustrating reality is that the warning signs were often visible weeks earlier; they simply never made it into the static feeds teams were relying on.
Static collection creates a sense of coverage, but that coverage is often shallow. Sources are fixed. Cadence is slow. Context is stripped away.
A feed might tell you that a domain, handle, or email address is associated with a known tactic, but not how that tactic was developed, who is promoting it, or whether it has any relevance to your organization’s specific exposure. You are seeing the exhaust, not the engine.
This lag matters. The window between a tactic being tested in a small community and being deployed at scale is often the most valuable moment for intervention. Miss that window, and response becomes exponentially more expensive.
As threats accelerate and collaboration among adversaries increases, intelligence programs that depend solely on static inputs struggle to keep pace.
Primary Source Collection (PSC) changes how intelligence is gathered by starting with the questions that matter most and collecting directly from the original environments where those answers exist.
Rather than relying on a predefined list of sources or vendor-determined priorities, PSC begins with a defined intelligence requirement. Collection is then shaped around that requirement, directing analysts to the forums, marketplaces, and channels where relevant activity is actively unfolding.
This means monitoring closed communities advertising check alteration services. It means observing invite-only groups trading identity fraud tutorials. It means collecting original posts, screenshots, files, and discussions while they are still part of an active conversation instead of weeks later in summarized form. When actors begin discussing a new bypass technique or sharing proof-of-concept screenshots, that is the moment to act, not weeks later when the same method is being resold across marketplaces.
Primary Source Collection provides that window. It surfaces the conversations, artifacts, and early indicators that reveal what is coming next and gives teams the time they need to intervene before campaigns scale.
This does not replace analytics, automation, or baseline monitoring. It strengthens them by feeding earlier, richer insight into downstream systems. It ensures that detection and response are informed by how threats are actually developing, not just how they appear after the fact.
In one case, a financial institution using this approach identified counterfeit checks featuring its brand being advertised in underground marketplaces weeks before customers began reporting losses. By collecting directly from those spaces, analysts flagged the images, traced sellers, and alerted internal teams early enough to prevent further exploitation.
That is what early warning looks like when collection is aligned with purpose.
One of the most important shifts enabled by Primary Source Collection is tasking.
Traditional intelligence programs operate like autopilot. They deliver a steady stream of data, but that stream reflects the provider’s priorities rather than the organization’s evolving needs. Analysts spend valuable time triaging irrelevant information while emerging risks go unnoticed.
In classified intelligence environments, this problem has long been addressed through tasking: every collection effort begins with a clearly defined requirement, and priorities drive collection, not the other way around.
PSC applies that same discipline to open-source and commercial intelligence. Teams define Priority Intelligence Requirements (PIRs), such as identifying actors testing bypass methods for specific login flows, and immediately direct collection toward those needs. As priorities change, tasking changes with them.
This transforms intelligence from a passive stream into an operational capability. Analysts are no longer waiting for someone else’s update cycle. They are shaping visibility in real time, testing hypotheses, validating concerns, and uncovering tactics before they mature.
For leadership, this provides something more valuable than indicators: confidence that critical developments are not happening just out of sight.
A taskable Primary Source Collection framework is dynamic by design. As stakeholder priorities shift due to a new campaign, incident, or geopolitical development, collection pivots immediately.
In practice, this approach includes:
Intelligence can then mirror the agility of modern threats instead of lagging behind them.
Threat and fraud operations are moving faster than ever. Barriers to entry are lower. Tooling is more accessible. Collaboration among adversaries now rivals legitimate software development cycles.
Defenders cannot afford to move slower than the adversaries they are trying to stop.
Primary Source Collection is how intelligence teams keep pace. It aligns collection with mission needs, enables real-time tasking, and delivers insight early enough to change outcomes instead of just documenting them.
The signals have always been there. What has changed is the ability to surface them while they still matter.
Flashpoint supports intelligence teams across fraud, cyber, and executive protection with taskable, primary source intelligence. Request a walkthrough to see how PSC enables earlier, more confident decision-making.
The post Surfacing Threats Before They Scale: Why Primary Source Collection Changes Intelligence appeared first on Flashpoint.