Cyber and Physical Risks Targeting the 2026 Winter Olympics


In this post we analyze the multi-vector threat landscape of the 2026 Winter Olympics, examining how the Games’ dispersed geographic footprint and high digital complexity create unique potential for cyber sabotage and physical disruptions.

February 5, 2026

The Milano-Cortina 2026 Winter Olympics represent a historic milestone as the first Games co-hosted by two major cities. However, the event’s expansive geographic footprint—covering 22,000 square kilometers across northern Italy—presents a complex security environment. From the metropolitan centers of Milan to the alpine peaks of Cortina d’Ampezzo, security forces are contending with a multi-vector threat landscape.

Kinetic and Physical Security Challenges

The geographically dispersed nature of the Milano-Cortina 2026 Winter Games also creates unique physical security challenges. Because venues are spread across thousands of square kilometers of the Alps, securing transit corridors and ensuring rapid emergency response across different Italian regions—including Lombardy, Veneto, and Trentino—poses a formidable logistical hurdle. New tunnels, increased train services, and extended bus routes have been welcomed, but they also create new potential targets for physical disruption by threat actors or protestors.

Terrorist and Extremist Threats

Flashpoint has not identified any terrorist or extremist threats to the Winter Olympic Games. However, lone threat actors acting in support of international terrorist organizations or domestic violent extremists remain a persistent threat due to the large number of attendees expected and the media attention that this event will attract.

Authorities in northern Italy are investigating a series of sabotage attacks on the national railway network that coincided with the opening of the 2026 Winter Olympic Games. The coordinated incidents—which included arson at a track switch, severed electrical cables, and the discovery of a rudimentary explosive device—caused delays of over two hours and temporarily disabled the vital transport hub of Bologna.

Protests

Flashpoint analysts identified several protests targeting the 2026 Winter Olympics:

  • US Presence and ICE Backlash: Hundreds of demonstrators have participated in protests in central Milan to demand that US ICE agents withdraw from security roles at the upcoming Winter Olympics.
  • Anti-Olympic and Environmental Activism: The most organized opposition comes from the Unsustainable Olympics Committee. They have already staged marches in Milan and Cortina, with more planned for February.
  • Pro-Palestinian Groups: Organizations such as BDS Italia are actively campaigning to boycott the games, demanding that Israel not be permitted to participate. Other pro-Palestinian groups have attempted to disrupt the Torch Relay in several cities and are expected to hold flash mob-style demonstrations in Milan’s Piazza del Duomo during the Opening Ceremony.
  • Labor Strikes: Italy frequently experiences transport strikes, which often fall on Fridays. Because the Opening Ceremony is on Friday, February 6, unions are leveraging this for maximum impact. An International Day of Protest has been coordinated by port and dock workers across the Mediterranean for February 6.

On February 7, a massive protest of approximately 10,000 people near the Olympic Village in Milan descended into violence as a peaceful march against the Winter Games ended in clashes with Italian police. While the majority of demonstrators initially focused on the environmental destruction caused by Olympic infrastructure, a smaller group of masked protestors engaged security forces with flares, stones, and firecrackers.

Cyber Threats Facing the 2026 Winter Olympics

The Milano-Cortina 2026 Winter Olympics will be among the most digitally complex global events, making it a prime target for cyberattacks. The greatest risks stem from familiar tactics such as phishing, spoofed websites, and business email compromise, which exploit human trust rather than technical flaws. With billions of viewers and a vast network of cloud services, vendors, and connected systems, the games create an expansive attack surface under intense operational pressure.

Italy blocked a series of cyberattacks targeting its foreign ministry offices, including one in Washington, as well as Winter Olympics websites and hotels in Cortina d’Ampezzo, with officials attributing the attempts to Russian sources. Foreign Minister Antonio Tajani confirmed the attacks were prevented just days before the Games’ official opening, which began with curling matches on February 4. 

Past Olympic Games show a clear pattern of heightened cyber activity, including phishing campaigns, distributed denial-of-service (DDoS) attacks, ransomware, and online scams targeting both organizers and the public. A mix of cybercriminals, advanced persistent threats, and hacktivists is expected to exploit the event for financial gain, espionage, or publicity. Experts emphasize that improving security awareness, verifying digital interactions, and strengthening supply chain defenses are critical, as the most damaging incidents often arise from ordinary threats amplified by scale and urgency.

Staying Safe at the 2026 Winter Games

The security success of Milano-Cortina 2026 relies on the integration of real-time intelligence, advanced technological safeguards, and public vigilance. As the Games proceed, the intersection of cyber-sabotage and physical protest remains the most likely source of operational disruption.

To stay safe at this year’s Games, participants should:

  1. Download Official Apps: Install the Milano Cortina 2026 Ground Transportation App and the ATM Milano app for real-time updates on transit, road closures, and “guaranteed” travel windows during strikes.
  2. Plan Around Friday Strikes: Be aware that transport strikes (Feb 6, 13, and 20) typically guarantee services only between 6:00 AM – 9:00 AM and 6:00 PM – 9:00 PM. Plan your venue transfers accordingly.
  3. Secure Your Digital Footprint: Avoid public Wi-Fi at major venues. Use a VPN and ensure Multi-Factor Authentication (MFA) is active on all your ticketing and banking accounts.
  4. Stay Clear of Protests: While most demonstrations are expected to be peaceful, they can trigger sudden police cordons and transit delays, so give them a wide berth.
  5. Respect the Drone Ban: Unauthorized drones are strictly prohibited over Milan and venue clusters. Leave yours at home to avoid heavy fines or interception by security units.

Stay Safe Using Flashpoint

While there are no current indications of imminent threats of extreme violence targeting the Milano-Cortina 2026 Winter Olympics, the event’s vast geographic footprint and digital complexity demand constant vigilance. Securing an event that spans 22,000 square kilometers requires more than just a physical presence; it necessitates a multi-faceted approach that bridges the gap between digital and kinetic risks.

To effectively navigate the intersection of cyber-sabotage, civil unrest, and logistical challenges, organizations and attendees must adopt a comprehensive strategy that integrates real-time intelligence with proactive security measures. Download Flashpoint’s Physical Safety Event Checklist to learn more.

Request a demo today.

The post Cyber and Physical Risks Targeting the 2026 Winter Olympics appeared first on Flashpoint.

Flashpoint’s Threat Intelligence Capability Assessment


In this post we introduce a new free assessment designed to pinpoint intelligence gaps, top strategic priorities for progress, and prioritized practical actions to drive real impact.

February 5, 2026

Many organizations today have some form of threat intelligence. Far fewer have a threat intelligence function that is structured, measurable, and trusted across the business. Experienced security professionals know that volume does not equal value—having more feeds, more alerts, or more dashboards doesn’t automatically translate into better intelligence. In reality, teams need clear visibility into the source of their intelligence data, how it aligns to their most important risks, and whether it’s actually influencing decisions.

Without this baseline, organizations struggle to answer fundamental questions: 

  • Are we collecting intelligence that reflects our real risk exposure?
  • Are we missing upstream threats—or over-prioritizing noise?
  • Is our intelligence tailored to our environment, or largely generic?
  • Is it reaching the right teams at the right moment to drive action?

These blind spots create friction across security operations—and make it difficult to improve with confidence.

How is Your Intelligence Working Across Your Environment?

That’s why Flashpoint created the Threat Intelligence Capability Assessment out of a simple observation: the most successful intelligence functions aren’t defined by the size of their budget or the number of feeds they ingest. They are defined by how intelligence flows across the full threat intelligence lifecycle:

  1. Requirements & Tasking: How clear are your intelligence priorities, and how directly are they tied to real business risk?
  2. Collection & Discovery: Is your visibility broad, deep, and flexible enough to keep pace with changing threats?
  3. Analysis & Prioritization: How effectively are signals, context, and impact being connected to inform decisions?
  4. Dissemination & Action: Is intelligence reaching the teams and leaders who need it, when they need it?
  5. Feedback & Retasking: How consistently are priorities reviewed, refined, and adjusted based on outcomes?
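
The five stages above can be thought of as a feedback loop rather than a straight line. The sketch below models that loop in Python; all class and method names are hypothetical illustrations, not Flashpoint's API.

```python
from dataclasses import dataclass, field

# Illustrative sketch of the intelligence lifecycle as a feedback loop.
# All names are hypothetical; this is not a real Flashpoint interface.

@dataclass
class Requirement:
    question: str       # Stage 1: a clearly stated intelligence priority
    business_risk: str  # the business risk the requirement maps back to

@dataclass
class IntelCycle:
    requirements: list[Requirement] = field(default_factory=list)

    def collect(self, req: Requirement) -> list[str]:
        # Stage 2: gather raw signals relevant to the requirement.
        return [f"signal about: {req.question}"]

    def analyze(self, signals: list[str]) -> str:
        # Stage 3: connect signals, context, and impact into an assessment.
        return f"assessment based on {len(signals)} signal(s)"

    def disseminate(self, assessment: str) -> None:
        # Stage 4: route the finished product to the teams who need it.
        print(assessment)

    def retask(self, req: Requirement, feedback: str) -> Requirement:
        # Stage 5: refine the requirement based on outcomes and feedback.
        return Requirement(f"{req.question} ({feedback})", req.business_risk)
```

The point of the loop structure is that dissemination is not the end: stage 5 feeds outcomes back into stage 1, which is exactly where the assessment looks for quiet breakdowns.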

By examining each stage independently, our assessment reveals where intelligence accelerates decisions and where it quietly breaks down.

Why This Assessment is Different

Most maturity assessments focus on inputs: tooling, headcount, or abstract maturity labels.

Flashpoint’s Threat Intelligence Capability Assessment takes a different approach. It evaluates how intelligence actually functions across the full intelligence lifecycle—from requirements and tasking through feedback and retasking—and what that means in practice for day-to-day operations.

Rather than stopping at a score, the assessment helps organizations:

  1. Understand what their stage means in real operational terms
  2. Identify constraints and patterns that may be limiting impact
  3. Focus on top strategic priorities for progress
  4. Take immediate, practical actions to strengthen intelligence workflows
  5. Apply a 90-day planning framework to turn insight into execution

Critically, the Threat Intelligence Capability Assessment is grounded in operational reality, not vendor theory, and is designed to be applied by function, recognizing that intelligence maturity is rarely uniform across an organization.

“As cyber threats grow in scale, complexity, and impact, organizations need a clear understanding of how effectively intelligence supports their ability to detect high-priority risks and respond with speed. This assessment helps teams move beyond a score to understand what’s holding them back, where to focus next, and how to turn intelligence into action.”

Josh Lefkowitz, CEO and co-founder of Flashpoint

Where Do You Stand?

This assessment isn’t about simply measuring where you are today—it’s about identifying what’s holding you back, and where targeted improvements can deliver the greatest return.

After taking Flashpoint’s quick five-minute assessment, security leaders can evaluate each component of their intelligence program—such as security operations centers (SOCs), vulnerability teams, fraud teams, and physical security—and benchmark them to surface potential gaps and needed improvements.

Whether your program is at the developing, maturing, advanced, or leader stage, the goal is the same: to move from intelligence as a supporting activity to intelligence as a driver of proactive operations.

  • Developing: The early stages of building a dedicated intelligence function. Work is largely reactive—driven primarily by escalations or stakeholder questions—and may be reliant on open sources, vendor feeds, internal alerts, or ad-hoc investigations.
  • Maturing: Processes have moved beyond reactive workflows and are beginning to operate with a consistent structure. There are documented priority intelligence requirements and teams are intentionally building depth across sources, workflows, and reporting.
  • Advanced: In this stage, intelligence functions shape how your organization understands, prioritizes, and responds to threats. Requirements are well-defined, visibility spans multiple layers of the threat ecosystem, and analysts apply structured tradecraft that produces actionable intelligence.
  • Leader: Intelligence functions are a core component of organizational risk strategy. Outputs are trusted and used across the business to inform high-stakes decisions, shape long-range planning, and provide early warning across cyber, fraud, physical, brand, and geopolitical domains.
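
Because maturity is assessed per function, a simple way to picture the benchmarking step is a score-to-stage mapping applied separately to each program component. The sketch below is purely illustrative: the thresholds and function names are assumptions, not Flashpoint's actual scoring model.

```python
# Hypothetical helper mapping a 0-100 capability score per function to the
# four stages described above. Thresholds are illustrative assumptions only.

STAGES = [
    (25, "Developing"),   # largely reactive, ad-hoc sources
    (50, "Maturing"),     # documented PIRs, consistent structure
    (75, "Advanced"),     # structured tradecraft, broad visibility
    (100, "Leader"),      # intelligence drives org-wide risk strategy
]

def stage_for(score: int) -> str:
    """Map a single capability score to a maturity stage."""
    for upper, name in STAGES:
        if score <= upper:
            return name
    raise ValueError("score must be between 0 and 100")

def benchmark(functions: dict[str, int]) -> dict[str, str]:
    """Stage each program component (SOC, fraud, vulnerability, physical)
    separately, since maturity is rarely uniform across an organization."""
    return {name: stage_for(score) for name, score in functions.items()}
```

Applied to, say, `{"SOC": 80, "fraud": 30}`, this kind of per-function view makes uneven maturity visible in a way a single aggregate score cannot.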

A Practical Roadmap, Not a Judgment

No matter which stage you are currently in, advancing an intelligence function requires deeper visibility into relevant ecosystems, stronger analytic rigor, and the ability to act on intelligence at the moment it matters. To move the needle, organizations need clear requirements, direct visibility into where threats originate, structured tradecraft, and intelligence that drives decisions.

Flashpoint helps teams accelerate progress with the data, expertise, and workflows that strengthen intelligence programs at every stage—without requiring a new operational model. Take the assessment now to see where your intelligence program stands. Or, learn more about how Flashpoint helps intelligence teams progress faster, reduce fragmentation, and sustain momentum toward intelligence-led operations, delivered through the Flashpoint Ignite Platform.

Request a demo today.


Smart AI Policy Means Examining Its Real Harms and Benefits

4 February 2026 at 23:40

The phrase "artificial intelligence" has been around for a long time, covering everything from computers with "brains"—think Data from Star Trek or HAL 9000 from 2001: A Space Odyssey—to the autocomplete function that too often has you sending emails to the wrong person. It's a term that sweeps a wide array of uses into it—some well-established, others still being developed.

Recent news shows us a rapidly expanding catalog of potential harms that may result from companies pushing AI into every new feature and aspect of public life—like the automation of bias that follows from relying on a backward-looking technology to make consequential decisions about people's housing, employment, education, and so on. Complicating matters, the computation needed for some AI services requires vast amounts of water and electricity, leading to sometimes difficult questions about whether the increased fossil fuel use or consumption of water is justified.

We are also inundated with advertisements and exhortations to use the latest AI-powered apps, and with hype insisting AI can solve any problem.

Obscured by this hype, there are some real examples of AI proving to be a helpful tool. For example, machine learning is especially useful for scientists looking at everything from the inner workings of our biology to cosmic bodies in outer space. AI tools can also improve accessibility for people with disabilities, facilitate police accountability initiatives, and more. There are reasons why these problems are amenable to machine learning and why excitement over these uses shouldn’t translate into a perception that just any language model or AI technology possesses expert knowledge or can solve whatever problem it’s marketed as solving.

EFF has long fought for sensible, balanced tech policies because we’ve seen how regulators can focus entirely on use cases they don’t like (such as the use of encryption to hide criminal behavior) and cause enormous collateral harm to other uses (such as using encryption to hide dissident resistance). Similarly, calls to completely preempt state regulation of AI would thwart important efforts to protect people from the real harms of AI technologies. Context matters. Large language models (LLMs) and the tools that rely on them are not magic wands—they are general-purpose technologies. And if we want to regulate those technologies in a way that doesn’t shut down beneficial innovations, we have to focus on the impact(s) of a given use or tool, by a given entity, in a specific context. Then, and only then, can we even hope to figure out what to do about it.

So let’s look at the real-world landscape.

AI’s Real and Potential Harms

Thinking ahead about potential negative uses of AI helps us spot risks. Too often, the corporations developing AI tools—as well as governments that use them—lose sight of the real risks, or don’t care. For example, companies and governments use AI to do all sorts of things that hurt people, from price collusion to mass surveillance. AI should never be part of a decision about whether a person will be arrested, deported, placed into foster care, or denied access to important government benefits like disability payments or medical care.

There is too much at stake, and governments have a duty to make responsible, fair, and explainable decisions, which AI can’t reliably do yet. Why? Because AI tools are designed to identify and reproduce patterns in data that they are “trained” on.  If you train AI on records of biased government decisions, such as records of past arrests, it will “learn” to replicate those discriminatory decisions.
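
A toy example makes the mechanism concrete. The model below is a deliberately trivial frequency counter on invented data (no real records or real predictor): trained on "historical arrest" records shaped by over-policing of one neighborhood, it faithfully reproduces that pattern, because pattern reproduction is all it does.

```python
from collections import Counter

# Toy illustration with hypothetical data: a frequency-based "predictor"
# trained on biased historical records simply learns to replicate the
# over-policing pattern present in its training data.

def train(records):
    """records: list of (neighborhood, arrested) pairs from biased history."""
    counts = {}
    for neighborhood, arrested in records:
        counts.setdefault(neighborhood, Counter())[arrested] += 1
    return counts

def predict(model, neighborhood):
    # Predict the majority historical outcome: pure pattern replication.
    return model[neighborhood].most_common(1)[0][0]

# Neighborhood A was heavily policed, so arrests dominate its records;
# neighborhood B was lightly policed, so they don't.
history = ([("A", True)] * 8 + [("A", False)] * 2 +
           [("B", True)] * 2 + [("B", False)] * 8)

model = train(history)
```

Here `predict(model, "A")` returns `True` and `predict(model, "B")` returns `False`: the "prediction" is just the bias of the data, restated. Real models are far more complex, but the underlying dynamic is the same.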

And simply having a human in the decision chain will not fix this foundational problem. Studies have shown that having a human “in the loop” doesn’t adequately correct for AI bias, both because the human tends to defer to the AI and because the AI can provide cover for a biased human to ratify decisions that agree with their biases and override the AI at other times.

These biases don’t just arise in obvious contexts, like when a government agency is making decisions about people. They can also arise in equally life-affecting contexts like medical care: whenever AI is used for analysis in a context with systemic disparities, and whenever the costs of an incorrect decision fall on someone other than those deciding whether to use the tool. For example, dermatology has historically underserved people of color because of a focus on white skin, and the resulting bias affects AI tools trained on the existing, biased image data.

These kinds of errors are difficult to detect and correct because it’s hard or even impossible to understand how an AI tool arrives at individual decisions. These tools can sometimes find and apply patterns that a human being wouldn't even consider, such as basing diagnostic decisions on which hospital a scan was done at. Or determining that malignant tumors are the ones where there is a ruler next to them—something that a human would automatically exclude from their evaluation of an image. Unlike a human, AI does not know that the ruler is not part of the cancer.

Auditing and correcting for these kinds of mistakes is vital, but in some cases, might negate any sort of speed or efficiency arguments made in favor of the tool. We all understand that the more important a decision is, the more guardrails against disaster need to be in place. For many AI tools, those don't exist yet. Sometimes, the stakes will be too high to justify the use of AI. In general, the higher the stakes, the less this technology should be used.

We also need to acknowledge the risk of over-reliance on AI, at least as it is currently being released. We've seen shades of a similar problem before online (see: "Dr. Google"), but the speed and scale of AI use—and the increasing market incentive to shoe-horn “AI” into every business model—have compounded the issue.

Moreover, AI may reinforce a user’s pre-existing beliefs—even if they’re wrong or unhealthy. Many users may not understand how AI works, what it is programmed to do, and how to fact check it. Companies have chosen to release these tools widely without adequate information about how to use them properly and what their limitations are. Instead they market them as easy and reliable. Worse, some companies also resist transparency in the name of trade secrets and reducing liability, making it harder for anyone to evaluate AI-generated answers. 

Other considerations that may weigh against AI use include its environmental impact and potential labor market effects. Delving into these is beyond the scope of this post, but they are important factors in determining whether AI is doing good somewhere and whether any benefits from AI are equitably distributed.

Research into the extent of AI harms and means of avoiding them is ongoing, but it should be part of the analysis.

AI’s Real and Potential Benefits

However harmful AI technologies can sometimes be, in the right hands and circumstances, they can do things that humans simply can’t. Machine learning technology has powered search tools for over a decade. It’s undoubtedly useful for machines to help human experts pore through vast bodies of literature and data to find starting points for research—things that no number of research assistants could do in a single year. If an actual expert is involved and has a strong incentive to reach valid conclusions, the weaknesses of AI are less significant at the early stage of generating research leads. Many of the following examples fall into this category.

Machine learning differs from traditional statistics in that the analysis doesn’t make assumptions about what factors are significant to the outcome. Rather, the machine learning process computes which patterns in the data have the most predictive power and then relies upon them, often using complex formulae that are unintelligible to humans. These aren’t discoveries of laws of nature—AI is bad at generalizing that way and coming up with explanations. Rather, they’re descriptions of what the AI has already seen in its data set.

To be clear, we don't endorse any products and recognize initial results are not proof of ultimate success. But these cases show us the difference between something AI can actually do versus what hype claims it can do.

Researchers are using AI to discover better alternatives to today’s lithium-ion batteries, which require large amounts of toxic, expensive, and highly combustible materials. Now, AI is rapidly advancing battery development: by allowing researchers to analyze millions of candidate materials and generate new ones. New battery technologies discovered with the help of AI have a long way to go before they can power our cars and computers, but this field has come further in the past few years than it had in a long time.

AI Advancements in Scientific and Medical Research

AI tools can also help facilitate weather prediction. AI forecasting models are less computationally intensive and often more reliable than traditional tools based on simulating the physical thermodynamics of the atmosphere. Questions remain, though, about how they will handle especially extreme events or systemic climate changes over time.

For example:

  • The National Oceanic and Atmospheric Administration has developed new machine learning models to improve weather prediction, including a first-of-its-kind hybrid system that uses an AI model in concert with a traditional physics-based model to deliver more accurate forecasts than either model does on its own.
  • Several models were used to forecast a recent hurricane. Google DeepMind’s AI system performed the best, even beating official forecasts from the U.S. National Hurricane Center (which now uses DeepMind’s AI model).

 Researchers are using AI to help develop new medical treatments:

  • Deep learning tools, like the Nobel Prize-winning model AlphaFold, are helping researchers understand protein folding. Over 3 million researchers have used AlphaFold to analyze biological processes and design drugs that target disease-causing malfunctions in those processes.
  • Researchers used machine learning to simulate and computationally test a large range of new antibiotic candidates, hoping they will help treat drug-resistant bacteria, a growing threat that kills millions of people each year.
  • Researchers used AI to identify a new treatment for idiopathic pulmonary fibrosis, a progressive lung disease with few treatment options. The new treatment has successfully completed a Phase IIa clinical trial. Such drugs still need to be proven safe and effective in larger clinical trials and gain FDA approval before they can help patients, but this new treatment for pulmonary fibrosis could be the first to reach that milestone.
  • Machine learning has been used for years to aid in vaccine development—including the development of the first COVID-19 vaccines––accelerating the process by rapidly identifying potential vaccine targets for researchers to focus on.

AI Uses for Accessibility and Accountability

AI technologies can improve accessibility for people with disabilities. But, as with many uses of this technology, safeguards are essential. Many tools lack adequate privacy protections, aren’t designed for disabled users, and can even harbor bias against people with disabilities. Inclusive design, privacy, and anti-bias safeguards are crucial. But here are two very interesting examples:

  • AI voice generators are giving people who have lost the ability to speak their voices back. For example, while serving in Congress, Rep. Jennifer Wexton developed a debilitating neurological condition that left her unable to speak. She used her cloned voice to deliver a speech from the floor of the House of Representatives advocating for disability rights.
  • Those who are blind or low-vision, as well as those who are deaf or hard-of-hearing, have benefited from accessibility tools while also discussing their limitations and drawbacks. At present, AI tools often provide information in a more easily accessible format than traditional web search tools and many websites that are difficult to navigate for users that rely on a screen reader. Other tools can help blind and low vision users navigate and understand the world around them by providing descriptions of their surroundings. While these visual descriptions may not always be as good as the ones a human may provide, they can still be useful in situations when users can’t or don’t want to ask another human to describe something. For more on this, check out our recent podcast episode on “Building the Tactile Internet.”

When there is a lot of data to comb through, as with police accountability, AI is very useful for researchers and policymakers:

  • The Human Rights Data Analysis Group used LLMs to analyze millions of pages of records regarding police misconduct. This is essentially the reverse of harmful surveillance use cases: when the public uses the power to rapidly analyze large amounts of data to scrutinize the state, there is the potential to reveal abuses of power and, given the power imbalance, very little risk that undeserved consequences will befall those being studied.
  • An EFF client, Project Recon, used an AI system to review massive volumes of transcripts of prison parole hearings to identify biased parole decisions. This innovative use of technology to identify systemic biases, including racial disparities, is the type of AI use we should support and encourage.

It is not a coincidence that the best examples of positive uses of AI come in places where experts, with access to infrastructure to help them use the technology and the requisite experience to evaluate the results, are involved. Moreover, academic researchers are already accustomed to explaining what they have done and being transparent about it—and it has been hard won knowledge that ethics are a vital step in work like this.

Nor is it a coincidence that other beneficial uses involve specific, discrete solutions to problems faced by those whose needs are often unmet by traditional channels or vendors. The ultimate outcome is beneficial, but it is moderated by human expertise and/or tailored to specific needs.

Context Matters

It can be very tempting—and easy—to make a blanket determination about something, especially when the stakes seem so high. But we urge everyone—users, policymakers, the companies themselves—to cut through the hype. In the meantime, EFF will continue to work against the harms caused by AI while also making sure that beneficial uses can advance.

Protecting the Big Game: A Threat Assessment for Super Bowl LX


This threat assessment analyzes potential physical and cyber threats to Super Bowl LX.

February 4, 2026

Each year, the Super Bowl draws one of the largest live audiences of any global sporting event, with tens of thousands of spectators attending in person and more than 100 million viewers expected to watch worldwide. Super Bowl LX, taking place on February 8, 2026, at Levi’s Stadium, will feature the Seattle Seahawks and the New England Patriots, with Bad Bunny headlining the halftime show and Green Day performing during the opening ceremony.

Beyond the game itself, the Super Bowl represents one of the most influential commercial and media stages in the world, with major brands investing in some of the most expensive advertising time of the year. The scale, visibility, and economic significance of the event make it an attractive target for threat actors seeking attention, disruption, or financial gain, underscoring the need for heightened security awareness.

Cybersecurity Considerations

At this time, Flashpoint has not observed any specific cyber threats targeting Super Bowl LX. Despite the absence of overt threats, it remains possible that threat actors may attempt to obtain personal information—including financial and credit card details—through scams, malware, phishing campaigns, or other opportunistic cyber activity.

High-profile events such as the Super Bowl have historically been leveraged as bait for cyber campaigns targeting fans and attendees rather than league infrastructure. In October 2024, the online store of the Green Bay Packers was hacked, exposing customers’ financial details. Previous incidents also include the February 2022 “BlackByte” ransomware attack that targeted the San Francisco 49ers in the lead-up to Super Bowl LVI.

Although Flashpoint has not identified any credible calls for large-scale cyber campaigns against Super Bowl LX at this time, analysts assess that cyber activity—if it occurs—is more likely to focus on fraud, impersonation, and social engineering directed at ticket holders, travelers, and high-profile attendees.

Online Sentiment

Flashpoint is currently monitoring online sentiment ahead of Super Bowl LX. At the time of publishing, analysts have identified pockets of increasingly negative online chatter related primarily to allegations of federal immigration enforcement activity in and around the event, as well as broader political and social tensions surrounding the Super Bowl.

Online discussions include calls for protests and boycotts tied to perceived Immigration and Customs Enforcement (ICE) involvement, as well as controversy surrounding halftime and opening ceremony performers. While sentiment toward the game itself and associated events remains largely positive, Flashpoint continues to monitor for escalation in rhetoric that could translate into real-world activity.

Potential Physical Threats

Protests and Boycotts

Flashpoint analysts have identified online chatter promoting protests in the Bay Area in response to allegations that ICE agents will conduct enforcement operations in and around Super Bowl LX. A planned protest is scheduled to take place near Levi’s Stadium on February 8, 2026, during game-day hours.

At this time, Flashpoint has not identified any calls for violence or physical confrontation associated with these actions. However, analysts cannot rule out the possibility that demonstrations could expand or relocate, potentially causing localized disruptions near the venue or surrounding infrastructure if protesters gain access to restricted areas.

In addition, Flashpoint has identified online calls to boycott the Super Bowl tied to both the alleged ICE presence and controversy surrounding the event’s halftime and opening ceremony performers. Flashpoint has not identified any chatter indicating that players, NFL personnel, or affiliated organizations plan to boycott or disrupt the game or related events.

Terrorist and Extremist Threats

Flashpoint has not identified any direct or credible threats to Super Bowl LX or its attendees from violent extremists or terrorist groups at this time. However, as with any high-profile sporting event, lone actors inspired by international terrorist organizations or domestic violent extremist ideologies remain a persistent risk due to the scale of attendance and global media attention.

Super Bowl LX is designated as a Special Event Assessment Rating Level 1 (SEAR-1) event, necessitating extensive interagency coordination and heightened security measures. Law enforcement presence is expected to be significant, with layered security protocols, strict access control points, and comprehensive screening procedures in place throughout Levi’s Stadium and surrounding areas. Contingency planning for crowd management, emergency response, and evacuation scenarios is ongoing.

Mitigation Strategies and Executive Protection

Given the absence of specific, identified threats, mitigation strategies for key personnel attending Super Bowl LX focus on general best practices. Security teams tasked with executive protection should remove sensitive personal information from online sources, monitor open-source and social media channels, and establish targeted alerts for potential threats or emerging protest activity.

Physical security teams and protected individuals should also familiarize themselves with venue layouts, emergency exits, nearby medical facilities, and law enforcement presence, and remain alert to changes in crowd dynamics or protest activity in the vicinity of the event.

The nearest medical facilities are:

  • O’Connor Hospital (Santa Clara Valley Healthcare)
  • Kaiser Permanente Santa Clara Medical Center
  • Santa Clara Valley Medical Center
  • Valley Health Center Sunnyvale

Several of these facilities offer 24/7 emergency services and are located within a short driving distance of the stadium.

The primary law enforcement facility near the venue is:

  • Santa Clara Police Department

As a SEAR-1 event, extensive coordination is expected among local, state, and federal law enforcement agencies throughout the Bay Area.

Stay Safe Using Flashpoint

Although there are no indications of any credible, immediate threats to Super Bowl LX or attendees at this time, it is imperative to be vigilant and prepared. Protecting key personnel in today’s threat environment requires a multi-faceted approach. To effectively bridge the gap between online and offline threats, organizations must adopt a comprehensive strategy that incorporates open source intelligence (OSINT) and physical security measures. Download Flashpoint’s Physical Safety Event Checklist to learn more.

Request a demo today.
