
Building an AI-powered defense-in-depth security architecture for serverless microservices

16 February 2026 at 21:10

March 10, 2026: This post has been updated to note that Amazon Q Detector Library describes the detectors used during code reviews to identify security and quality issues in code.


Enterprise customers face an unprecedented security landscape where sophisticated cyber threats use artificial intelligence to identify vulnerabilities, automate attacks, and evade detection at machine speed. Traditional perimeter-based security models are insufficient when adversaries can analyze millions of attack vectors in seconds and exploit zero-day vulnerabilities before patches are available.

The distributed nature of serverless architectures compounds this challenge. While microservices offer agility and scalability, they significantly expand the attack surface: each API endpoint, function invocation, and data store becomes a potential entry point, and a single misconfigured component can give attackers the foothold needed for lateral movement. Organizations must also navigate complex regulatory environments where compliance frameworks like GDPR, HIPAA, PCI-DSS, and SOC 2 demand robust security controls and comprehensive audit trails. Meanwhile, the velocity of software development creates tension between security and innovation, requiring architectures that are both comprehensive and automated so teams can deploy securely without sacrificing speed.

The challenge is multifaceted:

  • Expanded attack surface: Multiple entry points across distributed services requiring protection against distributed denial of service (DDoS) attacks, injection vulnerabilities, and unauthorized access
  • Identity and access complexity: Managing authentication and authorization across numerous microservices and service-to-service communications
  • Data protection requirements: Encrypting sensitive data in transit and at rest while securely storing and rotating credentials without compromising performance
  • Compliance and data protection: Meeting regulatory requirements through comprehensive audit trails and continuous monitoring in distributed environments
  • Network isolation challenges: Implementing controlled communication paths without exposing resources to the public internet
  • AI-powered threats: Defending against attackers who use AI to automate reconnaissance, adapt attacks in real-time, and identify vulnerabilities at machine speed

The solution lies in defense-in-depth—a layered security approach where multiple independent controls work together to protect your application.

This article demonstrates how to implement a comprehensive AI-powered defense-in-depth security architecture for serverless microservices on Amazon Web Services (AWS). By layering security controls at each tier of your application, the architecture creates a resilient system where no single point of failure compromises your entire infrastructure: if one layer is compromised, additional controls help limit the impact and contain the incident. AI and machine learning services are incorporated throughout, helping organizations counter AI-powered threats with AI-powered defenses.

Architecture overview: A journey through security layers

Let’s trace a user request from the public internet through our secured serverless architecture, examining each security layer and the AWS services that protect it. This implementation deploys security controls at seven distinct layers with continuous monitoring and AI-powered threat detection throughout, where each layer provides specific capabilities that work together to create a comprehensive defense-in-depth strategy:

  • Layer 1 blocks malicious traffic before it reaches your application
  • Layer 2 verifies user identity and enforces access policies
  • Layer 3 encrypts communications and manages API access
  • Layer 4 isolates resources in private networks
  • Layer 5 secures compute execution environments
  • Layer 6 protects credentials and sensitive configuration
  • Layer 7 encrypts data at rest and controls data access
  • Continuous monitoring detects threats across layers using AI-powered analysis


Figure 1: Architecture diagram


Layer 1: Edge protection

Before requests reach your application, they traverse the public internet where attackers launch volumetric DDoS attacks, SQL injection, cross-site scripting (XSS), and other web exploits. AWS observed and mitigated thousands of DDoS attacks in 2024, with one exceeding 2.3 terabits per second.

  • DDoS protection: AWS Shield provides managed DDoS protection for applications running on AWS and is enabled for customers at no cost. AWS Shield Advanced offers enhanced detection, continuous access to the AWS DDoS Response Team (DRT), cost protection during attacks, and advanced diagnostics for enterprise applications.
  • Layer 7 protection: AWS WAF protects against Layer 7 attacks through managed rule groups from AWS and AWS Marketplace sellers that cover OWASP Top 10 vulnerabilities including SQL injection, XSS, and remote file inclusion. Rate-based rules automatically block IPs that exceed request thresholds, protecting against application-layer DDoS and brute force attacks. Geo-blocking capabilities restrict access based on geographic location, while Bot Control uses machine learning to identify and block malicious bots while allowing legitimate traffic.
  • AI for security: Amazon GuardDuty incorporates generative AI into its native threat detection, improving detection, investigation, and response through automated analysis.
  • AI-powered enhancement: Organizations can build autonomous AI security agents using Amazon Bedrock to analyze AWS WAF logs, reason through attack data, and automate incident response. These agents detect novel attack patterns that signature-based systems miss, generate natural language summaries of security incidents, automatically recommend AWS WAF rule updates based on emerging threats, correlate attack indicators across distributed services to identify coordinated campaigns, and trigger appropriate remediation actions based on threat context. This helps enable more proactive threat detection and response capabilities, reducing mean time to detection and response.
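As a sketch of the rate-based blocking described above, the following shows the kind of rule definition that could be passed to the WAFv2 `create_web_acl`/`update_web_acl` APIs through boto3. The rule name, request limit, and metric name are illustrative assumptions, not values from this article.

```python
# Illustrative sketch: a WAFv2 rate-based rule of the kind described
# above. Name, limit, and metric name are placeholder assumptions.

def rate_based_rule(name: str, limit: int, priority: int) -> dict:
    """Build a WAFv2 rate-based rule that blocks source IPs exceeding
    `limit` requests within WAF's 5-minute evaluation window."""
    return {
        "Name": name,
        "Priority": priority,
        "Statement": {
            "RateBasedStatement": {
                "Limit": limit,            # requests per 5 minutes per IP
                "AggregateKeyType": "IP",  # aggregate counts by source IP
            }
        },
        "Action": {"Block": {}},           # block offenders outright
        "VisibilityConfig": {
            "SampledRequestsEnabled": True,
            "CloudWatchMetricsEnabled": True,
            "MetricName": name,
        },
    }

rule = rate_based_rule("throttle-per-ip", limit=2000, priority=1)
```

The resulting dict would go in the `Rules` list of a `wafv2` `create_web_acl` or `update_web_acl` call, alongside the managed rule groups mentioned above.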

Layer 2: Verifying identity

After requests pass edge protection, you must verify user identity and determine resource access. Traditional username/password authentication is vulnerable to credential stuffing, phishing, and brute force attacks, requiring robust identity management that supports multiple authentication methods and adaptive security responding to risk signals in real time.

Amazon Cognito provides comprehensive identity and access management for web and mobile applications through two components:

  • User pools offer a fully managed user directory handling registration, sign-in, multi-factor authentication (MFA), password policies, social identity provider integration, SAML and OpenID Connect federation for enterprise identity providers, and advanced security features including adaptive authentication and compromised credential detection.
  • Identity pools grant temporary, limited-privilege AWS credentials to users for secure direct access to AWS services without exposing long-term credentials.

Amazon Cognito adaptive authentication uses machine learning to detect suspicious sign-in attempts by analyzing device fingerprinting, IP address reputation, geographic location anomalies, and sign-in velocity patterns, then allows sign-in, requires additional MFA verification, or blocks attempts based on risk assessment. Compromised credential detection automatically checks credentials against databases of compromised passwords and blocks sign-ins using known compromised credentials. MFA supports both SMS-based and time-based one-time password (TOTP) methods, significantly reducing account takeover risk.

For advanced behavioral analysis, organizations can use Amazon Bedrock to analyze patterns across extended timeframes, detecting account takeover attempts through geographic anomalies, device fingerprint changes, access pattern deviations, and time-of-day anomalies.

Layer 3: The application front door

An API gateway serves as your application’s entry point. It must handle request routing, throttling, API key management, and encryption, and it needs to integrate seamlessly with your authentication layer and provide detailed logging for security auditing while maintaining high performance and low latency.

  • Amazon API Gateway is a fully managed service for creating, publishing, and securing APIs at scale. It provides SSL/TLS encryption with AWS Certificate Manager (ACM), which automatically handles certificate provisioning, renewal, and deployment. Request throttling and quota management protect backend services through configurable burst and rate limits, with usage quotas per API key or client to prevent abuse, while API key management controls access from partner systems and third-party integrations. Request/response validation uses JSON Schema to validate data before it reaches AWS Lambda functions, preventing malformed requests from consuming compute resources. Seamless integration with Amazon Cognito validates JSON Web Tokens (JWTs) and enforces authentication requirements before requests reach application logic.
  • GuardDuty provides AI-powered intelligent threat detection by analyzing API invocation patterns and identifying suspicious activity including credential exfiltration using machine learning. For advanced analysis, Amazon Bedrock analyzes API Gateway metrics and Amazon CloudWatch logs to identify unusual HTTP 4XX error spikes (for example, 403 Forbidden) that might indicate scanning or probing attempts, geographic distribution anomalies, endpoint access pattern deviations, time-series anomalies in request volume, or suspicious user agent patterns.
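API Gateway expresses request validation as JSON Schema models; the tiny pure-Python check below illustrates the same idea of rejecting malformed payloads before they consume compute. The field names and types are invented for illustration, not taken from a real API.

```python
# Minimal sketch of request-shape validation, mirroring what an
# API Gateway JSON Schema model does before invoking Lambda.
# The schema fields below are illustrative placeholders.

SCHEMA = {
    "orderId": str,
    "quantity": int,
}

def validate_request(body: dict) -> list[str]:
    """Return a list of validation errors; an empty list means valid."""
    errors = []
    for field, expected_type in SCHEMA.items():
        if field not in body:
            errors.append(f"missing required field: {field}")
        elif not isinstance(body[field], expected_type):
            errors.append(f"{field} must be {expected_type.__name__}")
    return errors
```

Rejecting bad input at the gateway, rather than inside the function, keeps malformed requests from consuming Lambda invocations at all.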

Layer 4: Network isolation

Application logic and data must be isolated from direct internet access. Network segmentation is designed to limit lateral movement if a security incident occurs, helping to prevent compromised components from easily accessing sensitive resources.

  • Amazon Virtual Private Cloud (Amazon VPC) provides isolated network environments implementing a multi-tier architecture: public subnets for NAT gateways and application load balancers with internet gateway routes, private subnets for Lambda functions and application components that reach the internet through NAT gateways for outbound connections, and data subnets with the most restrictive access controls. Lambda functions run in private subnets to prevent direct internet access. VPC flow logs capture network traffic for security analysis, security groups provide stateful firewalls following least privilege principles, and network ACLs add stateless subnet-level firewalls with explicit deny rules. VPC endpoints enable private connectivity to Amazon DynamoDB, AWS Secrets Manager, and Amazon S3 without traffic leaving the AWS network.
  • GuardDuty provides AI-powered network threat detection by continuously monitoring VPC Flow Logs, CloudTrail logs, and DNS logs using machine learning to identify unusual network patterns, unauthorized access attempts, compromised instances, and reconnaissance activity, now including generative AI capabilities for automated analysis and natural language security queries.
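The least-privilege security group pattern described above can be sketched as follows: the data-tier group admits traffic only from the application tier's security group, only on the database port. The group IDs and port are placeholder assumptions.

```python
# Sketch of a least-privilege ingress rule for the data tier: only the
# Lambda tier's security group, only on the database port. Group IDs
# are placeholders.

def data_tier_ingress(lambda_sg_id: str, db_port: int) -> dict:
    """Ingress permissions referencing the source security group rather
    than a CIDR range, so only members of the Lambda tier can connect."""
    return {
        "IpPermissions": [
            {
                "IpProtocol": "tcp",
                "FromPort": db_port,
                "ToPort": db_port,
                "UserIdGroupPairs": [{"GroupId": lambda_sg_id}],
            }
        ]
    }

perms = data_tier_ingress("sg-0123456789example", db_port=5432)
```

The resulting structure would be passed to an EC2 `authorize_security_group_ingress` call (with the data-tier `GroupId`); referencing a security group instead of an IP range is what makes the rule track tier membership automatically.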

Layer 5: Compute security

Lambda functions executing your application code and often requiring access to sensitive resources and credentials must be protected against code injection, unauthorized invocations, and privilege escalation. Additionally, functions must be monitored for unusual behavior that might indicate compromise.

Lambda provides built-in security features including:

  • AWS Identity and Access Management (IAM) execution roles that define precise resource and action access following least privilege principles
  • Resource-based policies that control which services and accounts can invoke functions to prevent unauthorized invocations
  • Environment variable encryption using AWS Key Management Service (AWS KMS) for variables at rest; sensitive data should use Secrets Manager instead
  • Function isolation designed so that each execution runs in an isolated environment, preventing cross-invocation data access
  • VPC integration enabling functions to benefit from network isolation and security group controls
  • Runtime security with automatically patched and updated managed runtimes
  • Code signing with AWS Signer digitally signing deployment packages for code integrity and cryptographic verification against unauthorized modifications

The Amazon Q Detector Library describes the detectors used during code reviews to identify security and quality issues in code. Detectors contain rules that are used to identify critical security vulnerabilities like OWASP Top 10 and CWE Top 25 issues, including secrets exposure and package dependency vulnerabilities. They also detect code quality concerns such as IaC best practices and inefficient AWS API usage patterns, helping developers maintain secure and high-quality applications.

Vulnerability management: Amazon Inspector provides automated vulnerability management, continuously scanning Lambda functions for software vulnerabilities and network exposure, using machine learning to prioritize findings and provide detailed remediation guidance.

Layer 6: Protecting credentials

Applications require access to sensitive credentials including database passwords, API keys, and encryption keys. Hardcoding secrets in code or storing them in environment variables creates security vulnerabilities, requiring secure storage, regular rotation, authorized-only access, and comprehensive auditing for compliance.

  • Secrets Manager protects access to applications, services, and IT resources without managing hardware security modules (HSMs). It provides centralized secret storage for database credentials, API keys, and OAuth tokens in an encrypted repository using AWS KMS encryption at rest.
  • Automatic secret rotation configures rotation for database credentials, automatically updating both the secret store and target database without application downtime.
  • Fine-grained access control uses IAM policies to control which users and services access specific secrets, implementing least-privilege access.
  • Audit trails log secret access in AWS CloudTrail for compliance and security investigations. VPC endpoint support is designed so that secret retrieval traffic doesn’t leave the AWS network.
  • Lambda integration enables functions to retrieve secrets programmatically at runtime, designed so that secrets aren’t stored in code or configuration files and can be rotated without redeployment.
  • GuardDuty provides AI-powered monitoring, detecting anomalous behavior patterns that could indicate credential compromise or unauthorized access.
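The runtime-retrieval pattern in the list above is usually paired with a short-lived in-memory cache, so a warm Lambda function does not call Secrets Manager on every invocation yet still picks up rotated values. In this sketch the fetch function is injected so the pattern can be shown without live AWS calls; in a real function it would wrap the Secrets Manager `get_secret_value` API.

```python
import time

# Sketch of runtime secret retrieval with a TTL cache. The fetcher is
# injected; in practice it would be something like:
#   lambda sid: client.get_secret_value(SecretId=sid)["SecretString"]

class SecretCache:
    def __init__(self, fetch, ttl_seconds: float = 300.0):
        self._fetch = fetch
        self._ttl = ttl_seconds
        self._cache: dict[str, tuple[str, float]] = {}

    def get(self, secret_id: str) -> str:
        """Serve from cache while fresh; re-fetch after the TTL so
        rotated secrets are picked up without redeploying the function."""
        value, fetched_at = self._cache.get(secret_id, (None, 0.0))
        if value is None or time.monotonic() - fetched_at > self._ttl:
            value = self._fetch(secret_id)
            self._cache[secret_id] = (value, time.monotonic())
        return value
```

The TTL trades freshness for cost: shorter values follow rotations more closely, longer values reduce API calls on hot paths.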

Layer 7: Data protection

The data layer stores sensitive business information and customer data requiring protection both at rest and in transit. Data must be encrypted, access tightly controlled, and operations audited, while maintaining resilience against availability attacks and high performance.

Amazon DynamoDB is a fully managed NoSQL database providing built-in security features including:

  • Encryption at rest (using AWS-owned, AWS managed, or customer managed KMS keys)
  • Encryption in transit (TLS 1.2 or higher)
  • Fine-grained access control through IAM policies with item-level and attribute-level permissions
  • VPC endpoints for private connectivity
  • Point-in-Time Recovery for continuous backups
  • Streams for audit trails
  • Backup and disaster recovery capabilities
  • Global Tables for multi-AWS Region, multi-active replication designed to provide high availability and low-latency global access

GuardDuty and Amazon Bedrock provide AI-powered data protection:

  • GuardDuty monitors DynamoDB API activity through CloudTrail logs using machine learning to detect anomalous data access patterns including unusual query volumes, access from unexpected geographic locations, and data exfiltration attempts.
  • Amazon Bedrock analyzes DynamoDB Streams and CloudTrail logs to identify suspicious access patterns, correlate anomalies across multiple tables and time periods, generate natural language summaries of data access incidents for security teams, and recommend access control policy adjustments based on actual usage patterns versus configured permissions. This helps transform data protection from reactive monitoring to proactive threat hunting that can detect compromised credentials and insider threats.

Continuous monitoring

Even with comprehensive security controls at every layer, continuous monitoring is essential to detect threats that bypass defenses. Security requires ongoing real-time visibility, intelligent threat detection, and rapid response capabilities rather than one-time implementation.

  • GuardDuty protects your AWS accounts, workloads, and data with intelligent threat detection.
  • CloudWatch provides comprehensive monitoring and observability, collecting metrics, monitoring log files, setting alarms, and automatically reacting to AWS resource changes.
  • CloudTrail provides governance, compliance, and operational auditing by logging all API calls in your AWS account, creating comprehensive audit trails for security analysis and compliance reporting.
  • AI-powered enhancement with Amazon Bedrock provides automated threat analysis: natural language summaries of GuardDuty findings and CloudWatch logs, pattern recognition that identifies coordinated attacks across multiple security signals, incident response recommendations based on your architecture and compliance requirements, security posture assessments with improvement recommendations, and automated response through Lambda and Amazon EventBridge that isolates compromised resources, revokes suspicious credentials, or notifies security teams through Amazon SNS when threats are detected.
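Behind the monitoring described above sits, at its simplest, a statistical baseline check: flag a metric sample that deviates sharply from its recent history. The toy sketch below illustrates only that idea; the managed services use far richer models.

```python
from statistics import mean, stdev

# Toy baseline check: flag a metric value (e.g. a 4XX-error count) that
# sits far above its recent history. Illustrative only.

def is_anomalous(history: list[float], latest: float,
                 threshold: float = 3.0) -> bool:
    """Flag `latest` if it is more than `threshold` standard deviations
    above the historical mean."""
    if len(history) < 2:
        return False            # not enough data for a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return latest > mu      # flat baseline: any rise is notable
    return (latest - mu) / sigma > threshold
```

In a real pipeline, a check like this would feed an EventBridge rule or alarm that triggers the automated responses listed above.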

Conclusion

Securing serverless microservices presents significant challenges, but as demonstrated, using AWS services alongside AI-powered capabilities creates a resilient defense-in-depth architecture that protects against current and emerging threats while proving that security and agility are not mutually exclusive.

Security is an ongoing process—continuously monitor your environment, regularly review security controls, stay informed about emerging threats and best practices, and treat security as a fundamental architectural principle rather than an afterthought.

Further reading

If you have feedback about this blog post, submit it in the Comments section below. If you have questions about using this solution, start a thread in the EventBridge, GuardDuty, or Security Hub forums, or contact AWS Support.

Roger Nem
Roger is an Enterprise Technical Account Manager (TAM) supporting Healthcare & Life Science customers at Amazon Web Services (AWS). As a Security Technical Field community specialist, he helps enterprise customers design secure cloud architectures aligned with industry best practices. Beyond his professional pursuits, Roger finds joy in quality time with family and friends, nurturing his passion for music, and exploring new destinations through travel.

16th February – Threat Intelligence Report

By: lorenf
16 February 2026 at 18:57

For the latest discoveries in cyber research for the week of 16th February, please download our Threat Intelligence Bulletin.

TOP ATTACKS AND BREACHES

  • Dutch telecom provider Odido was hit by a data breach following unauthorized access to its customer management system. Attackers extracted personal data of 6.2 million customers, including names, addresses, phone numbers, email addresses, bank account details, dates of birth, and passport or ID numbers.
  • BridgePay Network Solutions, a US payment gateway, has confirmed a ransomware attack that forced it to take core systems offline. The outage disrupted portals for municipalities and merchants nationwide, though initial findings indicate no payment card data exposure and accessed files were encrypted. No ransomware group claimed responsibility for the attack.
  • Flickr, a photo sharing platform, has experienced a security incident at a third-party email service provider on February 5. The exposure may include names, usernames, email addresses, IP addresses, location data, and more. Passwords and payment card numbers were not affected.
  • ApolloMD, a US physician and practice management services firm, has disclosed a breach impacting 626,000 individuals. The incident occurred in May 2025, when attackers accessed patient information from affiliated practices, exposing data such as names, addresses, and medical details.

AI THREATS

  • Google has released an analysis of adversarial AI misuse, detailing model extraction “distillation” attacks, AI-augmented phishing, and malware experimentation in late 2025. The report identified attempts to coerce disclosure of internal reasoning, AI-assisted reconnaissance by DPRK, PRC, Iranian, and Russian actors, and AI-integrated malware such as HONESTCUE leveraging Gemini’s API for second-stage payload generation.
  • Researchers have investigated a UNC1069 intrusion targeting a cryptocurrency FinTech through AI-enabled social engineering and a fake Zoom ClickFix lure. The attack deployed seven malware families enabling TCC bypass, credential and browser data theft, keystroke logging, and C2 communications over RC4-encrypted configurations.

Check Point Threat Emulation provides protection against this threat (Trojan.Wins.SugarLoader)

  • Researchers have detailed the abuse of AI website builders to clone major brands for phishing and fraud. They analyzed a Malwarebytes lookalike site created using Vercel’s v0 tool, which replicated branding and integrated opaque PayPal payment flows. The domain leveraged SEO poisoning and spam links, with registration data indicating links to India.

VULNERABILITIES AND PATCHES

  • Microsoft has released its February 2026 Patch Tuesday updates. The release addresses 58 vulnerabilities, including six zero days under active exploitation, among them CVE-2026-21510, a Windows Shell Security Feature Bypass vulnerability that can be triggered by opening a specially crafted link or shortcut file. Successful exploitation requires convincing a user to open a malicious link or shortcut file.
  • Google has patched 11 vulnerabilities in Chrome 145 for Windows, macOS, and Linux, including CVE-2026-2313, a use-after-free vulnerability in CSS. This high-severity flaw could allow remote code execution. Two additional high severity bugs in Codecs (CVE-2026-2314) and WebGPU (CVE-2026-2315) also enable code execution.
  • BeyondTrust has addressed CVE-2026-1731, a CVSS 9.9 pre-authentication remote code execution flaw in Remote Support and older Privileged Remote Access versions. Shortly after a proof of concept was published, threat actors began exploiting exposed instances, prompting urgent upgrades for self-hosted deployments.

Check Point IPS provides protection against this threat (BeyondTrust Multiple Products Command Injection (CVE-2026-1731))

THREAT INTELLIGENCE REPORTS

  • Check Point Research analyzed global cyber-attacks in January averaging 2,090 per organization per week, up 3% from December and 17% year over year. Education remained the most targeted sector with 4,364 attacks per organization, ransomware recorded 678 incidents with 52% in North America, and 1 in 30 GenAI prompts posed high data leak risk.
  • Check Point Research identified a sharp increase in Valentine-themed phishing websites, fraudulent stores, and fake dating platforms designed to steal personal data and payment information. Valentine-related domain registrations rose 44% in January 2026, with 97.5% unclassified, while 710 Tinder-impersonating domains were detected.
  • A Phorpiex-driven phishing campaign has been observed delivering Global Group ransomware via ZIP attachments with double-extension LNK files, using CMD and PowerShell to execute the payload. The ransomware runs offline with locally generated ChaCha20-Poly1305 keys, deletes shadow copies and itself, and terminates analysis and database processes.
  • Researchers have analyzed the latest GuLoader (aka CloudEye) downloader, which delivers Remcos, Vidar, and Raccoon, and now evades detection by leveraging encrypted payloads hosted on Google Drive and OneDrive. The malware uses polymorphic code to generate constants via XOR and ADD/SUB operations, along with anti-analysis techniques such as sandbox checks and exception handlers.

Check Point Harmony Endpoint and Threat Emulation provide protection against this threat (Trojan.Wins.GuLoader; InfoStealer.Win.GuLoader; Dropper.Wins.GuLoader.ta.*; Dropper.Win.CloudEyE; RAT.Wins.Remcos; InfoStealer.Win.Vidar; InfoStealer.Win.Raccoon; InfoStealer.Wins.Raccoon)

The post 16th February – Threat Intelligence Report appeared first on Check Point Research.

ClickFix added nslookup commands to its arsenal for downloading RATs

16 February 2026 at 14:09

ClickFix malware campaigns are all about tricking the victim into infecting their own machine.

Apparently, the criminals behind these campaigns have figured out that mshta and PowerShell commands are increasingly being blocked by security software, so they have developed a new method using nslookup.

The initial stages are pretty much the same as we have seen before: fake CAPTCHA instructions to prove you’re not a bot, solving non-existing computer problems or updates, causing browser crashes, and even instruction videos.

The idea is to get victims to run malicious commands to infect their own machine. The malicious command often gets copied to the victim’s clipboard with instructions to copy it into the Windows Run dialog or the Mac terminal.

Nslookup is a built‑in tool to use the internet “phonebook,” and the criminals are basically abusing that phonebook to smuggle in instructions and malware instead of just getting an address.

It exists to troubleshoot network problems, check if DNS is configured correctly, and investigate odd domains, not to download or run programs. But the criminals configured a server to reply with data that is crafted so that part of the “answer” is actually another command or a pointer to malware, not just a normal IP address.
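A defender-side heuristic for the abuse described above might scan DNS answer text (for example, TXT record contents) for strings that look like commands or payload URLs rather than ordinary resolution data. This is a toy sketch under that assumption; the patterns are illustrative examples, not a complete detection rule.

```python
import re

# Toy heuristic: flag DNS record text that looks like a smuggled
# command rather than an address. Patterns are illustrative only.

SUSPICIOUS = [
    re.compile(r"powershell", re.IGNORECASE),
    re.compile(r"https?://\S+\.(zip|exe|vbs)", re.IGNORECASE),
    re.compile(r"cmd(\.exe)?\s*/c", re.IGNORECASE),
]

def looks_smuggled(record_text: str) -> bool:
    """Return True if a DNS answer's text matches any command-like pattern."""
    return any(p.search(record_text) for p in SUSPICIOUS)
```

A real detection would also consider record size, entropy, and query frequency, since attackers can trivially vary the strings themselves.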

Microsoft provided these examples of malicious commands:

nslookup command examples

These commands start an infection chain that downloads a ZIP archive from an external server. From that archive, it extracts a malicious Python script that runs routines to conduct reconnaissance, run discovery commands, and eventually drop a Visual Basic Script which drops and executes ModeloRAT.

ModeloRAT is a Python‑based remote access trojan (RAT) that gives attackers hands‑on control over an infected Windows machine.

Long story short, the cybercriminals have found yet another way to make a trusted technical tool secretly carry the next step of the attack, all triggered by the victim following what looks like harmless copy-paste support instructions. At that point, victims may unwittingly hand over control of their system.

How to stay safe

With ClickFix running rampant—and it doesn’t look like it’s going away anytime soon—it’s important to be aware, careful, and protected.

  • Slow down. Don’t rush to follow instructions on a webpage or prompt, especially if it asks you to run commands on your device or copy-paste code. Attackers rely on urgency to bypass your critical thinking, so be cautious of pages urging immediate action. Sophisticated ClickFix pages add countdowns, user counters, or other pressure tactics to make you act quickly.
  • Avoid running commands or scripts from untrusted sources. Never run code or commands copied from websites, emails, or messages unless you trust the source and understand the action’s purpose. Verify instructions independently. If a website tells you to execute a command or perform a technical action, check through official documentation or contact support before proceeding.
  • Limit the use of copy-paste for commands. Manually typing commands instead of copy-pasting can reduce the risk of unknowingly running malicious payloads hidden in copied text.
  • Secure your devices. Use an up-to-date, real-time anti-malware solution with a web protection component.
  • Educate yourself on evolving attack techniques. Understanding that attacks may come from unexpected vectors and evolve helps maintain vigilance. Keep reading our blog!

Pro tip: Did you know that the free Malwarebytes Browser Guard extension warns you when a website tries to copy something to your clipboard?


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your and your family’s personal information by using identity protection.

Network Intelligence: Your Questions, Global Answers

16 February 2026 at 01:00

The Problem with Pre-Packaged Intelligence

Security teams are drowning in threat intelligence feeds. Hundreds of vendors promise comprehensive coverage, real-time alerts, and actionable insights. Yet sophisticated adversaries continue to operate undetected, incidents take weeks to scope, and attribution remains elusive.

The fundamental issue isn't quality but control. Traditional network visibility solutions force passive consumption: their alerts, their priorities, their timeline. This one-size-fits-all approach assumes threats targeting financial services match those facing critical infrastructure, or that yesterday's patterns predict tomorrow's campaigns.

Network intelligence flips this model. With global visibility spanning billions of connections across 150+ sensors in 35+ countries, you can investigate what matters to your organization using your own selectors, questions, and mission requirements.

What Network Intelligence Actually Means

Effective network intelligence requires global visibility at scale: distributed sensors across dozens of countries processing billions of packets daily, generating tens of millions of network flow records. But collection methodology matters equally. Metadata-only approaches capture source and destination IPs, ports, protocols, flow counts, and timestamps without payloads or deep packet inspection. This enables operation at internet scale while better maintaining ethical boundaries and data minimization standards.

At Recorded Future, our network intelligence capabilities provide access to these global network traffic observations for specific IP addresses of interest. Our Insikt Group uses this same infrastructure to research 500+ malware families and threat actors. Government CERTs use these capabilities to analyze adversary infrastructure at national scale.

What This Means in Practice

Consider what changes when your security operations can query global network intelligence.

Faster SOC Triage

Your team flags a suspicious IP at 2 AM. Instead of guessing whether it's noise or the start of something worse, query the network intelligence platform. See its global communication patterns instantly. Understand whether you're looking at commodity scanning or infrastructure that's been quietly staging against targets for weeks. Internet scanner detection capabilities automatically classify the behavior and reveal specific ports targeted, web requests made, and geographic distribution. Triage in minutes, not hours.

Targeted or Opportunistic? Now You'll Know

When threats hit your industry, the first question is always: are we specifically in the crosshairs, or is this spray-and-pray? Network intelligence lets you track adversary infrastructure across your sector before it reaches your perimeter. See the pattern. Understand the targeting. Brief leadership with confidence because you're no longer guessing. You're showing them the actual traffic patterns that prove whether your organization is in the crosshairs or caught in the spray.

Fraud Infrastructure Exposed

Fraud campaigns depend on infrastructure that moves fast but leaves traces. Your selectors, run against global network intelligence, can reveal the networks behind credential stuffing, account takeover, and payment fraud before the campaign fully scales.

Attribution That Actually Holds Up

Mapping adversary infrastructure is hard. Connecting it to broader campaigns and ultimate operators is harder. Network intelligence gives you the longitudinal visibility to trace how infrastructure evolves, clusters, and connects. Administrative traffic analysis reveals patterns operators use to manage C2 infrastructure. When you identify admin flows from a common source connecting to multiple C2 servers, you're mapping the operator's pattern based on observed behavior across hundreds of global vantage points. You're turning indicators into intelligence.
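As a toy illustration of this grouping logic, the sketch below flags sources that touch multiple known C2 servers, the pattern described above. The flow field names and the KNOWN_C2 set are invented for the example and are not Recorded Future's actual data model:

```python
from collections import defaultdict

# Hypothetical set of previously identified C2 servers (TEST-NET addresses).
KNOWN_C2 = {"203.0.113.10", "203.0.113.11", "198.51.100.99"}

def likely_operators(flows, min_c2=2):
    """Group flows by source IP and surface sources that contact several
    known C2 servers, a pattern consistent with an operator administering
    infrastructure from a common origin."""
    c2_per_src = defaultdict(set)
    for flow in flows:
        if flow["dst_ip"] in KNOWN_C2:
            c2_per_src[flow["src_ip"]].add(flow["dst_ip"])
    return sorted(src for src, c2s in c2_per_src.items() if len(c2s) >= min_c2)
```

A source seen administering two or more C2 hosts becomes a candidate operator pivot; with hundreds of vantage points, the same grouping scales from a handful of flows to campaign-level clusters.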

Integration Into Security Workflows

Network intelligence integrates directly into existing security workflows through API access to SIEMs, SOAR platforms, and custom analysis tools. When your SIEM flags suspicious traffic, automated queries reveal global context: Is this IP conducting C2 communications? Scanning your sector specifically? Connected to infrastructure from last month's campaign? Curated threat lists reduce noise from legitimate security research while enabling early blocking of targeted reconnaissance, turning your existing tools into instruments for active investigation rather than passive alerting.
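As a rough illustration of the enrichment logic such an automated query might feed, here is a toy classifier over returned flow records. The field names, thresholds, and labels are invented for the sketch and are not Recorded Future's API or scoring:

```python
from collections import Counter

def triage_flows(flows):
    """Classify observed flows for a suspect IP.

    `flows` is a list of metadata-only records, each a dict with
    hypothetical fields: dst_ip, dst_port, proto."""
    dst_ips = {f["dst_ip"] for f in flows}
    ports = Counter(f["dst_port"] for f in flows)
    # Many destinations across a few well-known ports looks like
    # commodity internet scanning; few destinations with repeated
    # contact looks more like staging against chosen targets.
    if len(dst_ips) > 100 and len(ports) <= 5:
        return "commodity-scanning"
    if len(dst_ips) <= 10 and sum(ports.values()) > 50:
        return "possible-targeted-staging"
    return "needs-analyst-review"
```

A SOAR playbook could attach this label to the alert before it ever reaches an analyst, so the 2 AM triage decision starts from context instead of a bare IP.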

When Expertise Becomes Essential

For organizations facing persistent, sophisticated adversaries, network intelligence capabilities alone aren't sufficient. The difference between having access to global network visibility and operationalizing it effectively comes down to tradecraft.

Recorded Future's Global Network Intelligence Advisory program addresses this by pairing technical capabilities with forward-deployed analysts and embedded engineers who work directly inside your SOC or intelligence fusion center. This becomes especially critical when nation-states are mapping your critical infrastructure, when advanced persistent threats are staging for long-term access, or when attribution could influence strategic decision-making. You need the ability to investigate specific questions with global visibility and the expertise to interpret what you find.

The Compliance Framework That Enables Trust

Network intelligence operates under strict ethical and legal guidelines. All use is subject to our Acceptable Use Policy; surveillance, profiling of individuals, and political targeting are prohibited. Access is invitation-only, requiring vetting and agreement to specific terms of use.

These aren't just policies but foundational to how this capability operates. The metadata-only collection model, the data minimization approach, and the geographic distribution that prevents any single point of visibility into user communications are design choices. These constraints aren't obstacles to effectiveness but enablers of trust. They allow powerful intelligence capabilities to exist while promoting appropriate boundaries.

Moving Forward

The gap between what most security programs need and what traditional threat intelligence provides continues to widen. Adversaries operate at scale, evolving infrastructure faster than feeds can update. Internal telemetry shows only what touches your perimeter. Point-in-time observations lack the context to distinguish targeted attacks from noise.

Network intelligence addresses this gap with the ability to query global visibility using your own selectors. At Recorded Future, we've developed capabilities that operate at this scale, with the compliance framework and operational expertise to make them effective. For organizations ready to move beyond pre-packaged feeds, we're offering these capabilities to select customers through an invitation-only program.

What matters now is recognizing that your own questions matter more than pre-packaged answers, and building security programs that reflect that reality.

All eyes on the Olympic Games, the Odido hack, and a ban on nudify apps

15 February 2026 at 10:02
  • The personal data belonging to 6.2 million accounts at telecom provider Odido are in the hands of criminal hackers. Among them are personal data of customers of provider Ben, a subsidiary of Odido. The data includes names, addresses, mobile numbers, customer numbers, email addresses, IBANs, dates of birth, passport and driving licence numbers, and the validity periods of those documents.


  • A kind of “postcode lottery for criminals”, a disaster for all customers who now have to fear that their data is out in the open, but above all a signal that such a mountain of customer data can be extremely vulnerable.

  • Social engineering was probably the key that got the criminals into Odido’s systems. They used the login credentials of customer service employees, NOS reports, which were obtained through phishing. They bypassed MFA by calling the employees and persuading them to approve the login attempt.

  • For everyone affected, the main advice is to stay alert. With all this data, criminals can come across as far more credible when they try to defraud people. Think of unexpected phone calls, emails, or messages in which they pose as Odido, a bank, or another institution. When a criminal can quote your passport number and your IBAN, you are more inclined to trust the conversation. So pay attention. Above all, enable two-step verification wherever you can and watch your accounts for suspicious activity.

  • Telecom providers are frequent targets of cyberattacks. In 2024, the data of 24 million subscribers of the French Free SAS and Free Mobile leaked after a cyberattack. And last year much the same happened at South Korea’s SK Telecom: data of 27 million customers leaked. It later emerged that the attackers had been inside SK Telecom’s systems for three years. Painful, but also expensive: because of all the costs resulting from the attack, operating profit in the third quarter of 2025 fell by 90 percent.

Offlimits wants a ban on nudify tools

  • More than a hundred international organizations are calling for a worldwide ban on AI undressing apps. Expertise center Offlimits and the city of Amsterdam have signed the manifesto.

  • The images created with these kinds of apps claim many victims. They are used for extortion, bullying, and the abuse of minors. And victims often suffer serious psychological consequences.

  • According to Offlimits, it is one of the fastest-growing forms of abuse. The number of apps grew to around 47 in a short time, RTL reported last year. And the number of reports rose by no less than 260 percent in a year, Offlimits director Robbert Hoving told NOS.

At RTL Nieuws, victim Lisa shared her harrowing story:

Lisa’s stepfather made fake nude photos of her: this is how easy it is

  • Legislation does offer some options, but especially with international apps and websites it becomes a lot more complicated because of differing jurisdictions. Still, all the attention for this problem could prove to be a catalyst. From misuse of AI tools to disinformation: we now understand that deploying this technology in the wrong way is a real safety risk in the physical world. With one click, AI and the reach of the internet can completely destroy a life. That is why it is vitally important that we make this subject discussable, in schools, with young people, and with parents.

No Olympic gold for hackers

  • The stream of gold, silver, and bronze medals for Dutch athletes is getting nicely under way. And while stars like Jutta Leerdam, Jens van ’t Wout, and Xandra Velzeboer shine, cybercriminals are also hunting for gold, sort of.

  • Italy has already managed to prevent several cyberattacks aimed at offices of the Ministry of Foreign Affairs, hotels, and websites of the event, news agency AP reports.

  • That is exactly what researchers at Palo Alto Networks warned about this week. State-sponsored hackers, cybercriminals, and hacktivists: all of them are zooming in on the Games. And that is no surprise. Major international events inevitably draw global attention. Attackers who are after attention, whether activist, criminal, or politically motivated, know that a successful breach linked to the Games will be amplified worldwide.


  • Earlier editions of the Olympic Games have already faced cyber incidents, such as the cyberattack during the opening ceremony of the 2018 Winter Games in Pyeongchang, South Korea, or the roughly 450 million attack attempts around the Tokyo Summer Games.

  • It shows that these are no longer theoretical risks. These cyber threats are active long before the first whistle or starting gun has sounded. Something host countries have to take into account and tune their defensive measures to. And fortunately, they do.

  • Sports fans should do the same. Scammers play on our interest and on the sense of urgency and wanting to be part of it. Think of Olympic-themed phishing emails, fake tickets in circulation, and accommodation scams via platforms such as Airbnb.

See you next week! Want to respond? Hit reply. Want to share? Forward this or send the link to your friends and acquaintances.

-H.

Harm Teunis | Cybersecurity Evangelist | Maker of the podcast Het Digitale Front | Speaker

Phishing on the Edge of the Web and Mobile Using QR Codes

We discuss the extensive use of malicious QR codes using URL shorteners, in-app deep links and direct APK downloads to bypass mobile security.

The post Phishing on the Edge of the Web and Mobile Using QR Codes appeared first on Unit 42.

The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs

13 February 2026 at 20:09


In this post, we explore how the psychological traps of operational security can unmask even the most sophisticated actors.


The threat intelligence landscape is often dominated by talk of sophisticated TTPs (tactics, techniques, and procedures), zero-day vulnerabilities, and ransomware. While these technical threats are formidable, they are still managed by human beings, and it is the human element that often provides the most critical breakthroughs in attributing these attacks and de-anonymizing the threat actors behind them.

In our latest webinar, “OPSEC Fails: The Secret Weapon for People-Centric OSINT”,  Flashpoint was joined by Joshua Richards, founder of OSINT Praxis. Josh shared an intriguing case study where an attacker’s digital breadcrumbs led to a life-saving intervention. 

Here is how OSINT techniques, leveraged by Flashpoint’s expansive data capabilities, can dismantle illegal threat actor campaigns by turning a technical investigation into a human one.

Leveraging OPSEC as a Mindset

In a technical context, OPSEC is a risk management process that identifies seemingly innocuous pieces of information that, when gathered by an adversary, could be pieced together to reveal a larger, sensitive picture.

In the webinar, we break down the OPSEC mindset into three core pillars that every practitioner, and threat actor, must navigate. When these pillars fail, the investigation begins.

  • Analyzing the Signature: Every human has a digital signature, such as the way they type (stylometry), the times they are active, and the tools they prefer.
  • Identity Masking & Persona Management: This involves ensuring that your investigative identity has zero overlap with your real life. A common failure includes using the same browser for personal use and investigative research, which allows cookies to bridge the two identities.
  • Traffic Obfuscation: Even with a VPN, certain behaviors such as posting on a dark web forum and then using that same connection to check personal banking can expose an IP address, linking it to a practitioner or threat actor.

“Effective OPSEC isn’t about the tools you use; it’s about what breadcrumbs you are leaving behind that hackers, investigation subjects, or literally anyone could find about you.”

Joshua Richards, founder of OSINT Praxis
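The stylometric signature mentioned in the first pillar can be made concrete with a toy sketch: compare two writing samples by the overlap of their character trigram frequencies. Real stylometry uses far richer features; this is only an illustration of the idea that "the way you type" is measurable:

```python
from collections import Counter
import math

def trigram_profile(text):
    """Count overlapping 3-character sequences in a text sample."""
    t = text.lower()
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def similarity(a, b):
    """Cosine similarity between two trigram profiles (0.0 to 1.0)."""
    pa, pb = trigram_profile(a), trigram_profile(b)
    dot = sum(pa[g] * pb[g] for g in set(pa) & set(pb))
    na = math.sqrt(sum(v * v for v in pa.values()))
    nb = math.sqrt(sum(v * v for v in pb.values()))
    return dot / (na * nb) if na and nb else 0.0
```

Two forum posts by the same persona tend to score higher against each other than against unrelated text, which is exactly the kind of breadcrumb that links a "clean" alias back to a careless one.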

Leveraging the Mindset for CTI

Understanding the OPSEC mindset allows security teams to think like the target. When we know the psychological traps attackers fall into, we know exactly where to look for their mistakes.

Assumption | The Mindset Trap | The Investigative Reality
Insignificant | “I’m not a high-value target; no one is looking for me.” | Automated Aggression: Hackers use scripts to scan millions of accounts. You aren’t “chosen”; you are “discovered” via automation.
Invisible | “I don’t have a LinkedIn or X account, so I don’t have a footprint.” | Shadow Data: Public birth records, property taxes, and historical data breaches create a footprint you didn’t even build yourself.
Invincible | “I have 2FA and complex passwords; I’m unhackable.” | Session Hijacking: Infostealer malware steals “session tokens” (cookies). This allows an actor to be you in a browser without ever needing your 2FA code.

During the webinar, Joshua shares a masterclass in how leveraging these concepts can turn a vague dark web threat into a real-world arrest. Check out the on-demand webinar to see exactly how the investigation started on Torum, a dark web forum, and ended with an arrest that saved the lives of two individuals.

Turn the Tables Using Flashpoint

The insights shared in this session powerfully illustrate that even the most dangerous threat actors are rarely as anonymous as they believe. Their downfall isn’t usually a failure of their technical prowess, but a failure of their mindset. By understanding these OSINT techniques, intelligence practitioners can transform a sea of digital noise into a clear path toward attribution.

The most effective way to dismantle threats is to bridge the gap between technical indicators and human behavior. Whether your teams are conducting high-stakes OSINT or protecting your own organization’s digital footprint, every breadcrumb counts. By leveraging Flashpoint’s expansive threat intelligence collections and real-time data, you can stay one step ahead of adversaries. Request a demo to learn more.

Request a demo today.

The post The Human Element: Turning Threat Actor OPSEC Fails into Investigative Breakthroughs appeared first on Flashpoint.

How to find and remove credential-stealing Chrome extensions

13 February 2026 at 14:27

Researchers have found yet another family of malicious extensions in the Chrome Web Store. This time, 30 different Chrome extensions were found stealing credentials from more than 260,000 users.

The extensions rendered a full-screen iframe pointing to a remote domain. This iframe overlaid the current webpage and visually appeared as the extension’s interface. Because this functionality was hosted remotely, it was not included in the review that allowed the extensions into the Web Store.

In other recent findings, we reported about extensions spying on ChatGPT chats, sleeper extensions that monitored browser activity, and a fake extension that deliberately caused a browser crash.

To hedge against detection and takedowns, the attackers used a technique known as “extension spraying.” This means they used different names and unique identifiers for what is basically the same extension.

What often happens is that researchers provide a list of extension names and IDs, and it’s up to users to figure out whether they have one of these extensions installed.

Searching by name is easy when you open your “Manage extensions” tab, but unfortunately extension names are not unique. You could, for example, have the legitimate extension installed that a criminal tried to impersonate.

Searching by unique identifier

For Chrome and Edge, a browser extension ID is a unique 32‑character string of lowercase letters (a through p) that stays the same even if the extension is renamed or reshipped.

When we’re looking at the extensions from a removal angle, there are two kinds: those installed by the user, and those force‑installed by other means (network admin, malware, Group Policy Object (GPO), etc.).

We will only look at the first type in this guide—the ones users installed themselves from the Web Store. The guide below is aimed at Chrome, but it’s almost the same for Edge.

How to find installed extensions

You can review the installed Chrome extensions like this:

  • In the address bar type chrome://extensions/.
  • This will open the Extensions tab and show you the installed extensions by name.
  • Now toggle Developer mode to on and you will also see their unique ID.
Extensions tab showing Malwarebytes Browser Guard
Don’t remove this one. It’s one of the good ones.

Removal method in the browser

Use the Remove button to get rid of any unwanted entries.

If it disappears and stays gone after restart, you’re done. If there is no Remove button or Chrome says it’s “Installed by your administrator,” or the extension reappears after a restart, there’s a policy, registry entry, or malware forcing it.

Alternative

Alternatively, you can also search the Extensions folder. On Windows systems this folder lives here: C:\Users\<your‑username>\AppData\Local\Google\Chrome\User Data\Default\Extensions.

Please note that the AppData folder is hidden by default. To unhide files and folders in Windows, open Explorer, click the View tab (or menu), and check the Hidden items box. For more advanced options, choose Options > Change folder and search options > View tab, then select Show hidden files, folders, and drives.

Chrome extensions folder
Chrome extensions folder

You can organize the list alphabetically by clicking on the Name column header once or twice. This makes it easier to find extensions if you have a lot of them installed.

Deleting the extension folder here has one downside. It leaves an orphaned entry in your browser. When you start Chrome again after doing this, the extension will no longer load because its files are gone. But it will still show up in the Extensions tab, only without the appropriate icon.

So, our advice is to remove extensions in the browser when possible.

Malicious extensions

Below is the list of credential-stealing extensions using the iframe method, as provided by the researchers.

Extension ID | Extension name
acaeafediijmccnjlokgcdiojiljfpbe | ChatGPT Translate
baonbjckakcpgliaafcodddkoednpjgf | XAI
bilfflcophfehljhpnklmcelkoiffapb | AI For Translation
cicjlpmjmimeoempffghfglndokjihhn | AI Cover Letter Generator
ckicoadchmmndbakbokhapncehanaeni | AI Email Writer
ckneindgfbjnbbiggcmnjeofelhflhaj | AI Image Generator Chat GPT
cmpmhhjahlioglkleiofbjodhhiejhei | AI Translator
dbclhjpifdfkofnmjfpheiondafpkoed | Ai Wallpaper Generator
djhjckkfgancelbmgcamjimgphaphjdl | AI Sidebar
ebmmjmakencgmgoijdfnbailknaaiffh | Chat With Gemini
ecikmpoikkcelnakpgaeplcjoickgacj | Ai Picture Generator
fdlagfnfaheppaigholhoojabfaapnhb | Google Gemini
flnecpdpbhdblkpnegekobahlijbmfok | ChatGPT Picture Generator
fnjinbdmidgjkpmlihcginjipjaoapol | Email Generator AI
fpmkabpaklbhbhegegapfkenkmpipick | Chat GPT for Gmail
fppbiomdkfbhgjjdmojlogeceejinadg | Gemini AI Sidebar
gcfianbpjcfkafpiadmheejkokcmdkjl | Llama
gcdfailafdfjbailcdcbjmeginhncjkb | Grok Chatbot
gghdfkafnhfpaooiolhncejnlgglhkhe | AI Sidebar
gnaekhndaddbimfllbgmecjijbbfpabc | Ask Gemini
gohgeedemmaohocbaccllpkabadoogpl | DeepSeek Chat
hgnjolbjpjmhepcbjgeeallnamkjnfgi | AI Letter Generator
idhknpoceajhnjokpnbicildeoligdgh | ChatGPT Translation
kblengdlefjpjkekanpoidgoghdngdgl | AI GPT
kepibgehhljlecgaeihhnmibnmikbnga | DeepSeek Download
lodlcpnbppgipaimgbjgniokjcnpiiad | AI Message Generator
llojfncgbabajmdglnkbhmiebiinohek | ChatGPT Sidebar
nkgbfengofophpmonladgaldioelckbe | Chat Bot GPT
nlhpidbjmmffhoogcennoiopekbiglbp | AI Assistant
phiphcloddhmndjbdedgfbglhpkjcffh | Asking Chat Gpt
pgfibniplgcnccdnkhblpmmlfodijppg | ChatGBT
cgmmcoandmabammnhfnjcakdeejbfimn | Grok
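Because each subfolder of Chrome's Extensions directory is named after the extension's ID, you can also check a profile programmatically. A minimal sketch (pass the profile's Extensions path yourself; only a few IDs from the list above are included here for illustration):

```python
import re
from pathlib import Path

# Illustrative subset of the flagged IDs from the table above.
BLOCKLIST = {
    "acaeafediijmccnjlokgcdiojiljfpbe",  # ChatGPT Translate
    "baonbjckakcpgliaafcodddkoednpjgf",  # XAI
    "cgmmcoandmabammnhfnjcakdeejbfimn",  # Grok
}

# Chrome/Edge extension IDs are 32 lowercase letters a-p.
ID_RE = re.compile(r"^[a-p]{32}$")

def find_flagged(extensions_dir, blocklist=BLOCKLIST):
    """Return installed extension IDs that appear on the blocklist."""
    root = Path(extensions_dir)
    installed = {p.name for p in root.iterdir()
                 if p.is_dir() and ID_RE.match(p.name)}
    return sorted(installed & set(blocklist))
```

If the script reports a match, still do the actual removal through the browser's Extensions tab, as advised above, to avoid leaving an orphaned entry.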

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Unzipping the Threat: How to Block Malware Hidden in Password-Protected ZIP Files

13 February 2026 at 13:00

As malware evades detection by hiding inside password-protected ZIP files, new Threat Emulation capabilities enable inspecting and blocking malicious ZIP files without requiring their password. As cyber defenses evolve, so do attacker tactics. One of the most persistent evasion techniques in the wild involves embedding malware inside password-protected ZIP files, making it difficult for traditional security tools to inspect their content.

The Challenge: Breaking the Password Delivery Chain

Attackers have adapted. Their new strategy is splitting the delivery path: the malicious ZIP file is sent via email, while the password arrives through an out-of-band channel, often SMS or messaging apps. This multi-channel […]
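This is not how Check Point's Threat Emulation works internally, but the coarse first signal, flagging archives that contain encrypted members for deeper scrutiny, can be sketched with Python's standard zipfile module:

```python
import zipfile

def has_encrypted_entries(path):
    """Return True if the ZIP archive contains any encrypted member.

    Bit 0 of a member's flag_bits is set when the entry is encrypted,
    meaning a gateway cannot inspect it without the password; a simple
    mail-filter policy might quarantine such attachments for review."""
    with zipfile.ZipFile(path) as zf:
        return any(info.flag_bits & 0x1 for info in zf.infolist())
```

Note that the central directory (file names and flags) of a classic password-protected ZIP is readable without the password, which is what makes this check possible at the gateway.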

The post Unzipping the Threat: How to Block Malware Hidden in Password-Protected ZIP Files appeared first on Check Point Blog.

Fake shops target Winter Olympics 2026 fans

13 February 2026 at 10:00

If you’ve seen the two stoat siblings serving as official mascots of the Milano Cortina 2026 Winter Olympics, you already know Tina and Milo are irresistible.

Designed by Italian schoolchildren and chosen from more than 1,600 entries in a public poll, the duo has already captured hearts worldwide. So much so that the 27 cm Tina plush toy on the official Olympics web shop is listed at €40 and currently marked out of stock.

Tina and Milo are in huge demand, and scammers have noticed.

When supply runs out, scam sites rush in

In roughly the past week alone, we’ve identified nearly 20 lookalike domains designed to imitate the official Olympic merchandise store.

These aren’t crude copies thrown together overnight. The sites use the same polished storefront template, complete with promotional videos and background music designed to mirror the official shop.olympics.com experience.

Fake site offering Tina at a huge discount
Fake site offering Tina at a huge discount
Real Olympic site showing Tina out of stock
Real Olympic site showing Tina out of stock

The layout and product pages are the same—the only thing that changes is the domain name. At a quick glance, most people wouldn’t notice anything unusual.

Here’s a sample of the domains we’ve been tracking:

2026winterdeals[.]top
olympics-save[.]top
olympics2026[.]top
postolympicsale[.]com
sale-olympics[.]top
shopolympics-eu[.]top
winter0lympicsstore[.]top (note the zero replacing the letter “o”)
winterolympics[.]top
2026olympics[.]shop
olympics-2026[.]shop
olympics-2026[.]top
olympics-eu[.]top
olympics-hot[.]shop
olympics-hot[.]top
olympics-sale[.]shop
olympics-sale[.]top
olympics-top[.]shop
olympics2026[.]store
olympics2026[.]top

Based on telemetry, additional registrations are actively emerging.

Reports show users checking these domains from multiple regions including Ireland, the Czech Republic, the United States, Italy, and China—suggesting this is a global campaign targeting fans worldwide.

Malwarebytes blocks these domains as scams.

Anatomy of a fake Olympic shop

The fake sites are practically identical. Each one loads the same storefront, with the same layout, product pages, and promotional banners.

That’s usually a sign the scammers are using a ready-made template and copying it across multiple domains. One obvious giveaway, however, is the pricing.

On the official store, the Tina plush costs €40 and is currently out of stock. On the fake sites, it suddenly reappears at a hugely discounted price—in one case €20, with banners shouting “UP & SAVE 80%.” When an item is sold out everywhere official and a random .top domain has it for half price, you’re looking at bait.

The goal of these sites typically includes:

  • Stealing payment card details entered at checkout
  • Harvesting personal information such as names, addresses, and phone numbers
  • Sending follow-up phishing emails
  • Delivering malware through fake order confirmations or “tracking” links
  • Taking your money and shipping nothing at all

The Olympics are a scammer’s playground

This isn’t the first time cybercriminals have piggybacked on Olympic fever. Fake ticket sites proliferated as far back as the Beijing 2008 Games. During Paris 2024, analysts observed significant spikes in Olympics-themed phishing and DDoS activity.

The formula is simple. Take a globally recognized brand, add urgency and emotional appeal (who doesn’t want an adorable stoat plush for their kid?), mix in limited availability, and serve it up on a convincing-looking website. With over 3 billion viewers expected for Milano Cortina, the pool of potential victims is enormous.

Scammers are getting smarter. AI-powered tools now let them generate convincing phishing pages in multiple languages at scale. The days of spotting a scam by its broken images and multiple typos are fading fast.

Protect yourself from Winter Olympics scams

As excitement builds ahead of the Winter Olympics in Milano Cortina, expect scammers to ramp up their efforts across fake shops, fraudulent ticket sites, bogus livestreams, and social media phishing campaigns.

  • Buy only from shop.olympics.com. Type the address directly into your browser and bookmark it. Don’t click links from ads or emails.
  • Don’t trust extreme discounts. If it’s sold out officially but “50–80% off” elsewhere, it’s likely a scam.
  • Check the domain closely. Watch for odd extensions like .top or .shop, extra hyphens, or letter swaps like “winter0lympicsstore.”
  • Never enter payment details on unfamiliar sites. If something feels off, leave immediately.
  • Use browser protection. Tools like Malwarebytes Browser Guard block known scam sites in real time, for free. Scam Guard can help you check suspicious websites before you buy.
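The giveaways above (odd TLDs, letter swaps like the zero in winter0lympicsstore, hyphenated brand names) can be rolled into a toy heuristic. This is illustrative only; a pass here proves nothing about a site's legitimacy:

```python
RISKY_TLDS = {"top", "shop", "store"}
SUBSTITUTIONS = {"0": "o", "1": "l", "3": "e"}  # common digit-for-letter swaps

def looks_suspicious(domain, brand="olympics"):
    """Flag lookalike shop domains: brand name on an unofficial host,
    combined with a risky TLD or a hyphenated name."""
    name, _, tld = domain.rpartition(".")
    # Undo swaps like "winter0lympicsstore" -> "winterolympicsstore"
    normalized = "".join(SUBSTITUTIONS.get(c, c) for c in name)
    brand_abuse = brand in normalized and domain != f"shop.{brand}.com"
    return brand_abuse and (tld in RISKY_TLDS or "-" in name)
```

Heuristics like this catch the bulk of the campaign's template-cloned domains, but blocklists maintained by security vendors remain the more reliable protection.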

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Top 10 actions to build agents securely with Microsoft Copilot Studio

Organizations are rapidly adopting Copilot Studio agents, but threat actors are equally fast at exploiting misconfigured AI workflows. Mis-sharing, unsafe orchestration, and weak authentication create new identity and data‑access paths that traditional controls don’t monitor. As AI agents become integrated into operational systems, exposure becomes both easier and more dangerous. Understanding and detecting these misconfigurations early is now a core part of AI security posture.

Copilot Studio agents are becoming a core part of business workflows: automating tasks, accessing data, and interacting with systems at scale.

That power cuts both ways. In real environments, we repeatedly see small, well‑intentioned configuration choices turn into security gaps: agents shared too broadly, exposed without authentication, running risky actions, or operating with excessive privileges. These issues rarely look dangerous until they are abused.

If you want to find and stop these risks before they turn into incidents, this post is for you. We break down ten common Copilot Studio agent misconfigurations we observe in the wild and show how to detect them using Microsoft Defender and Advanced Hunting via the relevant Community Hunting Queries.

Short on time? Start with the table below. It gives you a one‑page view of the risks, their impact, and the exact detections that surface them. If something looks familiar, jump straight to the relevant scenario and mitigation.

Each section then dives deeper into a specific risk and recommended mitigations, so you can move from awareness to action, fast.

# | Misconfiguration & Risk | Security Impact | Advanced Hunting Community Queries (go to: Security portal > Advanced hunting > Queries > Community Queries > AI Agent folder)
1 | Agent shared with entire organization or broad groups | Unintended access, misuse, expanded attack surface | AI Agents – Organization or Multi‑tenant Shared
2 | Agents that do not require authentication | Public exposure, unauthorized access, data leakage | AI Agents – No Authentication Required
3 | Agents with HTTP Request actions using risky configurations | Governance bypass, insecure communications, unintended API access | AI Agents – HTTP Requests to connector endpoints; AI Agents – HTTP Requests to non‑HTTPS endpoints; AI Agents – HTTP Requests to non‑standard ports
4 | Agents capable of email‑based data exfiltration | Data exfiltration via prompt injection or misconfiguration | AI Agents – Sending email to AI‑controlled input values; AI Agents – Sending email to external mailboxes
5 | Dormant connections, actions, or agents | Hidden attack surface, stale privileged access | AI Agents – Published Dormant (30d); AI Agents – Unpublished Unmodified (30d); AI Agents – Unused Actions; AI Agents – Dormant Author Authentication Connection
6 | Agents using author (maker) authentication | Privilege escalation, separation‑of‑duties bypass | AI Agents – Published Agents with Author Authentication; AI Agents – MCP Tool with Maker Credentials
7 | Agents containing hard‑coded credentials | Credential leakage, unauthorized system access | AI Agents – Hard‑coded Credentials in Topics or Actions
8 | Agents with Model Context Protocol (MCP) tools configured | Undocumented access paths, unintended system interactions | AI Agents – MCP Tool Configured
9 | Agents with generative orchestration lacking instructions | Prompt abuse, behavior drift, unintended actions | AI Agents – Published Generative Orchestration without Instructions
10 | Orphaned agents (no active owner) | Lack of governance, outdated logic, unmanaged access | AI Agents – Orphaned Agents with Disabled Owners
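Beyond the Advanced Hunting queries, some of these risks can also be caught in review pipelines before an agent is published. For risk #7 (hard-coded credentials), here is a generic sketch that scans an exported agent definition treated as plain text; the patterns are illustrative and are not Microsoft's detection logic:

```python
import re

# Illustrative credential patterns, not an exhaustive or official list.
CRED_PATTERNS = [
    re.compile(r"(?i)\b(password|passwd|secret|api[_-]?key)\b\s*[:=]\s*\S+"),
    re.compile(r"\bBearer\s+[A-Za-z0-9._\-]{20,}"),
]

def find_hardcoded_credentials(definition_text):
    """Return (line_number, line) pairs that look like hard-coded
    credentials in an agent's topics or actions exported as text."""
    hits = []
    for lineno, line in enumerate(definition_text.splitlines(), 1):
        for pat in CRED_PATTERNS:
            if pat.search(line):
                hits.append((lineno, line.strip()))
                break
    return hits
```

Wiring a scan like this into the export/review step gives makers immediate feedback, while the hunting query remains the safety net for agents that slip through.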

Top 10 risks you can detect and prevent

Imagine this scenario: A help desk agent is created in your organization with simple instructions.

The maker, someone from the support team, connects it to an organizational Dataverse using an MCP tool, so it can pull relevant customer information from internal tables and provide better answers. So far, so good.

Then the maker decides, on their own, that the agent doesn’t need authentication. After all, it’s only shared internally, and the data belongs to employees anyway (See example in Figure 1). That might already sound suspicious to you. But it doesn’t to everyone.

You might be surprised how often agents like this exist in real environments and how rarely security teams get an active signal when they’re created. No alert. No review. Just another helpful agent quietly going live.

Now here’s the question: Out of the 10 risks described in this article, how many do you think are already present in this simple agent?

The answer comes at the end of the blog.

Figure 1 – Example Help Desk agent.

1: Agent shared with the entire organization or broad groups

Sharing an agent with your entire organization or broad security groups exposes its capabilities without proper access boundaries. While convenient, this practice expands the attack surface. Users unfamiliar with the agent’s purpose might unintentionally trigger sensitive actions, and threat actors with minimal access could use the agent as an entry point.

In many organizations, this risk occurs because broad sharing is fast and easy, often lacking controls to ensure only the right users have access. This results in agents being visible to everyone, including users with unrelated roles or inappropriate permissions. This visibility increases the risk of data exposure, misuse, and unintended activation of sensitive connectors or actions.

2: Agents that do not require authentication

Agents that you can access without authentication, or that only prompt for authentication on demand, create a significant exposure point. When an agent is publicly reachable or unauthenticated, anyone with the link can use its capabilities. Even if the agent appears harmless, its topics, actions, or knowledge sources might unintentionally reveal internal information or allow interactions that were never intended for public access.

This gap appears because authentication was deactivated for testing, left in its default state, or misunderstood as optional. The result is an agent that behaves like a public entry point into organizational data or logic. Without proper controls, this creates a risk of data leakage, unintended actions, and misuse by external or anonymous users.

3: Agents with HTTP request action with risky configurations

Agents that perform direct HTTP requests introduce unique risks, especially when those requests target non-standard ports, insecure schemes, or sensitive services that already have built-in Power Platform connectors. These patterns often bypass the governance, validation, throttling, and identity controls that connectors provide. As a result, they can expose the organization to misconfigurations, information disclosure, or unintended privilege escalation.

These configurations often appear unintentionally. A maker might copy a sample request, test an internal endpoint, or use HTTP actions for flexibility and convenience during testing. Without proper review, this can lead to agents issuing unsecured calls over HTTP or invoking critical Microsoft APIs directly through URLs instead of secured connectors. Each of these behaviors represents an opportunity for misuse or accidental exposure of organizational data.
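
As an illustration of what such a review could check mechanically, the sketch below (not part of any Power Platform tooling) flags HTTP action targets that use insecure schemes or non-standard ports. The accepted scheme and port set are assumptions a reviewer would tune to policy.

```python
from urllib.parse import urlsplit

# Illustrative sketch only: the accepted scheme and port set are assumptions.
STANDARD_HTTPS_PORTS = {None, 443}  # None means the default port

def flag_risky_target(url):
    """Return the reasons an HTTP action target looks risky, if any."""
    parts = urlsplit(url)
    reasons = []
    if parts.scheme != "https":
        reasons.append("insecure scheme")
    if parts.port not in STANDARD_HTTPS_PORTS:
        reasons.append("non-standard port")
    return reasons

print(flag_risky_target("http://internal-host:8080/api"))
# ['insecure scheme', 'non-standard port']
print(flag_risky_target("https://graph.microsoft.com/v1.0/me"))  # []
```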

4: Agents capable of email-based data exfiltration

Agents that send emails using dynamic or externally controlled inputs present a significant risk. When an agent uses generative orchestration to send email, the orchestrator determines the recipient and message content at runtime. In a successful cross-prompt injection (XPIA) attack, a threat actor could instruct the agent to send internal data to external recipients.

A similar risk exists when an agent is explicitly configured to send emails to external domains. Even for legitimate business scenarios, unaudited outbound email can allow sensitive information to leave the organization. Because email is an immediate outbound channel, any misconfiguration can lead to unmonitored data exposure.

Many organizations create this gap unintentionally. Makers often use email actions for testing, notifications, or workflow automation without restricting recipient fields. Without safeguards, these agents can become exfiltration channels for any user who triggers them or for a threat actor exploiting generative orchestration paths.
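
One common safeguard is a recipient allowlist evaluated before any send action fires. The sketch below is illustrative only; the approved domain list and the validation rule are assumptions, not Copilot Studio behavior.

```python
# Illustrative allowlist check; the approved domains are placeholders.
ALLOWED_DOMAINS = {"contoso.com"}

def recipient_allowed(address):
    """Allow only well-formed addresses whose domain is approved."""
    local, sep, domain = address.rpartition("@")
    return bool(sep and local) and domain.lower() in ALLOWED_DOMAINS

print(recipient_allowed("alice@contoso.com"))      # True
print(recipient_allowed("attacker@evil.example"))  # False
```

Checks like this are most valuable when the recipient is chosen dynamically by the orchestrator, because that is exactly the path a cross-prompt injection would abuse.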

5: Dormant connections, actions, or agents within the organization

Dormant agents and unused components might seem harmless, but they can create significant organizational risk. Unmonitored entry points often lack active ownership. These include agents that haven’t been invoked for weeks, unpublished drafts, or actions using Maker authentication. When these elements stay in your environment without oversight, they might contain outdated logic or sensitive connections that don’t meet current security standards.

Dormant assets are especially risky because they often fall outside normal operational visibility. While teams focus on active agents, older configurations are easily forgotten. Threat actors frequently target exactly these blind spots. For example:

  • A published but unused agent can still be called.
  • A dormant maker-authenticated action might trigger elevated operations.
  • Unused actions in classic orchestration can expose sensitive connectors if they are activated.

Without proper governance, these artifacts remain exploitable long after anyone remembers why they exist.
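
A periodic dormancy sweep can be sketched as follows. The 60-day threshold and the inventory fields are assumptions; in practice the signal would come from audit or telemetry data.

```python
from datetime import datetime, timedelta, timezone

DORMANCY_THRESHOLD = timedelta(days=60)  # assumed org-chosen cutoff

def find_dormant(agents, now):
    """Return agents never invoked, or not invoked within the threshold."""
    dormant = []
    for agent in agents:
        last = agent.get("last_invoked")
        if last is None or now - last > DORMANCY_THRESHOLD:
            dormant.append(agent["name"])
    return dormant

now = datetime(2026, 2, 1, tzinfo=timezone.utc)
inventory = [
    {"name": "LegacyFAQ", "last_invoked": datetime(2025, 10, 1, tzinfo=timezone.utc)},
    {"name": "HelpDesk", "last_invoked": datetime(2026, 1, 20, tzinfo=timezone.utc)},
    {"name": "DraftAgent"},  # never published or invoked
]
print(find_dormant(inventory, now))  # ['LegacyFAQ', 'DraftAgent']
```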

6: Agents using author authentication

When agents use the maker’s personal authentication, they act on behalf of the creator rather than the end user. In this configuration, every user of the agent inherits the maker’s permissions. If those permissions include access to sensitive data, privileged operations, or high impact connectors, the agent becomes a path for privilege escalation.

This exposure often happens unintentionally. Makers might allow author authentication for convenience during development or testing because it is the default setting of certain tools. However, once published, the agent continues to run with elevated permissions even when invoked by regular users. In more severe cases, Model Context Protocol (MCP) tools configured with maker credentials allow threat actors to trigger operations that rely directly on the creator’s identity.

Author authentication weakens separation of duties and bypasses the principle of least privilege. It also increases the risk of credential misuse, unauthorized data access, and unintended lateral movement.

7: Agents containing hard-coded credentials

Agents that contain hard-coded credentials inside topics or actions introduce a severe security risk. Clear-text secrets embedded directly in agent logic can be read, copied, or extracted by unintended users or automated systems. This often occurs when makers paste API keys, authentication tokens, or connection strings during development or debugging, and the values remain embedded in the production configuration. Such credentials can expose access to external services, internal systems, or sensitive APIs, enabling unauthorized access or lateral movement.

Beyond the immediate leakage risk, hard-coded credentials bypass the standard enterprise controls normally applied to secure secret storage. They are not rotated, not governed by Key Vault policies, and not protected by environment variable isolation. As a result, even basic visibility into agent definitions may expose valuable secrets.
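
A lightweight review step is to scan exported agent definitions for credential-like strings. The patterns below are deliberately simple illustrations; production secret scanners use far broader rule sets.

```python
import re

# Illustrative patterns only; real secret scanners cover many more formats.
SECRET_PATTERNS = [
    re.compile(r"(?i)(?:api[_-]?key|token|password)\s*[:=]\s*\S+"),
    re.compile(r"Bearer\s+[A-Za-z0-9._\-]{20,}"),
]

def scan_for_secrets(text):
    """Return substrings that look like embedded credentials."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(match.group(0) for match in pattern.finditer(text))
    return hits

topic_definition = "Call the billing API with api_key = sk-test-EXAMPLE-ONLY"
print(scan_for_secrets(topic_definition))  # ['api_key = sk-test-EXAMPLE-ONLY']
```

Any hit should be rotated immediately and moved to managed secret storage, since the value must be assumed compromised once it has lived in clear text.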

8: Agents with Model Context Protocol (MCP) tools configured

AI agents that include Model Context Protocol (MCP) tools provide a powerful way to integrate with external systems or run custom logic. However, if these MCP tools aren’t actively maintained or reviewed, they can introduce undocumented access patterns into the environment.

This risk arises when MCP configurations are:

  • Activated by default
  • Copied between agents
  • Left active after the original integration is no longer needed

Unmonitored MCP tools might expose capabilities that exceed the agent’s intended purpose. This is especially true if they allow access to privileged operations or sensitive data sources. Without regular oversight, these tools can become hidden entry points through which users or threat actors trigger unintended system interactions.

9: Agents with generative orchestration lacking instructions

AI agents that use generative orchestration without defined instructions face a high risk of unintended behavior. Instructions are the primary way to align a generative model with its intended purpose. If instructions are missing, incomplete, or misconfigured, the orchestrator lacks the context needed to limit its output. This makes the agent more vulnerable to influence from user inputs or hostile prompts.

A lack of guidance can cause an agent to:

  • Drift from its expected behaviors. The agent might not follow its intended logic.
  • Use unexpected reasoning. The model might follow logic paths that don’t align with business needs.
  • Interact with connected systems in unintended ways. The agent might trigger actions that were never planned.

For organizations that need predictable and safe behavior, missing instructions are a significant configuration gap.
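
A posture check for this gap can be as simple as flagging generative agents whose instruction text is empty or too short to meaningfully constrain behavior. The 40-character floor and the agent fields below are arbitrary illustrative choices, not product defaults.

```python
MIN_INSTRUCTION_LENGTH = 40  # arbitrary illustrative threshold

def lacks_instructions(agent):
    """Flag generative-orchestration agents with missing or thin instructions."""
    if agent.get("orchestration") != "generative":
        return False
    text = (agent.get("instructions") or "").strip()
    return len(text) < MIN_INSTRUCTION_LENGTH

agent = {"name": "HelpDesk", "orchestration": "generative", "instructions": ""}
print(lacks_instructions(agent))  # True
```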

10: Orphaned agents

Orphaned agents are agents whose owners have left the organization or whose accounts have been deactivated. Without a valid owner, no one is responsible for oversight, maintenance, updates, or lifecycle management. These agents might continue to run, interact with users, or access data without an accountable individual ensuring the configuration remains secure.

Because ownerless agents bypass standard review cycles, they often contain outdated logic, deprecated connections, or sensitive access patterns that don’t align with current organizational requirements.

Remember the help desk agent we started with? That simple agent setup quietly checked off more than half of the risks in this list.

Keep reading, and run the Advanced Hunting queries in the AI Agents folder to find agents carrying these risks in your own environment before it’s too late.

Figure 2: The example Help Desk agent was detected by a query for unauthenticated agents.

From findings to fixes: A practical mitigation playbook

The 10 risks described above manifest in different ways, but they consistently stem from a small set of underlying security gaps: over‑exposure, weak authentication boundaries, unsafe orchestration, and missing lifecycle governance.

Figure 3 – Underlying security gaps.

Damage doesn’t begin with the attack. It starts when risks are left untreated.

The section below is a practical checklist of validations and actions that help close common agent security gaps before they’re exploited. Read it once, apply it consistently, and save yourself the cost of cleaning up later. Fixing security debt is always more expensive than preventing it.

1. Verify intent and ownership

Before changing configurations, confirm whether the agent’s behavior is intentional and still aligned with business needs.

  • Validate the business justification for broad sharing, public access, external communication, or elevated permissions with the agent owner.
  • Confirm whether agents without authentication are explicitly designed for public use and whether this aligns with organizational policy.
  • Review agent topics, actions, and knowledge sources to ensure no internal, sensitive, or proprietary information is exposed unintentionally.
  • Ensure every agent has an active, accountable owner. Reassign ownership for orphaned agents or retire agents that no longer have a clear purpose. For step-by-step instructions, see Microsoft Copilot Studio: Agent ownership reassignment.
  • Validate whether dormant agents, connections, or actions are still required, and decommission those that are not.
  • Perform periodic reviews for agents and establish a clear organizational policy for agent creation. For more information, see Configure data policies for agents.

2. Reduce exposure and tighten access boundaries

Most Copilot Studio agent risks are amplified by unnecessary exposure. Reducing who can reach the agent, and what it can reach, significantly lowers risk.

  • Restrict agent sharing to well‑scoped, role‑based security groups instead of entire organizations or broad groups. See Control how agents are shared.
  • Establish and enforce organizational policies defining when broad sharing or public access is allowed and what approvals are required.
  • Enforce full authentication by default. Only allow unauthenticated access when explicitly required and approved. For more information see Configure user authentication.
  • Limit outbound communication paths:
    • Restrict email actions to approved domains or hard‑coded recipients.
    • Avoid AI‑controlled dynamic inputs for sensitive outbound actions such as email or HTTP requests.
  • Perform periodic reviews of shared agents to ensure visibility and access remain appropriate over time.

3. Enforce strong authentication and least privilege

Agents must not inherit more privilege than necessary, especially through development shortcuts.

Replace author (maker) authentication with user‑based or system‑based authentication wherever possible. For more information, see Control maker-provided credentials for authentication – Microsoft Copilot Studio | Microsoft Learn and Configure user authentication for actions.

  • Review all actions and connectors that run under maker credentials and reconfigure those that expose sensitive or high‑impact services.
  • Audit MCP tools that rely on creator credentials and remove or update them if they are no longer required.
  • Apply the principle of least privilege to all connectors, actions, and data access paths, even when broad sharing is justified.

4. Harden orchestration and dynamic behavior

Generative agents require explicit guardrails to prevent unintended or unsafe behavior.

  • Ensure clear, well‑structured instructions are configured for generative orchestration to define the agent’s purpose, constraints, and expected behavior. For more information, see Orchestrate agent behavior with generative AI.
  • Avoid allowing the model to dynamically decide:
    • Email recipients
    • External endpoints
    • Execution logic for sensitive actions
  • Review HTTP Request actions carefully:
    • Confirm endpoint, scheme, and port are required for the intended use case.
    • Prefer built‑in Power Platform connectors over raw HTTP requests to benefit from authentication, governance, logging, and policy enforcement.
    • Enforce HTTPS and avoid non‑standard ports unless explicitly approved.

5. Eliminate dead weight and protect secrets

Unused capabilities and embedded secrets quietly expand the attack surface.

  • Remove or deactivate:
    • Dormant agents
    • Unpublished or unmodified agents
    • Unused actions
    • Stale connections
    • Outdated or unnecessary MCP tool configurations
  • Clean up Maker‑authenticated actions and classic orchestration actions that are no longer referenced.
  • Move all secrets to Azure Key Vault and reference them via environment variables instead of embedding them in agent logic.
  • When Key Vault usage is not feasible, enable secure input handling to protect sensitive values.
  • Treat agents as production assets, not experiments, and include them in regular lifecycle and governance reviews.
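
The pattern behind the secret-handling bullets above is to resolve secrets from configuration at runtime rather than embedding them in agent logic. A minimal sketch, using an environment variable as a stand-in for a Key Vault-backed setting:

```python
import os

def get_secret(name):
    """Resolve a secret from the environment at runtime. In production the
    variable would be populated from Azure Key Vault rather than hard-coded
    anywhere in the agent definition."""
    value = os.environ.get(name)
    if value is None:
        raise KeyError(f"secret {name!r} is not configured")
    return value

os.environ["SERVICE_API_KEY"] = "example-value"  # for demonstration only
print(get_secret("SERVICE_API_KEY"))  # example-value
```

Failing loudly on a missing secret, rather than falling back to a default, keeps misconfiguration visible instead of silently degrading security.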

Effective posture management is essential for maintaining a secure and predictable Copilot Studio environment. As agents grow in capability and integrate with increasingly sensitive systems, organizations must adopt structured governance practices that identify risks early and enforce consistent configuration standards.

The scenarios and detection rules presented in this blog provide a foundation to help you:

  • Discover common security gaps
  • Strengthen oversight
  • Reduce the overall attack surface

By combining automated detection with clear operational policies, you can ensure that your Copilot Studio agents remain secure, aligned, and resilient.

This research is provided by Microsoft Defender Security Research with contributions from Dor Edry and Uri Oren.

Learn more

The post Top 10 actions to build agents securely with Microsoft Copilot Studio appeared first on Microsoft Security Blog.

Your complete guide to Microsoft experiences at RSAC™ 2026 Conference

12 February 2026 at 18:00

The era of AI is reshaping both opportunity and risk faster than any shift security leaders have seen. Every organization is feeling the momentum, and for security teams, the question is no longer if AI will transform their work, but how to stay ahead of what comes next.

At Microsoft, we see this moment giving rise to what we call the Frontier Firm: organizations that are human-led and agent-operated. With more than 80% of leaders already using agents or planning to within the year, we’re entering a world where every person may soon have an entire agentic team at their side1. By 2028, IDC projects 1.3 billion agents in use—a scale that changes everything about how we work and how we secure2.

In the agentic era, security must be ambient and autonomous, just like the AI it protects. This is our vision for security as the core primitive, woven into and around everything we build and throughout everything we do. At RSAC 2026, we’ll share how we are delivering on that vision through our AI-first, end-to-end security platform that helps you protect every layer of the AI stack and secure with agentic AI.

Join us at RSAC Conference 2026—March 22–26 in San Francisco

RSAC 2026 will give you a front‑row seat to how AI is transforming the global threat landscape, and how defenders can stay ahead with:

  • A deeper understanding of how AI is reshaping the global threat landscape
  • Insight into how Microsoft can help you protect every layer of the AI stack and secure with agentic AI
  • Product demos, curated sessions, executive conversations, and live meetings with our experts in the booth

This is your moment to see what’s next and what’s possible as we enter the era of agentic security.

Microsoft at RSAC™ 2026

From Microsoft Pre‑Day to innovation sessions, networking opportunities, and 1:1 meetings, explore experiences designed to help you navigate the age of AI with clarity and impact.

Microsoft Pre-Day: Your first look at what’s next in security

Kick off RSAC 2026 on Sunday, March 22 at the Palace Hotel for Microsoft Pre‑Day, an exclusive experience designed to set the tone for the week ahead.

Hear keynote insights from Vasu Jakkal, CVP of Microsoft Security Business and other Microsoft security leaders as they explore how AI and agents are reshaping the security landscape.

You’ll discover how Microsoft is advancing agentic defense, informed by more than 100 trillion security signals each day. You’ll learn how solutions like Agent 365 deliver observability at every layer, and how Microsoft’s purpose‑built security capabilities help you secure every layer of the AI stack. You’ll also explore how our expert-led services can help you defend against cyberthreats, build cyber resilience, and transform your security operations.

The experience concludes with opportunities to connect, including a networking reception and an invite-only dinner for CISOs and security executives.

Microsoft Pre‑Day is your chance to hear what is coming next and prepare for the week ahead. Secure your spot today.

Executive events: Exclusive access to insights, strategy, and connections

For CISOs and senior security decision makers, RSAC 2026 offers curated experiences designed to deliver maximum value:

  • CISO Dinner (Sunday, March 22): Join Microsoft Security executives and fellow CISOs for an intimate dinner following Microsoft Pre-Day. Share insights, compare strategies, and build connections that matter.
  • The CISO and CIO Mandate for Securing and Governing AI (Monday, March 23): A session outlining why organizations need integrated AI security and governance to manage new risks and accelerate responsible innovation.
  • Executive Lunch & Learn: AI Agents are here! Are you Ready? (Tuesday, March 24): A panel exploring how observability, governance, and security are essential to safely scaling AI agents and unlocking human potential.
  • The AI Risk Equation: Visibility, Control, and Threat Acceleration (Wednesday, March 25): A deeply interactive discussion on how CISOs address AI proliferation, visibility challenges, and expanding attack surfaces while guiding enterprise risk strategy.
  • Post-Day Forum (Thursday, March 26): Wrap up RSAC with an immersive, half‑day program at the Microsoft Experience Center in Silicon Valley—designed for deeper conversations, direct access to Microsoft’s security and AI experts, and collaborative sessions that go beyond the main‑stage content. Explore securing and managing AI agents, protecting multicloud environments, and deploying agentic AI through interactive discussions. Transportation from the city center will be provided. Space is limited, so register early.

These experiences are designed to help CISOs move beyond theory and into actionable strategies for securing their organizations in an AI-first world.

Keynote and sessions: Insights you can act on

On Monday, March 23, don’t miss the RSAC 2026 keynote featuring Vasu Jakkal, CVP of Microsoft Security. In Ambient and Autonomous Security: Building Trust in the Agentic AI Era (3:55 PM-4:15 PM PDT), learn how ambient, autonomous platforms with deep observability are evolving to address AI-powered threats and build a trusted digital foundation.

Here are two sessions you don’t want to miss:

1. Security, Governance, and Control for Agentic AI 

  • Monday, March 23 | 2:20–3:10 PM. Learn the core principles that keep autonomous agents secure and governed so organizations can innovate with AI without sprawl, misuse, or unintended actions.
    • Speakers: Neta Haiby, Partner, Product Manager and Tina Ying, Director, Product Marketing, Microsoft 

2. Advancing Cyber Defense in the Era of AI Driven Threats 

  • Tuesday, March 24 | 9:40–10:30 AM. Explore how AI elevates threat sophistication and what resilient, intelligence-driven defenses look like in this new era.
    • Speaker: Brad Sarsfield, Senior Director, Microsoft Security, NEXT.ai

Plus, don’t miss our sessions throughout the week: 

Microsoft Booth #5744: Theater sessions and interactive experiences

Visit the Microsoft booth at Moscone Center for an immersive look at how modern security teams protect AI‑powered environments. Connect with Microsoft experts, explore security and governance capabilities built for agentic AI, and see how solutions work together across identity, data, cloud, and security operations.

People talking near a Microsoft Security booth.

Test your skills and compete in security games

At the center of the booth is an interactive single‑player experience that puts you in a high‑stakes security scenario, working with adaptive agents to triage incidents, optimize conditional access, surface threat intelligence, and keep endpoints secure and compliant, then guiding you to demo stations for deeper exploration.

Quick sessions, big takeaways, plus a custom pet sticker

You can also stop by the booth theater for short, expert‑led sessions highlighting real‑world use cases and practical guidance, giving you a clear view of how to strengthen your security approach across the AI landscape—and while you’re there, don’t miss the Security Companion Sticker activation, where you can upload a photo of your pet and receive a curated AI-generated sticker.

Microsoft Security Hub: Your space to connect

People talking around tables at a conference.

Throughout the week, the iconic Palace Hotel will serve as Microsoft’s central gathering place—a welcoming hub where you can step away from the bustle of the conference. It’s a space to recharge and connect with Microsoft security experts and executives, participate in focused thought leadership sessions and roundtable discussions, and take part in networking experiences designed to spark meaningful conversations. Full details on sessions and activities are available on the Microsoft Security Experiences at RSAC™ 2026 page.

Customers can also take advantage of scheduled one-on-one meetings with Microsoft security experts during the week. These meetings offer an opportunity to dig deeper into today’s threat landscape, discuss specific product questions, and explore strategies tailored to your organization. To schedule a one-on-one meeting with Microsoft executives and subject matter experts, speak with your account representative or submit a meeting request form.

Partners: Building security together

Microsoft’s presence at RSAC 2026 isn’t just about our technology. It’s about the ecosystem. Visit the booth and the Security Hub to meet members of the Microsoft Intelligent Security Association (MISA) and explore how our partners extend and enhance Microsoft Security solutions. From integrated threat intelligence to compliance automation, these collaborations help you build a stronger, more resilient security posture.

Special thanks to Ascent Solutions, Avertium, BlueVoyant, CyberProof, Darktrace, and Huntress for sponsoring the Microsoft Security Hub and karaoke party.

Details on MISA theater sessions at RSAC 2026.
Calendar for MISA demo station at RSAC 2026.

Why join us at RSAC?

Attending RSAC™ 2026? By engaging with Microsoft Security, you’ll gain clear perspective on how AI agents are reshaping risk and response, practical guidance to help you focus on what matters most, and meaningful connections with peers and experts facing the same challenges.

Together, we can make the world safer for all. Join us in San Francisco and be part of the conversation defining the next era of cybersecurity.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


1According to data from the 2025 Work Trend Index, 82% of leaders say this is a pivotal year to rethink key aspects of strategy and operations, and 81% say they expect agents to be moderately or extensively integrated into their company’s AI strategy in the next 12–18 months. At the same time, adoption on the ground is spreading but uneven: 24% of leaders say their companies have already deployed AI organization-wide, while just 12% remain in pilot mode.

2IDC Info Snapshot, sponsored by Microsoft, 1.3 Billion AI Agents by 2028, May 2025 #US53361825

The post Your complete guide to Microsoft experiences at RSAC™ 2026 Conference appeared first on Microsoft Security Blog.

Outlook add-in goes rogue and steals 4,000 credentials and payment data

12 February 2026 at 15:35

Researchers found a malicious Microsoft Outlook add-in that was used to steal more than 4,000 sets of Microsoft account credentials, along with credit card numbers and banking security answers.

How is it possible that the Microsoft Office Add-in Store ended up listing an add-in that silently loaded a phishing kit inside Outlook’s sidebar?

A developer launched an add-in called AgreeTo, an open-source meeting scheduling tool with a Chrome extension. It was a popular tool, but at some point, it was abandoned by its developer, its backend URL on Vercel expired, and an attacker later claimed that same URL.

That requires some explanation. Office add-ins are essentially XML manifests that tell Outlook to load a specific URL in an iframe. Microsoft reviews and signs the manifest once but does not continuously monitor what that URL serves later.
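
This is the classic dangling-resource problem. One hedged defensive check (not something Microsoft performs, and DNS alone is a weak signal on hosting platforms that wildcard their domains) is to verify that a manifest's source URL host still resolves:

```python
import socket
from urllib.parse import urlsplit

def source_host_resolves(source_location):
    """Check whether an add-in manifest's source URL host still resolves in
    DNS. A host that stops resolving is a takeover candidate; platforms with
    wildcard DNS would need an ownership check instead."""
    host = urlsplit(source_location).hostname
    if not host:
        return False
    try:
        socket.getaddrinfo(host, 443)
        return True
    except socket.gaierror:
        return False

print(source_host_resolves("not a valid url"))  # False
```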

So, when the outlook-one.vercel.app subdomain became free to claim, a cybercriminal jumped at the opportunity to scoop it up and abuse the powerful ReadWriteItem permissions requested and approved in 2022. These permissions meant the add-in could read and modify a user’s email when loaded. The permissions were appropriate for a meeting scheduler, but they served a different purpose for the criminal.

While Google removed the dead Chrome extension in February 2025, the Outlook add-in stayed listed in Microsoft’s Office Store, still pointing to a Vercel URL that no longer belonged to the original developer.

An attacker registered that Vercel subdomain and deployed a simple four-page phishing kit consisting of a fake Microsoft login, password collection, Telegram-based data exfiltration, and a redirect to the real login.microsoftonline.com.

The scheme was simple and effective. When users opened the add-in, they saw what looked like a normal Microsoft sign-in inside Outlook. They entered credentials, which were sent via a JavaScript function to the attacker’s Telegram bot along with IP data, then were bounced to the real Microsoft login so nothing seemed suspicious.

The researchers were able to access the attacker’s poorly secured Telegram-based exfiltration channel and recovered more than 4,000 sets of stolen Microsoft account credentials, plus payment and banking data, indicating the campaign was active and part of a larger multi-brand phishing operation.

“The same attacker operates at least 12 distinct phishing kits, each impersonating a different brand – Canadian ISPs, banks, webmail providers. The stolen data included not just email credentials but credit card numbers, CVVs, PINs, and banking security answers used to intercept Interac e-Transfer payments. This is a professional, multi-brand phishing operation. The Outlook add-in was just one of its distribution channels.”

What to do

If you have used the AgreeTo add-in at any point after May 2023:

  • Make sure it’s removed. If not, uninstall the add-in.
  • Change the password for your Microsoft account.
  • If that password (or close variants) was reused on other services (email, banking, SaaS, social), change those as well and make each one unique.
  • Review recent sign‑ins and security activity on your Microsoft account, looking for logins from unknown locations or devices, or unusual times.
  • Review other sensitive information you may have shared via email.
  • Scan your mailbox for signs of abuse: messages you did not send, auto‑forwarding rules you did not create, or password‑reset emails for other services you did not request.
  • Watch payment statements closely for at least the next few months, especially small “test” charges and unexpected e‑transfer or card‑not‑present transactions, and dispute anything suspicious immediately.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.
