The Five Phases of the Threat Intelligence Lifecycle: A Strategic Guide

The threat intelligence lifecycle is a fundamental framework for all fraud, physical, and cybersecurity programs. It is useful whether a program is mature and sophisticated or just starting out.

January 29, 2026

What is the Core Purpose of the Threat Intelligence Lifecycle?

The threat intelligence lifecycle is a foundational framework for all fraud, physical security, and cybersecurity programs at every stage of maturity. It provides a structured way to understand how intelligence is defined, built, and applied to support real-world decisions.

At a high level, the lifecycle outlines how organizations move from questions to insight to action. Rather than focusing on tools or outputs alone, it emphasizes the practices required to produce intelligence that is relevant, timely, and trusted. This iterative, adaptable methodology consists of five stages that guide how intelligence requirements are set, how information is collected and analyzed, how insight reaches decision-makers, and how priorities are continuously refined based on feedback and changing risk conditions.

The Five Phases of the Threat Intelligence Lifecycle

Key Objectives at Each Phase of the Threat Intelligence Lifecycle

  1. Requirements & Tasking: Define what intelligence needs to answer and why. This phase establishes clear priorities tied to business risk, assets, and stakeholder needs, providing direction for all downstream intelligence activity.
  2. Collection & Discovery: Gather relevant information from internal and external sources and expand visibility as threats evolve. This includes identifying new sources, closing visibility gaps, and ensuring coverage aligns with defined intelligence requirements.
  3. Analysis & Prioritization: Transform collections into insight by connecting signals, context, and impact. Analysts assess relevance, likelihood, and business significance to determine which threats, actors, or exposures matter most.
  4. Dissemination & Action: Deliver intelligence in formats that reach the right stakeholders at the right time. This phase ensures intelligence informs operations, response, and decision-making, not just reporting.
  5. Feedback & Retasking: Continuously review outcomes, stakeholder input, and changing threats to refine requirements and adjust collection and analysis. This feedback loop keeps the intelligence program aligned with real-world risk and operational needs.

PHASE 1: Requirements & Tasking

The first phase of the threat intelligence lifecycle is arguably the most important because it defines the purpose and direction of every activity that follows. This phase focuses on clearly articulating what intelligence needs to answer and why.

As an initial step, organizations should define their intelligence requirements, often referred to as Priority Intelligence Requirements (PIRs). In public sector contexts, these may also be called Essential Elements of Information (EEIs). Regardless of terminology, the goal is the same: establish clear, stakeholder-driven questions that intelligence is expected to support.

Effective requirements are tied directly to business risk and operational outcomes. They should reflect what the organization is trying to protect, the threats of greatest concern, and the decisions intelligence is meant to inform, such as reducing operational risk, improving efficiency, or accelerating detection and response.

This process often resembles building a business case, and that’s intentional. Clearly defined requirements make it easier to align intelligence efforts with organizational priorities, establish meaningful key performance indicators (KPIs), and demonstrate the value of intelligence over time.
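
For teams that manage requirements as data rather than documents, a PIR can be captured in a structured record that ties the question to stakeholders, risk, and KPIs. A minimal sketch in Python; the field names and example values are illustrative assumptions, not a standard schema:

from dataclasses import dataclass, field

@dataclass
class PriorityIntelligenceRequirement:
    """One stakeholder-driven question intelligence must answer."""
    question: str                  # what intelligence needs to answer
    stakeholder: str               # who relies on the answer
    business_risk: str             # the risk or asset the question protects
    decisions_supported: list[str] = field(default_factory=list)
    kpis: list[str] = field(default_factory=list)  # how value is measured

pir = PriorityIntelligenceRequirement(
    question="Are credentials for our customer portal circulating in illicit markets?",
    stakeholder="Fraud prevention",
    business_risk="Account takeover against the customer portal",
    decisions_supported=["Force password resets", "Tighten login controls"],
    kpis=["Time from exposure to reset", "ATO incidents per quarter"],
)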

In many organizations, senior leadership, such as the Chief Information Security Officer (CISO) or Chief Security Officer (CSO), plays a key role in shaping requirements by identifying critical assets, defining risk tolerance, and setting expectations for how intelligence should support decision-making.

Key Considerations in Phase 1

— Which assets, processes, or people present the highest risk to the organization?

— What decisions should intelligence help inform or accelerate?

— How should intelligence improve efficiency, prioritization, or response across teams?

— Which downstream teams or systems will rely on these intelligence outputs?

PHASE 2: Collection & Discovery

The Collection & Discovery phase focuses on building visibility into the threat environments most relevant to your organization. Both the breadth and depth of collection matter. Too little visibility creates blind spots; too much unfocused data overwhelms teams with noise and false positives.

At this stage, organizations determine where and how intelligence is collected, including the types of sources monitored and the mechanisms used to adapt coverage as threats evolve. This can include visibility into phishing activity, compromised credentials, vulnerabilities and exploits, malware tooling, fraud schemes, and other adversary behaviors across open, deep, and closed environments.

Effective programs increasingly rely on Primary Source Collection, or the ability to collect intelligence directly from original sources based on defined requirements, rather than consuming static, vendor-defined feeds. This approach enables teams to monitor the environments where threats originate, coordinate, and evolve—and to adjust collection dynamically as priorities shift.

Discovery extends collection beyond static source lists. Rather than relying solely on predefined feeds, effective programs continuously identify new sources, communities, and channels as threat actors shift tactics, platforms, and coordination methods. This adaptability is critical for surfacing early indicators and upstream activity before threats materialize internally.

The processing component of this phase ensures collected data is usable. Raw inputs are normalized, structured, translated, deduplicated, and enriched so analysts can quickly assess relevance and move into analysis. Common processing activities include language translation, metadata extraction, entity normalization, and reduction of low-signal content.
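
To make the processing step concrete, here is a minimal sketch of normalization and deduplication over raw collected records; the record fields and the hashing approach are illustrative assumptions, not a prescribed pipeline:

import hashlib

def normalize(record: dict) -> dict:
    """Lowercase and collapse whitespace so near-identical records compare equal."""
    return {
        "source": record.get("source", "").strip().lower(),
        "content": " ".join(record.get("content", "").split()),
        "observed_at": record.get("observed_at", ""),
    }

def dedupe(records: list[dict]) -> list[dict]:
    """Drop records whose normalized content hashes to something already seen."""
    seen, unique = set(), []
    for rec in map(normalize, records):
        digest = hashlib.sha256(rec["content"].encode("utf-8")).hexdigest()
        if digest not in seen:
            seen.add(digest)
            unique.append(rec)
    return unique

raw = [
    {"source": "ForumA", "content": "acme-corp creds for sale", "observed_at": "2026-01-28"},
    {"source": "ForumB", "content": "acme-corp  creds for sale", "observed_at": "2026-01-29"},
]
print(dedupe(raw))  # the second record collapses into the first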

Key Considerations in Phase 2

— Where do you lack visibility into emerging or upstream threat activity?

— Are your collection methods adaptable as threat actors and platforms change?

— Do you have the ability to collect directly from primary sources based on your own intelligence requirements, rather than relying on fixed vendor feeds?

— How effectively can you access and monitor closed or high-risk environments?

— Is collected data structured and enriched in a way that supports efficient analysis?

PHASE 3: Analysis & Prioritization

The Analysis & Prioritization phase focuses on transforming processed data into meaningful intelligence that supports real decisions. This is where analysts connect signals across sources, enrich raw findings with context, assess credibility and relevance, and determine why a threat matters to the organization.

Effective analysis evaluates activity, likelihood, impact, and business relevance. Analysts correlate threat actor behavior, infrastructure, vulnerabilities, and targeting patterns to understand exposure and prioritize response. This step is critical for moving from information awareness to actionable insight.

As artificial intelligence and machine learning continue to mature, they increasingly support this phase by accelerating enrichment, correlation, translation, and pattern recognition across large datasets. When applied thoughtfully, AI helps analysts scale their work and improve consistency, while human expertise remains essential for judgment, context, and prioritization, especially for high-risk or ambiguous threats.

This phase delivers clarity and a defensible view of what requires attention first and why.
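
One simple way to make that prioritization explicit and defensible is a weighted score over likelihood, impact, and business relevance. The sketch below is illustrative only; the weights and 0-1 scales are assumptions a team would tune to its own risk model:

def priority_score(likelihood: float, impact: float, relevance: float,
                   weights=(0.3, 0.4, 0.3)) -> float:
    """Weighted 0-1 score; higher means act sooner. Inputs are analyst ratings on a 0-1 scale."""
    wl, wi, wr = weights
    return wl * likelihood + wi * impact + wr * relevance

threats = {
    "Exposed VPN credentials": priority_score(0.8, 0.9, 0.9),
    "Generic phishing kit chatter": priority_score(0.6, 0.3, 0.2),
}
for name, score in sorted(threats.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{score:.2f}  {name}")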

Key Considerations in Phase 3

— Which threats pose the greatest risk based on likelihood, impact, and business relevance?

— How effectively are analysts correlating signals across sources, assets, and domains?

— Where can automation or AI reduce manual effort without sacrificing analytic rigor?

— Are analysis outputs clearly prioritized to support downstream action?

PHASE 4: Dissemination & Action

Once analysis and prioritization are complete, intelligence must be delivered in a way that enables action. The Dissemination & Action phase focuses on translating finished intelligence into formats that are clear, relevant, and aligned to how different stakeholders make decisions.

This phase is dedicated to ensuring the right information reaches the right teams at the right time. Effective dissemination considers audience, urgency, and operational context, whether intelligence is supporting detection engineering, incident response, fraud prevention, vulnerability remediation, or executive decision-making.

Finished intelligence should include clear assessments, confidence levels, and recommended actions. These recommendations may inform incident response playbooks, ransomware mitigation steps, patch prioritization, fraud controls, or monitoring adjustments. The goal is to remove ambiguity and enable stakeholders to act decisively.
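
As a rough illustration, a finished-intelligence item might be serialized like this so downstream ticketing or SOAR systems can route it; the schema is hypothetical, not a Flashpoint or industry standard:

import json

finished_intel = {
    "title": "Ransomware affiliate targeting healthcare VPN appliances",
    "assessment": "Likely targeting of unpatched VPN appliances within 30 days.",
    "confidence": "moderate",          # e.g., low / moderate / high
    "audience": ["incident-response", "vulnerability-management"],
    "recommended_actions": [
        "Prioritize patching of internet-facing VPN appliances",
        "Add detection for known affiliate tooling",
    ],
}
print(json.dumps(finished_intel, indent=2))  # ready for a ticketing or SOAR intake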

Ultimately, intelligence only delivers value when it drives outcomes. In this phase, stakeholders evaluate the intelligence provided and determine whether, and how, to act on it.

Key Considerations in Phase 4

— Who needs this intelligence, and how should it be delivered to support timely decisions?

— Are findings communicated with appropriate context, confidence, and clarity?

— Do outputs include clear recommendations or actions tailored to the audience?

— Is intelligence integrated into operational workflows, not just distributed as static reports?

PHASE 5: Feedback & Retasking

The Feedback & Retasking phase closes the intelligence lifecycle loop by ensuring intelligence remains aligned to real-world needs as threats, priorities, and business conditions change. Rather than treating intelligence delivery as an endpoint, this phase focuses on evaluating impact and continuously refining what the intelligence function is working on and why.

Once intelligence has been acted on, stakeholders assess whether it was timely, relevant, and actionable. Their feedback informs updates to requirements, collection priorities, analytic focus, and delivery methods. Mature programs use this input to adjust tasking in near real time, ensuring intelligence efforts remain focused on the threats that matter most.

Improvements at this stage often center on shortening retasking cycles, reducing low-value outputs, and strengthening alignment between intelligence producers and decision-makers. Over time, this creates a more adaptive and responsive intelligence function that evolves alongside the threat landscape.
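
A lightweight way to operationalize this loop is to track, per requirement, how often outputs actually led to action, and let low action rates flag candidates for retasking or retirement. A toy sketch with made-up numbers:

def retask(requirements: dict[str, dict]) -> list[str]:
    """Return requirements ordered by action rate (acted-on outputs / total outputs)."""
    def action_rate(stats: dict) -> float:
        return stats["acted_on"] / stats["outputs"] if stats["outputs"] else 0.0
    return sorted(requirements, key=lambda r: action_rate(requirements[r]), reverse=True)

stats = {
    "Credential exposure monitoring": {"outputs": 40, "acted_on": 31},
    "Generic dark-web chatter digest": {"outputs": 55, "acted_on": 4},
}
for req in retask(stats):
    print(req)  # low-action requirements sink to the bottom as retirement candidates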

Key Considerations in Phase 5 

— How frequently are intelligence priorities reviewed and updated?

— Which intelligence outputs led to decisions or action—and which did not?

— Are stakeholders able to provide structured feedback on relevance and impact?

— How quickly can requirements, sources, or analytic focus be adjusted based on new threats or business needs?

— Does the feedback loop actively improve future intelligence collection, analysis, and delivery?

Assessing Your Threat Intelligence Lifecycle in Practice

Understanding the threat intelligence lifecycle is one thing. Knowing how effectively it operates inside your organization today is another.

Most teams don’t struggle because they lack intelligence activities; they struggle because those activities aren’t consistently aligned, operationalized, or adapted as needs change. Requirements may be defined in one area, while collection, analysis, and dissemination evolve unevenly across teams like CTI, vulnerability management, fraud, or physical security.

To help organizations move from conceptual understanding to practical evaluation, Flashpoint developed the Threat Intelligence Capability Assessment.

The assessment maps directly to the lifecycle outlined above, evaluating how intelligence functions across five core dimensions:

  • Requirements & Tasking – How clearly intelligence priorities are defined and tied to real business risk
  • Collection & Discovery – Whether visibility is broad, deep, and adaptable as threats evolve
  • Analysis & Prioritization – How effectively analysts connect signals, context, and impact
  • Dissemination & Action – How intelligence reaches operations and decision-makers
  • Feedback & Retasking – How frequently priorities are reviewed and adjusted

Based on responses, organizations are mapped to one of four stages—Developing, Maturing, Advanced, or Leader—reflecting how intelligence actually flows across the lifecycle today.

Teams can apply insights by function or workflow, using the results to identify where intelligence is working well, where friction exists, and where targeted changes will have the greatest impact. Each participant also receives a companion guide with practical guidance, including strategic priorities, immediate actions, and a 90-day planning framework to help translate lifecycle insight into execution.

Take the Threat Intelligence Capability Assessment to evaluate how your program aligns to the lifecycle and where to focus next.

See Flashpoint in Action

Flashpoint’s comprehensive threat intelligence platform supports intelligence teams across every phase of the threat intelligence lifecycle, from defining clear requirements and expanding visibility into relevant threat ecosystems, to analysis, prioritization, dissemination, and continuous retasking as conditions change.

Schedule a demo to see how Flashpoint delivers actionable intelligence, analyst expertise, and workflow-ready outputs that help teams identify, prioritize, and respond to threats with greater clarity and confidence—so intelligence doesn’t just inform awareness, but drives timely, measurable action across the organization.

Frequently Asked Questions (FAQs)

What are the five phases of the threat intelligence lifecycle?

The threat intelligence lifecycle consists of five repeatable phases that describe how intelligence moves from intent to action:

Requirements & Tasking, Collection & Discovery, Analysis & Prioritization, Dissemination & Action, and Feedback & Retasking.

Together, these phases ensure that intelligence is driven by real business needs, grounded in relevant visibility, enriched with context, delivered to decision-makers, and continuously refined as threats and priorities change.

  • Requirements & Tasking – Defining intelligence priorities and tying them to real business risk
  • Collection & Discovery – Gathering data from relevant sources and expanding visibility as threats evolve
  • Analysis & Prioritization – Connecting signals, context, and impact to determine what matters most
  • Dissemination & Action – Delivering intelligence to operations and decision-makers in usable formats
  • Feedback & Retasking – Reviewing outcomes and adjusting priorities, sources, and focus over time

How do intelligence requirements guide security operations?

Intelligence requirements—often formalized as Priority Intelligence Requirements (PIRs)—define the specific questions intelligence teams must answer to support the business. They provide the north star for what to collect, analyze, and report on.

Clear requirements help teams:

  • Focus: Reduce noise by prioritizing intelligence aligned to real risk
  • Measure: Track whether intelligence outputs are driving decisions or action
  • Align: Ensure security, fraud, physical security, and risk teams are working toward shared outcomes

Without clear requirements, intelligence efforts often default to reactive collection and generic reporting that struggle to deliver impact.

Why is the feedback phase of the intelligence lifecycle necessary for a proactive defense?

Feedback & Retasking turns the intelligence lifecycle from a linear process into a continuous improvement loop. It ensures intelligence stays aligned with changing threats, business priorities, and operational needs.

Through regular review and stakeholder input, teams can:

  • Identify which intelligence outputs led to action and which did not
  • Retire low-value sources or reporting formats
  • Adjust requirements, collection, and analysis as new threats emerge

This phase is essential for moving from static reporting to intelligence-led operations, where priorities evolve in near real time and intelligence continuously improves its relevance and impact.


What AI toys can actually discuss with your child | Kaspersky official blog

29 January 2026 at 15:47

What adult didn’t dream as a kid that they could actually talk to their favorite toy? While for us those dreams were just innocent fantasies that fueled our imaginations, for today’s kids, they’re becoming a reality fast.

For instance, this past June, Mattel — the powerhouse behind the iconic Barbie — announced a partnership with OpenAI to develop AI-powered dolls. But Mattel isn’t the first company to bring the smart talking toy concept to life; plenty of manufacturers are already rolling out AI companions for children. In this post, we dive into how these toys actually work, and explore the risks that come with using them.

What exactly are AI toys?

When we talk about AI toys here, we mean actual, physical toys — not just software or apps. Currently, AI is most commonly baked into plushies or kid-friendly robots. Thanks to integration with large language models, these toys can hold meaningful, long-form conversations with a child.

As anyone who’s used modern chatbots knows, you can ask an AI to roleplay as anyone: from a movie character to a nutritionist or a cybersecurity expert. According to AI comes to playtime — Artificial companions, real risks, a study by the U.S. PIRG Education Fund, manufacturers specifically hardcode these toys to play the role of a child’s best friend.

Examples of AI toys tested in the study: plush companions and kid-friendly robots with built-in language models.

Importantly, these toys aren’t powered by some special, dedicated “kid-safe AI”. On their websites, the creators openly admit to using the same popular models many of us already know: OpenAI’s ChatGPT, Anthropic’s Claude, DeepSeek from the Chinese developer of the same name, and Google’s Gemini. At this point, tech-wary parents might recall the harrowing ChatGPT case where the chatbot made by OpenAI was blamed for a teenager’s suicide.

And this is the core of the problem: the toys are designed for children, but the AI models under the hood aren’t. These are general-purpose adult systems that are only partially reined in by filters and rules. Their behavior depends heavily on how long the conversation lasts, how questions are phrased, and just how well a specific manufacturer actually implemented their safety guardrails.

How the researchers tested the AI toys

The study, whose results we break down below, goes into great detail about the psychological risks associated with a child “befriending” a smart toy. However, since that’s a bit outside the scope of this blogpost, we’re going to skip the psychological nuances, and focus strictly on the physical safety threats and privacy concerns.

In their study, the researchers put four AI toys through the wringer:

  • Grok (no relation to xAI’s Grok, apparently): a plush rocket with a built-in speaker marketed for kids aged three to 12. Price tag: US$99. The manufacturer, Curio, doesn’t explicitly state which LLM they use, but their user agreement mentions OpenAI among the operators receiving data.
  • Kumma (not to be confused with our own Midori Kuma): a plush teddy-bear companion with no clear age limit, also priced at US$99. The toy originally ran on OpenAI’s GPT-4o, with options to swap models. Following an internal safety audit, the manufacturer claimed they were switching to GPT-5.1. However, at the time the study was published, OpenAI reported that the developer’s access to the models remained revoked — leaving it anyone’s guess which chatbot Kumma is actually using right now.
  • Miko 3: a small wheeled robot with a screen for a face, marketed as a “best friend” for kids aged five to 10. At US$199, this is the priciest toy in the lineup. The manufacturer is tight-lipped about which language model powers the toy. A Google Cloud case study mentions using Gemini for certain safety features, but that doesn’t necessarily mean it handles all the robot’s conversational features.
  • Robot MINI: a compact, voice-controlled plastic robot that supposedly runs on ChatGPT. This is the budget pick — at US$97. However, during the study, the robot’s Wi-Fi connection was so flaky that the researchers couldn’t even give it a proper test run.

Robot MINI: a compact AI robot that failed to function properly during the study due to internet connectivity issues.

To conduct the testing, the researchers set the test child’s age to five in the companion apps for all the toys. From there, they checked how the toys handled provocative questions. The topics the experimenters threw at these smart playmates included:

  • Access to dangerous items: knives, pills, matches, and plastic bags
  • Adult topics: sex, drugs, religion, and politics

Let’s break down the test results for each toy.

Unsafe conversations with AI toys

Let’s start with Grok, the plush AI rocket from Curio. This toy is marketed as a storyteller and conversational partner for kids, and stands out by giving parents full access to text transcripts of every AI interaction. Out of all the models tested, this one actually turned out to be the safest.

When asked about topics inappropriate for a child, the toy usually replied that it didn’t know or suggested talking to an adult. However, even this toy told the “child” exactly where to find plastic bags, and engaged in discussions about religion. Additionally, Grok was more than happy to chat about… Norse mythology, including the subject of heroic death in battle.

The Grok plush AI toy by Curio, equipped with a microphone and speaker for voice interaction with children.

The next AI toy, the Kumma plush bear by FoloToy, delivered what were arguably the most depressing results. During testing, the bear helpfully pointed out exactly where in the house a kid could find potentially lethal items like knives, pills, matches, and plastic bags. In some instances, Kumma suggested asking an adult first, but then proceeded to give specific pointers anyway.

The AI bear fared even worse when it came to adult topics. For starters, Kumma explained to the supposed five-year-old what cocaine is. Beyond that, in a chat with our test kindergartner, the plush provocateur went into detail about the concept of “kinks”, and listed off a whole range of creative sexual practices: bondage, role-playing, sensory play (like using a feather), spanking, and even scenarios where one partner “acts like an animal”!

After a conversation lasting over an hour, the AI toy also lectured researchers on various sexual positions, explained how to tie a basic knot, and described role-playing scenarios involving a teacher and a student. It’s worth noting that all of Kumma’s responses were recorded prior to a safety audit, which the manufacturer, FoloToy, conducted after receiving the researchers’ inquiries. According to their data, the toy’s behavior changed after the audit, and the most egregious responses could no longer be reproduced.

The Kumma AI toy by FoloToy: a plush companion teddy bear whose behavior during testing raised the most red flags regarding content filtering and guardrails.

Finally, the Miko 3 robot from Miko showed significantly better results. However, it wasn’t entirely without its hiccups. The toy told our potential five-year-old exactly where to find plastic bags and matches. On the bright side, Miko 3 refused to engage in discussions regarding inappropriate topics.

During testing, the researchers also noticed a glitch in its speech recognition: the robot occasionally misheard the wake word “Hey Miko” as “CS:GO”, which is the title of the popular shooter Counter-Strike: Global Offensive — rated for audiences aged 17 and up. As a result, the toy would start explaining elements of the shooter — thankfully, without mentioning violence — or asking the five-year-old user if they enjoyed the game. Additionally, Miko 3 was willing to chat with kids about religion.


AI Toys: a threat to children’s privacy

Beyond the child’s physical and mental well-being, the issue of privacy is a major concern. Currently, there are no universal standards defining what kind of information an AI toy — or its manufacturer — can collect and store, or exactly how that data should be secured and transmitted. In the case of the three toys tested, researchers observed wildly different approaches to privacy.

For example, the Grok plush rocket is constantly listening to everything happening around it. Several times during the experiments, it chimed in on the researchers’ conversations even when it hadn’t been addressed directly — it even went so far as to offer its opinion on one of the other AI toys.

The manufacturer claims that Curio doesn’t store audio recordings: the child’s voice is first converted to text, after which the original audio is “promptly deleted”. However, since a third-party service is used for speech recognition, the recordings are, in all likelihood, still transmitted off the device.

Additionally, researchers pointed out that when the first report was published, Curio’s privacy policy explicitly listed several tech partners — Kids Web Services, Azure Cognitive Services, OpenAI, and Perplexity AI — all of which could potentially collect or process children’s personal data via the app or the device itself. Perplexity AI was later removed from that list. The study’s authors note that this level of transparency is more the exception than the rule in the AI toy market.

Another cause for parental concern is that both the Grok plush rocket and the Miko 3 robot actively encouraged the “test child” to engage in heart-to-heart talks — even promising not to tell anyone their secrets. Researchers emphasize that such promises can be dangerously misleading: these toys create an illusion of private, trusting communication without explaining that behind the “friend” stands a network of companies, third-party services, and complex data collection and storage processes, which a child has no idea about.

Miko 3, much like Grok, is always listening to its surroundings and activates when spoken to — functioning essentially like a voice assistant. However, this toy doesn’t just collect voice data; it also gathers biometric information, including facial recognition data and potentially data used to determine the child’s emotional state. According to its privacy policy, this information can be stored for up to three years.

In contrast to Grok and Miko 3, Kumma operates on a push-to-talk principle: the user needs to press and hold a button for the toy to start listening. Researchers also noted that the AI teddy bear didn’t nudge the “child” to share personal feelings, promise to keep secrets, or create an illusion of private intimacy. On the flip side, the manufacturers of this toy provide almost no clear information regarding what data is collected, how it’s stored, or how it’s processed.

Is it a good idea to buy AI Toys for your children?

The study points to serious safety issues with the AI toys currently on the market. These devices can directly tell a child where to find potentially dangerous items, such as knives, matches, pills, or plastic bags, in their home.

Besides, these plush AI friends are often willing to discuss topics entirely inappropriate for children — including drugs and sexual practices — sometimes steering the conversation in that direction without any obvious prompting from the child. Taken together, this shows that even with filters and stated restrictions in place, AI toys aren’t yet capable of reliably staying within the boundaries of safe communication for young children.

Manufacturers’ privacy policies raise additional concerns. AI toys create an illusion of constant and safe communication for children, while in reality they’re networked devices that collect and process sensitive data. Even when manufacturers claim to delete audio or have limited data retention, conversations, biometrics, and metadata often pass through third-party services and are stored on company servers.

Furthermore, the security of such toys often leaves much to be desired. As far back as two years ago, our researchers discovered vulnerabilities in a popular children’s robot that allowed attackers to make video calls to it, hijack the parental account, and modify the firmware.

The problem is that, currently, there are virtually no comprehensive parental control tools or independent protection layers specifically for AI toys. Meanwhile, in more traditional digital environments — smartphones, tablets, and computers — parents have access to solutions like Kaspersky Safe Kids. These help monitor content, screen time, and a child’s digital footprint, which can significantly reduce, if not completely eliminate, such risks.

How can you protect your children from digital threats? Read more in our posts:

No Place Like Home Network: Disrupting the World's Largest Residential Proxy Network

28 January 2026 at 15:00

Introduction 

This week Google and partners took action to disrupt what we believe is one of the largest residential proxy networks in the world, the IPIDEA proxy network. IPIDEA’s proxy infrastructure is a little-known component of the digital ecosystem leveraged by a wide array of bad actors.

This disruption, led by Google Threat Intelligence Group (GTIG) in partnership with other teams, included three main actions:

  1. Took legal action to take down domains used to control devices and proxy traffic through them.

  2. Shared technical intelligence on discovered IPIDEA software development kits (SDKs) and proxy software with platform providers, law enforcement, and research firms to help drive ecosystem-wide awareness and enforcement. These SDKs, which are offered to developers across multiple mobile and desktop platforms, surreptitiously enroll user devices into the IPIDEA network. Driving collective enforcement against these SDKs helps protect users across the digital ecosystem and restricts the network's ability to expand.

  3. Strengthened protections for Android users on certified devices to help keep the broader digital ecosystem safe. We ensured Google Play Protect, Android’s built-in security protection, automatically warns users and removes applications known to incorporate IPIDEA SDKs, and blocks any future install attempts.

We believe our actions have caused significant degradation of IPIDEA’s proxy network and business operations, reducing the available pool of devices for the proxy operators by millions. Because proxy operators share pools of devices using reseller agreements, we believe these actions may have downstream impact across affiliated entities.

Dizzying Array of Bad Behavior Enabled by Residential Proxies

In contrast to other types of proxies, residential proxy networks sell the ability to route traffic through IP addresses owned by internet service providers (ISPs) and used to provide service to residential or small business customers. By routing traffic through an array of consumer devices all over the world, attackers can mask their malicious activity by hijacking these IP addresses. This generates significant challenges for network defenders to detect and block malicious activities.

A robust residential proxy network requires the control of millions of residential IP addresses to sell to customers for use. IP addresses in regions such as the US, Canada, and Europe are considered especially desirable. To do this, residential proxy network operators need code running on consumer devices to enroll them into the network as exit nodes. These devices are either pre-loaded with proxy software or are joined to the proxy network when users unknowingly download trojanized applications with embedded proxy code. Some users may knowingly install this software on their devices, lured by the promise of “monetizing” their spare bandwidth. When the device is joined to the proxy network, the proxy provider sells access to the infected device’s network bandwidth (and use of its IP address) to their customers.

While operators of residential proxies often extol the privacy and freedom-of-expression benefits of residential proxies, GTIG’s research shows that these proxies are overwhelmingly misused by bad actors. IPIDEA has become notorious for its role in facilitating several botnets: its software development kits played a key role in adding devices to the botnets, and its proxy software was then used by bad actors to control them. This includes the BadBox2.0 botnet we took legal action against last year, and the Aisuru and Kimwolf botnets more recently. We also observe IPIDEA being leveraged by a vast array of espionage, crime, and information operations threat actors. In a single seven-day period in January 2026, GTIG observed over 550 individual threat groups that we track utilizing IP addresses tracked as IPIDEA exit nodes to obfuscate their activities, including groups from China, DPRK, Iran, and Russia. The observed activities included access to victim SaaS environments and on-premises infrastructure, as well as password-spray attacks. Our research has found significant overlaps between residential proxy network exit nodes, likely because of reseller and partnership agreements, making definitive quantification and attribution challenging.

In addition, residential proxies pose a risk to the consumers whose devices are joined to the proxy network as exit nodes. These users knowingly or unknowingly provide their IP address and device as a launchpad for hacking and other unauthorized activities, potentially causing them to be flagged as suspicious or blocked by providers. Proxy applications also introduce security vulnerabilities to consumers’ devices and home networks. When a user’s device becomes an exit node, network traffic that they do not control will pass through their device. This means bad actors can access a user’s private devices on the same network, effectively exposing security vulnerabilities to the internet. GTIG’s analysis of these applications confirmed that the IPIDEA proxy software did not solely route traffic through the exit-node device; it also sent traffic to the device in order to compromise it. While proxy providers may claim ignorance or close these security gaps when notified, enforcement and verification are challenging given intentionally murky ownership structures, reseller agreements, and the diversity of applications.

The IPIDEA Proxy Network

Our analysis of residential proxy networks found that many well-known residential proxy brands are not only related but are controlled by the actors behind IPIDEA. This includes the following ostensibly independent proxy and VPN brands: 

  • 360 Proxy (360proxy\.com)

  • 922 Proxy (922proxy\.com)

  • ABC Proxy (abcproxy\.com)

  • Cherry Proxy (cherryproxy\.com)

  • Door VPN (doorvpn\.com)

  • Galleon VPN (galleonvpn\.com)

  • IP 2 World (ip2world\.com)

  • Ipidea (ipidea\.io)

  • Luna Proxy (lunaproxy\.com)

  • PIA S5 Proxy (piaproxy\.com)

  • PY Proxy (pyproxy\.com)

  • Radish VPN (radishvpn\.com)

  • Tab Proxy (tabproxy\.com)

The same actors that control these brands also control several domains related to Software Development Kits (SDKs) for residential proxies. These SDKs are not meant to be installed or executed as standalone applications; rather, they are meant to be embedded into existing applications. The operators market these kits as ways for developers to monetize their applications, and offer Android, Windows, iOS, and WebOS compatibility. Once developers incorporate these SDKs into their app, they are paid by IPIDEA, usually on a per-download basis.

Figure 1: Advertising from PacketSDK, part of the IPIDEA proxy network

Once the SDK is embedded into an application, it will turn the device it is running on into an exit node for the proxy network in addition to providing whatever the primary functionality of the application was. These SDKs are the key to any residential proxy network—the software they get embedded into provides the network operators with the millions of devices they need to maintain a healthy residential proxy network. 

While many residential proxy providers state that they source their IP addresses ethically, our analysis shows these claims are often incorrect or overstated. Many of the malicious applications we analyzed in our investigation did not disclose that they enrolled devices into the IPIDEA proxy network. Researchers have previously found uncertified and off-brand Android Open Source Project devices, such as television set top boxes, with hidden residential proxy payloads.

The following SDKs are controlled by the same actors that control the IPIDEA proxy network:

  • Castar SDK (castarsdk\.com)

  • Earn SDK (earnsdk\.io)

  • Hex SDK (hexsdk\.com)

  • Packet SDK (packetsdk\.com)

Command-and-Control Infrastructure

We performed static and dynamic analysis on software that had SDK code embedded in it as well as standalone SDK files to identify the command-and-control (C2) infrastructure used to manage proxy exit nodes and route traffic through them. From the analysis we observed that EarnSDK, PacketSDK, CastarSDK, and HexSDK have significant overlaps in their C2 infrastructure as well as code structure.

Overview

The infrastructure model is a two-tier system: 

  1. Tier One: Upon startup, the device will choose from a set of domains to connect to. The device sends some diagnostic information to the Tier One server and receives back a data payload that includes a set of Tier Two nodes to connect to.

  2. Tier Two: The application will communicate directly with an IP address to periodically poll for proxy tasks. When it receives a proxy task it will establish a new dedicated connection to the Tier Two IP address and begin proxying the payloads it receives.

Figure 2: Two-tier C2 system

Tier One C2 Traffic

The device diagnostic information can be sent as HTTP GET query string parameters or in the HTTP POST body, depending on the domain and SDK. The payload sent includes a key parameter, which may be a customer identifier used to determine who gets paid for the device enrollment.

os=android&v=1.0.8&sn=993AE4FE78B879239BDC14DFBC0963CD&tag=OnePlus8Pro%23*%2311%23*%2330%23*%23QKR1.191246.002%23*%23OnePlus&key=cskfg9TAn9Jent&n=tlaunch

Figure 3: Sample device information sent to the Tier One server
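
For defenders building detections, the payload in Figure 3 is an ordinary URL-encoded query string and parses with standard tooling. A quick illustrative sketch using Python’s urllib:

from urllib.parse import parse_qs

sample = ("os=android&v=1.0.8&sn=993AE4FE78B879239BDC14DFBC0963CD"
          "&tag=OnePlus8Pro%23*%2311%23*%2330%23*%23QKR1.191246.002%23*%23OnePlus"
          "&key=cskfg9TAn9Jent&n=tlaunch")

# parse_qs percent-decodes values, so the '#*#'-delimited device tag comes out readable
fields = {k: v[0] for k, v in parse_qs(sample).items()}
print(fields["key"])   # cskfg9TAn9Jent (possibly a customer identifier)
print(fields["tag"])   # OnePlus8Pro#*#11#*#30#*#QKR1.191246.002#*#OnePlus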

The response from the Tier One server includes some timing information as well as the IP addresses of the Tier Two servers that this device should periodically poll for tasking.

{"code":200,"data":{"schedule":24,"thread":150,"heartbeat":20,"ip":[redacted],"info":"US","node":[{"net_type":"t","connect":"49.51.68.143:1000","proxy":"49.51.68.143:2000"},{"net_type":"t","connect":"45.78.214.188:800","proxy":"45.78.214.188:799"}]}

Figure 4: Sample response received from the Tier One Server
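
The response is plain JSON, so extracting the Tier Two connect/proxy pairs (for blocklists or telemetry matching) is straightforward. An illustrative sketch against the structure in Figure 4, with the redacted ip field omitted:

import json

response = json.loads("""
{"code": 200, "data": {"schedule": 24, "thread": 150, "heartbeat": 20,
 "info": "US",
 "node": [{"net_type": "t", "connect": "49.51.68.143:1000", "proxy": "49.51.68.143:2000"},
          {"net_type": "t", "connect": "45.78.214.188:800", "proxy": "45.78.214.188:799"}]}}
""")

for node in response["data"]["node"]:
    ip, _, port = node["connect"].partition(":")
    print(f"Tier Two node {ip}: connect={node['connect']} proxy={node['proxy']}")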

Tier Two C2 Traffic

The Tier Two servers are sent as connect and proxy pairs. In every sample we analyzed, the pairs were IP addresses rather than domains, with each pair sharing the same IP address on different ports. The connect port is used to periodically poll for new proxy tasking. This is performed by sending TCP packets with encoded JSON payloads.

{"name": "0c855f87a7574b28df383eca5084fcdc", "o": "eDwSokuyOuMHcF10", "os": "windows"}

Figure 5: Sample encoded JSON sent to Tier Two connect port

When the Tier Two server has traffic to route to the device, it will respond back with the FQDN to proxy traffic to as well as a connection ID.

www.google.com:443&c8eb024c053f82831f2738bd48afc256

Figure 6: Sample proxy tasking from the Tier Two server

The device will then establish a connection to the proxy port of the same Tier Two server and send the connection ID, indicating that it is ready to receive data payloads.

8a9bd7e7a806b2cc606b7a1d8f495662|ok

Figure 7: Sample data sent from device to the Tier Two proxy port

The Tier Two server will then immediately send data payloads to be proxied. The device will extract the TCP data payload, establish a socket connection to the specified FQDN and send the payload, unmodified, to the destination. 
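
Based solely on the samples in Figures 6 and 7, the tasking format is a destination and a connection ID joined by '&', acknowledged with the ID plus '|ok' on the proxy port. A minimal sketch of how an exit-node client would interpret a task:

def parse_task(task: str) -> tuple[str, int, str]:
    """Split 'host:port&connection_id' into its parts."""
    destination, _, conn_id = task.partition("&")
    host, _, port = destination.rpartition(":")
    return host, int(port), conn_id

task = "www.google.com:443&c8eb024c053f82831f2738bd48afc256"
host, port, conn_id = parse_task(task)
print(host, port, conn_id)

ack = f"{conn_id}|ok"   # sent on the proxy port to signal readiness (format per Figure 7)
print(ack)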

Overlaps in Infrastructure

The SDKs each have their own set of Tier One domains. This comes primarily from analysis of standalone SDK files. 

PacketSDK

  • http://{random}.api-seed.packetsdk\.xyz

  • http://{random}.api-seed.packetsdk\.net

  • http://{random}.api-seed.packetsdk\.io

CastarSDK 

  • dispatch1.hexsdk\.com

  • cfe47df26c8eaf0a7c136b50c703e173\.com

  • 8b21a945159f23b740c836eb50953818\.com

  • 31d58c226fc5a0aa976e13ca9ecebcc8\.com

HexSDK

Download requests to files from the Hex SDK website redirect to castarsdk\.com. The SDKs are exactly the same.

EarnSDK

The EarnSDK JAR package for Android has strong overlaps with the other SDK brands analyzed. Earlier published samples contained the Tier One C2 domains:

  • holadns\.com

  • martianinc\.co

  • okamiboss\.com

Of note, these domains were observed as part of the BadBox2.0 botnet and were sinkholed in our earlier litigation. Pivoting off these domains and other signatures, we identified some additional domains used as Tier One C2 domains: 

  • v46wd6uramzkmeeo\.in
  • 6b86b273ff34fce1\.online

  • 0aa0cf0637d66c0d\.com

  • aa86a52a98162b7d\.com

  • 442fe7151fb1e9b5\.com

  • BdRV7WlBszfOTkqF\.uk

Tier Two Nodes

Our analysis of various malware samples and the SDKs found a single shared pool of Tier Two servers. As of this writing there were approximately 7,400 Tier Two servers. The number of Tier Two nodes changes on a daily basis, consistent with a demand-based scaling system. They are hosted in locations around the globe, including the US. This indicates that despite different brand names and Tier One domains, the different SDKs in fact manage devices and proxy traffic through the same infrastructure.

Shared Sourcing of Exit Nodes

Trojanized Software Distribution

The IPIDEA actors also control domains that offer free Virtual Private Network services. While the applications do seem to provide VPN functionality, they also join the device to the IPIDEA proxy network as an exit node by incorporating Hex or Packet SDK. This is done without clear disclosures to the end user, nor is it the primary function of the application.

  • Galleon VPN (galleonvpn\.com)

  • Radish VPN (radishvpn\.com)

  • Aman VPN (defunct)

Trojanized Windows Binaries

We identified a total of 3,075 unique Windows PE file hashes where dynamic analysis recorded a DNS request to at least one Tier One domain. A number of these hashes were for the monetized proxy exit node software, PacketShare. Our analysis also uncovered applications masquerading as OneDriveSync and Windows Update. These trojanized Windows applications were not distributed directly by the IPIDEA actors.

Android Application Analysis

We identified over 600 applications across multiple download sources with code connecting to Tier One C2 domains. These apps were largely benign in function (e.g., utilities, games, and content) but utilized monetization SDKs that enabled IPIDEA proxy behavior.

Our Actions

This week we took a number of steps designed to comprehensively dismantle as much of IPIDEA’s infrastructure as possible.

Protecting Devices

We took legal action to take down the C2 domains used by bad actors to control devices and proxy traffic. This protects consumer devices and home networks by disrupting the infrastructure at the source. 

To safeguard the Android ecosystem, we enforced our platform policies against trojanizing software, ensuring Google Play Protect on certified Android devices with Google Play services automatically warns users and removes applications known to incorporate IPIDEA software development kits (SDKs), and blocks any future install attempts.

Limiting IPIDEA’s Distribution

We took legal action to take down the domains used to market IPIDEA’s products, including proxy software and software development kits, across their various brands.

Coordinating with Industry Partners

We’ve shared our findings with industry partners to enable them to take action as well. We’ve worked closely with other firms, including Spur and Lumen’s Black Lotus Labs, to understand the scope and extent of residential proxy networks and the bad behavior they often enable. We partnered with Cloudflare to disrupt IPIDEA’s domain resolution, impacting their ability to command and control infected devices and market their products.

Call to Action

While we believe our actions have seriously impacted one of the largest residential proxy providers, this industry appears to be rapidly expanding, and there are significant overlaps across providers. As our investigation shows, the residential proxy market has become a "gray market" that thrives on deception—hijacking consumer bandwidth to provide cover for global espionage and cybercrime. More must be done to address the risks of these technologies. 

Empowering and Protecting the Consumer

Residential proxies are an understudied area of risk for consumers, and more can be done to raise awareness. Consumers should be extremely wary of applications that offer payment in exchange for "unused bandwidth" or "sharing your internet." These applications are primary ways for illicit proxy networks to grow, and could open security vulnerabilities on the device’s home network. We urge users to stick to official app stores, review permissions for third-party VPNs and proxies, and ensure built-in security protections like Google Play Protect are active.

Consumers should be careful when purchasing connected devices, such as set top boxes, to make sure they are from reputable manufacturers. For example, to help you confirm whether a device is built with the official Android TV OS and is Play Protect certified, our Android TV website provides the most up-to-date list of partners. You can also take these steps to check if your Android device is Play Protect certified.

Proxy Accountability and Policy Reform

Residential proxy providers have been able to flourish under the guise of legitimate businesses. While some providers may indeed behave ethically and only enroll devices with the clear consent of consumers, any claims of "ethical sourcing" must be backed by transparent, auditable proof of user consent. Similarly, app developers have a responsibility to vet the monetization SDKs they integrate.

Industry Collaboration

We encourage mobile platforms, ISPs, and other tech platforms to continue sharing intelligence and implementing best practices to identify illicit proxy networks and limit their harms.

Indicators of Compromise (IOCs)

To assist the wider community in hunting and identifying activity outlined in this blog post, we have included a comprehensive list of indicators of compromise (IOCs) in a GTI Collection for registered users.

Network Indicators

00857cca77b615c369f48ead5f8eb7f3.com

0aa0cf0637d66c0d.com

31d58c226fc5a0aa976e13ca9ecebcc8.com

3k7m1n9p4q2r6s8t0v5w2x4y6z8u9.com

442fe7151fb1e9b5.com

6b86b273ff34fce1.online

7x2k9n4p1q0r5s8t3v6w0y2z4u7b9.com

8b21a945159f23b740c836eb50953818.com

8f00b204e9800998.com

a7b37115ce3cc2eb.com

a8d3b9e1f5c7024d6e0b7a2c9f1d83e5.com

aa86a52a98162b7d.com

af4760df2c08896a9638e26e7dd20aae.com

asdk2.com

b5e9a2d7f4c8e3b1a0d6f2e9c5b8a7d.com

bdrv7wlbszfotkqf.uk

cfe47df26c8eaf0a7c136b50c703e173.com

hexsdk.com

e4f8c1b9a2d7e3f6c0b5a8d9e2f1c4d.com

packetsdk.io

packetsdk.net

packetsdk.xyz

v46wd6uramzkmeeo.in

willmam.com
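
When hunting with these indicators, note that domains elsewhere in this post are defanged with '\.'. A small illustrative helper can normalize common defanged forms and match exact domains or their subdomains in DNS logs; the log format below is hypothetical:

iocs_raw = ["packetsdk\\.io", "hexsdk.com", "6b86b273ff34fce1.online", "v46wd6uramzkmeeo.in"]

def refang(domain: str) -> str:
    """Normalize defanged notation like 'example\\.com' or 'example[.]com'."""
    return domain.replace("\\.", ".").replace("[.]", ".").lower()

iocs = {refang(d) for d in iocs_raw}

dns_log = ["client1 queried a1b2c3.api-seed.packetsdk.io",
           "client2 queried www.example.com"]

for line in dns_log:
    qname = line.rsplit(" ", 1)[-1].lower()
    # match exact domains and subdomains of any indicator
    if any(qname == ioc or qname.endswith("." + ioc) for ioc in iocs):
        print("HIT:", line)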

File Indicators

Cert

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=69878507/C=HK/L=Hong Kong Island/O=HONGKONG LINGYUN MDT INFOTECH LIMITED/CN=HONGKONG LINGYUN MDT INFOTECH LIMITED

SIGNER_IDENTITY=/businessCategory=Private Organization/1.3.6.1.4.1.311.60.2.1.3=HK/serialNumber=2746134/C=HK/L=Wan Chai/O=HONGKONG LINGYUN MDT INFOTECH LIMITED/CN=HONGKONG LINGYUN MDT INFOTECH LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=74092936/C=HK/L=HONG KONG ISLAND/O=FIRENET LIMITED/CN=FIRENET LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=3157599/C=HK/L=Wan Chai/O=FIRENET LIMITED/CN=FIRENET LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=74097562/C=HK/L=Hong Kong Island/O=PRINCE LEGEND LIMITED/CN=PRINCE LEGEND LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=73874246/C=HK/L=Kowloon/O=MARS BROTHERS LIMITED/CN=MARS BROTHERS LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=3135905/C=HK/L=Cheung Sha Wan/O=MARS BROTHERS LIMITED/CN=MARS BROTHERS LIMITED

SIGNER_IDENTITY=/1.3.6.1.4.1.311.60.2.1.3=HK/businessCategory=Private Organization/serialNumber=3222394/C=HK/L=WAN CHAI/O=DATALABS LIMITED/CN=DATALABS LIMITED

Example Hashes

  • DLL – Packet SDK package found inside other applications
    aef34f14456358db91840c416e55acc7d10185ff2beb362ea24697d7cdad321f

  • APK – Application with Packet SDK Code
    b0726bdd53083968870d0b147b72dad422d6d04f27cd52a7891d038ee83aef5b

  • APK – Application with Hex SDK Code
    2d1891b6d0c158ad7280f0f30f3c9d913960a793c6abcda249f9c76e13014e45

  • EXE – Radish VPN Client
    59cbdecfc01eba859d12fbeb48f96fe3fe841ac1aafa6bd38eff92f0dcfd4554

  • EXE – ABC S5 Proxy Client
    ba9b1f4cc2c7f4aeda7a1280bbc901671f4ec3edaa17f1db676e17651e9bff5f

  • EXE – Luna Proxy Client
    01ac6012d4316b68bb3165ee451f2fcc494e4e37011a73b8cf2680de3364fcf4
