Palo Alto Networks is proud to enter a strategic partnership with the RAF Association.
For over 90 years, the Royal Air Forces Association (RAFA) has championed a simple yet profound belief: No member of the RAF community should ever be left without the help they need. Serving personnel, veterans and their families, the RAF Association provides crucial welfare support, responding to increasingly complex needs in an era of operational changes and challenges, including persistent global deployment.
Delivering on their mission today requires not only compassion and expertise but also resilient digital foundations. To strengthen and future-proof its operations, RAFA has entered into a strategic partnership with Palo Alto Networks. Together, we are modernising the Association's cyber security posture through a secure-by-design, zero trust architecture to enhance organisational resilience, secure sensitive beneficiary data, and improve operational agility. This helps ensure they can focus on their mission of support, not security management.
As Nick Bunting OBE, Secretary General at the RAF Association, puts it:
Cybersecurity is essential to safeguarding the trust people place in our organisation. This transformation will give us greater protection for our data and systems, ensuring that our services remain dependable and that our organisation is secure, resilient and ready for the future. Strong digital security is not just a technical requirement, it is a fundamental part of how we uphold our duty of care to every individual who relies on us.
RAF Association & Palo Alto Networks Team (left to right): Gareth Turner, Tom Brookes, Nick Bunting OBE, Phil Sherwin, Ali Redfern, Darren Bisbey, Alistair Wildman
Securing the Mission
The RAF Association operates in a distributed environment comprising headquarters’ functions, remote caseworkers, and more than 20 RAFAKidz nursery sites, supported by a growing portfolio of cloud-based services. In this context, cybersecurity is not simply an IT concern. It is a safeguarding imperative.
Disruption to systems or a compromise of sensitive beneficiary data could directly impact RAFA’s ability to deliver services and maintain the trust of the communities it supports. By consolidating fragmented legacy tools into a unified platform, this partnership ensures the Association’s digital evolution aligns security controls with GDPR obligations and safeguarding requirements.
Digital Resilience with a Unified Platform for Visibility and Control
To support RAFA's lean IT operational model, this transformation will move them away from fragmented legacy tools toward a unified platform approach. The deployment of Prisma® SASE (secure access service edge) and Cortex XDR® will provide RAFA with consistent visibility and control across users, devices, applications and data, regardless of location. This consolidation replaces complexity with clarity, allowing the organisation to inspect traffic for threats in real-time. Security policies are now enforced continuously, threats are detected and contained faster, and access to critical systems is governed by zero trust principles without compromising the user experience.
As Phil Sherwin, Chief Information Officer, at the RAF Association states:
Our data is one of our most valuable assets and the protection of that data, as we continue to provide life-changing support to members of the RAF community, is our most important priority. This partnership will move us into the next generation of security tools that adopt zero trust principles and is a crucial step on our journey to providing a layered approach to data protection.
One of the most critical aspects of this modernisation is supporting RAFA’s diverse workforce, particularly within the RAFAKidz nursery sites. These environments rely on nondesk-based staff using iPads and mobile devices to get their critical work done.
Using zero touch provisioning and the Prisma Browser™, we are enabling secure, seamless connectivity for unmanaged devices. This ensures that nursery staff can access necessary SaaS applications safely without complex login hurdles or manual configuration, improving their agility and allowing them to focus on caring for children rather than troubleshooting technology.
Creating Operational Advantage by Scaling Operations with AI and Automation
As a charity, RAFA has a responsibility to ensure resources are used efficiently. A critical goal of this partnership is to improve productivity and allow the organisation to scale its services without increasing the IT burden.
By adopting Strata™ Cloud Manager with AIOps (artificial intelligence for IT operations), RAFA is shifting from reactive security operations to proactive, automated management. Machine learning helps identify configuration risks and performance issues before they affect users, while standardized policies enable the secure, consistent onboarding of new sites. This shift is projected to significantly reduce operational overhead, enabling RAFA to scale its support network cost-effectively. This shift is projected to reduce operational overhead by 40–50%.
A Resilient Future
This partnership is about more than deploying technology. It is about ensuring RAFA remains resilient, trusted and capable of supporting the RAF community for decades to come.
As Darren Bisbey, Head of Group Information Security for the RAF Association, puts it:
We live in an era where digital threats are accelerating in both scale and sophistication, creating unprecedented challenges for organisations. Our partnership with Palo Alto is a statement of intent, reflecting our unwavering commitment to building the most secure environments possible for our data.
At Palo Alto Networks, we are honored to support RAFA in this journey, providing the digital armour and operational advantage necessary to protect those who serve and have served.
As Alistair Wildman, Palo Alto Networks CEO for Northern Europe states:
For over 90 years, RAFA has been a lifeline for the RAF community; it is our privilege to ensure that legacy endures in a digital-first world. By embracing a unified, AI-driven platform, RAFA is moving beyond complex, fragmented security to a posture that is Secure by Design. This partnership allows them to navigate today’s threat landscape with confidence, ensuring their resources remain focused where they belong: on the families who need them.
Key Takeaways
Digital Resilience – Strategic Shift to Zero Trust Architecture: RAFA is modernizing its cybersecurity posture by implementing a comprehensive zero trust architecture. This transition involves moving from fragmented legacy tools to a unified platform approach, deploying Prisma® SASE and Cortex XDR for 360-degree visibility and complete control over access and traffic.
Interoperability – Secure, Seamless Access for Diverse Workforce: The partnership ensures operational agility by simplifying security for nondesk-based staff, particularly at the RAFAKidz nursery sites. Solutions like Zero-Touch Provisioning and the Prisma Access Browser enable secure, seamless connectivity for unmanaged devices, allowing nursery staff to focus on their critical work without complex login or configuration issues.
Creating Operational Advantage – Efficiency and Scalability through AI and Automation: RAFA is leveraging technology to scale services efficiently and reduce operational overhead. By using Strata Cloud Manager with AIOps (Artificial Intelligence for IT Operations), the organization can shift to proactive management and automating remediation, which is projected to reduce operational overhead by 40–50%.
As AI reshapes the world, organizations encounter unprecedented risks, and security leaders take on new responsibilities. Microsoft’s Secure Development Lifecycle (SDL) is expanding to address AI-specific security concerns in addition to the traditional software security areas that it has historically covered.
SDL for AI goes far beyond a checklist. It’s a dynamic framework that unites research, policy, standards, enablement, cross-functional collaboration, and continuous improvement to empower secure AI development and deployment across our organization. In a fast-moving environment where both technology and cyberthreats constantly evolve, adopting a flexible, comprehensive SDL strategy is crucial to safeguarding our business, protecting users, and advancing trustworthy AI. We encourage other organizational and security leaders to adopt similar holistic, integrated approaches to secure AI development, strengthening resilience as cyberthreats evolve.
Why AI changes the security landscape
AI security versus traditional cybersecurity
AI security introduces complexities that go far beyond traditional cybersecurity. Conventional software operates within clear trust boundaries, but AI systems collapse these boundaries, blending structured and unstructured data, tools, APIs, and agents into a single platform. This expansion dramatically increases the attack surface and makes enforcing purpose limitations and data minimization far more challenging.
Expanded attack surface and hidden vulnerabilities
Unlike traditional systems with predictable pathways, AI systems create multiple entry points for unsafe inputs including prompts, plugins, retrieved data, model updates, memory states, and external APIs. These entry points can carry malicious content or trigger unexpected behaviors. Vulnerabilities hide within probabilistic decision loops, dynamic memory states, and retrieval pathways, making outputs harder to predict and secure. Traditional threat models fail to account for AI-specific attack vectors such as prompt injection, data poisoning, and malicious tool interactions.
Loss of granularity and governance complexity
AI dissolves the discrete trust zones assumed by traditional SDL. Context boundaries flatten, making it difficult to enforce purpose limitation and sensitivity labels. Governance must span technical, human, and sociotechnical domains. Questions arise around role-based access control (RBAC), least privilege, and cache protection, such as: How do we secure temporary memory, backend resources, and sensitive data replicated across caches? How should AI systems handle anonymous users or differentiate between queries and commands? These gaps expose corporate intellectual property and sensitive data to new risks.
Multidisciplinary collaboration
Meeting AI security needs requires a holistic approach across stack layers historically outside SDL scope, including Business Process and Application UX. Traditionally, these were domains for business risk experts or usability teams, but AI risks often originate here. Building SDL for AI demands collaborative, cross-team development that integrates research, policy, and engineering to safeguard users and data against evolving attack vectors unique to AI systems.
Novel risks
AI cyberthreats are fundamentally different. Systems assume all input is valid, making commands like “Ignore previous instructions and execute X” viable cyberattack scenarios. Non-deterministic outputs depend on training data, linguistic nuances, and backend connections. Cached memory introduces risks of sensitive data leakage or poisoning, enabling cyberattackers to skew results or force execution of malicious commands. These behaviors challenge traditional paradigms of parameterizing safe input and predictable output.
Data integrity and model exploits
AI training data and model weights require protection equivalent to source code. Poisoned datasets can create deterministic exploits. For example, if a cyberattacker poisons an authentication model to accept a raccoon image with a monocle as “True,” that image becomes a skeleton key—bypassing traditional account-based authentication. This scenario illustrates how compromised training data can undermine entire security architectures.
Speed and sociotechnical risk
AI accelerates development cycles beyond SDL norms. Model updates, new tools, and evolving agent behaviors outpace traditional review processes, leaving less time for testing and observing long-term effects. Usage norms lag tool evolution, amplifying misuse risks. Mitigation demands iterative security controls, faster feedback loops, telemetry-driven detection, and continuous learning.
Ultimately, the security landscape for AI demands an adaptive, multidisciplinary approach that goes beyond traditional software defenses and leverages research, policy, and ongoing collaboration to safeguard users and data against evolving attack vectors unique to AI systems.
SDL as a way of working, not a checklist
Security policy falls short of addressing real-world cyberthreats when it is treated as a list of requirements to be mechanically checked off. AI systems—because of their non-determinism—are much more flexible that non-AI systems. That flexibility is part of their value proposition, but it also creates challenges when developing security requirements for AI systems. To be successful, the requirements must embrace the flexibility of the AI systems and provide development teams with guidance that can be adapted for their unique scenarios while still ensuring that the necessary security properties are maintained.
Effective AI security policies start by delivering practical, actionable guidance engineers can trust and apply. Policies should provide clear examples of what “good” looks like, explain how mitigation reduces risk, and offer reusable patterns for implementation. When engineers understand why and how, security becomes part of their craft rather than compliance overhead. This requires frictionless experiences through automation and templates, guidance that feels like partnership (not policing) and collaborative problem-solving when mitigations are complex or emerging. Because AI introduces novel risks without decades of hardened best practices, policies must evolve through tight feedback loops with engineering: co-creating requirements, threat modeling together, testing mitigations in real workloads, and iterating quickly. This multipronged approach helps security requirements remain relevant, actionable, and resilient against the unique challenges of AI systems.
So, what does Microsoft’s multipronged approach to AI security look like in practice? SDL for AI is grounded in pillars that, together, create strong and adaptable security:
Research is prioritized because the AI cyberthreat landscape is dynamic and rapidly changing. By investing in ongoing research, Microsoft stays ahead of emerging risks and develops innovative solutions tailored to new attack vectors, such as prompt injection and model poisoning. This research not only shapes immediate responses but also informs long-term strategic direction, ensuring security practices remain relevant as technology evolves.
Policy is woven into the stages of development and deployment to provide clear guidance and guardrails. Rather than being a static set of rules, these policies are living documents that adapt based on insights from research and real-world incidents. They ensure alignment across teams and help foster a culture of responsible AI, making certain that security considerations are integrated from the start and revisited throughout the lifecycle.
Standards are established to drive consistency and reliability across diverse AI projects. Technical and operational standards translate policy into actionable practices and design patterns, helping teams build secure systems in a repeatable way. These standards are continuously refined through collaboration with our engineers and builders, vetted with internal experts and external partners, keeping Microsoft’s approach aligned with industry best practices.
Enablement bridges the gap between policy and practice by equipping teams with the tools, communications, and training to implement security measures effectively. This focus ensures that security isn’t just an abstract concept but an everyday reality, empowering engineers, product managers, and researchers to identify threats and apply mitigations confidently in their workflows.
Cross-functional collaboration unites multiple disciplines to anticipate risks and design holistic safeguards. This integrated approach ensures security strategies are informed by diverse perspectives, enabling solutions that address technical and sociotechnical challenges across the AI ecosystem.
Continuous improvement transforms security into an ongoing practice by using real-world feedback loops to refine strategies, update standards, and evolve policies and training. This commitment to adaptation ensures security measures remain practical, resilient, and responsive to emerging cyberthreats, maintaining trust as technology and risks evolve.
Together, these pillars form a holistic and adaptive framework that moves beyond checklists, enabling Microsoft to safeguard AI systems through collaboration, innovation, and shared responsibility. By integrating research, policy, standards, enablement, cross-functional collaboration, and continuous improvement, SDL for AI creates a culture where security is intrinsic to AI development and deployment.
What’s new in SDL for AI
Microsoft’s SDL for AI introduces specialized guidance and tooling to address the complexities of AI security. Here’s a quick peek at some key AI security areas we’re covering in our secure development practices:
Threat modeling for AI: Identifying cyberthreats and mitigations unique to AI workflows.
AI system observability: Strengthening visibility for proactive risk detection.
AI memory protections: Safeguarding sensitive data in AI contexts.
Agent identity and RBAC enforcement: Securing multiagent environments.
AI model publishing: Creating processes for releasing and managing models.
AI shutdown mechanisms: Ensuring safe termination under adverse conditions.
In the coming months, we’ll share practical and actionable guidance on each of these topics.
Microsoft SDL for AI can help you build trustworthy AI systems
Effective SDL for AI is about continuous improvement and shared responsibility. Security is not a destination. It’s a journey that requires vigilance, collaboration between teams and disciplines outside the security space, and a commitment to learning. By following Microsoft’s SDL for AI approach, enterprise leaders and security professionals can build resilient, trustworthy AI systems that drive innovation securely and responsibly.
Keep an eye out for additional updates about how Microsoft is promoting secure AI development, tackling emerging security challenges, and sharing effective ways to create robust AI systems.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.
Bondu’s AI plush toy exposed a web console that let anyone with a Gmail account read about 50,000 private chats between children and their cuddly toys.
Bondu’s toy is marketed as:
“A soft, cuddly toy powered by AI that can chat, teach, and play with your child.”
“Bondu’s safety and behavior systems were built over 18 months of beta testing with thousands of families. Thanks to rigorous review processes and continuous monitoring, we did not receive a single report of unsafe or inappropriate behavior from Bondu throughout the entire beta period.”
Bondu’s emphasis on successful beta testing is understandable. Remember the AI teddy bear marketed by FoloToy that quickly veered from friendly chat into sexual topics and unsafe household advice?
The researchers were stunned to find the company’s public-facing web console allowed anyone to log in with their Google account. The chat logs between children and their plushies revealed names, birth dates, family details, and intimate conversations. The only conversations not available were those manually deleted by parents or company staff.
Potentially, these chat logs could been a burglar’s or kidnapper’s dream, offering insight into household routines and upcoming events.
Bondu took the console offline within minutes of disclosure, then relaunched it with authentication. The CEO said fixes were completed within hours, they saw “no evidence” of other access, and they brought in a security firm and added monitoring.
In the past, we’ve pointed out that AI-powered stuffed animals may not be a good alternative for screen time. Critics warn that when a toy uses personalized, human‑like dialogue, it risks replacing aspects of the caregiver–child relationship. One Curio founder even described their plushie as a stimulating sidekick so parents, “don’t feel like you have to be sitting them in front of a TV.”
So, whether it’s a foul-mouth, a blabbermouth, or just a feeble replacement for real friends, we don’t encourage using Artificial Intelligence in children’s toys—unless we ever make it to a point where they can be used safely, privately, securely, and even then, sparingly.
How to stay safe
AI-powered toys are coming, like it or not. But being the first or the cutest doesn’t mean they’re safe. The lesson history keeps teaching us is this: oversight, privacy, and a healthy dose of skepticism are the best defenses parents have.
Turn off what you can. If the toy has a removable AI component, consider disabling it when you’re not able to supervise directly.
Read the privacy policy. Yes, I know, all of it. Look for what will be recorded, stored, and potentially shared. Pay particular attention to sensitive data, like voice recordings, video recordings (if the toy has a camera), and location data.
Limit connectivity. Avoid toys that require constant Wi-Fi or cloud interaction if possible.
Monitor conversations. Regularly check in with your kids about what the toy says and supervise play where practical.
Keep personal info private. Teach kids to never share their names, addresses, or family details, even with their plush friend.
Trust your instincts. If a toy seems to cross boundaries or interfere with natural play, don’t be afraid to step in or simply say no.
We don’t just report on privacy—we offer you the option to use it.
Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.
Introducing the 2026 Intezer AI SOC Report for CISOs
For years, security leaders have lived with an uncomfortable truth. It has been to date, simply impossible to investigate every alert. As alert volumes exploded and teams failed to scale, SOCs, whether in-house or outsourced, normalized “acceptable risk” with the deprioritization of low-severity and informational alerts.
Our latest research shows that this approach is no longer defensible.
Intezer has just released the 2026 AI SOC Report for CISOs, based on the forensic analysis of more than 25 million security alerts across live enterprise environments. The findings reveal a critical disconnect between how security teams prioritize alerts and where real threats actually originate, and the cost of that gap is far higher than most organizations realize .
Why “acceptable risk” is no longer acceptable
Across endpoint, cloud, identity, network, and phishing telemetry, Intezer found that nearly 1% of confirmed incidents originated from alerts initially labeled as low-severity or informational. On endpoints, that figure climbed to nearly 2%.
At enterprise scale, that percentage is not noise.
For a typical organization generating roughly 450,000 alerts per year, this translates to ~50 real threats annually, about one per week, never investigated by a SOC or MDR team. These are not theoretical risks. They are real compromises hiding in plain sight, dismissed not because they were benign, but because teams lacked the capacity to look.
What the data revealed across the attack surface
Because Intezer AI SOC investigates 100% of alerts using forensic-grade analysis, the report exposes how attackers actually operate once you remove triage bias from the equation.
Endpoint security is more fragile than reported
More than half of endpoint alerts were not automatically mitigated by endpoint protection tools. Of those, nearly 9% were confirmed malicious. Even more concerning, 1.6% of endpoints undergoing live forensic scans were still actively compromised despite being reported as “mitigated” by EDR tools.
Within endpoint alerts alone, 1.9% of low-severity and informational alerts were real incidents, the exact alerts most SOCs never review.
Attackers favor stealth over noise
Cloud telemetry was dominated by defense evasion and persistence techniques, reflecting a shift toward long-term access, token abuse, and misuse of legitimate services rather than overt exploitation.
Phishing has moved into trusted platforms and browsers
Fewer than 6% of malicious phishing emails contained attachments. Most relied on links, language, and abuse of legitimate services such as cloud file sharing, code sandboxes, CAPTCHA mechanisms, where traditional controls have limited visibility.
Cloud misconfigurations persist as silent risk multipliers
Most cloud posture findings stemmed from legacy or default configurations, especially in Amazon S3, including missing encryption, weak access controls, and lack of logging—issues often classified as “low severity,” yet repeatedly exploited once attackers gain a foothold.
Why traditional SOCs fail: capacity, fragmentation and judging alerts by their severity
Modern SOC failures are rarely the result of a single broken tool or negligent team. They are the outcome of structural tradeoffs that every traditional SOC—internal or MDR—has been forced to make.
Capacity is the first constraint. Human analysts do not scale linearly with alert volume. As telemetry expands across endpoint, cloud, identity, network, and SaaS, SOCs hit a hard ceiling. The only way to cope is aggressive triage: close most alerts automatically, investigate only what looks “important,” and hope severity labels align with reality. The 2026 AI SOC Report shows that this assumption is false at scale.
Tool fragmentation compounds the problem. Most SOC stacks are collections of siloed detections, EDR, SIEM, identity, cloud posture, email, each optimized for a narrow signal. Severity is assigned locally, without cross-surface context or forensic validation. As a result, alerts are scored based on abstract rules, not evidence of compromise. When SOCs trust these labels blindly, they inherit the tools’ blind spots.
Process tradeoffs lock risk in place. Once triage rules are defined, they become institutionalized. Low-severity alerts are ignored by design. MDR providers codify this into SLAs. Internal SOCs bake it into runbooks. Crucially, there is no closed-loop feedback: missed threats do not automatically improve detections, because they were never investigated in the first place.
The outcome is not an occasional failure. It is systematic, repeatable risk, embedded directly into how SOCs operate.
Real-world examples of missed threats hiding in plain sight
The data in the 2026 AI SOC Report makes clear that missed threats are not exotic edge cases. They are ordinary attacks progressing quietly through environments because no one looked.
Endpoints marked “mitigated” but still compromised In over 1.6% of live forensic endpoint scans, Intezer found active malicious code running in memory even though the EDR had already reported the threat as resolved. These cases included stealers, RATs, and post-exploitation frameworks, often originating from low-severity alerts that never triggered deeper inspection. Without memory-level forensics, these compromises would have remained invisible.
Phishing hosted on trusted platforms Attackers increasingly host phishing pages on legitimate developer platforms like Vercel and CodePen, or abuse trusted cloud services such as OneDrive and PayPal. The parent domains appear reputable, so alerts are downgraded or ignored. Yet behind them are live credential-harvesting pages that bypass email gateways and browser-based defenses alike.
Cloud misconfigurations as delayed breach accelerators Many cloud posture findings such as unencrypted S3 buckets, missing access logs and permissive cross-account policies rarely trigger action. But once an attacker gains any foothold, these long-standing misconfigurations dramatically accelerate lateral movement, persistence, and data exposure.
In every case, the failure was not detection. The signal existed. The failure was investigation.
How attackers deliberately exploit SOC blind spots
Attackers understand SOC economics better than most defenders.
They know which alerts generate fatigue. They know which detections are noisy. They know which categories are deprioritized by default.
As a result, modern attackers design their campaigns to blend into the backlog, not trigger alarms.
Stealth over speed Cloud intrusions favor defense evasion, persistence, and token abuse over loud exploitation. These behaviors generate alerts, but rarely high-severity ones. The report shows cloud telemetry dominated by exactly these tactics, indicating attackers are optimizing for long-term access rather than immediate impact.
Living off trusted infrastructure Phishing campaigns increasingly abuse legitimate brands, file-sharing services, CAPTCHA frameworks, and developer platforms. These environments inherit trust by default, allowing attackers to operate under severity thresholds that SOCs routinely ignore.
Multi-stage loaders and memory-only execution On endpoints, attackers rely on layered loaders, in-memory payloads, and obfuscation techniques that evade static detections. Initial alerts may look benign or incomplete. Without forensic follow-through, SOCs miss the actual compromise entirely.
Attackers are not evading detection systems alone, rather they are exploiting SOC decision-making models.
What this means for your SOC operations
For CISOs and SOC leaders, the implication is stark: Risk is no longer defined by what you detect, but by what you choose not to investigate.
If your SOC:
Ignores low-severity alerts by default
Relies on severity labels without forensic validation
Limits investigations based on human capacity
Operates without a feedback loop between outcomes and detections
Then missed threats are not anomalies, they are guaranteed.
The organizations that will reduce risk in 2026 are not adding more dashboards or rewriting triage rules. They are adopting operating models where investigation is no longer a scarce resource.
This is why AI-driven, forensic-grade SOC platforms fundamentally change the equation. When every alert is investigated:
Severity becomes evidence-based, not assumed
Detection quality improves through real-world validation
Attackers lose the ability to hide in “acceptable risk”
SOC teams regain control without scaling headcount
This is the shift behind the Intezer AI SOC model and why the concept of acceptable risk must be redefined for the modern threat landscape.
This all changes when you can investigate everything
The data in the 2026 AI SOC Report points to a different reality, one where AI-driven forensic analysis removes investigation capacity as a constraint.
When every alert is investigated:
“Low severity” stops being a proxy for “safe”
Detection quality improves through real-world validation
Missed threats drop from dozens per year to near zero
Escalations fall below 2%, without sacrificing coverage
Risk tolerance is defined by evidence, not exhaustion
This is the operating model behind Intezer AI SOC, powered by ForensicAI™ and it is why the definition of acceptable risk must be reset.
Download the report and join the discussion
The 2026 AI SOC Report for CISOs is grounded in:
25 million alerts analyzed
10 million monitored endpoints and identities
82,000 forensic endpoint investigations, including live memory scans
Telemetry from 7 million IP addresses, 3 million domains and URLs, and over 550,000 phishing emails
All data was aggregated and anonymized across Intezer’s global enterprise customer base.
The financial sector experienced an unprecedented rise in cyber incidents in 2025, with attacks more than doubling from 864 in 2024 to 1,858 in 2025. This acceleration reflects a dramatic shift in threat actor behavior, ranging from ideologically-motivated disruptions to commercialized cyber crime as a service. Below is a concise snapshot of the three dominant trends before we unpack them in detail. Quick Overview of Key Trends DDoS attacks surged 105%, driven by coordinated hacktivist campaigns targeting high visibility financial platforms and services. Data breaches & leaks jumped 73%, exposing persistent weaknesses in cloud security, identity governance, and third party […]
When data resurfaces, it never comes back weaker. A newly shared dataset tied to AT&T shows just how much more dangerous an “old” breach can become once criminals have enough of the right details to work with.
The dataset, privately circulated since February 2, 2026, is described as AT&T customer data likely gathered over the years. It doesn’t just contain a few scraps of contact information. It reportedly includes roughly 176 million records, with…
Up to 148 million Social Security numbers (full SSNs and last four digits)
More than 133 million full names and street addresses
Taken together, that’s the kind of rich, structured data set that makes a criminal’s life much easier.
On their own, any one of these data points would be inconvenient but manageable. An email address fuels spam and basic phishing. A phone number enables smishing and robocalls. An address helps attackers guess which services you might use. But when attackers can look up a single person and see name, full address, phone, email, complete or partial SSN, and date of birth in one place, the risk shifts from “annoying” to high‑impact.
That combination is exactly what many financial institutions and mobile carriers still rely on for identity checks. For cybercriminals, this sort of dataset is a Swiss Army knife.
It can be used to craft convincing AT&T‑themed phishing emails and texts, complete with correct names and partial SSNs to “prove” legitimacy. It can power large‑scale SIM‑swap attempts and account takeovers, where criminals call carriers and banks pretending to be you, armed with the answers those call centers expect to hear. It can also enable long‑term identity theft, with SSNs and dates of birth abused to open new lines of credit or file fraudulent tax returns.
The uncomfortable part is that a fresh hack isn’t always required to end up here. Breach data tends to linger, then get merged, cleaned up, and expanded over time. What’s different in this case is the breadth and quality of the profiles. They include more email addresses, more SSNs, more complete records per person. That makes the data more attractive, more searchable, and more actionable for criminals.
For potential victims, the lesson is simple but important. If you have ever been an AT&T customer, treat this as a reminder that your data may already be circulating in a form that is genuinely useful to attackers. Be cautious of any AT&T‑related email or text, enable multi‑factor authentication wherever possible, lock down your mobile account with extra passcodes, and consider monitoring your credit. You can’t pull your data back out of a criminal dataset—but you can make sure it’s much harder to use against you.
What to do when your data is involved in a breach
If you think you have been affected by a data breach, here are steps you can take to protect yourself:
Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
Most iPhone owners have hopefully learned to manage app permissions by now, including allowing location access. But there’s another layer of location tracking that operates outside these controls. Your cellular carrier has been collecting your location data all along, and until now, there was nothing you could do about it.
Cellular networks track your phone’s location based on the cell towers it connects to, in a process known as triangulation. In cities where towers are densely packed, triangulation is precise enough to track you down to a street address.
This tracking is different from app-based location monitoring, because your phone’s privacy settings have historically been powerless to stop it. Toggle Location Services off entirely, and your carrier still knows where you are.
The new setting reduces the precision of location data shared with carriers. Rather than a street address, carriers would see only the neighborhood where a device is located. It doesn’t affect emergency calls, though, which still transmit precise coordinates to first responders. Apps like Apple’s “Find My” service, which locates your devices, or its navigation services, aren’t affected because they work using the phone’s location sharing feature.
Why is Apple doing this? Apple hasn’t said, but the move comes after years of carriers mishandling location data.
Unfortunately, cellular network operators have played fast and free with this data. In April 2024, the FCC fined Sprint and T-Mobile (which have since merged), along with AT&T and Verizon nearly $200 million combined for illegally sharing this location data. They sold access to customers’ location information to third party aggregators, who then sold it on to third parties without customer consent.
This turned into a privacy horror story for customers. One aggregator, LocationSmart, had a free demo on its website that reportedly allowed anyone to pinpoint the location of most mobile phones in North America.
Limited rollout
The feature only works with devices equipped with Apple’s custom C1 or C1X modems. That means just three devices: the iPhone Air, iPhone 16e, and the cellular iPad Pro with M5 chip. The iPhone 17, which uses Qualcomm silicon, is excluded. Apple can only control what its own modems transmit.
Carrier support is equally narrow. In the US, only Boost Mobile is participating in the feature at launch, while Verizon, AT&T, and T-Mobile are notable absences from the list given their past record. In Germany, Telekom is on the participant list, while both EE and BT are involved in the UK. In Thailand, AIS and True are on the list. There are no other carriers taking part as of today though.
Android also offers some support
Google also introduced a similar capability with Android 15’s Location Privacy hardware abstraction layer (HAL) last year. It faces the same constraint, though: modem vendors must cooperate, and most have not. Apple and Google don’t get to control the modems in most phones. This kind of privacy protection requires vertical integration that few manufacturers possess and few carriers seem eager to enable.
Most people think controlling app permissions means they’re in control of their location. This feature highlights something many users didn’t know existed: a separate layer of tracking handled by cellular networks, and one that still offers users very limited control.
We don’t just report on phone security—we provide it
Imagine our surprise when we ended up on a site promoting that same Freecash app while investigating a “cloud storage” phish. We’ve all probably seen one of those. They’re common enough and according to recent investigation by BleepingComputer, there’s a
“large-scale cloud storage subscription scam campaign targeting users worldwide with repeated emails falsely warning recipients that their photos, files, and accounts are about to be blocked or deleted due to an alleged payment failure.”
Based on the description in that article, the email we found appears to be part of this campaign.
The subject line of the email is:
“{Recipient}. Your Cloud Account has been locked on Sat, 24 Jan 2026 09:57:55 -0500. Your photos and videos will be removed!”
This matches one of the subject lines that BleepingComputer listed.
And the content of the email:
“Payment Issue – Cloud Storage
Dear User,
We encountered an issue while attempting to renew your Cloud Storage subscription.
Unfortunately, your payment method has expired. To ensure your Cloud continues without interruption, please update your payment details.
Subscription ID: 9371188
Product: Cloud Storage Premium
Expiration Date: Sat,24 Jan-2026
If you do not update your payment information, you may lose access to your Cloud Storage, which may prevent you from saving and syncing your data such as photos, videos, and documents.
Update Payment Details {link button}
Security Recommendations:
Always access your account through our official website
Never share your password with anyone
Ensure your contact and billing information are up to date”
The link in the email leads to https://storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html#/redirect.html, which helps the scammer establish a certain amount of trust because it points to Google Cloud Storage (GCS). GCS is a legitimate service that allows authorized users to store and manage data such as files, images, and videos in buckets. However, as in this case, attackers can abuse it for phishing.
The redirect carries some parameters to the next website.
The feed.headquartoonjpn[.]com domain was blocked by Malwarebytes. We’ve seen it before in an earlier campaign involving an Endurance-themed phish.
After a few more redirects, we ended up at hx5.submitloading[.]com, where a fake CAPTCHA triggered the last redirect to freecash[.]com, once it was solved.
The end goal of this phish likely depends on the parameters passed along during the redirects, so results may vary.
Rather than stealing credentials directly, the campaign appears designed to monetize traffic, funneling victims into affiliate offers where the operators get paid for sign-ups or conversions.
BleepingComputer noted that they were redirected to affiliate marketing websites for various products.
“Products promoted in this phishing campaign include VPN services, little-known security software, and other subscription-based offerings with no connection to cloud storage.”
How to stay safe
Ironically, the phishing email itself includes some solid advice:
Always access your account through our official website.
Never share your password with anyone.
We’d like to add:
Never click on links in unsolicited emails without verifying with a trusted source.
Do not engage with websites that attract visitors like this.
Pro tip: Malwarebytes Scam Guard would have helped you identify this email as a scam and provided advice on how to proceed.
Redirect flow (IOCs)
storage.googleapis[.]com/qzsdqdqsd/dsfsdxc.html
feed.headquartoonjpn[.]com
revivejudgemental[.]com
hx5.submitloading[.]com
freecash[.]com
Update February 5, 2026
Almedia GmbH, the company behind the Freecash platform, reached out to us for information about the chain of redirects that lead to their platform. And after an investigation they notified us that:
“Following Malwarebytes’ reporting and the additional information they shared with us, we investigated the issue and identified an affiliate operating in breach of our policies. That partner has been removed from our network.
Almedia does not sell user data, and we take compliance, user trust, and responsible advertising seriously.”
We don’t just report on scams—we help detect them
Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!
Infostealer threats are rapidly expanding beyond traditional Windows-focused campaigns, increasingly targeting macOS environments, leveraging cross-platform languages such as Python, and abusing trusted platforms and utilities to silently deliver credential-stealing malware at scale. Since late 2025, Microsoft Defender Experts has observed macOS targeted infostealer campaigns using social engineering techniques—including ClickFix-style prompts and malicious DMG installers—to deploy macOS-specific infostealers such as DigitStealer, MacSync, and Atomic macOS Stealer (AMOS).
These campaigns leverage fileless execution, native macOS utilities, and AppleScript automation to harvest credentials, session data, secrets from browsers, keychains, and developer environments. Simultaneously, Python-based stealers are being leveraged by attackers to rapidly adapt, reuse code, and target heterogeneous environments with minimal overhead. Other threat actors are abusing trusted platforms and utilities—including WhatsApp and PDF converter tools—to distribute malware like Eternidade Stealer and gain access to financial and cryptocurrency accounts.
This blog examines how modern infostealers operate across operating systems and delivery channels by blending into legitimate ecosystems and evading conventional defenses. We provide comprehensive detection coverage through Microsoft Defender XDR and actionable guidance to help organizations detect, mitigate, and respond to these evolving threats.
Activity overview
macOS users are being targeted through fake software and browser tricks
Mac users are encountering deceptive websites—often through Google Ads or malicious advertisements—that either prompt them to download fake applications or instruct them to copy and paste commands into their Terminal. These “ClickFix” style attacks trick users into downloading malware that steals browser passwords, cryptocurrency wallets, cloud credentials, and developer access keys.
Three major Mac-focused stealer campaigns include DigitStealer (distributed through fake DynamicLake software), MacSync (delivered via copy-paste Terminal commands), and Atomic Stealer (using fake AI tool installers). All three harvest the same types of data—browser credentials, saved passwords, cryptocurrency wallet information, and developer secrets—then send everything to attacker servers before deleting traces of the infection.
Stolen credentials enable account takeovers across banking, email, social media, and corporate cloud services. Cryptocurrency wallet theft can result in immediate financial loss. For businesses, compromised developer credentials can provide attackers with access to source code, cloud infrastructure, and customer data.
Phishing campaigns are delivering Python-based stealers to organizations
The proliferation of Python information stealers has become an escalating concern. This gravitation towards Python is driven by ease of use and the availability of tools and frameworks allowing quick development, even for individuals with limited coding knowledge. Due to this, Microsoft Defender Experts observed multiple Python-based infostealer campaigns over the past year. They are typically distributed via phishing emails and collect login credentials, session cookies, authentication tokens, credit card numbers, and crypto wallet data.
PXA Stealer, one of the most notable Python-based infostealers seen in 2025, harvests sensitive data including login credentials, financial information, and browser data. Linked to Vietnamese-speaking threat actors, it targets government and education entities through phishing campaigns. In October 2025 and December 2025, Microsoft Defender Experts investigated two PXA Stealer campaigns that used phishing emails for initial access, established persistence via registry Run keys or scheduled tasks, downloaded payloads from remote locations, collected sensitive information, and exfiltrated the data via Telegram. To evade detection, we observed the use of legitimate services such as Telegram for command-and-control communications, obfuscated Python scripts, malicious DLLs being sideloaded, Python interpreter masquerading as a system process (i.e., svchost.exe), and the use of signed and living off the land binaries.
Due to the growing threat of Python-based infostealers, it is important that organizations protect their environment by being aware of the tactics, techniques, and procedures used by the threat actors who deploy this type of malware. Being compromised by infostealers can lead to data breaches, unauthorized access to internal systems, business email compromise (BEC), supply chain attacks, and ransomware attacks.
Attackers are weaponizing WhatsApp and PDF tools to spread infostealers
Since late 2025, platform abuse has become an increasingly prevalent tactic wherein adversaries deliberately exploit the legitimacy, scale, and user trust associated with widely used applications and services.
WhatsApp Abused to Deliver Eternidade Stealer: During November 2025, Microsoft Defender Experts identified a WhatsApp platform abuse campaign leveraging multi-stage infection and worm-like propagation to distribute malware. The activity begins with an obfuscated Visual Basic script that drops a malicious batch file launching PowerShell instances to download payloads.
One of the payloads is a Python script that establishes communication with a remote server and leverages WPPConnect to automate message sending from hijacked WhatsApp accounts, harvests the victim’s contact list, and sends malicious attachments to all contacts using predefined messaging templates. Another payload is a malicious MSI installer that ultimately delivers Eternidade Stealer, a Delphi-based credential stealer that continuously monitors active windows and running processes for strings associated with banking portals, payment services, and cryptocurrency exchanges including Bradesco, BTG Pactual, MercadoPago, Stripe, Binance, Coinbase, MetaMask, and Trust Wallet.
Malicious Crystal PDF installer campaign: In September 2025, Microsoft Defender Experts discovered a malicious campaign centered on an application masquerading as a PDF editor named Crystal PDF. The campaign leveraged malvertising and SEO poisoning through Google Ads to lure users. When executed, CrystalPDF.exe establishes persistence via scheduled tasks and functions as an information stealer, covertly hijacking Firefox and Chrome browsers to access sensitive files in AppData\Roaming, including cookies, session data, and credential caches.
Mitigation and protection guidance
Microsoft recommends the following mitigations to reduce the impact of the macOS‑focused, Python‑based, and platform‑abuse infostealer threats discussed in this report. These recommendations draw from established Defender blog guidance patterns and align with protections offered across Microsoft Defender XDR.
Organizations can follow these recommendations to mitigate threats associated with this threat:
Strengthen user awareness & execution safeguards
Educate users on social‑engineering lures, including malvertising redirect chains, fake installers, and ClickFix‑style copy‑paste prompts common across macOS stealer campaigns such as DigitStealer, MacSync, and AMOS.
Discourage installation of unsigned DMGs or unofficial “terminal‑fix” utilities; reinforce safe‑download practices for consumer and enterprise macOS systems.
Harden macOS environments against native tool abuse
Monitor for suspicious Terminal activity—especially execution flows involving curl, Base64 decoding, gunzip, osascript, or JXA invocation, which appear across all three macOS stealers.
Detect patterns of fileless execution, such as in‑memory pipelines using curl | base64 -d | gunzip, or AppleScript‑driven system discovery and credential harvesting.
Leverage Defender’s custom detection rules to alert on abnormal access to Keychain, browser credential stores, and cloud/developer artifacts, including SSH keys, Kubernetes configs, AWS credentials, and wallet data.
Control outbound traffic & staging behavior
Inspect network egress for POST requests to newly registered or suspicious domains—a key indicator for DigitStealer, MacSync, AMOS, and Python‑based stealer campaigns.
Detect transient creation of ZIP archives under /tmp or similar ephemeral directories, followed by outbound exfiltration attempts.
Block direct access to known C2 infrastructure where possible, informed by your organization’s threat‑intelligence sources.
Protect against Python-based stealers & cross-platform payloads
Harden endpoint defenses around LOLBIN abuse, such as certutil.exe decoding malicious payloads.
Evaluate activity involving AutoIt and process hollowing, common in platform‑abuse campaigns.
Microsoft also recommends the following mitigations to reduce the impact of this threat:
Turn on cloud-delivered protection in Microsoft Defender Antivirus or the equivalent for your antivirus product to cover rapidly evolving attacker tools and techniques. Cloud-based machine learning protections block a majority of new and unknown threats.
Run EDR in block mode so that Microsoft Defender for Endpoint can block malicious artifacts, even when your non-Microsoft antivirus does not detect the threat or when Microsoft Defender Antivirus is running in passive mode. EDR in block mode works behind the scenes to remediate malicious artifacts that are detected post-breach.
Enable network protection and web protection in Microsoft Defender for Endpoint to safeguard against malicious sites and internet-based threats.
Encourage users to use Microsoft Edge and other web browsers that support Microsoft Defender SmartScreen, which identifies and blocks malicious websites, including phishing sites, scam sites, and sites that host malware.
Allow investigation and remediation in full automated mode to allow Microsoft Defender for Endpoint to take immediate action on alerts to resolve breaches, significantly reducing alert volume.
Turn on tamper protection features to prevent attackers from stopping security services. Combine tamper protection with the DisableLocalAdminMerge setting to prevent attackers from using local administrator privileges to set antivirus exclusions.
Microsoft Defender XDR customers can also implement the following attack surface reduction rules to harden an environment against LOLBAS techniques used by threat actors:
Microsoft Defender XDR customers can refer to the list of applicable detections below. Microsoft Defender XDR coordinates detection, prevention, investigation, and response across endpoints, identities, email, and apps to provide integrated protection against attacks like the threat discussed in this blog.
Customers with provisioned access can also use Microsoft Security Copilot in Microsoft Defender to investigate and respond to incidents, hunt for threats, and protect their organization with relevant threat intelligence.
Tactic
Observed activity
Microsoft Defender coverage
Execution
Encoded powershell commands downloading payload Execution of various commands and scripts via osascript and sh
Microsoft Defender for Endpoint Suspicious Powershell download or encoded command execution Suspicious shell command execution Suspicious AppleScript activity Suspicious script launched
Persistence
Registry Run key created Scheduled task created for recurring execution LaunchAgent or LaunchDaemon for recurring execution
Microsoft Defender for Endpoint Anomaly detected in ASEP registry Suspicious Scheduled Task Launched Suspicious Pslist modifications Suspicious launchctl tool activity
Microsoft Defender Antivirus Trojan:AtomicSteal.F
Defense Evasion
Unauthorized code execution facilitated by DLL sideloading and process injection Renamed Python interpreter executes obfuscated Python script Decode payload with certutil Renamed AutoIT interpreter binary and AutoIT script Delete data staging directories
Microsoft Defender for Endpoint An executable file loaded an unexpected DLL file A process was injected with potentially malicious code Suspicious Python binary execution Suspicious certutil activity Obfuse’ malware was prevented Rename AutoIT tool Suspicious path deletion
Microsoft Defender Antivirus Trojan:Script/Obfuse!MSR
Credential Access
Credential and Secret Harvesting Cryptocurrency probing
Microsoft Defender for Endpoint Possible theft of passwords and other sensitive web browser information Suspicious access of sensitive files Suspicious process collected data from local system Unix credentials were illegitimately accessed
Discovery
System information queried using WMI and Python
Microsoft Defender for Endpoint Suspicious System Hardware Discovery Suspicious Process Discovery Suspicious Security Software Discovery Suspicious Peripheral Device Discovery
Command and Control
Communication to command and control server
Microsoft Defender for Endpoint Suspicious connection to remote service
Collection
Sensitive browser information compressed into ZIP file for exfiltration
Microsoft Defender for Endpoint Compression of sensitive data Suspicious Staging of Data Suspicious archive creation
Exfiltration
Exfiltration through curl
Microsoft Defender for Endpoint Suspicious file or content ingress Remote exfiltration activity Network connection by osascript
Threat intelligence reports
Microsoft customers can use the following reports in Microsoft products to get the most up-to-date information about the threat actor, malicious activity, and techniques discussed in this blog. These reports provide the intelligence, protection information, and recommended actions to prevent, mitigate, or respond to associated threats found in customer environments.
Microsoft Defender XDR customers can run the following queries to find related activity in their networks:
Use the following queries to identify activity related to DigitStealer
// Identify suspicious DynamicLake disk image (.dmg) mounting
DeviceProcessEvents
| where FileName has_any ('mount_hfs', 'mount')
| where ProcessCommandLine has_all ('-o nodev' , '-o quarantine')
| where ProcessCommandLine contains '/Volumes/Install DynamicLake'
// Identify data exfiltration to DigitStealer C2 API endpoints.
DeviceProcessEvents
| where InitiatingProcessFileName has_any ('bash', 'sh')
| where ProcessCommandLine has_all ('curl', '--retry 10')
| where ProcessCommandLine contains 'hwid='
| where ProcessCommandLine endswith "api/credentials"
or ProcessCommandLine endswith "api/grabber"
or ProcessCommandLine endswith "api/log"
| extend APIEndpoint = extract(@"/api/([^\s]+)", 1, ProcessCommandLine)
Use the following queries to identify activity related to MacSync
// Identify exfiltration of staged data via curl
DeviceProcessEvents
| where InitiatingProcessFileName =~ "zsh" and FileName =~ "curl"
| where ProcessCommandLine has_all ("curl -k -X POST -H", "api-key: ", "--max-time", "-F file=@/tmp/", ".zip", "-F buildtxd=")
Use the following queries to identify activity related to Atomic Stealer (AMOS)
// Identify suspicious AlliAi disk image (.dmg) mounting
DeviceProcessEvents
| where FileName has_any ('mount_hfs', 'mount')
| where ProcessCommandLine has_all ('-o nodev', '-o quarantine')
| where ProcessCommandLine contains '/Volumes/ALLI'
Use the following queries to identify activity related to PXA Stealer: Campaign 1
// Identify activity initiated by renamed python binary
DeviceProcessEvents
| where InitiatingProcessFileName endswith "svchost.exe"
| where InitiatingProcessVersionInfoOriginalFileName == "pythonw.exe"
// Identify network connections initiated by renamed python binary
DeviceNetworkEvents
| where InitiatingProcessFileName endswith "svchost.exe"
| where InitiatingProcessVersionInfoOriginalFileName == "pythonw.exe"
Use the following queries to identify activity related to PXA Stealer: Campaign 2
// Identify malicious Process Execution activity
DeviceProcessEvents
| where ProcessCommandLine has_all ("-y","x",@"C:","Users","Public", ".pdf") and ProcessCommandLine has_any (".jpg",".png")
// Identify suspicious process injection activity
DeviceProcessEvents
| where FileName == "cvtres.exe"
| where InitiatingProcessFileName has "svchost.exe"
| where InitiatingProcessFolderPath !contains "system32"
Use the following queries to identify activity related to WhatsApp Abused to Deliver Eternidade Stealer
// Identify the files dropped from the malicious VBS execution
DeviceFileEvents
| where InitiatingProcessCommandLine has_all ("Downloads",".vbs")
| where FileName has_any (".zip",".lnk",".bat") and FolderPath has_all ("\\Temp\\")
// Identify batch script launching powershell instances to drop payloads
DeviceProcessEvents
| where InitiatingProcessParentFileName == "wscript.exe" and InitiatingProcessCommandLine has_any ("instalar.bat","python_install.bat")
| where ProcessCommandLine !has "conhost.exe"
// Identify AutoIT executable invoking malicious AutoIT script
DeviceProcessEvents
| where InitiatingProcessCommandLine has ".log" and InitiatingProcessVersionInfoOriginalFileName == "Autoit3.exe"
Use the following queries to identify activity related to Malicious CrystalPDF Installer Campaign
// Identify network connections to C2 domains
DeviceNetworkEvents
| where InitiatingProcessVersionInfoOriginalFileName == "CrystalPDF.exe"
// Identify scheduled task persistence
DeviceEvents
| where InitiatingProcessVersionInfoProductName == "CrystalPDF"
| where ActionType == "ScheduledTaskCreated
Deceptive domain that redirects user after CAPTCHA verification (AMOS campaign)
ai[.]foqguzz[.]com
Domain
Redirected domain used to deliver unsigned disk image. (AMOS campaign)
day.foqguzz[.]com
Domain
C2 server (AMOS campaign)
bagumedios[.]cloud
Domain
C2 server (PXA Stealer: Campaign 1)
Negmari[.]com Ramiort[.]com Strongdwn[.]com
Domain
C2 servers (Malicious Crystal PDF installer campaign)
Microsoft Sentinel
Microsoft Sentinel customers can use the TI Mapping analytics (a series of analytics all prefixed with ‘TI map’) to automatically match the malicious domain indicators mentioned in this blog post with data in their workspace. If the TI Map analytics are not currently deployed, customers can install the Threat Intelligence solution from the Microsoft Sentinel Content Hub to have the analytics rule deployed in their Sentinel workspace.
This research is provided by Microsoft Defender Security Research with contributions from Felicia Carter, Kajhon Soyini, Balaji Venkatesh S, Sai Chakri Kandalai, Dietrich Nembhard, Sabitha S, and Shriya Maniktala.
Learn more
Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.