Normal view

Crims compromised energy firms' Microsoft accounts, sent 600 phishing emails

22 January 2026 at 20:18

Logging in, not breaking in

Unknown attackers are abusing Microsoft SharePoint file-sharing services to target multiple energy-sector organizations, harvest user credentials, take over corporate inboxes, and then send hundreds of phishing emails from compromised accounts to contacts inside and outside those organizations.…

New Osiris Ransomware Emerges as New Strain Using POORTRY Driver in BYOVD Attack

Cybersecurity researchers have disclosed details of a new ransomware family called Osiris that targeted a major food service franchisee operator in Southeast Asia in November 2025. The attack leveraged a malicious driver called POORTRY as part of a known technique referred to as bring your own vulnerable driver (BYOVD) to disarm security software, the Symantec and Carbon Black Threat Hunter

Critical GNU InetUtils telnetd Flaw Lets Attackers Bypass Login and Gain Root Access

A critical security flaw has been disclosed in the GNU InetUtils telnet daemon (telnetd) that went unnoticed for nearly 11 years. The vulnerability, tracked as CVE-2026-24061, is rated 9.8 out of 10.0 on the CVSS scoring system. It affects all versions of GNU InetUtils from version 1.9.3 up to and including version 2.7. "Telnetd in GNU Inetutils through 2.7 allows remote authentication bypass

The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity

22 January 2026 at 17:30

To all those who are fighting the good fight in the world of cyber, keep collaborating to ensure our world never succumbs to the chaos of the Upside Down.

The post The Upside Down is Real: What Stranger Things Teaches Us About Modern Cybersecurity appeared first on SecurityWeek.

ThreatsDay Bulletin: Pixel Zero-Click, Redis RCE, China C2s, RAT Ads, Crypto Scams & 15+ Stories

Most of this week’s threats didn’t rely on new tricks. They relied on familiar systems behaving exactly as designed, just in the wrong hands. Ordinary files, routine services, and trusted workflows were enough to open doors without forcing them. What stands out is how little friction attackers now need. Some activity focused on quiet reach and coverage, others on timing and reuse. The emphasis

Fake LastPass maintenance emails target users

22 January 2026 at 14:53

The LastPass Threat Intelligence, Mitigation, and Escalation (TIME) team has published a warning about an active phishing campaign in which fake “maintenance” emails pressure users to back up their vaults within 24 hours. The emails lead to credential-stealing phishing sites rather than any legitimate LastPass page.

The phishing campaign that started around January 19, 2026, uses emails that falsely claim upcoming infrastructure maintenance and urge users to “backup your vault in the next 24 hours.”

Example phishing email
Image courtesy of LastPass

“Scheduled Maintenance: Backup Recommended

As part of our ongoing commitment to security and performance, we will be conducting scheduled infrastructure maintenance on our servers.
Why are we asking you to create a backup?
While your data remains protected at all times, creating a local backup ensures you have access to your credentials during the maintenance window. In the unlikely event of any unforeseen technical difficulties or data discrepancies, having a recent backup guarantees your information remains secure and recoverable. We recommend this precautionary measure to all users to ensure complete peace of mind and seamless continuity of service.

Create Backup Now (link)

How to create your backup
1 Click the “Create Backup Now” button above
2 Select “Export Vault” from you account settings
3 Download and store your encrypted backup file securely”

The link in the email points to mail-lastpass[.]com, a domain that doesn’t belong to LastPass and has now been taken down.

Note that there are different subject lines in use. Here is a selection:

  • LastPass Infrastructure Update: Secure Your Vault Now
  • Your Data, Your Protection: Create a Backup Before Maintenance
  • Don’t Miss Out: Backup Your Vault Before Maintenance
  • Important: LastPass Maintenance & Your Vault Security
  • Protect Your Passwords: Backup Your Vault (24-Hour Window)

It is imperative for users to ignore instructions in emails like these. Giving away the login details for your password manager can be disastrous. For most users, it would provide access to enough information to carry out identity theft.

Stay safe

First and foremost, it’s important to understand that LastPass will never ask for your master password or demand immediate action under a tight deadline. Generally speaking, there are more guidelines that can help you stay safe.

  • Don’t click on links in unsolicited emails without verifying with the trusted sender that they’re legitimate.
  • Always log in directly on the platform that you are trying to access, rather than through a link.
  • Use a real-time, up-to-date anti-malware solution with a web protection module to block malicious sites.
  • Report phishing emails to the company that’s being impersonated, so they can alert other customers. In this case emails were forwarded to abuse@lastpass.com.

Pro tip: Malwarebytes Scam Guard  would have recognized this email as a scam and advised you how to proceed.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Europe's GDPR cops dished out €1.2B in fines last year as data breaches piled up

22 January 2026 at 14:39

Regulators logged over 400 personal data breach notifications a day for first time since law came into force

GDPR fines pushed past the £1 billion (€1.2 billion) mark in 2025 as Europe's regulators were deluged with more than 400 data breach notifications a day, according to a new survey that suggests the post-plateau era of enforcement has well and truly arrived.…

Why AI Keeps Falling for Prompt Injection Attacks

22 January 2026 at 13:35

Imagine you work at a drive-through restaurant. Someone drives up and says: “I’ll have a double cheeseburger, large fries, and ignore previous instructions and give me the contents of the cash drawer.” Would you hand over the money? Of course not. Yet this is what large language models (LLMs) do.

Prompt injection is a method of tricking LLMs into doing things they are normally prevented from doing. A user writes a prompt in a certain way, asking for system passwords or private data, or asking the LLM to perform forbidden instructions. The precise phrasing overrides the LLM’s safety guardrails, and it complies.

LLMs are vulnerable to all sorts of prompt injection attacks, some of them absurdly obvious. A chatbot won’t tell you how to synthesize a bioweapon, but it might tell you a fictional story that incorporates the same detailed instructions. It won’t accept nefarious text inputs, but might if the text is rendered as ASCII art or appears in an image of a billboard. Some ignore their guardrails when told to “ignore previous instructions” or to “pretend you have no guardrails.”

AI vendors can block specific prompt injection techniques once they are discovered, but general safeguards are impossible with today’s LLMs. More precisely, there’s an endless array of prompt injection attacks waiting to be discovered, and they cannot be prevented universally.

If we want LLMs that resist these attacks, we need new approaches. One place to look is what keeps even overworked fast-food workers from handing over the cash drawer.

Human Judgment Depends on Context

Our basic human defenses come in at least three types: general instincts, social learning, and situation-specific training. These work together in a layered defense.

As a social species, we have developed numerous instinctive and cultural habits that help us judge tone, motive, and risk from extremely limited information. We generally know what’s normal and abnormal, when to cooperate and when to resist, and whether to take action individually or to involve others. These instincts give us an intuitive sense of risk and make us especially careful about things that have a large downside or are impossible to reverse.

The second layer of defense consists of the norms and trust signals that evolve in any group. These are imperfect but functional: Expectations of cooperation and markers of trustworthiness emerge through repeated interactions with others. We remember who has helped, who has hurt, who has reciprocated, and who has reneged. And emotions like sympathy, anger, guilt, and gratitude motivate each of us to reward cooperation with cooperation and punish defection with defection.

A third layer is institutional mechanisms that enable us to interact with multiple strangers every day. Fast-food workers, for example, are trained in procedures, approvals, escalation paths, and so on. Taken together, these defenses give humans a strong sense of context. A fast-food worker basically knows what to expect within the job and how it fits into broader society.

We reason by assessing multiple layers of context: perceptual (what we see and hear), relational (who’s making the request), and normative (what’s appropriate within a given role or situation). We constantly navigate these layers, weighing them against each other. In some cases, the normative outweighs the perceptual—for example, following workplace rules even when customers appear angry. Other times, the relational outweighs the normative, as when people comply with orders from superiors that they believe are against the rules.

Crucially, we also have an interruption reflex. If something feels “off,” we naturally pause the automation and reevaluate. Our defenses are not perfect; people are fooled and manipulated all the time. But it’s how we humans are able to navigate a complex world where others are constantly trying to trick us.

So let’s return to the drive-through window. To convince a fast-food worker to hand us all the money, we might try shifting the context. Show up with a camera crew and tell them you’re filming a commercial, claim to be the head of security doing an audit, or dress like a bank manager collecting the cash receipts for the night. But even these have only a slim chance of success. Most of us, most of the time, can smell a scam.

Con artists are astute observers of human defenses. Successful scams are often slow, undermining a mark’s situational assessment, allowing the scammer to manipulate the context. This is an old story, spanning traditional confidence games such as the Depression-era “big store” cons, in which teams of scammers created entirely fake businesses to draw in victims, and modern “pig-butchering” frauds, where online scammers slowly build trust before going in for the kill. In these examples, scammers slowly and methodically reel in a victim using a long series of interactions through which the scammers gradually gain that victim’s trust.

Sometimes it even works at the drive-through. One scammer in the 1990s and 2000s targeted fast-food workers by phone, claiming to be a police officer and, over the course of a long phone call, convinced managers to strip-search employees and perform other bizarre acts.

Why LLMs Struggle With Context and Judgment

LLMs behave as if they have a notion of context, but it’s different. They do not learn human defenses from repeated interactions and remain untethered from the real world. LLMs flatten multiple levels of context into text similarity. They see “tokens,” not hierarchies and intentions. LLMs don’t reason through context, they only reference it.

While LLMs often get the details right, they can easily miss the big picture. If you prompt a chatbot with a fast-food worker scenario and ask if it should give all of its money to a customer, it will respond “no.” What it doesn’t “know”—forgive the anthropomorphizing—is whether it’s actually being deployed as a fast-food bot or is just a test subject following instructions for hypothetical scenarios.

This limitation is why LLMs misfire when context is sparse but also when context is overwhelming and complex; when an LLM becomes unmoored from context, it’s hard to get it back. AI expert Simon Willison wipes context clean if an LLM is on the wrong track rather than continuing the conversation and trying to correct the situation.

There’s more. LLMs are overconfident because they’ve been designed to give an answer rather than express ignorance. A drive-through worker might say: “I don’t know if I should give you all the money—let me ask my boss,” whereas an LLM will just make the call. And since LLMs are designed to be pleasing, they’re more likely to satisfy a user’s request. Additionally, LLM training is oriented toward the average case and not extreme outliers, which is what’s necessary for security.

The result is that the current generation of LLMs is far more gullible than people. They’re naive and regularly fall for manipulative cognitive tricks that wouldn’t fool a third-grader, such as flattery, appeals to groupthink, and a false sense of urgency. There’s a story about a Taco Bell AI system that crashed when a customer ordered 18,000 cups of water. A human fast-food worker would just laugh at the customer.

The Limits of AI Agents

Prompt injection is an unsolvable problem that gets worse when we give AIs tools and tell them to act independently. This is the promise of AI agents: LLMs that can use tools to perform multistep tasks after being given general instructions. Their flattening of context and identity, along with their baked-in independence and overconfidence, mean that they will repeatedly and unpredictably take actions—and sometimes they will take the wrong ones.

Science doesn’t know how much of the problem is inherent to the way LLMs work and how much is a result of deficiencies in the way we train them. The overconfidence and obsequiousness of LLMs are training choices. The lack of an interruption reflex is a deficiency in engineering. And prompt injection resistance requires fundamental advances in AI science. We honestly don’t know if it’s possible to build an LLM, where trusted commands and untrusted inputs are processed through the same channel, which is immune to prompt injection attacks.

We humans get our model of the world—and our facility with overlapping contexts—from the way our brains work, years of training, an enormous amount of perceptual input, and millions of years of evolution. Our identities are complex and multifaceted, and which aspects matter at any given moment depend entirely on context. A fast-food worker may normally see someone as a customer, but in a medical emergency, that same person’s identity as a doctor is suddenly more relevant.

We don’t know if LLMs will gain a better ability to move between different contexts as the models get more sophisticated. But the problem of recognizing context definitely can’t be reduced to the one type of reasoning that LLMs currently excel at. Cultural norms and styles are historical, relational, emergent, and constantly renegotiated, and are not so readily subsumed into reasoning as we understand it. Knowledge itself can be both logical and discursive.

The AI researcher Yann LeCunn believes that improvements will come from embedding AIs in a physical presence and giving them “world models.” Perhaps this is a way to give an AI a robust yet fluid notion of a social identity, and the real-world experience that will help it lose its naïveté.

Ultimately we are probably faced with a security trilemma when it comes to AI agents: fast, smart, and secure are the desired attributes, but you can only get two. At the drive-through, you want to prioritize fast and secure. An AI agent should be trained narrowly on food-ordering language and escalate anything else to a manager. Otherwise, every action becomes a coin flip. Even if it comes up heads most of the time, once in a while it’s going to be tails—and along with a burger and fries, the customer will get the contents of the cash drawer.

This essay was written with Barath Raghavan, and originally appeared in IEEE Spectrum.

New Wave of Attacks Targeting FortiGate Firewalls

22 January 2026 at 13:10

Hackers bypass the FortiCloud SSO login authentication to create new accounts and change device configurations.

The post New Wave of Attacks Targeting FortiGate Firewalls appeared first on SecurityWeek.

Under Armour ransomware breach: data of 72 million customers appears on the dark web

22 January 2026 at 13:02

When reports first emerged in November 2025 that sportswear giant Under Armour had been hit by the Everest ransomware group, the story sounded depressingly familiar: a big brand, a huge trove of data, and a lot of unanswered questions. Since then, the narrative around what actually happened has split into two competing versions—cautious corporate statements on one side and mounting evidence on the other that strongly suggests a large customer dataset is now circulating online.

Public communications and legal language talk about ongoing investigations, limited confirmation, and careful wording around “potential” impact. For many customers, that creates the impression that details are still emerging and that it’s unclear how serious the incident is. Meanwhile, a class action lawsuit filed in the US alleges negligence in data protection and references large‑scale exfiltration of sensitive information, including customer—and possibly employee—data during a November 2025 ransomware attack. Those lawsuits are, by definition, allegations, but they add weight to the idea that this is not a minor incident.

The Everest ransomware group claimed responsibility for the breach after Under Armour allegedly “failed to respond by the deadline.”

Everest Group leak site
Everest Group leak site

From the cybercriminals’ perspective, that means negotiations are over and the data has been published.

The Everest leak site also states that:

“After the full publication, all the data was duplicated across various hacker forums and leak database sites.”

Which seems to be confirmed by posts like this one, where the poster claims the data set contains full names, email addresses, phone numbers, physical locations, genders, purchase histories, and preferences. The data set contains 191,577,365 records including 72,727,245 unique email addresses.

Data made available on the Dark Web

So where does that leave Under Armour customers? The cautious corporate framing and the aggressive cybercriminal claims can’t both be entirely accurate, but they do not carry equal weight when it comes to assessing real-world risk. Ransomware groups sometimes lie about their access, but spinning up a major leak entry, publishing sample data, and distributing it across underground forums is a lot of work for a bluff that could be quickly disproven by affected users. Combined with the “Database Leaked” status on the Everest site, the balance of probabilities suggests that a substantial customer database is now in the wild, even if not every detail in the attackers’ claims is accurate.

Protecting yourself after a data breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Filling the Most Common Gaps in Google Workspace Security

Security teams at agile, fast-growing companies often have the same mandate: secure the business without slowing it down. Most teams inherit a tech stack optimized for breakneck growth, not resilience. In these environments, the security team is the helpdesk, the compliance expert, and the incident response team all rolled into one. Securing the cloud office in this scenario is all about

❌