Recently, fake LinkedIn profiles have started posting comment replies claiming that a user has “engaged in activities that are not in compliance” with LinkedIn’s policies and that their account has been “temporarily restricted” until they submit an appeal through a specified link in the comment.
The comments come in different shapes and sizes, but here’s one example we found.
The accounts posting the comments all try to look like official LinkedIn bots and use various names. It’s likely they create new accounts when LinkedIn removes them. Either way, multiple accounts similar to the “Linked Very” one above were reported in a short period, suggesting automated creation and posting at scale.
The same pattern is true for the links. The shortened link used in the example above has already been disabled, while others point directly to phishing sites. Scammers often use shortened LinkedIn links to build trust, making targets believe the messages are legitimate. Because LinkedIn can quickly disable these links, attackers likely test different approaches to see which last the longest.
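One practical defense is to triage a link's hostname before trusting it. Here is a minimal sketch in Python; the function name and domain sets are illustrative, not an exhaustive allowlist, and a shortened lnkd.in link should be expanded and re-checked rather than trusted outright:

```python
from urllib.parse import urlparse

# Hostnames LinkedIn actually uses; lnkd.in is LinkedIn's own URL
# shortener, but it hides the final destination, so treat it as
# "expand and re-check" rather than "trusted".
OFFICIAL = {"linkedin.com", "www.linkedin.com"}
SHORTENERS = {"lnkd.in"}

def classify_link(url: str) -> str:
    """Rough triage of a link found in a comment or message."""
    host = (urlparse(url).hostname or "").lower()
    if host in OFFICIAL:
        return "official"
    if host in SHORTENERS:
        return "shortened: expand it and re-check the destination"
    return "untrusted: do not enter credentials here"

print(classify_link("https://www.linkedin.com/feed/"))  # official
print(classify_link("https://lnkd.in/abc123"))
print(classify_link("https://linkedin-appeal-support.example/"))
```

Anything that falls into the last bucket, including lookalike domains, is exactly the kind of link these fake comments rely on.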
Here’s another example:
Malwarebytes blocks this last link based on the IP address:
If users follow these links, they are taken to a phishing page designed to steal their LinkedIn login details:
Image courtesy of BleepingComputer
A LinkedIn spokesperson confirmed to BleepingComputer they are aware of the situation:
“I can confirm that we are aware of this activity and our teams are working to take action.”
Stay safe
In situations like this, awareness is key—and now you know what to watch for. Some additional tips:
Don’t click on unsolicited links in private messages and comments without verifying with the trusted sender that they’re legitimate.
Always log in directly on the platform that you are trying to access, rather than through a link.
Use a password manager, which won’t auto-fill in credentials on fake websites.
Use a real-time, up-to-date anti-malware solution with a web protection module to block malicious sites.
Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!
Researchers have been tracking a Magecart campaign that targets several major payment providers, including American Express, Diners Club, Discover, and Mastercard.
Magecart is an umbrella term for criminal groups that specialize in stealing payment data from online checkout pages using malicious JavaScript, a technique known as web skimming.
In the early days, Magecart started as a loose coalition of threat actors targeting Magento‑based web stores. Today, the name is used more broadly to describe web-skimming operations against many e‑commerce platforms. In these attacks, criminals inject JavaScript into legitimate checkout pages to capture card data and personal details as shoppers enter them.
The campaign described by the researchers has been active since early 2022. They found a vast network of domains related to a long-running credit card skimming operation with a wide reach.
“This campaign utilizes scripts targeting at least six major payment network providers: American Express, Diners Club, Discover (a subsidiary of Capital One), JCB Co., Ltd., Mastercard, and UnionPay. Enterprise organizations that are clients of these payment providers are the most likely to be impacted.”
Web skimmers usually hook into the checkout flow using JavaScript. They are designed to read form fields containing card numbers, expiry dates, card verification codes (CVC), and billing or shipping details, then send that data to the attackers.
To avoid detection, the JavaScript is heavily obfuscated and may even trigger a self‑destruct routine to remove the skimmer from the page. This can make the page appear clean during investigations performed through an admin session.
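Site owners can catch injected or modified checkout scripts by comparing what the page serves against a known-good baseline, similar in spirit to Subresource Integrity. A minimal sketch in Python; the baseline dictionary and script contents are hypothetical:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# Known-good hashes recorded when the checkout page was last audited
# (hypothetical values for illustration).
baseline = {
    "checkout.js": sha256_hex(b"function pay(){ /* legit */ }"),
}

def audit_scripts(served: dict) -> list:
    """Return the names of served scripts that are new or modified."""
    suspicious = []
    for name, body in served.items():
        if baseline.get(name) != sha256_hex(body):
            suspicious.append(name)
    return suspicious

# A skimmer typically shows up as an unexpected or altered script.
served = {
    "checkout.js": b"function pay(){ /* legit */ }",
    "analytics.js": b"document.forms[0].addEventListener('submit', exfil)",
}
print(audit_scripts(served))  # ['analytics.js']
```

Because the comparison uses hashes of the delivered content, obfuscation doesn't help the attacker here: any change to a monitored script, however disguised, changes its hash.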
Besides other methods to stay hidden, the campaign uses bulletproof hosting for a stable environment. Bulletproof hosting refers to web hosting services designed to shield cybercriminals by deliberately ignoring abuse complaints, takedown requests, and law enforcement actions.
How to stay safe
Magecart campaigns affect three groups: customers, merchants, and payment providers. Because web skimmers operate inside the browser, they can bypass many traditional server‑side fraud controls.
While shoppers cannot fix compromised checkout pages themselves, they can reduce their exposure and improve their chances of spotting fraud early.
A few things you can do to protect against the risk of web skimmers:
Use virtual or single‑use cards for online purchases so any skimmed card number has a limited lifetime and spending scope.
Where possible, turn on transaction alerts (SMS, email, or app push) for card activity and review statements regularly to spot unsolicited charges quickly.
Use strong, unique passwords on bank and card portals so attackers cannot easily pivot from stolen card data to full account takeover.
Use a web protection solution to avoid connecting to malicious domains.
You need to set up remote access to a colleague’s computer. You do a Google search for “RustDesk download,” click one of the top results, and land on a polished website with documentation, downloads, and familiar branding.
You install the software, launch it, and everything works exactly as expected.
What you don’t see is the second program that installs alongside it—one that quietly gives attackers persistent access to your computer.
That’s exactly what we observed in a campaign using the fake domain rustdesk[.]work.
The bait: a near-perfect impersonation
We identified a malicious website at rustdesk[.]work impersonating the legitimate RustDesk project, which is hosted at rustdesk.com. The fake site closely mirrors the real one, complete with multilingual content and prominent warnings claiming (ironically) that rustdesk[.]work is the only official domain.
This campaign doesn’t exploit software vulnerabilities or rely on advanced hacking techniques. It succeeds entirely through deception. When a website looks legitimate and the software behaves normally, most users never suspect anything is wrong.
What happens when you run the installer
The installer performs a deliberate bait-and-switch:
It installs real RustDesk, fully functional and unmodified
It quietly installs a hidden backdoor, a malware framework known as Winos4.0
The user sees RustDesk launch normally. Everything appears to work. Meanwhile, the backdoor quietly establishes a connection to the attacker’s server.
By bundling malware with working software, attackers remove the most obvious red flag: broken or missing functionality. From the user’s point of view, nothing feels wrong.
Inside the infection chain
The malware executes through a staged process, with each step designed to evade detection and establish persistence:
Stage 1: The trojanized installer
The downloaded file (rustdesk-1.4.4-x86_64.exe) acts as both dropper and decoy. It writes two files to disk:
The legitimate RustDesk installer, which is executed to maintain cover
logger.exe, the Winos4.0 payload
The malware hides in plain sight. While the user watches RustDesk install normally, the malicious payload is quietly staged in the background.
Stage 2: Loader execution
The logger.exe file is a loader — its job is to set up the environment for the main implant. During execution, it:
Creates a new process
Allocates executable memory
Transitions execution to a new runtime identity: Libserver.exe
This loader-to-implant handoff is a common technique in sophisticated malware to separate the initial dropper from the persistent backdoor.
By changing its process name, the malware makes forensic analysis harder. Defenders looking for “logger.exe” won’t find a running process with that name.
Stage 3: In-memory module deployment
The Libserver.exe process unpacks the actual Winos4.0 framework entirely in memory. Several WinosStager DLL modules—and a large ~128 MB payload—are loaded without being written to disk as standalone files.
Traditional antivirus tools focus on scanning files on disk (file-based detection). By keeping its functional components in memory only, the malware significantly reduces the effectiveness of file-based detection. This is why behavioral analysis and memory scanning are critical for detecting threats like Winos4.0.
The hidden payload: Winos4.0
The secondary payload is identified as Winos4.0 (WinosStager): a sophisticated remote access framework that has been observed in multiple campaigns, particularly targeting users in Asia.
Once active, it allows attackers to:
Monitor victim activity and capture screenshots
Log keystrokes and steal credentials
Download and execute additional malware
Maintain persistent access even after system reboots
This isn’t simple malware—it’s a full-featured attack framework. Once installed, attackers have a foothold they can use to conduct espionage, steal data, or deploy ransomware at a time of their choosing.
Technical detail: How the malware hides
The malware employs several techniques to avoid detection:
Runs entirely in memory: loads executable code without writing files to disk, which evades file-based detection.
Detects analysis environments: checks available system memory and looks for debugging tools, making it harder for security researchers to analyze its behavior.
Checks system language: queries locale settings via the Windows registry, which may be used to target (or avoid) specific geographic regions.
Clears browser history: invokes system APIs to delete browsing data, removing evidence of how the victim found the malicious site.
Hides configuration in the registry: stores encrypted data in unusual registry paths, keeping it out of sight during casual inspection.
Command-and-control activity
Shortly after installation, the malware connects to an attacker-controlled server:
IP: 207.56.13[.]76
Port: 5666/TCP
This connection allows attackers to send commands to the infected machine and receive stolen data in return. Network analysis confirmed sustained two-way communication consistent with an established command-and-control session.
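Defenders can sweep active network connections against known IOCs such as the address above. A minimal sketch in Python; in practice you would feed it real connection data (for example from psutil or parsed netstat output) rather than the hardcoded sample list:

```python
# Known command-and-control endpoints from this campaign
# (defanging brackets removed for matching).
IOC_ENDPOINTS = {("207.56.13.76", 5666)}

def find_c2_connections(connections):
    """Flag (remote_ip, remote_port) pairs that match known IOCs."""
    return [conn for conn in connections if conn in IOC_ENDPOINTS]

# Sample data mixing legitimate RustDesk traffic with the C2 session.
active = [
    ("209.250.254.15", 21115),  # RustDesk relay: expected
    ("207.56.13.76", 5666),     # matches the C2 IOC: investigate
]
print(find_c2_connections(active))  # [('207.56.13.76', 5666)]
```

Matching on the full (address, port) pair rather than the IP alone keeps false positives down when an IOC address also hosts unrelated services.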
How the malware blends into normal traffic
The malware is particularly clever in how it disguises its network activity:
207.56.13[.]76:5666 (malicious): command-and-control server
209.250.254.15:21115-21116 (legitimate): RustDesk relay traffic
api.rustdesk.com:443 (legitimate): RustDesk API
Because the victim installed real RustDesk, the malware’s network traffic is mixed with legitimate remote desktop traffic. This makes it much harder for network security tools to identify the malicious connections: the infected computer looks like it’s just running RustDesk.
What this campaign reveals
This attack demonstrates a troubling trend: legitimate software used as camouflage for malware.
The attackers didn’t need to find a zero-day vulnerability or craft a sophisticated exploit. They simply:
Registered a convincing domain name
Cloned a legitimate website
Bundled real software with their malware
Let the victim do the rest
This approach works because it exploits human trust rather than technical weaknesses. When software behaves exactly as expected, users have no reason to suspect compromise.
The rustdesk[.]work campaign shows how attackers can gain access without exploits, warnings, or broken software. By hiding behind trusted open-source tools, this attack achieved persistence and cover while giving victims no reason to suspect compromise.
The takeaway is simple: software behaving normally does not mean it’s safe. Modern threats are designed to blend in, making layered defenses and behavioral detection essential.
For individuals:
Always verify download sources. Before downloading software, check that the domain matches the official project. For RustDesk, the legitimate site is rustdesk.com—not rustdesk.work or similar variants.
Be suspicious of search results. Attackers use SEO poisoning to push malicious sites to the top of search results. When possible, navigate directly to official websites rather than clicking search links.
Use security software. Malwarebytes Premium Security detects malware families like Winos4.0, even when bundled with legitimate software.
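The first tip above can be codified: compare the registrable domain of a download URL against the expected one, so lookalikes such as rustdesk.work fail the check. A minimal sketch in Python; a real implementation should use a public-suffix-aware library such as tldextract, since this naive version assumes simple suffixes like .com or .work and is wrong for ones like .co.uk:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    """Naive last-two-labels extraction; fine for .com/.work,
    wrong for multi-label suffixes (use tldextract in practice)."""
    host = (urlparse(url).hostname or "").lower()
    return ".".join(host.split(".")[-2:])

def is_official_rustdesk(url: str) -> bool:
    return registrable_domain(url) == "rustdesk.com"

print(is_official_rustdesk("https://rustdesk.com/download"))       # True
print(is_official_rustdesk("https://www.rustdesk.com/"))           # True
print(is_official_rustdesk("https://rustdesk.work/download"))      # False
print(is_official_rustdesk("https://rustdesk.com.evil.example/"))  # False
```

Note the last case: comparing registrable domains (not just checking whether the hostname starts with "rustdesk.com") also defeats subdomain tricks like rustdesk.com.evil.example.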
For businesses:
Monitor for unusual network connections. Outbound traffic on port 5666/TCP, or connections to unfamiliar IP addresses from systems running remote desktop software, should be investigated.
Implement application allowlisting. Restrict which applications can run in your environment to prevent unauthorized software execution.
Educate users about typosquatting. Training programs should include examples of fake websites and how to verify legitimate download sources.
Block known malicious infrastructure. Add the IOCs listed above to your security tools.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.
California’s privacy regulator has fined a Texas data broker $45,000 and banned it from selling Californians’ personal information after it sold Alzheimer’s patients’ data. Texan company Rickenbacher Data LLC, which does business as Datamasters, bought and resold the names, addresses, phone numbers, and email addresses of people who suffered from serious health conditions, according to the California Privacy Protection Agency (CPPA).
The CPPA’s final order against Datamasters says that the company maintained a database containing 435,245 postal addresses for Alzheimer’s patients. But it didn’t stop there. Also up for grabs were records for 2,317,141 blind or visually impaired people, and 133,142 addiction sufferers. It also sold records for 857,449 people with bladder control issues.
Health-related data wasn’t the only category Datamasters trafficked in. The company also sold information tied to ethnicity, including so-called “Hispanic lists” containing more than 20 million names, as well as age-based “senior lists” and indicators of financial vulnerability. For example, it sold records of people holding high-interest mortgages.
And if buyers wanted data on other likely customer characteristics and actions, such as whether someone was likely a liberal or a right-winger, Datamasters could provide that too, thanks to 3,370 “Consumer Predictor Models” spanning automotive preferences, financial activity, media use, political affiliation, and nonprofit activity.
Datamasters offers outright purchase of records from its national consumer database, which it claims covers 114 million households and 231 million individuals. Customers can also buy subscription-based updates.
California regulators began investigating Datamasters after discovering the company had failed to register as a data broker in the state, as required under California’s Delete Act. The law has required data brokers to register since January 31, 2025.
The company originally denied that it did business in California or had data on Californians. However, that claim collapsed when regulators found an Excel spreadsheet on the company’s own website listing 204,218 California student records.
Datamasters first said it had not screened its national database to remove Californians’ data. After getting a lawyer, it changed its story, asserting that it did in fact filter Californians out of the data set. That didn’t convince the CPPA though.
The regulator acknowledged that Datamasters did try to comply with Californian privacy laws, but that it
“lacked sufficient written policies and procedures to ensure compliance with the Delete Act.”
The fine imposed on Datamasters also takes into account that it hadn’t registered on the state’s data broker registry. Data brokers that don’t register are liable for $200 per day in fines, and failing to delete consumer data will incur $200 per consumer per day in fines.
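The fine structure makes it easy to see how quickly penalties compound. Here is a rough sketch in Python of the statutory math described above (illustrative only; actual penalties are determined by the regulator):

```python
def delete_act_exposure(days_unregistered: int,
                        undeleted_consumers: int = 0,
                        days_undeleted: int = 0) -> int:
    """Rough statutory exposure under California's Delete Act:
    $200 per day for failing to register, plus $200 per consumer
    per day for failing to delete consumer data."""
    registration_fine = 200 * days_unregistered
    deletion_fine = 200 * undeleted_consumers * days_undeleted
    return registration_fine + deletion_fine

# A broker unregistered for 90 days faces $18,000 in registration
# fines alone, before any deletion failures are counted.
print(delete_act_exposure(90))  # 18000
```

The per-consumer multiplier is the sharp edge: failing to delete data for even a small number of consumers over a sustained period quickly dwarfs the registration penalty.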
Starting January 1, 2028, data brokers registered in California will also be required to undergo independent third-party compliance audits every three years.
Why selling extra-sensitive customer data is so dangerous
“History teaches us that certain types of lists can be dangerous,”
Michael Macko, the CPPA’s head of enforcement, pointed out.
Research has told us that Alzheimer’s patients are especially vulnerable to financial exploitation. If you think that scammers don’t seek out such lists, think again; criminals were found to have accessed data from at least three data brokers in the past. While there’s no suggestion that Datamasters knowingly sold data to scammers, it seems easy for people to buy data broker lists.
It also doesn’t take a PhD to see why many of these records (which, remember, the company holds about people nationwide) could be especially sensitive in the current US political climate.
There’s a broader privacy issue here, too. While many Americans might assume that the federal Health Insurance Portability and Accountability Act (HIPAA) protects their health data, it only applies to healthcare providers. Amazingly, data brokers sit outside its purview.
So what can you do to protect yourself?
Your first port of call should be your state’s data protection law. California introduced the Data Request and Opt-out Platform (DROP) system this year under the Delete Act. It’s an opt-out system for California residents to make all data brokers on the registry delete data held about them.
If you don’t live in a state that takes sensitive data seriously, your options are more limited. You could move—maybe to Europe, where privacy protections are considerably stronger.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
California’s privacy regulator has fined a Texas data broker $45,000 and banned it from selling Californians’ personal information after it sold Alzheimer patients’ data. Texan company Rickenbacher Data LLC, which does business as Datamasters, bought and resold the names, addresses, phone numbers, and email addresses of people that suffered from serious health conditions, according to the California Privacy Protection Agency (CPPA).
The CPPA’s final order against Datamasters says that the company maintained a database containing 435,245 postal addresses for Alzheimer’s patients. But it didn’t stop there. Also up for grabs were records for 2,317,141 blind or visually impaired people, and 133,142 addiction sufferers. It also sold records for 857,449 people with bladder control issues.
Health-related data wasn’t the only category Datamasters trafficked in. The company also sold information tied to ethnicity, including so-called “Hispanic lists” containing more than 20 million names, as well as age-based “senior lists” and indicators of financial vulnerability. For example, it sold records of people holding high-interest mortgages.
And if buyers wanted data on other likely customer characteristics and actions, such as who was probably a liberal versus a conservative, Datamasters could supply that too, thanks to 3,370 “Consumer Predictor Models” spanning automotive preferences, financial activity, media use, political affiliation, and nonprofit activity.
Datamasters offers outright purchase of records from its national consumer database, which it claims covers 114 million households and 231 million individuals. Customers can also buy subscription-based updates.
California regulators began investigating Datamasters after discovering the company had failed to register as a data broker in the state, as required under California’s Delete Act. The law has required data brokers to register since January 31, 2025.
The company originally denied that it did business in California or had data on Californians. However, that claim collapsed when regulators found an Excel spreadsheet on its website listing 204,218 California student records.
Datamasters first said it had not screened its national database to remove Californians’ data. After getting a lawyer, it changed its story, asserting that it did in fact filter Californians out of the data set. That didn’t convince the CPPA though.
The regulator acknowledged that Datamasters did try to comply with Californian privacy laws, but found that it
“lacked sufficient written policies and procedures to ensure compliance with the Delete Act.”
The fine imposed on Datamasters also takes into account that it hadn’t registered on the state’s data broker registry. Data brokers that don’t register are liable for $200 per day in fines, and failing to delete consumer data will incur $200 per consumer per day in fines.
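To see how quickly those two penalty clauses can stack up, here is a minimal arithmetic sketch. The per-day rates come from the Delete Act figures above; the day counts and consumer counts in the example are made up for illustration.

```python
# Hypothetical illustration of how Delete Act penalties can accrue.
# Rates are from the article; the example inputs are invented.

REGISTRATION_FINE_PER_DAY = 200           # failing to register as a data broker
DELETION_FINE_PER_CONSUMER_PER_DAY = 200  # failing to delete a consumer's data

def delete_act_exposure(days_unregistered: int,
                        consumers_not_deleted: int,
                        days_not_deleted: int) -> int:
    """Total potential fine in dollars under the two penalty clauses."""
    registration = REGISTRATION_FINE_PER_DAY * days_unregistered
    deletion = (DELETION_FINE_PER_CONSUMER_PER_DAY
                * consumers_not_deleted
                * days_not_deleted)
    return registration + deletion

# Example: 30 days unregistered, plus 100 consumers' data kept 10 days too long.
print(delete_act_exposure(30, 100, 10))  # 30*200 + 100*200*10 = 206000
```

The per-consumer, per-day structure of the deletion penalty is what makes it bite: it scales with both the size of the database and the length of the delay.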
Starting January 1, 2028, data brokers registered in California will also be required to undergo independent third-party compliance audits every three years.
Why selling extra-sensitive customer data is so dangerous
“History teaches us that certain types of lists can be dangerous,”
Michael Macko, the CPPA’s head of enforcement, pointed out.
Research has told us that Alzheimer’s patients are especially vulnerable to financial exploitation. If you think that scammers don’t seek out such lists, think again; criminals were found to have accessed data from at least three data brokers in the past. While there’s no suggestion that Datamasters knowingly sold data to scammers, it seems easy for people to buy data broker lists.
It also doesn’t take a PhD to see why many of these records (which, remember, the company holds about people nationwide) could be especially sensitive in the current US political climate.
There’s a broader privacy issue here, too. While many Americans might assume that the federal Health Insurance Portability and Accountability Act (HIPAA) protects their health data, it only applies to healthcare providers. Amazingly, data brokers sit outside its purview.
So what can you do to protect yourself?
Your first port of call should be your state’s data protection law. California introduced the Data Request and Opt-out Platform (DROP) system this year under the Delete Act. It’s an opt-out system for California residents to make all data brokers on the registry delete data held about them.
If you don’t live in a state that takes sensitive data seriously, your options are more limited. You could move—maybe to Europe, where privacy protections are considerably stronger.
We don’t just report on data privacy—we help you remove your personal information
Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.
If you were still questioning whether iOS 26+ is for you, now is the time to make that call.
Why?
On December 12, 2025, Apple patched two WebKit zero‑day vulnerabilities linked to mercenary spyware and is now effectively pushing iPhone 11 and newer users toward iOS 26+, because that’s where the fixes and new memory protections live. These vulnerabilities were primarily used in highly targeted attacks, but such campaigns are likely to expand over time.
WebKit powers the Safari browser and many other iOS applications, so it’s a large attack surface to leave exposed, and exposure isn’t limited to “risky” behavior. These vulnerabilities allowed an attacker to execute arbitrary code on a device via malicious web content.
Apple has confirmed that attackers are already exploiting these vulnerabilities in the wild, making installation of the update a high‑priority security task for every user. Campaigns that start with diplomats, journalists, or executives often lead to tooling and exploits leaking or being repurposed, so “I’m not a target” is not a viable safety strategy.
Due to public resistance to new features like Liquid Glass, many iPhone users have not yet upgraded to iOS 26.2. Reports suggest adoption of iOS 26 has been unusually slow. As of January 2026, only about 4.6% of active iPhones are on iOS 26.2, and roughly 16% are on any version of iOS 26, leaving the vast majority on older releases such as iOS 18.
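A quick back-of-the-envelope check of those adoption figures shows how large the unpatched population is. The percentages are the article’s estimates, not official Apple numbers.

```python
# Back-of-the-envelope check of the cited adoption estimates
# (4.6% on iOS 26.2, ~16% on any iOS 26 build).

on_ios_26_2 = 4.6     # % of active iPhones on the patched release
on_any_ios_26 = 16.0  # % on any version of iOS 26

# On iOS 26, but not yet on the patched 26.2 point release:
on_older_26 = on_any_ios_26 - on_ios_26_2
# Still outside iOS 26 entirely (iOS 18 or earlier):
not_on_26 = 100.0 - on_any_ios_26

print(f"{on_older_26:.1f}% on iOS 26 but not 26.2, "
      f"{not_on_26:.1f}% not on iOS 26 at all")
```

In other words, under these estimates more than 95% of active iPhones were still running a release without the fixes when the figures were collected.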
However, Apple only ships these fixes and newer protections, such as Memory Integrity Enforcement, on iOS 26+ for supported devices. Users on older, unsupported devices won’t be able to access these protections at all.
Another important factor in the upgrade cycle is restarting the device. What many people don’t realize is that when you restart your device, any memory-resident malware is flushed—unless it has somehow gained persistence, in which case it will return. High-end spyware tools tend to avoid leaving traces needed for persistence and often rely on users not restarting their devices.
Upgrading requires a restart, which makes this a win-win: you get the latest protections, and any memory-resident malware is flushed at the same time.
To check whether you’re running the latest software version on iOS or iPadOS, go to Settings > General > Software Update. It’s also worth turning on Automatic Updates if you haven’t already; you can do that on the same screen.
How to stay safe
The most important fix—however painful you may find it—is to upgrade to iOS 26.2. Not doing so means missing an accumulating list of security fixes, leaving your device vulnerable to more and more newly discovered vulnerabilities.
But here are some other useful tips:
Make it a habit to restart your device on a regular basis. The NSA recommends doing this weekly.
Do not open unsolicited links or attachments without verifying them with the sender through a trusted channel.
Remember, Apple threat notifications will never ask users to click links, open files, install apps, or provide account passwords or verification codes.
For Apple Mail users specifically, these vulnerabilities create risk when viewing HTML-formatted emails containing malicious web content.
Malwarebytes for iOS can help keep your device secure, with Trusted Advisor alerting you when important updates are available.
If you are a high-value target, or you want the extra level of security, consider using Apple’s Lockdown Mode.
We don’t just report on phone security—we provide it
Last week, many Instagram users began receiving unsolicited emails from the platform that warned about a password reset request.
The message said:
“Hi {username}, We got a request to reset your Instagram password. If you ignore this message, your password will not be changed. If you didn’t request a password reset, let us know.”
Around the same time that users began receiving these emails, a cybercriminal using the handle “Solonik” offered data allegedly containing information about 17 million Instagram users for sale on a dark web forum.
These 17 million or so records include:
Usernames
Full names
User IDs
Email addresses
Phone numbers
Countries
Partial locations
Please note that there are no passwords listed in the data.
Despite the timing of the two events, Instagram denied this weekend that they are related. On the platform X, the company stated it had fixed an issue that allowed an external party to request password reset emails for “some people.”
So, what’s happening?
Regarding the data found on the dark web last week, Shahak Shalev, global head of scam and AI research at Malwarebytes, shared that “there are some indications that the Instagram data dump includes data from other, older, alleged Instagram breaches, and is a sort of compilation.” As Shalev’s team investigates the data, he also said that the earliest password reset requests reported by users came days before the data was first posted on the dark web, which might mean that “the data may have been circulating in more private groups before being made public.”
However, another possibility, Shalev said, is that “another vulnerability/data leak was happening as some bad actor tried spraying for [Instagram] accounts. Instagram’s announcement seems to reference that spraying. Besides the suspicious timing, there’s no clear connection between the two at this time.”
But, importantly, scammers will not care whether these incidents are related or not. They will try to take advantage of the situation by sending out fake emails.
“We felt it was important to alert people about the data availability so that everyone could reset their passwords, directly from the app, and be on alert for other phishing communications,” Shalev said.
If and when we find out more, we’ll keep you posted, so stay tuned.
Should you want to err on the side of caution and change your password, make sure to do so in the app and not by clicking any links in an email. If the email is fake, you could end up handing scammers your password.
Another thing to keep in mind is that this is Meta data, which means some users may have reused the same credentials on, or linked their accounts to, Facebook or WhatsApp. So, as a precaution, check recent logins and active sessions on Instagram, WhatsApp, and Facebook, and log out from any devices or locations you do not recognize.
If you want to find out whether your data was included in an Instagram data breach, or any other for that matter, try our free Digital Footprint scan.
Grok’s failure to block sexualized images of minors has turned a single “isolated lapse” into a global regulatory stress test for xAI’s ambitions. The response from lawmakers and regulators suggests this will not be solved with a quick apology and a hotfix.
Last week we reported on Grok’s apology after it generated an image of young girls in “sexualized attire.”
The apology followed the introduction of Grok’s paid “Spicy Mode” in August 2025, which was marketed as edgy and less censored. In practice it enabled users to generate sexual deepfake images, including content that may cross into illegal child sexual abuse material (CSAM) under US and other jurisdictions’ laws.
A report from web-monitoring tool CopyLeaks highlighted “thousands” of incidents of Grok being used to create sexually suggestive images of non-consenting celebrities.
This is starting to backfire. Reportedly, three US senators are asking Google and Apple to remove Elon Musk’s Grok and X apps from their app stores, citing the spread of nonconsensual sexualized AI images of women and minors and arguing it violates the companies’ app store rules.
“In recent days, X users have used the app’s Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable.”
The UK government has also threatened possible action against the platform. Government officials have said they would fully support any action taken by Ofcom, the independent media regulator, against X, even if that meant UK regulators blocking the platform.
Indonesia and Malaysia already blocked Grok after its “digital undressing” function flooded the internet with suggestive and obscene manipulated images of women and minors.
As it turns out, a user prompted Grok to generate its own “apology,” which it did. After backlash over sexualized images of women and minors, Grok/X announced limits on image generation and editing for paying subscribers only, effectively paywalling those capabilities on main X surfaces.
For lawmakers already worried about disinformation, election interference, deepfakes, and abuse imagery, Grok is fast becoming the textbook case for why “move fast and break things” doesn’t mix with AI that can sexualize real people on demand.
Hopefully, the next wave of rules, ranging from EU AI enforcement to platform-specific safety obligations, will treat this incident as the baseline risk that all large-scale visual models must withstand, not as an outlier.
Keep your children safe
If you ever wondered why parents post images of their children with a smiley across their face, this is the reason.
Don’t make it easy for strangers to copy, reuse, or manipulate your photos.
This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.
And treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. AI fakes are not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.
We don’t just report on threats – we help protect your social media
Other noteworthy stories that might have slipped under the radar: Jaguar Land Rover sales crash, hundreds of gen-AI data policy violations, and Chinese cyberattacks against Taiwan intensified.
Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.
This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could open the door to further cases against people who develop, sell, or promote similar tools.
In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.
In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware is installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.
This is no longer possible, not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.
However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.
Fleming is expected to be sentenced later this year.
Removing stalkerware
Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.
It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.
Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.
To scan your device:
Open your Malwarebytes dashboard
Start a Scan
The scan may take a few minutes.
If malware is detected, you can choose one of the following actions:
Uninstall. The threat will be deleted from your device.
Ignore Always. The file detection will be added to the Allow List, and excluded from future scans. Legitimate files are sometimes detected as malware. We recommend reviewing scan results and adding files to Ignore Always that you know are safe and want to keep.
Ignore Once. The detection is ignored for this scan only. It will be detected again during your next scan.
Malwarebytes detects pcTattletale as PUP.Optional.PCTattletale.
We don’t just report on threats—we remove them
Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.