Meta confirms it’s working on premium subscriptions for its apps

Meta plans to test exclusive features that will be incorporated into paid versions of Facebook, Instagram, and WhatsApp, the company confirmed to TechCrunch.

But these plans are not to be confused with the ad-free subscription options Meta introduced for Facebook and Instagram in the EU, the European Economic Area, and Switzerland in late 2023, which it framed as a way to comply with General Data Protection Regulation (GDPR) and Digital Markets Act requirements.

From November 2023, users in those regions could either keep using the services for free with personalized ads or pay a monthly fee for an ad‑free experience. European rules require Meta to get users’ consent in order to show them targeted ads, so this was an obvious attempt to recoup advertising revenue when users declined to give that consent.

This year, users in the UK were given the same choice: use Meta’s products for free or subscribe to use them without ads. Meta made the offer only grudgingly, judging by its tone: “As part of laws in your region, you have a choice.”

The ad-free option that has been rolling out coincides with the announcement of Meta’s premium subscriptions.

That ad-free option, however, is not what Meta is talking about now.

The newly announced plans are not about ads, and they are also separate from Meta Verified, which starts at around $15 a month and focuses on creators and businesses, offering a verification badge, better support, and anti‑impersonation protection.

Instead, these new subscriptions are likely to focus on additional features—more control over how users share and connect, and possibly tools such as expanded AI capabilities, unlimited audience lists, seeing who you follow that doesn’t follow you back, or viewing stories without the poster knowing it was you.

These examples are unconfirmed. All we know for sure is that Meta plans to test new paid features to see which ones users are willing to pay for and how much it can charge.

Meta has said these features will focus on productivity, creativity, and expanded AI.

My opinion

Unfortunately, this feels like another refusal to listen.

Most of us aren’t asking for more AI in our feeds. We’re asking for a basic sense of control: control over who sees us, what’s tracked about us, and how our data is used to feed an algorithm designed to keep us scrolling.

Users shouldn’t have to choose between being mined for behavioral data or paying a monthly fee just to be left alone. The message baked into “pay or be profiled” is that privacy is now a luxury good, not a default right. But while regulators keep saying the model is unlawful, the experience on the ground still nudges people toward the path of least resistance: accept the tracking and move on.

Even then, this level of choice is only available to users in Europe.

Why not offer the same option to users in the US? Or will it take stronger US privacy regulation to make that happen?


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


Microsoft Office zero-day lets malicious documents slip past security checks

Microsoft issued an emergency patch for a high-severity zero-day vulnerability in Office that allows attackers to bypass document security checks and is being exploited in the wild via malicious files.

Microsoft pushed the emergency patch for the zero‑day, tracked as CVE-2026-21509, and classified it as a “Microsoft Office Security Feature Bypass Vulnerability” with a CVSS score of 7.8 out of 10.

The flaw allows attackers to bypass Object Linking and Embedding (OLE) mitigations that are designed to block unsafe COM/OLE controls inside Office documents. This means a malicious attachment could infect a PC despite built-in protections.

In a real-life scenario, an attacker creates a fake Word, Excel, or PowerPoint file containing hidden “mini‑programs” or special objects that can run code and perform other actions on the affected computer. Normally, Office has safety checks that would block those mini-programs because they’re risky.

However, the vulnerability allows the attacker to tweak the file’s structure and hidden information in a way that tricks Office into thinking the dangerous mini‑program inside the document is harmless. As a result, Office skips the usual security checks and allows the hidden code to run.

As code to test the bypass is publicly available, increasing the risk of exploitation, users are urged to apply the patch as soon as possible.

Updating Microsoft 365 and Office

How to protect your system

What you need to do depends on which version of Office you’re using.

The affected products include Microsoft Office 2016, 2019, LTSC 2021, LTSC 2024, and Microsoft 365 Apps (both 32‑bit and 64‑bit).

Office 2021 and later are protected via a server‑side change once Office is restarted. To apply it, close all Office apps and restart them.

Office 2016 and 2019 require a manual update. Run Windows Update with the option to update other Microsoft products turned on.

If you’re running build 16.0.10417.20095 or higher, no action is required. To check, open any Office app, go to your account page, and select About for the application you have open; the build number at the top should read 16.0.10417.20095 or higher.
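
The build comparison above has one subtle pitfall: dotted build strings must be compared numerically, field by field, not alphabetically. A minimal sketch (the helper name is ours; the patched build number is the one from the advisory):

```python
def is_patched(build: str, minimum: str = "16.0.10417.20095") -> bool:
    """Compare dotted Office build strings field by field, numerically."""
    as_tuple = lambda s: tuple(int(part) for part in s.split("."))
    return as_tuple(build) >= as_tuple(minimum)

# Numeric comparison matters: as a plain string, "16.0.9999.0" would sort
# *after* "16.0.10417.20095" and wrongly pass the check.
print(is_patched("16.0.10417.20095"))  # True: exactly the patched build
print(is_patched("16.0.9999.0"))       # False: older build, still vulnerable
```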

What always helps:

  • Don’t open unsolicited attachments without verifying them with a trusted sender.
  • Treat all unexpected documents, especially those asking to “enable content” or “enable editing,” as suspicious.
  • Keep macros disabled by default and only allow signed macros from trusted publishers.
  • Use an up-to-date real-time anti-malware solution.
  • Keep your operating system and software fully up to date.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


Clawdbot’s rename to Moltbot sparks impersonation campaign

After the viral AI assistant Clawdbot was forced to rename to Moltbot due to a trademark dispute, opportunists moved quickly. Within days, typosquat domains and a cloned GitHub repository appeared—impersonating the project’s creator and positioning infrastructure for a potential supply-chain attack.

The code is clean. The infrastructure is not. With the clone’s GitHub downloads and star count rising rapidly, we took a deep dive into how fake domains target viral open source projects.

Fake domains spring up to impersonate Moltbot's landing page

The background: Why was Clawdbot renamed?

In early 2026, Peter Steinberger’s Clawdbot became one of the fastest-growing open source projects on GitHub. The self-hosted assistant—described as “Claude with hands”—allowed users to control their computer through WhatsApp, Telegram, Discord, and similar platforms.

Anthropic later objected to the name. Steinberger complied and rebranded the project to Moltbot (“molt” being what lobsters do when they shed their shell).

During the rename, both the GitHub organization and X (formerly Twitter) handle were briefly released before being reclaimed. Attackers monitoring the transition grabbed them within seconds.

“Had to rename our accounts for trademark stuff and messed up the GitHub rename and the X rename got snatched by crypto shills.” — Peter Steinberger


That brief gap was enough.

Impersonation infrastructure emerged

While investigating a suspicious repository, I uncovered a coordinated set of assets designed to impersonate Moltbot.

Domains

  • moltbot[.]you
  • clawbot[.]ai
  • clawdbot[.]you

Repository

  • github[.]com/gstarwd/clawbot — a cloned repository using a typosquatted variant of the former Clawdbot project name

Website

A polished marketing site featuring:

  • professional design closely matching the real project
  • SEO optimization and structured metadata
  • download buttons, tutorials, and FAQs
  • claims of 61,500+ GitHub stars lifted from the real repository

Evidence of impersonation

False attribution: The site’s schema.org metadata falsely claims authorship by Peter Steinberger, linking directly to his real GitHub and X profiles. This is explicit identity misrepresentation.

The site's metadata

Misdirection to an unauthorized repository: “View on GitHub” links send users to gstarwd/clawbot, not the official moltbot/moltbot repository.

Stolen credibility: The site prominently advertises tens of thousands of stars that belong to the real project. The clone has virtually none (although at the time of writing, that number is steadily rising).

The site advertises 61,500+ GitHub stars

Mixing legitimate and fraudulent links: Some links point to real assets, such as official documentation or legitimate binaries. Others redirect to impersonation infrastructure. This selective legitimacy defeats casual verification and appears deliberate.

Full SEO optimization: Canonical tags, Open Graph metadata, Twitter cards, and analytics are all present—clearly intended to rank the impersonation site ahead of legitimate project resources.

The ironic security warning: The impersonation site even warns users about scams involving fake cryptocurrency tokens—while itself impersonating the project.

The site warns about crypto scams.

Code analysis: Clean by design

I performed a static audit of the gstarwd/clawbot repository:

  • no malicious npm scripts
  • no credential exfiltration
  • no obfuscation or payload staging
  • no cryptomining
  • no suspicious network activity

The code is functionally identical to the legitimate project, which is not reassuring.

The threat model

The absence of malware is the strategy. Nothing here suggests an opportunistic malware campaign. Instead, the setup points to early preparation for a supply-chain attack.

The likely chain of events:

A user searches for “clawbot GitHub” or “moltbot download” and finds moltbot[.]you or gstarwd/clawbot.

The code looks legitimate and passes a security audit.

The user installs the project and configures it, adding API keys and messaging tokens. Trust is established.

At a later point, a routine update is pulled through npm update or git pull. A malicious payload is delivered into an installation the user already trusts.

An attacker can then harvest:

  • Anthropic API keys
  • OpenAI API keys
  • WhatsApp session credentials
  • Telegram bot tokens
  • Discord OAuth tokens
  • Slack credentials
  • Signal identity keys
  • full conversation histories
  • command execution access on the compromised machine

What’s malicious, and what isn’t

Clearly malicious

  • false attribution to a real individual
  • misrepresentation of popularity metrics
  • deliberate redirection to an unauthorized repository

Deceptive but not yet malware

  • typosquat domains
  • SEO manipulation
  • cloned repositories with clean code

Not present (yet)

  • active malware
  • data exfiltration
  • cryptomining

Clean code today lowers suspicion tomorrow.

A familiar pattern

This follows a well-known pattern in open source supply-chain attacks.

A user searches for a popular project and lands on a convincing-looking site or cloned repository. The code appears legitimate and passes a security audit.

They install the project and configure it, adding API keys or messaging tokens so it can work as intended. Trust is established.

Later, a routine update arrives through a standard npm update or git pull. That update introduces a malicious payload into an installation the user already trusts.

From there, an attacker can harvest credentials, conversation data, and potentially execute commands on the compromised system.

No exploit is required. The entire chain relies on trust rather than technical vulnerabilities.

How to stay safe

Impersonation infrastructure like this is designed to look legitimate long before anything malicious appears. By the time a harmful update arrives—if it arrives at all—the software may already be widely installed and trusted.

That’s why basic source verification still matters, especially when popular projects rename or move quickly.

Advice for users

  • Verify GitHub organization ownership
  • Bookmark official repositories directly
  • Treat renamed projects as higher risk during transitions
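
The first point can be partly automated before any credentials are configured. The sketch below is our own illustrative helper, not an official tool: it extracts the owner and repository name from a git remote URL and checks it against the official moltbot/moltbot location mentioned above.

```python
from urllib.parse import urlparse

# Official location per the project; anything else is suspect.
OFFICIAL = {("github.com", "moltbot", "moltbot")}

def repo_identity(remote_url: str) -> tuple[str, str, str]:
    """Extract (host, owner, repo) from an https or ssh git remote URL."""
    if remote_url.startswith("git@"):              # git@github.com:owner/repo.git
        host, _, path = remote_url[4:].partition(":")
    else:                                          # https://github.com/owner/repo
        parsed = urlparse(remote_url)
        host, path = parsed.netloc, parsed.path.lstrip("/")
    owner, _, repo = path.partition("/")
    return host, owner, repo.removesuffix(".git")

def is_official(remote_url: str) -> bool:
    return repo_identity(remote_url) in OFFICIAL

print(is_official("https://github.com/moltbot/moltbot.git"))  # True
print(is_official("https://github.com/gstarwd/clawbot"))      # False: the clone
```

Running this against the output of `git remote -v` catches the typosquat even when the cloned code itself looks identical.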

Advice for maintainers

  • Pre-register likely typosquat domains before public renames
  • Coordinate renames and handle changes carefully
  • Monitor for cloned repositories and impersonation sites

Pro tip: Malwarebytes customers are protected. Malwarebytes is actively blocking all known indicators of compromise (IOCs) associated with this impersonation infrastructure, preventing users from accessing the fraudulent domains and related assets identified in this investigation.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


Burner phones and lead-lined bags: a history of UK security tactics in China

Starmer’s team is wary of spies, but such fears are not new – Theresa May was once warned to get dressed under a duvet

When prime ministers travel to China, heightened security arrangements are a given – as is the quiet game of cat and mouse that takes place behind the scenes as each side tests out the other’s tradecraft and capabilities.

Keir Starmer’s team has been issued with burner phones and fresh SIM cards, and is using temporary email addresses, to prevent devices from being loaded with spyware or UK government servers from being hacked.


© Photograph: Simon Dawson/Simon Dawson/10 Downing Street



Malicious Chrome extensions can spy on your ChatGPT chats

Researchers discovered 16 malicious browser extensions for Google Chrome and Microsoft Edge that steal ChatGPT session tokens, giving attackers access to accounts, including conversation history and metadata.

The 16 malicious extensions (15 for Chrome and 1 for Edge) claim to improve and optimize ChatGPT, but instead siphon users’ session tokens to attackers. Together, they have been downloaded around 900 times, a relatively small number compared to other malicious extensions.

Despite benign descriptions and, in some cases, a “featured” badge, the real goal of these extensions is to hijack ChatGPT identities by stealing session authentication tokens and sending them to attacker-controlled backends.

Possession of these tokens gives attackers the same level of access as the user, including conversation history and metadata.

In addition to your ChatGPT session token, the extensions also send details about themselves (such as their version and language settings), along with information about how they’re used and keys they retrieve from their own backend service.

Taken together, this allows the attackers to build a picture of who you are and how you work online. They can use it to keep recognizing you over time, build a profile of your behavior, and maintain access to your ChatGPT-connected services for much longer. This increases the privacy impact and means a single compromised extension can cause broader harm if its servers are abused or breached.

According to the researchers, this campaign coincides with a broader trend:

“The rapid growth in adoption of AI-powered browser extensions, aimed at helping users with their everyday productivity needs. While most of them are completely benign, many of these extensions mimic known brands to gain users’ trust, particularly those designed to enhance interaction with large language models.”

How to stay safe

Although we always advise people to install extensions only from official web stores, this case proves once again that not all extensions available there are safe. That said, installing extensions from outside official web stores carries an even higher risk.

Extensions listed in official stores undergo a review process before being approved. This process, which combines automated and manual checks, assesses the extension’s safety, policy compliance, and overall user experience. The goal is to protect users from scams, malware, and other malicious activity. However, this review process is not foolproof.

Microsoft and Google have been notified about the abuse. However, extensions that are already installed may remain active in Chrome and Edge until users manually remove them.

Malicious extensions

These are the browser extensions you should remove. They are listed by Name — Publisher — Extension ID:

  • ChatGPT bulk delete, Chat manager — ChatGPT Mods — gbcgjnbccjojicobfimcnfjddhpphaod
  • ChatGPT export, Markdown, JSON, images — ChatGPT Mods — hljdedgemmmkdalbnmnpoimdedckdkhm
  • ChatGPT folder, voice download, prompt manager, free tools — ChatGPT Mods — lmiigijnefpkjcenfbinhdpafehaddag
  • ChatGPT message navigator, history scroller — ChatGPT Mods — ifjimhnbnbniiiaihphlclkpfikcdkab
  • ChatGPT Mods — Folder Voice Download & More Free Tools — jhohjhmbiakpgedidneeloaoloadlbdj
  • ChatGPT pin chat, bookmark — ChatGPT Mods — kefnabicobeigajdngijnnjmljehknjl
  • ChatGPT Prompt Manager, Folder, Library, Auto Send — ChatGPT Mods — ioaeacncbhpmlkediaagefiegegknglc
  • ChatGPT prompt optimization — ChatGPT Mods — mmjmcfaejolfbenlplfoihnobnggljij
  • ChatGPT search history, locate specific messages — ChatGPT Mods — ipjgfhcjeckaibnohigmbcaonfcjepmb
  • ChatGPT Timestamp Display — ChatGPT Mods — afjenpabhpfodjpncbiiahbknnghabdc
  • ChatGPT Token counter — ChatGPT Mods — hfdpdgblphooommgcjdnnmhpglleaafj
  • ChatGPT model switch, save advanced model uses — ChatGPT Mods — pfgbcfaiglkcoclichlojeaklcfboieh
  • ChatGPT voice download, TTS download — ChatGPT Mods — obdobankihdfckkbfnoglefmdgmblcld
  • Collapsed message — ChatGPT Mods — lechagcebaneoafonkbfkljmbmaaoaec
  • Multi-Profile Management & Switching — ChatGPT Mods — nhnfaiiobkpbenbbiblmgncgokeknnno
  • Search with ChatGPT — ChatGPT Mods — hpcejjllhbalkcmdikecfngkepppoknd
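
Given a list of installed extension IDs, checking them against the IDs above is straightforward. The sketch below is illustrative (the set is abbreviated to three entries; in Chrome, installed IDs are visible at chrome://extensions once developer mode is enabled):

```python
# Abbreviated set of IDs from the list above.
MALICIOUS_IDS = {
    "gbcgjnbccjojicobfimcnfjddhpphaod",  # ChatGPT bulk delete, Chat manager
    "hljdedgemmmkdalbnmnpoimdedckdkhm",  # ChatGPT export
    "hpcejjllhbalkcmdikecfngkepppoknd",  # Search with ChatGPT
}

def flag_installed(installed_ids) -> list[str]:
    """Return the sorted subset of installed extension IDs known to be malicious."""
    return sorted(MALICIOUS_IDS.intersection(installed_ids))

# Hypothetical example: one made-up benign ID and one ID from the list above.
print(flag_installed([
    "abcdefghijklmnopabcdefghijklmnop",   # made-up benign ID
    "hpcejjllhbalkcmdikecfngkepppoknd",   # "Search with ChatGPT"
]))  # ['hpcejjllhbalkcmdikecfngkepppoknd']
```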

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

  •  

Malicious Chrome extensions can spy on your ChatGPT chats

Researchers discovered 16 malicious browser extensions for Google Chrome and Microsoft Edge that steal ChatGPT session tokens, giving attackers access to accounts, including conversation history and metadata.

The 16 malicious extensions (15 for Chrome and 1 for Edge) claim to improve and optimize ChatGPT, but instead siphon users’ session tokens to attackers. Together, they have been downloaded around 900 times, a relatively small number compared to other malicious extensions.

Despite benign descriptions and, in some cases, a “featured” badge, the real goal of these extensions is to hijack ChatGPT identities by stealing session authentication tokens and sending them to attacker-controlled backends.

Possession of these tokens gives attackers the same level of access as the user, including conversation history and metadata.

In addition to your ChatGPT session token, the extensions also send extra details about themselves (such as their version and language settings), along with information about how they’re used, and special keys they get from their own online service.

Taken together, this allows the attackers to build a picture of who you are and how you work online. They can use it to keep recognizing you over time, build a profile of your behavior, and maintain access to your ChatGPT-connected services for much longer. This increases the privacy impact and means a single compromised extension can cause broader harm if its servers are abused or breached.

According to the researchers, this campaign coincides with a broader trend:

“The rapid growth in adoption of AI-powered browser extensions, aimed at helping users with their everyday productivity needs. While most of them are completely benign, many of these extensions mimic known brands to gain users’ trust, particularly those designed to enhance interaction with large language models.”

How to stay safe

Although we always advise people to install extensions only from official web stores, this case proves once again that not all extensions available there are safe. That said, installing extensions from outside official web stores carries an even higher risk.

Extensions listed in official stores undergo a review process before being approved. This process, which combines automated and manual checks, assesses the extension’s safety, policy compliance, and overall user experience. The goal is to protect users from scams, malware, and other malicious activity. However, this review process is not foolproof.

Microsoft and Google have been notified about the abuse. However, extensions that are already installed may remain active in Chrome and Edge until users manually remove them.

Malicious extensions

These are the browser extensions you should remove. They are listed by Name — Publisher — Extension ID:

  • ChatGPT bulk delete, Chat manager — ChatGPT Mods — gbcgjnbccjojicobfimcnfjddhpphaod
  • ChatGPT export, Markdown, JSON, images — ChatGPT Mods — hljdedgemmmkdalbnmnpoimdedckdkhm
  • ChatGPT folder, voice download, prompt manager, free tools — ChatGPT Mods — lmiigijnefpkjcenfbinhdpafehaddag
  • ChatGPT message navigator, history scroller — ChatGPT Mods — ifjimhnbnbniiiaihphlclkpfikcdkab
  • ChatGPT Mods — Folder Voice Download & More Free Tools — jhohjhmbiakpgedidneeloaoloadlbdj
  • ChatGPT pin chat, bookmark — ChatGPT Mods — kefnabicobeigajdngijnnjmljehknjl
  • ChatGPT Prompt Manager, Folder, Library, Auto Send — ChatGPT Mods — ioaeacncbhpmlkediaagefiegegknglc
  • ChatGPT prompt optimization — ChatGPT Mods — mmjmcfaejolfbenlplfoihnobnggljij
  • ChatGPT search history, locate specific messages — ChatGPT Mods — ipjgfhcjeckaibnohigmbcaonfcjepmb
  • ChatGPT Timestamp Display — ChatGPT Mods — afjenpabhpfodjpncbiiahbknnghabdc
  • ChatGPT Token counter — ChatGPT Mods — hfdpdgblphooommgcjdnnmhpglleaafj
  • ChatGPT model switch, save advanced model uses — ChatGPT Mods — pfgbcfaiglkcoclichlojeaklcfboieh
  • ChatGPT voice download, TTS download — ChatGPT Mods — obdobankihdfckkbfnoglefmdgmblcld
  • Collapsed message — ChatGPT Mods — lechagcebaneoafonkbfkljmbmaaoaec
  • Multi-Profile Management & Switching — ChatGPT Mods — nhnfaiiobkpbenbbiblmgncgokeknnno
  • Search with ChatGPT — ChatGPT Mods — hpcejjllhbalkcmdikecfngkepppoknd
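As a convenience, the sketch below checks a local Chrome or Edge profile for the 16 IDs in the list above. Browsers name each extension's folder after its ID inside the profile's Extensions directory; the profile paths used here are common defaults and an assumption on our part, so adjust them for your OS, browser, and profile name.

```python
# Hedged sketch: flag any of the 16 extension IDs above that are installed
# locally. Each subfolder of a profile's "Extensions" directory is named
# after an extension ID. Profile paths below are common defaults (assumed).
from pathlib import Path

BAD_IDS = {
    "gbcgjnbccjojicobfimcnfjddhpphaod", "hljdedgemmmkdalbnmnpoimdedckdkhm",
    "lmiigijnefpkjcenfbinhdpafehaddag", "ifjimhnbnbniiiaihphlclkpfikcdkab",
    "jhohjhmbiakpgedidneeloaoloadlbdj", "kefnabicobeigajdngijnnjmljehknjl",
    "ioaeacncbhpmlkediaagefiegegknglc", "mmjmcfaejolfbenlplfoihnobnggljij",
    "ipjgfhcjeckaibnohigmbcaonfcjepmb", "afjenpabhpfodjpncbiiahbknnghabdc",
    "hfdpdgblphooommgcjdnnmhpglleaafj", "pfgbcfaiglkcoclichlojeaklcfboieh",
    "obdobankihdfckkbfnoglefmdgmblcld", "lechagcebaneoafonkbfkljmbmaaoaec",
    "nhnfaiiobkpbenbbiblmgncgokeknnno", "hpcejjllhbalkcmdikecfngkepppoknd",
}

def installed_extension_ids(profile_dir: Path) -> set:
    """Folder names under <profile>/Extensions are extension IDs."""
    ext_dir = profile_dir / "Extensions"
    if not ext_dir.is_dir():
        return set()
    return {p.name for p in ext_dir.iterdir() if p.is_dir()}

def flag_malicious(installed: set, bad: frozenset = frozenset(BAD_IDS)) -> set:
    """Return the IDs from the removal list that are actually installed."""
    return set(installed) & set(bad)

if __name__ == "__main__":
    home = Path.home()
    profiles = [  # common default profile locations (assumption)
        home / "AppData/Local/Google/Chrome/User Data/Default",
        home / "AppData/Local/Microsoft/Edge/User Data/Default",
        home / "Library/Application Support/Google/Chrome/Default",
        home / ".config/google-chrome/Default",
        home / ".config/microsoft-edge/Default",
    ]
    for profile in profiles:
        for ext_id in sorted(flag_malicious(installed_extension_ids(profile))):
            print(f"REMOVE: {ext_id} (found in {profile})")
```

A hit here is only a pointer: remove the extension through the browser's own extensions page (chrome://extensions or edge://extensions) rather than by deleting folders on disk.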

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.


WhatsApp rolls out new protections against advanced exploits and spyware

WhatsApp is quietly rolling out a new safety layer for photos, videos, and documents, and it lives entirely under the hood. It won’t change how you chat, but it will change what happens to the files that move through your chats—especially the kind that can hide malware.

The new feature, called Strict Account Settings, is rolling out gradually over the coming weeks. To see whether you have the option—and to enable it—go to Settings > Privacy > Advanced.

Strict account settings
Image courtesy of WhatsApp

Yesterday, we wrote about a WhatsApp bug on Android that made headlines because a malicious media file in a group chat could be downloaded and used as an attack vector without you tapping anything. You only had to be added to a new group to be exposed to the booby-trapped file. That issue highlighted something security folks have worried about for years: media files are a great vehicle for attacks, and those attacks do not always exploit WhatsApp itself, but rather bugs in the operating system or its media libraries.

In Meta’s explanation of the new technology, it points back to the 2015 Stagefright Android vulnerability, where simply processing a malicious video could compromise a device. Back then, WhatsApp worked around the issue by teaching its media library to spot broken MP4 files that could trigger those OS bugs, buying users protection even if their phones were not fully patched.

What’s new is that WhatsApp has now rebuilt its core media-handling library in Rust, a memory-safe programming language. This helps eliminate several types of memory bugs that often lead to serious security problems. In the process, it replaced about 160,000 lines of older C++ code with roughly 90,000 lines of Rust, and rolled the new library out to billions of devices across Android, iOS, desktop apps, wearables, and the web.

On top of that, WhatsApp has bundled a series of checks into an internal system it calls “Kaleidoscope.” This system inspects incoming files for structural oddities, flags higher‑risk formats like PDFs with embedded content or scripts, detects when a file pretends to be something it’s not (for example, a renamed executable), and marks known dangerous file types for special handling in the app. It won’t catch every attack, but it should prevent malicious files from poking at more fragile parts of your device.
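Meta hasn't published Kaleidoscope's internals, but one of the checks it describes, spotting a file that pretends to be something it's not, can be illustrated with a minimal, hypothetical sketch that compares a file's leading "magic" bytes with its claimed extension:

```python
# Minimal, hypothetical sketch of one check Kaleidoscope is described as
# performing: detecting a file whose content contradicts its claimed name
# (e.g. an executable renamed to .jpg). The signature table is a tiny
# illustrative subset, not WhatsApp's actual rules.
from typing import Optional

MAGIC = {
    b"\x89PNG\r\n\x1a\n": ".png",  # PNG image
    b"\xff\xd8\xff": ".jpg",       # JPEG image
    b"%PDF-": ".pdf",              # PDF document
    b"MZ": ".exe",                 # Windows executable (DOS header)
}

def sniff(header: bytes) -> Optional[str]:
    """Extension implied by the file's leading bytes, if recognized."""
    for signature, ext in MAGIC.items():
        if header.startswith(signature):
            return ext
    return None

def is_masquerading(filename: str, header: bytes) -> bool:
    """True when recognized content contradicts the claimed extension."""
    implied = sniff(header)
    claimed = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    return implied is not None and implied != claimed

# is_masquerading("holiday.jpg", b"MZ\x90\x00") -> True (renamed executable)
# is_masquerading("report.pdf", b"%PDF-1.7")    -> False
```

A real implementation would handle aliases like .jpeg and far more formats; this only shows the shape of the check.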

For everyday users, the Rust rebuild and Kaleidoscope checks are good news. They add a strong, invisible safety net around photos, videos, and other files you receive, including in group chats where the recent bug could be abused. They also line up neatly with our earlier advice to turn off automatic media downloads or use Advanced Privacy Mode, which limits how far a malicious file can travel on your device even if it lands in WhatsApp.

WhatsApp is the latest platform to roll out enhanced protections for users: Apple introduced Lockdown Mode in 2022, and Android followed with Advanced Protection Mode last year. WhatsApp’s new Strict Account Settings takes a similar high-level approach, applying more restrictive defaults within the app, including blocking attachments and media from unknown senders.

However, this is no reason to rush back to WhatsApp, nor to treat these changes as a guarantee of safety. At the very least, though, Meta is showing that it is willing to invest in making WhatsApp more secure.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.



A WhatsApp bug lets malicious media files spread through group chats

WhatsApp is going through a rough patch. Some users would argue it has been ever since Meta acquired the once widely trusted messaging platform. User sentiment has shifted from “trusted default messenger” to a grudgingly necessary Meta product.

Privacy-aware users still see WhatsApp as one of the more secure mass-market messaging platforms if you lock down its settings. Even then, many remain uneasy about Meta’s broader ecosystem, and wish all their contacts would switch to a more secure platform.

Back to current affairs, which will only reinforce that sentiment.

Google’s Project Zero has just disclosed a WhatsApp vulnerability where a malicious media file, sent into a newly created group chat, can be automatically downloaded and used as an attack vector.

The bug affects WhatsApp on Android and involves zero‑click media downloads in group chats. You can be attacked simply by being added to a group and having a malicious file sent to you.

According to Project Zero, the attack is most likely to be used in targeted campaigns, since the attacker needs to know or guess at least one of the victim's contacts. While targeted, the attack is relatively easy to repeat once an attacker has a list of likely targets.

And to put a cherry on top for WhatsApp’s competitors, there is a potentially even more serious concern for the popular messaging platform: an international group of plaintiffs has sued Meta Platforms, alleging the WhatsApp owner can store, analyze, and access virtually all of users’ private communications, despite WhatsApp’s end-to-end encryption claims.

How to secure WhatsApp

Reportedly, Meta pushed a server change on November 11, 2025, but Google says that only partially resolved the issue. So, Meta is working on a comprehensive fix.

Google’s advice is to disable Automatic Download or enable WhatsApp’s Advanced Privacy Mode so that media is not automatically downloaded to your phone.

And you’ll need to keep WhatsApp updated to get the latest patches, which is true for any app and for Android itself.

Turn off auto-download of media

Goal: ensure that no photos, videos, audio, or documents are pulled to the device without an explicit decision.

  • Open WhatsApp on your Android device.
  • Tap the three‑dot menu in the top‑right corner, then tap Settings.
  • Go to Storage and data (sometimes labeled Data and storage usage).
  • Under Media auto-download, you will see three entries: When using mobile data, When connected on Wi‑Fi, and When roaming.
  • For each of these three entries, tap it and uncheck all media types: Photos, Audio, Videos, Documents. Then tap OK.
  • Confirm that each category now shows something like “No media” under it.

Doing this directly implements Project Zero’s guidance to “disable Automatic Download” so that malicious media can’t silently land on your storage as soon as you are dropped into a hostile group.

Stop WhatsApp from saving media to your Android gallery

Even if WhatsApp still downloads some content, you can stop it from leaking into shared storage where other apps and system components see it.

  • In Settings, go to Chats.
  • Turn off Media visibility (or similar option such as Show media in gallery). For particularly sensitive chats, open the chat, tap the contact or group name, find Media visibility, and set it to No for that thread.

WhatsApp acts as a sandbox and should contain the threat: keeping media inside WhatsApp makes it harder for a malicious file to be processed by other, possibly more vulnerable, components.

Lock down who can add you to groups

The attack chain requires the attacker to add you and one of your contacts to a new group. Reducing who can do that lowers risk.

  • In Settings, tap Privacy.
  • Tap Groups.
  • Change from Everyone to My contacts or ideally My contacts except… and exclude any numbers you do not fully trust.
  • If you use WhatsApp for work, consider keeping group membership strictly to known contacts and approved admins.

Set up two-step verification on your WhatsApp account

Read this guide for Android and iOS to learn how to do that.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.



TikTok narrowly avoids a US ban by spinning up a new American joint venture

TikTok may have found a way to stay online in the US. The company announced late last week that it has set up a joint venture backed largely by US investors. TikTok announced TikTok USDS Joint Venture LLC on Friday in a deal valued at about $14 billion, allowing it to continue operating in the country.

This is the culmination of a long-running fight between TikTok and US authorities. In 2019, the Committee on Foreign Investment in the United States (CFIUS) flagged ByteDance’s 2017 acquisition of Musical.ly as a national security risk, on the basis that the Chinese owner’s links to the state could put US users’ data at risk.

In his first term, President Trump issued an executive order demanding that ByteDance sell the business or face a ban. That order was blocked by the courts, and President Biden replaced it with a broader review process in 2021.

In April 2024, Congress passed the Protecting Americans from Foreign Adversary Controlled Applications Act (PAFACA), which Biden signed into law. That set a January 19, 2025 deadline for ByteDance to divest its business or face a nationwide ban. With no deal finalized, TikTok voluntarily went dark for about 12 hours on January 18, 2025. Trump later issued executive orders extending the deadline, culminating in a September 2025 agreement that led to the joint venture.

Three managing investors each hold 15% of the new business: database giant Oracle (which previously vied to acquire TikTok when ByteDance was first told to divest), technology-focused investment group Silver Lake, and the United Arab Emirates-backed AI (Artificial Intelligence) investment company MGX.

Other investors include the family office of tech entrepreneur Michael Dell, as well as Vastmere Strategic Investments, Alpha Wave Partners, Revolution, Merritt Way, and Via Nova.

Original owner ByteDance retains 19.9% of the business, and according to an internal memo released before the deal was officially announced, 30% of the company will be owned by affiliates of existing ByteDance investors. That’s in spite of the fact that PAFACA mandated a complete severance of TikTok in the US from its Chinese ownership.

A focus on security

The company is eager to promote data security for its users. With that in mind, Oracle takes the role of “trusted security partner” for data protection and compliance auditing under the deal.

Oracle is also expected to store US user data in its cloud environment. The program will reportedly align with security frameworks including the National Institute of Standards and Technology (NIST) Cybersecurity Framework. Other TikTok-owned apps such as CapCut and Lemon8 will also fall under the joint venture’s security umbrella.

Canada’s TikTok tension

It’s been a busy month for ByteDance, with other developments north of the border. Last week, Canada’s Federal Court overturned a November 2024 governmental order to shut down TikTok’s Canadian business on national security grounds. The decision gives Industry Minister Mélanie Joly time to review the case.

Why this matters

TikTok’s new US joint venture lowers the risk of direct foreign access to American user data, but it doesn’t erase all of the concerns that put the app in regulators’ crosshairs in the first place. ByteDance still retains an economic stake, the recommendation algorithm remains largely opaque, and oversight depends on audits and enforcement rather than hard technical separation.

In other words, this deal reduces exposure, but it doesn’t make TikTok a risk-free platform. For users, that means the same common-sense rules still apply: be thoughtful about what you share and remember that regulatory approval isn’t the same as total data safety.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.



Get paid to scroll TikTok? The data trade behind Freecash ads

Loyal readers and other privacy-conscious people will be familiar with the expression, “If it seems too good to be true, it probably is.”

Getting paid handsomely to scroll social media definitely falls into that category. It sounds like an easy side hustle, which usually means there’s a catch.

In January 2026, an app called Freecash shot up to the number two spot on Apple’s free iOS chart in the US, helped along by TikTok ads that look a lot like job offers from TikTok itself. The ads promised up to $35 an hour to watch your “For You” page. According to reporting, the ads didn’t promote Freecash by name. Instead, they showed a young woman expressing excitement about seemingly being “hired by TikTok” to watch videos for money.

Freecash landing page

The landing pages featured TikTok and Freecash logos and invited users to “get paid to scroll” and “cash out instantly,” implying a simple exchange of time for money.

Those claims were misleading enough that TikTok said the ads violated its rules on financial misrepresentation and removed some of them.

Once you install the app, the promised TikTok paycheck vanishes. Instead, Freecash routes you to a rotating roster of mobile games—titles like Monopoly Go and Disney Solitaire—and offers cash rewards for completing time‑limited in‑game challenges. Payouts range from a single cent for a few minutes of daily play up to triple‑digit amounts if you reach high levels within a fixed period.

The whole setup is designed not to reward scrolling, as it claims, but to funnel you into games where you are likely to spend money or watch paid advertisements.

Freecash’s parent company, Berlin‑based Almedia, openly describes the platform as a way to match mobile game developers with users who are likely to install and spend. The company’s CEO has spoken publicly about using past spending data to steer users toward the genres where they’re most “valuable” to advertisers. 

Our concern, beyond the bait-and-switch, is the privacy issue. Freecash’s privacy policy allows the automatic collection of highly sensitive information, including data about race, religion, sex life, sexual orientation, health, and biometrics. Each additional mobile game you install to chase rewards adds its own privacy policy, tracking, and telemetry. Together, they greatly increase how much behavioral data these companies can harvest about a user.

Experts warn that data brokers already trade lists of people likely to be more susceptible to scams or compulsive online behavior—profiles that apps like this can help refine.

We’ve previously reported on data brokers that used games and apps to build massive databases, only to later suffer breaches exposing all that data.

When asked about the ads, Freecash said the most misleading TikTok promotions were created by third-party affiliates, not by the company itself. That is plausible, because Freecash offers an affiliate payout program to people who promote the app online. The company nevertheless promised to review and tighten partner monitoring.

For experienced users, the pattern should feel familiar: eye‑catching promises of easy money, a bait‑and‑switch into something that takes more time and effort than advertised, and a business model that suddenly makes sense when you realize your attention and data are the real products.

How to stay private

Free cash? Apparently, there is no such thing.

If you’re curious how intrusive schemes like this can be, consider using a separate email address created specifically for testing. Avoid sharing real personal details. Many users report that once they sign up, marketing emails quickly pile up.

Some of these schemes also appeal to people who are younger or under financial pressure, offering tiny payouts while generating far more value for advertisers and app developers.

So, what can you do?

  • Gather information about the company you’re about to give your data. Talk to friends and relatives about your plans. Shared common sense often helps make the right decisions.
  • Create a separate account if you want to test a service. Use a dedicated email address and avoid sharing real personal details.
  • Limit information you provide online to what makes sense for the purpose. Does a game publisher need your Social Security Number? I don’t think so.
  • Be cautious about app installs that are framed as required to make the money initially promised, and review permissions carefully.
  • Use an up-to-date real-time anti-malware solution on all your devices.

Work from the premise that free money does not exist. Try to work out the business model of those offering it, and then decide.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

  •  

Get paid to scroll TikTok? The data trade behind Freecash ads

Loyal readers and other privacy-conscious people will be familiar with the expression, “If it sounds too good to be true, it probably is.”

Getting paid handsomely to scroll social media definitely falls into that category. It sounds like an easy side hustle, which usually means there’s a catch.

In January 2026, an app called Freecash shot up to the number two spot on Apple’s free iOS chart in the US, helped along by TikTok ads that look a lot like job offers from TikTok itself. The ads promised up to $35 an hour to watch your “For You” page. According to reporting, the ads didn’t promote Freecash by name. Instead, they showed a young woman expressing excitement about seemingly being “hired by TikTok” to watch videos for money.

Freecash landing page

The landing pages featured TikTok and Freecash logos and invited users to “get paid to scroll” and “cash out instantly,” implying a simple exchange of time for money.

Those claims were misleading enough that TikTok said the ads violated its rules on financial misrepresentation and removed some of them.

Once you install the app, the promised TikTok paycheck vanishes. Instead, Freecash routes you to a rotating roster of mobile games—titles like Monopoly Go and Disney Solitaire—and offers cash rewards for completing time‑limited in‑game challenges. Payouts range from a single cent for a few minutes of daily play up to triple‑digit amounts if you reach high levels within a fixed period.

The whole setup is designed not to reward scrolling, as it claims, but to funnel you into games where you are likely to spend money or watch paid advertisements.

Freecash’s parent company, Berlin‑based Almedia, openly describes the platform as a way to match mobile game developers with users who are likely to install and spend. The company’s CEO has spoken publicly about using past spending data to steer users toward the genres where they’re most “valuable” to advertisers. 

Our concern, beyond the bait-and-switch, is the privacy issue. Freecash’s privacy policy allows the automatic collection of highly sensitive information, including data about race, religion, sex life, sexual orientation, health, and biometrics. Each additional mobile game you install to chase rewards adds its own privacy policy, tracking, and telemetry. Together, they greatly increase how much behavioral data these companies can harvest about a user.

Experts warn that data brokers already trade lists of people likely to be more susceptible to scams or compulsive online behavior—profiles that apps like this can help refine.

We’ve previously reported on data brokers that used games and apps to build massive databases, only to later suffer breaches exposing all that data.

When asked about the ads, Freecash said the most misleading TikTok promotions were created by third-party affiliates, not by the company itself. That’s plausible: Freecash runs an affiliate payout program for people who promote the app online. The company also promised to review and tighten its monitoring of those partners.

For experienced users, the pattern should feel familiar: eye‑catching promises of easy money, a bait‑and‑switch into something that takes more time and effort than advertised, and a business model that suddenly makes sense when you realize your attention and data are the real products.

How to stay private

Free cash? Apparently, there is no such thing.

If you’re curious how intrusive schemes like this can be, consider using a separate email address created specifically for testing. Avoid sharing real personal details. Many users report that once they sign up, marketing emails quickly pile up.

Some of these schemes also appeal to people who are younger or under financial pressure, offering tiny payouts while generating far more value for advertisers and app developers.

So, what can you do?

  • Gather information about the company you’re about to give your data to. Talk to friends and relatives about your plans; shared common sense often helps make the right decisions.
  • Create a separate account if you want to test a service. Use a dedicated email address and avoid sharing real personal details.
  • Limit information you provide online to what makes sense for the purpose. Does a game publisher need your Social Security Number? I don’t think so.
  • Be cautious about app installs that are framed as required to make the money initially promised, and review permissions carefully.
  • Use an up-to-date real-time anti-malware solution on all your devices.

Work from the premise that free money does not exist. Try to work out the business model of those offering it, and then decide.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.


Spammers abuse Zendesk to flood inboxes with legitimate-looking emails, but why?

Short answer: we have no idea.

People are actively complaining that their mailboxes and queues are being flooded by emails coming from the Zendesk instances of trusted companies like Discord, Riot Games, Dropbox, and many others.

Zendesk is a customer service and support software platform that helps companies manage customer communication. It supports tickets, live chat, email, phone, and communication through social media.

Some people complained about receiving over 1,000 such emails. The strange thing is that so far there are no reports of malicious links, tech support scam numbers, or any type of phishing in these emails.

The abusers are able to send waves of emails from these systems because Zendesk allows them to create fake support tickets with email addresses that do not belong to them. The system sends a confirmation mail to the provided email address if the affected company has not restricted ticket submission to verified users.
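To see why this is so easy to abuse, here is a minimal sketch of the mechanism. Zendesk’s public Requests API accepts anonymous ticket submissions (via `POST /api/v2/requests.json`) when a company has not restricted who can open tickets; the requester email and subject below are hypothetical, and the payload is only constructed, not sent:

```python
import json


def build_relay_spam_ticket(requester_email: str, subject: str) -> dict:
    """Payload shape for an anonymous Zendesk ticket.

    If the target company allows unverified ticket submission, Zendesk
    emails a confirmation to the requester address -- an address the
    submitter never has to prove they own. The subject line is the main
    piece of text the abuser controls in that confirmation email.
    """
    return {
        "request": {
            "requester": {"name": "Any Name", "email": requester_email},
            "subject": subject,
            "comment": {"body": "placeholder text"},
        }
    }


# An abuser would POST this JSON to a company's own Zendesk instance,
# e.g. https://<company>.zendesk.com/api/v2/requests.json
payload = build_relay_spam_ticket("victim@example.com", "FREE DISCORD NITRO!!")
print(json.dumps(payload, indent=2))
```

Because the confirmation email is generated and sent by the company’s own legitimate Zendesk instance, it passes the usual sender-reputation checks, which is what makes this “relay spam” so hard to block at the receiving end.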

In a December advisory, Zendesk warned about this method, which they called relay spam. In essence it’s an example of attackers abusing a legitimate automated part of a process. We have seen similar attacks before, but they always served a clear purpose for the attacker, whereas this one doesn’t.

That said, some of the titles in use are definitely clickbait. Some examples:

  • FREE DISCORD NITRO!!
  • TAKE DOWN ORDER NOW FROM CD Projekt
  • TAKE DOWN NOW ORDER FROM Israel FOR Square Enix
  • DONATION FOR State Of Tennessee CONFIRMED
  • LEGAL NOTICE FROM State Of Louisiana FOR Electronic
  • IMPORTANT LAW ENFORCEMENT NOTIFICATION FROM DISCORD FROM Peru
  • Thank you for your purchase!
  • Binance Sign-in attempt from Romania
  • LEGAL DEMAND from Take-Two interactive

So this could be someone testing the system, but it could just as well be someone who simply enjoys causing disruption. Maybe they have an axe to grind with Zendesk. Or they’re looking for a way to send attachments with the emails.

Either way, Zendesk told BleepingComputer that it has introduced new safety features on its end to detect and stop this type of spam in the future. Companies are also advised to restrict which users can submit tickets and what titles submitters can give those tickets.

Stay vigilant

In the emails we have seen, the links in the tickets are legitimate and point to the affected company’s ticket system. The only part of the email the attackers should be able to manipulate is the ticket’s title and subject.

Although everyone involved says to just ignore these emails, it never hurts to treat them with an appropriate amount of distrust.

  • Delete or archive the emails without interacting.
  • If you did not submit the ticket, do not click any links or call any telephone number mentioned in it. Reach out through verified channels instead.
  • Ignore any actions advised in the parts of the email the ticket submitter can control.
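If these waves are clogging a shared mailbox, a simple filter can quarantine them. The sketch below flags messages that come from a Zendesk sender and carry one of the shouty subject patterns observed in this campaign; the patterns and the helper function are illustrative assumptions, not an official detection rule, so tune them before relying on them:

```python
import re

# Patterns distilled from the clickbait subjects observed in this campaign
SUSPECT_SUBJECT_PATTERNS = [
    re.compile(r"\b(TAKE ?DOWN|LEGAL (NOTICE|DEMAND)|LAW ENFORCEMENT)\b", re.IGNORECASE),
    re.compile(r"FREE .*NITRO", re.IGNORECASE),
    re.compile(r"DONATION FOR .* CONFIRMED", re.IGNORECASE),
]


def looks_like_relay_spam(sender: str, subject: str) -> bool:
    """Flag mail from a *.zendesk.com sender whose subject matches the
    campaign's clickbait patterns. Legitimate Zendesk mail with a normal
    subject is left alone; spam from elsewhere needs other rules."""
    from_zendesk = sender.lower().endswith("zendesk.com")
    shouty_subject = any(p.search(subject) for p in SUSPECT_SUBJECT_PATTERNS)
    return from_zendesk and shouty_subject


print(looks_like_relay_spam("support@company.zendesk.com",
                            "LEGAL DEMAND from Take-Two interactive"))  # True
print(looks_like_relay_spam("help@riot.zendesk.com",
                            "Your ticket has been updated"))  # False
```

Keying on the combination of a genuine Zendesk sender and an out-of-character subject is what distinguishes this campaign from ordinary spam, which usually fails sender checks instead.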
