Received an Instagram password reset email? Here’s what you need to know

12 January 2026 at 22:04

Last week, many Instagram users began receiving unsolicited emails from the platform that warned about a password reset request.

The message said:

“Hi {username},
We got a request to reset your Instagram password.
If you ignore this message, your password will not be changed. If you didn’t request a password reset, let us know.”

Around the same time that users began receiving these emails, a cybercriminal using the handle “Solonik” put up for sale on a dark web forum a dataset that allegedly contains information about 17 million Instagram users.

These 17 million or so records include:

  • Usernames
  • Full names
  • User IDs
  • Email addresses
  • Phone numbers
  • Countries
  • Partial locations

Please note that there are no passwords listed in the data.

Despite the timing, Instagram denied this weekend that the two events are related. On the platform X, the company stated that it had fixed an issue that allowed an external party to request password reset emails for “some people.”

So, what’s happening?

Regarding the data found on the dark web last week, Shahak Shalev, global head of scam and AI research at Malwarebytes, shared that “there are some indications that the Instagram data dump includes data from other, older, alleged Instagram breaches, and is a sort of compilation.” As Shalev’s team investigates the data, he also said that the earliest password reset requests reported by users came days before the data was first posted on the dark web, which might mean that “the data may have been circulating in more private groups before being made public.”

However, another possibility, Shalev said, is that “another vulnerability/data leak was happening as some bad actor tried spraying for [Instagram] accounts. Instagram’s announcement seems to reference that spraying. Besides the suspicious timing, there’s no clear connection between the two at this time.”

But, importantly, scammers will not care whether these incidents are related or not. They will try to take advantage of the situation by sending out fake emails.

“We felt it was important to alert people about the data availability so that everyone could reset their passwords, directly from the app, and be on alert for other phishing communications,” Shalev said.

If and when we find out more, we’ll keep you posted, so stay tuned.

How to stay safe

If you have enabled two-factor authentication (2FA) on your Instagram account, it is, as Meta suggests, safe to ignore these emails.

Should you want to err on the side of caution and change your password anyway, do so in the app rather than by clicking any links in the email. If the email turns out to be fake, a link could lead to a phishing page that hands your new password straight to scammers.
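One reason not to trust links is that look-alike domains are easy to miss: “instagram.com” appearing somewhere in a URL doesn’t mean the link goes to Instagram. As an illustration only (a hypothetical helper, not a real phishing filter), this is the kind of check that matters, comparing the actual hostname rather than searching the URL string:

```python
from urllib.parse import urlparse

def is_official_instagram_link(url: str) -> bool:
    """Hypothetical helper: True only if the link's real hostname is
    instagram.com or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return host == "instagram.com" or host.endswith(".instagram.com")

print(is_official_instagram_link("https://www.instagram.com/accounts/password/reset/"))  # True
print(is_official_instagram_link("https://instagram.com.reset-help.example/login"))      # False
```

The second URL contains “instagram.com” but its hostname is actually `instagram.com.reset-help.example`, a classic look-alike trick used in phishing emails.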

Another thing to keep in mind is that this data comes from a Meta platform, which means some users may have reused the same credentials on, or linked their accounts to, Facebook or WhatsApp. So, as a precaution, check recent logins and active sessions on Instagram, WhatsApp, and Facebook, and log out from any devices or locations you do not recognize.

If you want to find out whether your data was included in an Instagram data breach, or any other for that matter, try our free Digital Footprint scan.

Regulators around the world are scrutinizing Grok over sexual deepfakes

12 January 2026 at 15:04

Grok’s failure to block sexualized images of minors has turned a single “isolated lapse” into a global regulatory stress test for xAI’s ambitions. The response from lawmakers and regulators suggests this will not be solved with a quick apology and a hotfix.

Last week we reported on Grok’s apology after it generated an image of young girls in “sexualized attire.”

The apology followed the introduction of Grok’s paid “Spicy Mode” in August 2025, which was marketed as edgy and less censored. In practice it enabled users to generate sexual deepfake images, including content that may cross into illegal child sexual abuse material (CSAM) under US and other jurisdictions’ laws.

A report from web-monitoring tool CopyLeaks highlighted “thousands” of incidents of Grok being used to create sexually suggestive images of non-consenting celebrities.

This is starting to backfire. Reportedly, three US senators are asking Google and Apple to remove Elon Musk’s Grok and X apps from their app stores, citing the spread of nonconsensual sexualized AI images of women and minors and arguing it violates the companies’ app store rules.

In their joint letter, the senators state:

“In recent days, X users have used the app’s Grok AI tool to generate nonconsensual sexual imagery of real, private citizens at scale. This trend has included Grok modifying images to depict women being sexually abused, humiliated, hurt, and even killed. In some cases, Grok has reportedly created sexualized images of children—the most heinous type of content imaginable.”

The UK government has also threatened possible action against the platform. Government officials have said they would fully support any action taken by Ofcom, the independent media regulator, against X, even if that meant UK regulators blocking the platform.

Indonesia and Malaysia already blocked Grok after its “digital undressing” function flooded the internet with suggestive and obscene manipulated images of women and minors.

As it turns out, that apology only came about because a user prompted Grok to generate one, which it did. After backlash over sexualized images of women and minors, Grok/X announced limits on image generation and editing for paying subscribers only, effectively paywalling those capabilities on main X surfaces.

For lawmakers already worried about disinformation, election interference, deepfakes, and abuse imagery, Grok is fast becoming the textbook case for why “move fast and break things” doesn’t mix with AI that can sexualize real people on demand.

Hopefully, the next wave of rules, ranging from EU AI enforcement to platform-specific safety obligations, will treat this incident as the baseline risk that all large-scale visual models must withstand, not as an outlier.

Keep your children safe

If you have ever wondered why some parents post images of their children with a smiley emoji covering their faces, this is the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.
