Cybersecurity researchers have discovered two malicious Microsoft Visual Studio Code (VS Code) extensions that are advertised as artificial intelligence (AI)-powered coding assistants, but also harbor covert functionality to siphon developer data to China-based servers.
The extensions, which have 1.5 million combined installs and are still available for download from the official Visual Studio
Security failures rarely arrive loudly. They slip in through trusted tools, half-fixed problems, and habits people stop questioning. This week's recap shows that pattern clearly.
Attackers are moving faster than defenses, mixing old tricks with new paths. "Patched" no longer means safe, and every day, software keeps becoming the entry point.
What follows is a set of small but telling signals.
If there's a constant in cybersecurity, it's that adversaries are always innovating. The rise of offensive AI is transforming attack strategies and making them harder to detect. Google's Threat Intelligence Group recently reported on adversaries using Large Language Models (LLMs) to both conceal code and generate malicious scripts on the fly, letting malware shape-shift in real time to evade
The North Korean threat actor known as Konni has been observed using PowerShell malware generated using artificial intelligence (AI) tools to target developers and engineering teams in the blockchain sector.
The phishing campaign has targeted Japan, Australia, and India, highlighting the adversary's expansion of the targeting scope beyond South Korea, Russia, Ukraine, and European nations, Check