Normal view
JBL brengt BandBox-versterker met AI-audiosplitsing voor muzikanten uit
Belgisch OM waarschuwt voor oplichters die AI-versie koning Filip gebruiken
Multi-Stage Phishing Campaign Targets Russia with Amnesia RAT and Ransomware

Maker Cyberpunk 2077 VR-mod haalt laatste mods offline na nog meer klachten
-
The Register – Security
- UK border tech budget swells by £100M as Home Office targets small boat crossings
UK border tech budget swells by £100M as Home Office targets small boat crossings
Drone, satellite, and other data combined to monitor unwanted vessels
The UK Home Office is spending up to £100 million on intelligence tech in part to tackle the so-called "small boats" issue of refugees and irregular immigrants coming across the English Channel.…
Tesla verwijdert Autosteer-functie in VS uit Autopilot, gaat naar FSD-abonnement
Nike Probing Potential Security Incident as Hackers Threaten to Leak Data
The WorldLeaks cybercrime group claims to have stolen information from the footwear and apparel giant’s systems.
The post Nike Probing Potential Security Incident as Hackers Threaten to Leak Data appeared first on SecurityWeek.
New DynoWiper Malware Used in Attempted Sandworm Attack on Polish Power Sector

-
The Hacker News

- Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents
Who Approved This Agent? Rethinking Access, Accountability, and Risk in the Age of AI Agents

CISA Adds Actively Exploited VMware vCenter Flaw CVE-2024-37079 to KEV Catalog

Search Engines, AI, And The Long Fight Over Fair Use
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Long before generative AI, copyright holders warned that new technologies for reading and analyzing information would destroy creativity. Internet search engines, they argued, were infringement machines—tools that copied copyrighted works at scale without permission. As they had with earlier information technologies like the photocopier and the VCR, copyright owners sued.
Courts disagreed. They recognized that copying works in order to understand, index, and locate information is a classic fair use—and a necessary condition for a free and open internet.
Today, the same argument is being recycled against AI. It’s whether copyright owners should be allowed to control how others analyze, reuse, and build on existing works.
Fair Use Protects Analysis—Even When It’s Automated
U.S. courts have long recognized that copying for purposes of analysis, indexing, and learning is a classic fair use. That principle didn’t originate with artificial intelligence. It doesn’t disappear just because the processes are performed by a machine.
Copying works in order to understand them, extract information from them, or make them searchable is transformative and lawful. That’s why search engines can index the web, libraries can make digital indexes, and researchers can analyze large collections of text and data without negotiating licenses from millions of rightsholders. These uses don’t substitute for the original works; they enable new forms of knowledge and expression.
Training AI models fits squarely within that tradition. An AI system learns by analyzing patterns across many works. The purpose of that copying is not to reproduce or replace the original texts, but to extract statistical relationships that allow the AI system to generate new outputs. That is the hallmark of a transformative use.
Attacking AI training on copyright grounds misunderstands what’s at stake. If copyright law is expanded to require permission for analyzing or learning from existing works, the damage won’t be limited to generative AI tools. It could threaten long-standing practices in machine learning and text-and-data mining that underpin research in science, medicine, and technology.
Researchers already rely on fair use to analyze massive datasets such as scientific literature. Requiring licenses for these uses would often be impractical or impossible, and it would advantage only the largest companies with the money to negotiate blanket deals. Fair use exists to prevent copyright from becoming a barrier to understanding the world. The law has protected learning before. It should continue to do so now, even when that learning is automated.
A Road Forward For AI Training And Fair Use
One court has already shown how these cases should be analyzed. In Bartz v. Anthropic, the court found that using copyrighted works to train an AI model is a highly transformative use. Training is a kind of studying how language works—not about reproducing or supplanting the original books. Any harm to the market for the original works was speculative.
The court in Bartz rejected the idea that an AI model might infringe because, in some abstract sense, its output competes with existing works. While EFF disagrees with other parts of the decision, the court’s ruling on AI training and fair use offers a good approach. Courts should focus on whether training is transformative and non-substitutive, not on fear-based speculation about how a new tool could affect someone’s market share.
AI Can Create Problems, But Expanding Copyright Is the Wrong Fix
Workers’ concerns about automation and displacement are real and should not be ignored. But copyright is the wrong tool to address them. Managing economic transitions and protecting workers during turbulent times are core functions of government. Copyright law doesn’t help with those tasks in the slightest. Expanding copyright control over learning and analysis won’t stop new forms of worker automation—it never has. But it will distort copyright law and undermine free expression.
Broad licensing mandates may also do harm by entrenching the current biggest incumbent companies. Only the largest tech firms can afford to negotiate massive licensing deals covering millions of works. Smaller developers, research teams, nonprofits, and open-source projects will all get locked out. Copyright expansion won’t restrain Big Tech—it will give it a new advantage.
Fair Use Still Matters
Learning from prior work is foundational to free expression. Rightsholders cannot be allowed to control it. Courts have rejected that move before, and they should do so again.
Search, indexing, and analysis didn’t destroy creativity. Nor did the photocopier, nor the VCR. They expanded speech, access to knowledge, and participation in culture. Artificial intelligence raises hard new questions, but fair use remains the right starting point for thinking about training.

Feds totally skipping infosec industry's biggest conference this year
But ex-CISA boss and new RSAC CEO Jen Easterly will be there
updated The US Cybersecurity and Infrastructure Security Agency won't attend the annual RSA Conference in March, an agency spokesperson confirmed to The Register. Sessions involving speakers from the FBI and National Security Agency (NSA) have also disappeared from the agenda.…
Happy 9th Anniversary, CTA: A Celebration of Collaboration in Cyber Defense
Unit 42 celebrates 9 years of the Cyber Threat Alliance, tracing its journey from a bold idea to a global leader in collaborative cyber defense.
The post Happy 9th Anniversary, CTA: A Celebration of Collaboration in Cyber Defense appeared first on Unit 42.

ShinyHunters claim hacks of Okta, Microsoft SSO accounts for data theft
Updated PCI PIN compliance package for AWS Payment Cryptography now available
Amazon Web Services (AWS) is pleased to announce the successful completion of Payment Card Industry Personal Identification Number (PCI PIN) audit for the AWS Payment Cryptography service.
With AWS Payment Cryptography, your payment processing applications can use payment hardware security modules (HSMs) that are PCI PIN Transaction Security (PTS) HSM certified and fully managed by AWS, with PCI PIN-compliant key management. This attestation gives you the flexibility to deploy your regulated workloads with reduced compliance overhead.
The PCI PIN compliance report package for AWS Payment Cryptography includes two key components:
- PCI PIN Attestation of Compliance (AOC) – demonstrating that AWS Payment Cryptography was successfully validated against the PCI PIN standard with zero findings
- PCI PIN Responsibility Summary – provides guidance to help AWS customers understand their responsibilities in developing and operating a highly secure environment for handling PIN-based transactions
AWS was evaluated by Coalfire, a third-party Qualified Security Assessor (QSA). Customers can access the PCI PIN Attestation of Compliance (AOC) and PCI PIN Responsibility Summary reports through AWS Artifact.
To learn more about our PCI programs and other compliance and security programs, visit the AWS Compliance Programs page. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Compliance Support page.
If you have feedback about this post, submit comments in the Comments section below. If you have questions about this post, contact AWS Support.
Patch or die: VMware vCenter Server bug fixed in 2024 under attack today
If you skipped it back then, now’s a very good time
You've got to keep your software updated. Some unknown miscreants are exploiting a critical VMware vCenter Server bug more than a year after Broadcom patched the flaw.…
Friday Squid Blogging: Giant Squid in the Star Trek Universe
Spock befriends a giant space squid in the comic Star Trek: Strange New Worlds: The Seeds of Salvation #5.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
AWS achieves 2025 C5 Type 2 attestation report with 183 services in scope
Amazon Web Services (AWS) is pleased to announce a successful completion of the 2025 Cloud Computing Compliance Criteria Catalogue (C5) attestation cycle with 183 services in scope. This alignment with C5 requirements demonstrates our ongoing commitment to adhere to the heightened expectations for cloud service providers. AWS customers in Germany and across Europe can run their applications in the AWS Regions that are in scope of the C5 report with the assurance that AWS aligns with C5 criteria.
The C5 attestation scheme is backed by the German government and was introduced by the Federal Office for Information Security (BSI) in 2016. AWS has adhered to the C5 requirements since their inception. C5 helps organizations demonstrate operational security against common cybersecurity threats when using cloud services.
Independent third-party auditors evaluated AWS for the period of October 1, 2024, through September 30, 2025. The C5 report illustrates the compliance status of AWS for both the basic and additional criteria of C5. Customers can download the C5 report through AWS Artifact, a self-service portal for on-demand access to AWS compliance reports. Sign in to AWS Artifact in the AWS Management Console or learn more at Getting Started with AWS Artifact.
AWS has added the following five services to the current C5 scope:
- Amazon Verified Permissions
- AWS B2B Data Interchange
- AWS Resource Explorer
- AWS Security Incident Response
- AWS Transform
The following AWS Regions are in scope of the 2025 C5 attestation: Europe (Frankfurt), Europe (Ireland), Europe (London), Europe (Milan), Europe (Paris), Europe (Stockholm), Europe (Spain), Europe (Zurich), and Asia Pacific (Singapore). For up-to-date information, see the C5 page of our AWS Services in Scope by Compliance Program.
Security and compliance is a shared responsibility between AWS and the customer. When customers move their computer systems and data to the cloud, security responsibilities are shared between the customer and the cloud service provider. For more information, see the AWS Shared Security Responsibility Model.
To learn more about our compliance and security programs, see AWS Compliance Programs. As always, we value your feedback and questions; reach out to the AWS Compliance team through the Contact Us page.
Reach out to your AWS account team if you have questions or feedback about the C5 report.
If you have feedback about this post, submit comments in the Comments section below.
From runtime risk to real‑time defense: Securing AI agents
AI agents, whether developed in Microsoft Copilot Studio or on alternative platforms, are becoming a powerful means for organizations to create custom solutions designed to enhance productivity and automate organizational processes by seamlessly integrating with internal data and systems.
From a security research perspective, this shift introduces a fundamental change in the threat landscape. As Microsoft Defender researchers evaluate how agents behave under adversarial pressure, one risk stands out: once deployed, agents can access sensitive data and execute privileged actions based on natural language input alone. If an threat actor can influence how an agent plans or sequences those actions, the result may be unintended behavior that operates entirely within the agent’s allowed permissions, which makes it difficult to detect using traditional controls.
To address this, it is important to have a mechanism for verifying and controlling agent behavior during runtime, not just at build time.
By inspecting agent behavior as it executes, defenders can evaluate whether individual actions align with intended use and policy. In Microsoft Copilot Studio, this is supported through real-time protection during tool invocation, where Microsoft Defender performs security checks that determine whether each action should be allowed or blocked before execution. This approach provides security teams with runtime oversight into agent behavior while preserving the flexibility that makes agents valuable.
In this article, we examine three scenarios inspired by observed and emerging AI attack techniques, where threat actors attempt to manipulate agent tool invocation to produce unsafe outcomes, often without the agent creator’s awareness. For each scenario, we show how webhook-based runtime checks, implemented through Defender integration with Copilot Studio, can detect and stop these risky actions in real time, giving security teams the observability and control needed to deploy agents with confidence.
Topics, tools, and knowledge sources: How AI agents execute actions and why attackers target them

Microsoft Copilot Studio agents are composed of multiple components that work together to interpret input, plan actions, and execute tasks. From a security perspective, these same components (topics, tools, and knowledge sources) also define the agent’s effective attack surface. Understanding how they interact is essential to recognizing how attackers may attempt to influence agent behavior, particularly in environments that rely on generative orchestration to chain actions at runtime. Because these components determine how the agent responds to user prompts and autonomous triggers, crafted input becomes a primary vector for steering the agent toward unintended or unsafe execution paths.
When using generative orchestration, each user input or trigger can cause the orchestrator to dynamically build and execute a multi-step plan, leveraging all three components to deliver accurate and context-aware results.
- Topics are modular conversation flows triggered by specific user phrases. Each topic is made up of nodes that guide the conversation step-by-step, and can include actions, questions, or conditions.
- Tools are the capabilities the copilot can call during a conversation, such as connector actions, AI builder models, or generative answers. These can be embedded within topics or executed independently, giving the agent flexibility in how it handles requests.
- Knowledge sources enhance generative answers by grounding them in reliable enterprise content. When configured, they allow the copilot to access information from Power Platform, Dynamics 365, websites, and other external systems, ensuring responses are accurate and contextually relevant. Read more about Microsoft Copilot Studio agents here.
Understanding and mitigating potential risks with real-time protection in Microsoft Defender
In the model above, the agent’s capabilities are effectively equivalent to code execution in the environment. When a tool is invoked, it can perform real-world actions, read or write data, send emails, update records, or trigger workflows – just like executing a command inside a sandbox where the sandbox is a set of all the agent’s capabilities. This means that if an attacker can influence the agent’s plan, they can indirectly cause the execution of unintended operations within the sandbox. From a security lens:
- The risk is that the agent’s orchestrator depends on natural language input to determine which tools to use and how to use them. This creates exposure to prompt injection and reprogramming failures, where malicious prompts, embedded instructions, or crafted documents can manipulate the decision-making process.
- The exploit occurs when these manipulated instructions lead the agent to perform unauthorized tool use, such as exfiltrating data, carrying out unintended actions, or accessing sensitive resources, without directly compromising the underlying systems.
Because of this, Microsoft Defender treats every tool invocation as a high-value, high-risk event, and monitors it in real time. Before any tool, topic, or knowledge action is executed, the Copilot Studio generative orchestrator initiates a webhook call to Defender. This call transmits all relevant context for the planned invocation including the current component’s parameters, outputs from previous steps in the orchestration chain, user context, and other metadata.
Defender analyzes this information, evaluating both the intent and destination of every action, and decides in real time whether to allow or block the action, providing precise runtime control without requiring any changes to the agent’s internal orchestration logic.
By viewing tools as privileged execution points and inspecting them with the same rigor we apply to traditional code execution, we can give organizations the confidence to deploy agents at scale – without opening the door to exploitation.
Below are three realistic scenarios where our webhook-based security checks step in to protect against unsafe actions.
Malicious instruction injection in an event-triggered workflow
Consider the following business scenario: a finance agent is tasked with generating invoice records and responding to finance-related inquiries regarding the company. The agent is configured to automatically process all messages sent to invoice@contoso.com mailbox using an event trigger. The agent uses the generative orchestrator, which enables it to dynamically combine tools, topics, and knowledge in a single execution plan.
In this setup:
- Trigger: An incoming email to invoice@contoso.com starts the workflow.
- Tool: The CRM connector is used to create or update a record with extracted payment details.
- Tool: The email sending tool sends confirmation back to the sender.
- Knowledge: A company-provided finance policy file was uploaded to the agent so it can answer questions about payment terms, refund procedures, and invoice handling rules.
The instructions that were given to the agent are for the agent to only handle invoice data and basic finance-related FAQs, but because generative orchestration can freely chain together tools, topics, and knowledge, its plan can adapt or bypassed based on the content of the incoming email in certain conditions.

A malicious external sender could craft an email that appears to contain invoice data but also includes hidden instructions telling the agent to search for unrelated sensitive information from its knowledge base and send it to the attacker’s mailbox. Without safeguards, the orchestrator could interpret this as a valid request and insert a knowledge search step into its multi-component plan, followed by an email sent to the attacker’s address with the results.

Before the knowledge component is invoked, MCS sends a webhook request to our security product containing:
- The target action (knowledge search).
- Search query parameters derived from the orchestrator’s plan.
- Outputs from previous orchestration steps.
- Context from the triggering email.
Agent Runtime Protection analyzes the request and blocks the invocation before it executes, ensuring that the agent’s knowledgebase is never queried with the attacker’s input.
This action is logged in the Activity History, where administrators can see that the invocation was blocked, along with an error message indicating that the threat-detection controls intervened:

In addition, an XDR informational alert will be triggered in the security portal to keep the security team aware of potential attacks (even though this specific attack was blocked):

Prompt injection via shared document leading to malicious email exfiltration attempt
Consider that an organizational agent is connected to the company’s cloud-based SharePoint environment, which stores internal documents. The agent’s purpose is to retrieve documents, summarize their content, extract action items, and send these to relevant recipients.
To perform these tasks, the agent uses:
- Tool A – to access SharePoint files within a site (using the signed-in user’s identity)

A malicious insider edits a SharePoint document that they have permission to, inserting crafted instructions intended to manipulate the organizational agent’s behavior.
When the crafted file is processed, the agent is tricked into locating and reading the contents of a sensitive file, transactions.pdf, stored on a different SharePoint file the attacker cannot directly access but that the connector (and thus the agent) is permitted to access. The agent then attempts to send the file’s contents via email to an attacker-controlled domain.

At the point of invoking the email-sending tool, Microsoft Threat Intelligence detects that the activity may be malicious and blocks the email, preventing data exfiltration.
Capability reconnaissance attempt on agent
A publicly accessible support chatbot is embedded on the company’s website without requiring user authentication. The chatbot is configured with a knowledge base that includes customer information and points of contact.
An attacker interacts with the chatbot using a series of carefully crafted and sophisticated prompts to probe and enumerate its internal capabilities. This reconnaissance aims to discover available tools and potential actions the agent can perform, with the goal of exploiting them in later interactions.
After the attacker identifies the knowledge sources accessible to the agent, they can extract all information from those sources, including potentially sensitive customer data and internal contact details, causing it to perform unintended actions.
Microsoft Defender detects these probing attempts and acts to block any subsequent tool invocations that were triggered as a direct result, preventing the attacker from leveraging the discovered capabilities to access or exfiltrate sensitive data.
Final words
Securing Microsoft Copilot Studio agents during runtime is critical to maintaining trust, protecting sensitive data, and ensuring compliance in real-world deployments. As demonstrated through the above scenarios, even the most sophisticated generative orchestrations can be exploited if tool invocations are not carefully monitored and controlled.
Defender’s webhook-based runtime inspection combined with advanced threat intelligence, organizations gain a powerful safeguard that can detect and block malicious or unintended actions as they happen, without disrupting legitimate workflows or requiring intrusive changes to agent logic (see more details at the ‘Learn more’ section below).
This approach provides a flexible and scalable security layer that evolves alongside emerging attack techniques and enables confident adoption of AI-powered agents across diverse enterprise use cases.
As you build and deploy your own Microsoft Copilot Studio agents, incorporating real-time webhook security checks will be an essential step in delivering safe, reliable, and responsible AI experiences.
This research is provided by Microsoft Defender Security Research with contributions from Dor Edry, Uri Oren.
Learn more
- Review our documentation to learn more about our real-time protection capabilities and see how to enable them within your organization.
- Learn more about securing Copilot Studio agents with Microsoft Defender
- Learn more about Protect your agents in real-time during runtime (Preview) – Microsoft Defender for Cloud Apps | Microsoft Learn
The post From runtime risk to real‑time defense: Securing AI agents appeared first on Microsoft Security Blog.







