From runtime risk to real‑time defense: Securing AI agents

As AI agents built in Microsoft Copilot Studio and other platforms become more integrated into business operations, they introduce a new category of runtime security risk. These agents can access sensitive data and execute privileged actions based solely on natural language input making them powerful, but also a potential target for prompt injection and manipulation attacks.

Microsoft Defender researchers highlight a key concern: if a threat actor can influence how an agent sequences actions, the agent may perform unintended operations within its allowed permissions making traditional detection methods less effective.

To address this, Microsoft has introduced real-time runtime protection for Copilot Studio agents. Every tool invocation (such as sending emails, querying knowledge bases, or updating records) is treated as a high-risk event. Before execution, the agent sends contextual data via webhook to Microsoft Defender, which analyzes intent, parameters, user context, and prior steps in the orchestration chain. Defender then determines whether to allow or block the action without requiring changes to the agent’s logic

The article outlines three real-world risk scenarios:

  • Malicious instruction injection in event-triggered workflows (e.g., finance inbox automation attempting unauthorized data retrieval)
  • Prompt injection via shared documents, leading to potential data exfiltration from platforms like SharePoint
  • Capability reconnaissance attacks against public-facing agents to probe and exploit internal tools

In each case, Microsoft Defender’s runtime inspection detects and blocks suspicious actions before execution, while generating visibility through activity logs and XDR alerts. The key takeaway: securing AI agents requires runtime validation not just build-time controls. By monitoring tool invocations in real time, organizations can deploy AI agents confidently while protecting sensitive data, maintaining compliance, and reducing the risk of exploitation.

As AI adoption scales, runtime security is becoming foundational to responsible and secure AI deployment.