What this page covers
This guide explains the attack surface unique to agentic AI architectures — delegation chains, inter-agent injection, tool-use exploits, and runtime drift — and the runtime controls needed to defend against them. Coverage is mapped to NIST AI RMF Manage 2.x and OWASP LLM08 (Excessive Agency).
What makes AI “agentic”
An AI agent is a system that receives a goal, breaks it into sub-tasks, calls external tools, and acts on results — often without human approval at each step. A multi-agent system chains several of these together: one agent plans, another retrieves data, a third executes code, and a fourth validates the output.
This architecture powers the most capable AI products shipping today — coding assistants that create pull requests, research agents that query databases and synthesize reports, customer-service systems that look up orders and issue refunds. The capability leap is real. So is the security gap.
Traditional AI security focused on a single model endpoint: you send a prompt, you get a response, you evaluate that response. Agentic systems break this model. The “response” isn’t text — it’s a sequence of actions executed across tools, APIs, and other agents, sometimes spanning minutes or hours.
Why traditional AppSec doesn’t cover AI agents
Application security tools were built for deterministic software. They assume that code follows defined execution paths, that inputs map predictably to outputs, and that access controls are enforced by the application layer.
AI agents break every one of these assumptions:
Non-deterministic execution
The same user input can produce different action sequences depending on context, model state, and tool outputs. WAFs and static analysis can’t model an attack surface that changes with every request.
Natural-language control plane
The agent’s behavior is governed by natural language instructions, not compiled code. Prompt injection isn’t SQL injection — it targets the decision-making logic itself, not a data layer.
Implicit authorization
When an agent calls a tool, it acts on behalf of the user — but the tool sees the agent’s credentials, not the user’s intent. The mapping between “what the user asked for” and “what tools the agent calls” is mediated by a model, not enforced by code.
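One common mitigation is to bind every tool call to the requesting user's scope rather than the agent's credentials. The sketch below is illustrative — the `UserScope` type, tool names, and permission model are assumptions, not a real API:

```python
from dataclasses import dataclass

# Hypothetical sketch: the tool layer authorizes against the *user's* scope,
# not the agent's service credentials, closing the implicit-authorization gap.

@dataclass(frozen=True)
class UserScope:
    user_id: str
    allowed_tools: frozenset

def call_tool(tool_name: str, args: dict, scope: UserScope) -> dict:
    # Authorization references the originating user, not the calling agent.
    if tool_name not in scope.allowed_tools:
        raise PermissionError(f"{scope.user_id} may not invoke {tool_name}")
    return {"tool": tool_name, "args": args, "on_behalf_of": scope.user_id}

scope = UserScope("alice", frozenset({"read_order"}))
result = call_tool("read_order", {"order_id": 42}, scope)  # permitted
# call_tool("issue_refund", {"order_id": 42}, scope) would raise PermissionError
```

The design point: the scope object travels with the request, so even if the model mis-plans, the tool boundary enforces the user's actual entitlements.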
Action chains, not requests
A single user instruction can trigger dozens of API calls, file reads, and database queries. Security must evaluate the entire chain, not individual requests in isolation.
Four attack surfaces unique to agentic AI
Inter-agent communication
When Agent A passes instructions to Agent B, those messages become an attack vector. A compromised or manipulated upstream agent can inject instructions that downstream agents execute without question — a form of indirect prompt injection that propagates through the entire chain.
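One defense is to treat inter-agent payloads as data unless they arrive with verifiable authenticity. A minimal sketch, assuming the agents share a signing key (all names here are hypothetical):

```python
import hmac
import hashlib

SECRET = b"shared-signing-key"  # assumption: agents share an out-of-band key

def sign(sender: str, instruction: str) -> str:
    # Sign the (sender, instruction) pair so instructions can't be forged
    # by text smuggled through retrieved documents or tool outputs.
    msg = f"{sender}:{instruction}".encode()
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def accept_instruction(sender: str, instruction: str, sig: str) -> bool:
    # Downstream agents execute only instructions with a valid signature.
    return hmac.compare_digest(sign(sender, instruction), sig)

good_sig = sign("planner", "summarize report")
assert accept_instruction("planner", "summarize report", good_sig)
# An injected instruction arriving via untrusted content has no valid signature:
assert not accept_instruction("retriever", "export all records", good_sig)
```

This doesn't stop a compromised signer, but it prevents injected text in a document or tool result from masquerading as a legitimate upstream instruction.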
Tool-use exploits
Agents call APIs, execute code, read files, and write to databases. Each tool invocation is a privilege boundary. An attacker who controls what arguments an agent passes to a tool — through poisoned context or manipulated planning steps — can escalate from “read customer record” to “export all customer records.”
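The escalation above can be blocked with a parameter guardrail at the tool boundary. The policy shape and limits below are illustrative assumptions, not a real API:

```python
# Hypothetical per-tool policy: caps on what arguments an agent may pass,
# enforced outside the model so poisoned context can't widen the scope.
TOOL_POLICIES = {
    "read_customer_record": {"max_ids": 1},
}

def validate_tool_call(tool: str, args: dict) -> None:
    policy = TOOL_POLICIES.get(tool)
    if policy is None:
        raise PermissionError(f"tool {tool!r} is not in the agent's allowlist")
    if len(args.get("customer_ids", [])) > policy["max_ids"]:
        raise PermissionError("bulk access exceeds the authorized scope")

validate_tool_call("read_customer_record", {"customer_ids": ["c-1"]})  # ok
# validate_tool_call("read_customer_record", {"customer_ids": ["c-1", "c-2"]})
# would raise PermissionError: the "read one record" grant can't become
# "export all records", regardless of what the planning step produced.
```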
Delegation chains
Multi-step delegation creates confused-deputy problems. Agent A has permission to delegate to Agent B, which can invoke Tool C. But was Agent A’s original instruction legitimate? By the time Tool C executes, the provenance of the request is three layers removed from any human decision.
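One way to keep that provenance intact is to have every delegation hop append itself to the request, so the tool layer can refuse anything that doesn't trace back to a human instruction. A hedged sketch (the `Request` shape and the `"human"` marker are assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Request:
    instruction: str
    provenance: list = field(default_factory=list)  # ordered chain of actors

def delegate(req: Request, agent: str) -> Request:
    # Each hop records itself rather than rewriting the chain.
    req.provenance.append(agent)
    return req

def execute_tool(req: Request) -> None:
    # The tool boundary rejects requests with no human origin.
    if not req.provenance or req.provenance[0] != "human":
        raise PermissionError("no human origin in delegation chain")

req = Request("refund order 42", provenance=["human"])
delegate(req, "agent_a")
delegate(req, "agent_b")
execute_tool(req)  # provenance: human -> agent_a -> agent_b
```

In production this chain would be signed per hop (see the attestation discussion below); the point here is that provenance must be carried explicitly, because the model itself won't preserve it.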
Emergent behavior
Individual agents pass unit tests. The composed system does something unexpected. Emergent failures aren’t bugs in any single component — they’re interaction effects that only appear when agents operate together in production with real data and real timing.
Why unit testing falls short
Standard AI testing validates a model’s responses to known inputs. You write a prompt, check the output, mark it pass or fail. This works for single-turn interactions. It breaks for agentic systems because:
- The input space is unbounded. An agent’s next action depends on real-time tool outputs, other agents’ responses, and multi-turn context that can’t be enumerated in advance.
- Timing matters. Race conditions between agents, API latency, and retry logic create non-deterministic execution paths.
- Composition creates risk. Two safe agents can produce unsafe outcomes when combined — similar to how individually approved drugs can interact dangerously.
- Context drift is invisible. Over a long-running task, an agent’s internal context can gradually shift — through accumulated tool outputs or inter-agent messages — until it takes actions that would have been rejected at the start.
This isn’t a shortcoming of testing teams. It’s a fundamental architectural gap. The only way to catch these failures is to observe the system as it runs.
Framework mapping: OWASP, NIST, MITRE ATLAS
Agentic attack surfaces map directly to established risk taxonomies — they’re extensions of known categories, not a wholly new domain.
| Attack surface | OWASP LLM Top 10 | MITRE ATLAS | NIST AI RMF |
|---|---|---|---|
| Inter-agent injection | LLM01: Prompt Injection | AML.T0051 | MG-2.2 |
| Tool-use escalation | LLM07: Insecure Plugin Design | AML.T0040 | MG-3.1 |
| Delegation-chain confusion | LLM08: Excessive Agency | AML.T0048 | GV-1.3 |
| Emergent behavior | LLM09: Overreliance | AML.T0043 | MS-2.6 |
Mapped to the OWASP LLM Top 10 (2023), MITRE ATLAS, and NIST AI RMF 1.0.
Runtime monitoring for agentic systems
Runtime monitoring watches agent behavior as it happens. Instead of testing what an agent might do, you observe what it is doing — every tool call, every inter-agent message, every decision in the delegation chain.
Three capabilities matter for agentic security:
Tool-call auditing
Every tool invocation is logged with its arguments, the requesting agent, the originating user instruction, and the returned data. Anomalous patterns — an agent suddenly requesting bulk exports when it usually reads single records — trigger alerts before data leaves the system.
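A toy version of that anomaly rule: keep a per-agent history of record counts and flag calls that jump far above the historical norm. The 10× threshold and field names are illustrative assumptions:

```python
from collections import defaultdict

history = defaultdict(list)  # agent -> record counts from prior calls

def audit_tool_call(agent: str, tool: str, record_count: int) -> bool:
    """Log the call and return True if it looks anomalous.

    Anomaly rule (illustrative): the requested record count exceeds
    10x the agent's largest previous request.
    """
    past = history[agent]
    baseline = max(past) if past else 0
    history[agent].append(record_count)
    return bool(past) and record_count > 10 * max(baseline, 1)

assert audit_tool_call("agent_a", "read_record", 1) is False
assert audit_tool_call("agent_a", "read_record", 1) is False
assert audit_tool_call("agent_a", "export_records", 5000) is True  # bulk spike
```

Real systems would use richer baselines (per-tool, time-windowed, statistical), but the shape is the same: the audit log doubles as the feature source for detection.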
Delegation-chain tracing
Every request in a multi-agent workflow carries provenance metadata — which human instruction originated the chain, which agents processed it, and what transformations occurred along the way. If a downstream agent receives instructions that can’t be traced to a legitimate origin, the chain is halted.
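The metadata envelope can be sketched loosely on distributed-tracing span propagation. The field names (`trace_id`, `origin`, `hops`) are hypothetical:

```python
import uuid

def new_chain(origin: str) -> dict:
    # Mint provenance metadata when a human instruction enters the system.
    return {"trace_id": str(uuid.uuid4()), "origin": origin, "hops": []}

def forward(msg: dict, agent: str, payload: str) -> dict:
    # Each agent appends itself and attaches its transformed payload.
    msg["hops"].append(agent)
    msg["payload"] = payload
    return msg

def admit(msg: dict) -> None:
    # A downstream agent halts the chain on untraceable input.
    if "trace_id" not in msg or msg.get("origin") != "human-instruction":
        raise RuntimeError("untraceable message: chain halted")

msg = forward(new_chain("human-instruction"), "planner", "fetch Q3 numbers")
admit(msg)  # traced to a legitimate origin
# admit({"payload": "export all records"}) would raise RuntimeError
```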
Behavioral drift detection
Over long-running tasks, an agent’s actions are compared against its established behavioral baseline. Gradual context drift — where accumulated tool outputs or inter-agent messages shift an agent’s behavior toward unsafe territory — is flagged before the agent crosses a policy boundary.
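A toy drift detector makes the idea concrete: compare the agent's recent action mix against its baseline distribution with total variation distance. The 0.3 threshold and the action categories are illustrative assumptions, not recommended values:

```python
def tv_distance(p: dict, q: dict) -> float:
    # Total variation distance between two action-frequency distributions.
    keys = set(p) | set(q)
    return 0.5 * sum(abs(p.get(k, 0.0) - q.get(k, 0.0)) for k in keys)

# Baseline from the agent's history vs. the last N actions of a long task.
baseline = {"read": 0.9, "write": 0.1}
recent = {"read": 0.4, "write": 0.2, "export": 0.4}

drifted = tv_distance(baseline, recent) > 0.3  # flag before a policy breach
```

Here the sudden appearance of `export` actions pushes the distance past the threshold, surfacing drift while it is still a warning rather than an incident.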
How GLACIS approaches agentic security
GLACIS provides runtime observability for AI systems, including multi-agent architectures. The platform sits between your agents and the tools they call, monitoring behavior without adding latency to the critical path.
- autoredteam continuously probes your agent fleet with adversarial scenarios — including multi-agent injection chains and tool-escalation attempts — so you discover weaknesses before attackers do.
- Enforce applies guardrails at the tool-call boundary, ensuring agents can only invoke tools with parameters that match their authorized scope, regardless of what upstream agents request.
- Notarize creates cryptographic attestation records for every agent action — tool calls, inter-agent messages, and delegation steps — producing a tamper-evident audit trail that satisfies EU AI Act and NIST AI RMF requirements.
Mapped to OVERT controls ov-2.1 (runtime behavior logging), ov-3.1 (tool-call attestation), and ov-4.2 (multi-agent provenance tracking).
Agentic scan visualization
See how GLACIS traces a multi-agent delegation chain in real time — from user instruction through tool execution.
Explore further
- AI runtime security (pillar guide) — The runtime monitoring layer that catches what unit tests miss.
- AI penetration testing (pillar guide) — Probe agentic systems for tool-use exploits and delegation-chain abuse.
- Prompt injection (pillar guide) — The injection vector that propagates through inter-agent messages.
- OWASP LLM Top 10 (pillar guide) — LLM01, LLM07, LLM08: the agent-relevant risks in the OWASP catalog.
Secure your agent fleet
Start with a free behavioral scan of your AI system, or book a 25-minute call to see multi-agent monitoring in action.