AI-Assisted Era · Design Flaw · EXP-009

The Autonomous Executor

When the AI did exactly what you said, and that was the problem

2020s–Now · Python / TypeScript · 7 min read
Pattern Classification
Class
Transitive Trust
Sub-pattern
Delegated Authority
Invariant

When a system inherits trust from a source it did not verify, the attack surface extends to everything that source touches.

This Instance

An autonomous agent inherits the full authority of its operator without scope constraints

Detection Heuristic

If a system grants access, installs code, or executes actions based on the identity of a source rather than the verified content of what that source provides — trust is transitive, and the weakest link in the chain becomes the actual security boundary.
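The heuristic above can be made concrete with a minimal sketch (all names and the registry URL are hypothetical, not from any real system): identity-based trust accepts whatever a "trusted" source delivers, while content verification pins exactly one artifact by its hash, no matter who delivered it.

```python
import hashlib

# Hypothetical pinned digest; in practice this would live in a lockfile
# committed alongside the code, not be computed inline like this.
PINNED_SHA256 = hashlib.sha256(b"artifact-bytes").hexdigest()

def install_by_identity(source: str, payload: bytes) -> bool:
    # Anti-pattern: trust is transitive through the source's identity.
    # Everything that source touches is now inside the trust boundary.
    return source == "trusted-registry.example"

def install_by_content(payload: bytes) -> bool:
    # Verify the content itself: the hash pins exactly one artifact,
    # regardless of which mirror, registry, or agent delivered it.
    return hashlib.sha256(payload).hexdigest() == PINNED_SHA256

good = b"artifact-bytes"
tampered = b"artifact-bytes-with-backdoor"

assert install_by_identity("trusted-registry.example", tampered)  # the flaw: accepted
assert install_by_content(good)
assert not install_by_content(tampered)
```

The identity check passes the tampered payload because it never looks at the payload at all; the content check rejects it even if it arrived from the most trusted source in the chain.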

Same Pattern Class
Why It Persists

Every layer of abstraction introduces a new trust boundary. Package managers, CI/CD pipelines, container registries, AI agents — each inherits authority from the layer above without independently verifying it.

Pattern Connections
AI Bridge
The Forged Request

CSRF tricks a browser into acting with ambient authority. AI agents act with delegated authority. Both execute actions the user did not intend.

Mitigated By
The Committed Secret

An AI agent with access to committed secrets inherits the trust boundary of every exposed credential.

Year

2024–present

Context

AI coding assistants graduated from autocomplete to autonomy. First they suggested lines. Then they wrote functions. Then they were given tool access — the ability to run commands, execute SQL, modify files, and deploy code. The promise was profound: describe your intent, and the agent executes it. The problem was equally profound: the agent has no concept of consequence. It doesn't distinguish between a development database and the production database. It doesn't hesitate before DROP TABLE. It executes with the confidence of a system that has never been afraid.
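The shape of the flaw can be sketched in a few lines. This is a deliberately minimal illustration, not any real agent framework; every class and function name here is hypothetical. The point is structural: the agent holds the operator's tools wholesale and executes whatever call the model emits, with no notion of environment or blast radius.

```python
from dataclasses import dataclass

@dataclass
class ToolCall:
    name: str      # which tool the model asked for, e.g. "sql" or "shell"
    argument: str  # the raw command or statement to run

class NaiveAgent:
    """Executes tool calls with whatever authority the operator holds."""

    def __init__(self, run_sql, run_shell):
        # The agent inherits these callables wholesale; it cannot tell a
        # dev connection from a prod connection. Trust is transitive.
        self._tools = {"sql": run_sql, "shell": run_shell}

    def execute(self, call: ToolCall) -> str:
        # No scope check, no confirmation, no dry run: the model said
        # to do it, so it is done.
        return self._tools[call.name](call.argument)

log = []
agent = NaiveAgent(run_sql=lambda q: log.append(("sql", q)) or "ok",
                   run_shell=lambda c: log.append(("sh", c)) or "ok")

# The agent executes DROP TABLE with the same confidence as SELECT.
agent.execute(ToolCall("sql", "DROP TABLE users"))
assert ("sql", "DROP TABLE users") in log
```

Note what is absent: there is no code path where the agent could hesitate, because nothing in its interface distinguishes a destructive call from a benign one.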

Who Built This

AI platform companies, DevOps teams integrating agents into CI/CD pipelines, and developers who gave their coding assistant shell access with production credentials. The agents were built for capability. The guardrails were built later — if at all.

Threat Model at Time

Prompt injection from external inputs. Model hallucination. Incorrect code generation. Nobody initially modeled the agent's correct execution of a destructive instruction as the primary threat. The agent did exactly what it was asked to do. The ask was the problem.

Why It Made Sense

Agents dramatically accelerate development. A developer can describe a database schema change and the agent writes the migration, runs it, verifies it. In development, this is transformative. The step from "run this in dev" to "run this in prod" is a single environment variable — and the agent doesn't read environment variables with judgment.
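That single-environment-variable step can be shown directly, along with one possible guardrail. This is a sketch under assumptions: the variable name, URL format, and destructive-statement list are all illustrative, and the `"prod" in url` check is a crude stand-in for real environment tagging.

```python
import os

# The only thing separating "run this in dev" from "run this in prod"
# is what this one variable happens to point at.
os.environ.setdefault("DATABASE_URL", "postgres://prod-db/app")

def get_connection_url() -> str:
    # The agent reads this the same way regardless of what it points at.
    return os.environ["DATABASE_URL"]

DESTRUCTIVE = ("drop ", "truncate ", "delete from")

def guarded_execute(sql: str, url: str) -> str:
    # One possible guardrail: refuse destructive statements against
    # anything that looks like production. A real system would use
    # explicit environment tags and a human confirmation step instead.
    if "prod" in url and sql.lower().startswith(DESTRUCTIVE):
        raise PermissionError("destructive statement blocked against prod")
    return "executed"

assert guarded_execute("SELECT 1", get_connection_url()) == "executed"
try:
    guarded_execute("DROP TABLE users", get_connection_url())
except PermissionError:
    pass  # blocked: the guard read the environment with judgment
```

The design choice worth noting is where the judgment lives: not in the model's intent, but in a deterministic check between the agent and the credential it inherited.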

Archaeologist's Note

This pattern has been found in applications built by talented developers at respected organizations across every decade of software history. Its presence in a codebase is not a reflection of the developer who wrote it — it is a reflection of what that developer was taught, what tools they had, and the path that was easiest given what they were taught. The goal is not to find fault. The goal is to find the pattern — before it finds you.

Katie's Law: The developers were not wrong. The shortcut was not wrong. The context changed and the shortcut didn't.
