When a system inherits trust from a source it did not verify, the attack surface extends to everything that source touches.
Secrets get embedded in artifacts (code, config, logs) whose trust boundary is wider than intended.
If a system grants access, installs code, or executes actions based on the identity of a source rather than the verified content of what that source provides, then trust is transitive, and the weakest link in the chain becomes the actual security boundary.
Every layer of abstraction introduces a new trust boundary. Package managers, CI/CD pipelines, container registries, AI agents — each inherits authority from the layer above without independently verifying it.
Exposed credentials enable unauthorized access: a committed API key bypasses all access control.
An AI agent with access to committed secrets inherits the trust boundary of every exposed credential.
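The exposure described above is mechanically detectable. As a minimal sketch, a scan for committed secrets is just pattern matching over file contents; the two regexes here are illustrative only (real scanners such as gitleaks or trufflehog ship far larger rulesets plus entropy checks):

```python
import re

# Illustrative patterns only, not a production ruleset.
SECRET_PATTERNS = [
    # key = "value" style assignments with a credential-like name
    re.compile(r'(?i)\b(api[_-]?key|password|secret|token)\b\s*[=:]\s*["\'][^"\']{8,}["\']'),
    # AWS access key ID format
    re.compile(r'\bAKIA[0-9A-Z]{16}\b'),
]

def scan_text(text):
    """Return (line number, line) for every line that looks like an embedded credential."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if any(p.search(line) for p in SECRET_PATTERNS):
            hits.append((lineno, line.strip()))
    return hits

sample = 'debug = true\napi_key = "sk-live-0123456789abcdef"\n'
print(scan_text(sample))  # flags line 2 only
```

Running a check like this in CI, before a commit reaches the shared repository, is cheaper than rotating a credential after it leaks.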
Year
2008–present
Context
Applications need secrets — database passwords, API keys, OAuth tokens, encryption keys. In the beginning, developers put them where they were needed: in the source code. The configuration file was checked into Git alongside the application. When the repository was public, the secrets were public. When the repository was private, the secrets were accessible to every developer, every CI runner, and every backup.
Who Built This
Every developer who ever typed password = "..." in a config file and committed it. Every tutorial that showed API_KEY = 'your-key-here' without explaining how to externalize it. Every .env file that made it past .gitignore.
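In its simplest form, the anti-pattern looks like this (every name and value below is a hypothetical example, not taken from any real codebase):

```python
# config.py -- the anti-pattern: credentials checked into version control.
DB_PASSWORD = "hunter2"                # readable by every clone, fork, and backup
API_KEY = "sk-live-0123456789abcdef"   # recoverable from Git history even after deletion

def connect_string():
    """Build a DSN that bakes the credential directly into application code."""
    return f"postgresql://app:{DB_PASSWORD}@db.internal:5432/app"
```

Note that deleting the file in a later commit does not help: the secret remains recoverable from Git history until the history itself is rewritten and the credential rotated.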
Threat Model at Time
Perimeter security. The repository was behind authentication. The server was behind a firewall. Secrets in the code were safe because the code was safe. Then open source happened. Then CI/CD happened. Then "oops, I made the repo public for 10 minutes" happened.
Why It Made Sense
The code needs the secret to run. Putting the secret next to the code is the simplest architecture. No external dependencies. No vault to configure. git clone and the application works. This is developer experience optimized for day one — and a security incident waiting for day two.
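The day-two alternative keeps the secret's name in code and its value outside the repository. A minimal sketch, assuming a deployment environment that can inject variables (the helper name is mine):

```python
import os

def require_secret(name):
    """Read a secret from the environment; fail fast at startup if it is absent."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"missing required secret: {name}")
    return value

# The code now knows only the *name* of the secret; the value is injected
# by the deployment platform, a vault, or a local (gitignored) .env loader.
# db_password = require_secret("DB_PASSWORD")
```

Failing fast at startup preserves most of the day-one experience: a fresh clone still works after one exported variable, and a missing secret surfaces as an immediate, explicit error rather than a silent fallback.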
This pattern has been found in applications built by talented developers at respected organizations across every decade of software history. Its presence in a codebase is not a reflection of the developer who wrote it — it is a reflection of what that developer was taught, what tools they had, and the path that was easiest given what they were taught. The goal is not to find fault. The goal is to find the pattern — before it finds you.
Katie's Law: The developers were not wrong. The shortcut was not wrong. The context changed and the shortcut didn't.