When a system trusts the presence of a credential instead of verifying the intent behind it, authentication becomes indistinguishable from authorization.
Authentication is checked, but authorization to access a specific resource is assumed from the credential alone.
If a system performs a state-changing action because a credential is present — without verifying that the specific request was intentionally initiated by the credential holder — the system trusts authority, not intent.
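The pattern can be made concrete with a minimal sketch. The handler below authenticates the caller from a session token, then performs a state-changing action on whatever resource the request names, without ever asking whether that resource belongs to the caller. All names here (SESSIONS, DOCUMENTS, the two handlers) are hypothetical, chosen for illustration.

```python
# Hypothetical in-memory stores: token -> authenticated user, doc id -> record.
SESSIONS = {"token-abc": "alice"}
DOCUMENTS = {42: {"owner": "bob", "body": "draft"}}

def delete_document_vulnerable(token, doc_id):
    user = SESSIONS.get(token)
    if user is None:
        return "401 Unauthorized"      # authentication IS checked...
    DOCUMENTS.pop(doc_id, None)        # ...but authorization is assumed:
    return "200 OK"                    # any valid token can delete any doc

def delete_document_fixed(token, doc_id):
    user = SESSIONS.get(token)
    if user is None:
        return "401 Unauthorized"
    doc = DOCUMENTS.get(doc_id)
    if doc is None or doc["owner"] != user:
        return "403 Forbidden"         # authorization checked per resource
    del DOCUMENTS[doc_id]
    return "200 OK"
```

In the vulnerable version, alice's valid token deletes bob's document; the credential's presence stands in for the owner's intent.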
Every authentication mechanism that attaches credentials automatically recreates this pattern. Cookies gave way to bearer tokens, tokens to API keys, keys to ambient cloud IAM roles. The carrier changes. The assumption does not.
Unguarded memory (1960s) is the physical precursor to IDOR (2000s). Both grant access because an identifier is known, not because access is authorized.
Without memory protection, buffer overflows don't just corrupt local state — they can overwrite any memory in the system.
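The parallel can be sketched in a few lines. Below, two "programs" share one flat memory array with no protection between their regions, standing in for the unprotected physical memory of an early timesharing machine; the region names and offsets are illustrative, not taken from any real system.

```python
# One shared "physical memory" and two nominal regions. Nothing enforces
# the regions: they exist only as a convention between the programs.
memory = [0] * 16
REGION_A = range(0, 8)    # program A "owns" cells 0..7
REGION_B = range(8, 16)   # program B "owns" cells 8..15

def program_a_write(offset, value):
    # No bounds check: the address is honored because it is known,
    # not because program A is authorized to write there.
    memory[REGION_A.start + offset] = value

memory[REGION_B.start] = 99   # program B stores its state in cell 8

program_a_write(8, -1)        # A writes one cell past its region...
assert memory[8] == -1        # ...and silently overwrites B's state
```

Knowing the address is the whole capability, exactly as knowing the object identifier is the whole capability in an IDOR.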
Year
1961–1969
Context
The first computers ran one program at a time. The entire machine — all of memory, all of I/O, all of the processor — belonged to that one program. There was nothing to protect, because there was nothing else running. Then came timesharing. MIT's CTSS (1961), IBM's TSS/360 (1967), and Multics (1969) allowed multiple users to share a single machine simultaneously. Each user believed they had the machine to themselves. But underneath, their programs shared physical memory — and nothing prevented one program from reaching into another's space.
Who Built This
Operating system designers building the first multi-user systems. The hardware of the early 1960s had no memory protection mechanisms. The IBM 7090 that ran CTSS relied on base-and-bounds checking — a program was supposed to stay within its allocated range of addresses, but the bounds were enforced by the operating system in software, not by hardware. A bug in the OS, or a clever user, could bypass the bounds.
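A software-enforced bounds check of the kind described above might look like the sketch below. Because the check is just ordinary code, any path that reaches the raw memory without going through it defeats the protection entirely. All names here are illustrative.

```python
# A flat "physical memory"; low cells belong to the OS by convention.
memory = [0] * 32

class Bounds:
    """Base-and-bounds record for one program's allocated range."""
    def __init__(self, base, limit):
        self.base = base      # first cell the program may use
        self.limit = limit    # number of cells in its range

def checked_store(bounds, offset, value):
    # The bounds check lives in software: it protects only callers
    # that choose to go through this function.
    if not 0 <= offset < bounds.limit:
        raise MemoryError("out of bounds")
    memory[bounds.base + offset] = value

user_prog = Bounds(base=16, limit=8)   # user program gets cells 16..23

checked_store(user_prog, 3, 7)         # legitimate write lands in cell 19
memory[0] = 7                          # a bug or clever user skips the
                                       # check and writes into OS memory
```

Hardware base-and-bounds registers close exactly this gap: the comparison happens on every access, with no path around it.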
Threat Model at Time
Hardware reliability. Programs crashed because of hardware faults — vacuum tube failures, magnetic core errors, card reader jams. The threat was the machine itself, not the other users. Everyone with access to a timesharing system was a trusted researcher or employee. The idea that a user would intentionally access another user's memory was not part of the mental model.
Why It Made Sense
Memory protection hardware did not exist or was primitive. Adding protection meant adding hardware — circuits, registers, comparison logic — at a time when every transistor was expensive. The economic calculus was clear: trust the users and ship the system, or spend years designing protection hardware for a threat that had never been observed.
This pattern has been found in applications built by talented developers at respected organizations across every decade of software history. Its presence in a codebase is not a reflection of the developer who wrote it — it is a reflection of what that developer was taught, the tools they had, and the path that was easiest given both. The goal is not to find fault. The goal is to find the pattern — before it finds you.
Katie's Law: The developers were not wrong. The shortcut was not wrong. The context changed and the shortcut didn't.