Employees treated ChatGPT as a private productivity tool, an extension of their personal cognitive workspace. ChatGPT is a public service, and at the time all inputs were potentially retained for model training. The employees understood what they were pasting. They did not understand where they were pasting it.
The consequences: Samsung banned generative AI tools for internal use; multiple large corporations (Apple, Amazon, JPMorgan) issued similar bans; enterprise AI governance discussions accelerated. The data cannot be retrieved.
The Incident
In April 2023, Bloomberg and The Economist Korea reported that three Samsung semiconductor employees had entered sensitive proprietary information into ChatGPT within a 20-day period:
1. An engineer pasted proprietary semiconductor source code into ChatGPT and asked it to check for bugs.
2. A second employee pasted source code from the same database, asking ChatGPT to optimize it.
3. A third employee transcribed a confidential internal meeting — including notes from a business strategy session — and asked ChatGPT to summarize it.
In all three cases, the data was submitted to OpenAI's service, which at the time retained user inputs for model improvement. The proprietary information, semiconductor source code representing years of competitive advantage, left Samsung's control with no mechanism to recall it.
Samsung banned ChatGPT for internal use within weeks, as did dozens of major corporations.
The Pattern
This is Ambient Authority (Law II) operating at the employee layer. The employees had legitimate authority over the data in the context where they worked with it: their workstations, their internal tools, their domain. They did not have authority to disclose it externally. ChatGPT looked like an internal tool because it behaved like one (it felt private, like a personal assistant), but it was a public endpoint.
The mental model failure: "I am having a private conversation with an AI assistant." The operational reality: "I am submitting text to a third party's training pipeline."
This is the same mental model failure behind every CSRF attack: the browser attaches the user's session credentials to a request, so the request looks like it comes from the user. It does, but the intent behind it is not the user's. The employee pasted proprietary code because the interface felt like a private workspace. The interface was not private.
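The CSRF analogy points at the standard defense: ambient credentials prove *who* is acting, so the server must demand separate proof of *intent* that ambient authority cannot supply. A minimal sketch of the synchronizer-token pattern, with hypothetical handler and helper names (not any particular framework's API):

```python
import hashlib
import hmac
import secrets

# Per-deployment signing key (illustrative; a real app would load this from config).
SECRET_KEY = secrets.token_bytes(32)

def issue_csrf_token(session_id: str) -> str:
    """Token bound to the session, embedded in the page — not sent ambiently."""
    return hmac.new(SECRET_KEY, session_id.encode(), hashlib.sha256).hexdigest()

def handle_transfer(session_id: str, form_token: str) -> str:
    """The session cookie proves who; the token proves the request was intended."""
    expected = issue_csrf_token(session_id)
    if not hmac.compare_digest(expected, form_token):
        # The request rode the user's session without carrying the user's intent.
        return "403 Forbidden"
    return "200 OK"
```

A forged request can carry the victim's cookie (the browser attaches it automatically) but cannot produce the matching token, so intent and identity are checked separately.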
The Exhibit
This incident maps to [The Boundary Collapse](/exhibits/boundary-collapse) pattern (Law I): the boundary between proprietary internal data and a public external service was not enforced by technology, only by employee understanding. The employees understood what they were pasting. They did not understand the boundary they were crossing.
This is the distinction the museum keeps trying to surface: you cannot defend a boundary that is not implemented as a constraint. "Employees shouldn't do this" is a policy. "Employees can't do this" is a control. Samsung had a policy. They did not have a control.
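The policy/control distinction can be shown in miniature. A policy is a sentence in a handbook; a control sits in the egress path and enforces the boundary by default-deny. A hedged sketch with made-up internal hostnames (nothing here reflects Samsung's actual infrastructure):

```python
# Hypothetical egress control: outbound requests are blocked unless the
# destination is an approved internal service. "Shouldn't" becomes "can't".
APPROVED_HOSTS = {
    "code-review.internal.example.com",
    "wiki.internal.example.com",
}

def egress_allowed(host: str) -> bool:
    """Default-deny: the boundary is a constraint, not an instruction."""
    return host in APPROVED_HOSTS
```

Under this control, a paste into a public chat endpoint never leaves the network, regardless of what the employee believes about where the text is going.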
Why It Matters
The Samsung incident was not an edge case. At Amazon, an internal warning circulated after employees were found pasting confidential information into ChatGPT. Apple issued a ban. JPMorgan restricted employee access. The same pattern replicated across industries, because the mental model failure (private conversation vs. public endpoint) is universal, not specific to Samsung.
The data didn't leak. It was pasted. The distinction matters because the fix is different: you can't patch a paste.