“What we didn't realize then was that the integrated circuit would reduce the cost of electronic functions by a factor of a million to one.” (Jack Kilby)
The Story
On September 12, 1958, Jack Kilby demonstrated the first integrated circuit at Texas Instruments — a single piece of germanium with a transistor, resistor, and capacitor connected by gold wires. It was crude and fragile, and it changed everything. Robert Noyce at Fairchild Semiconductor independently developed a more practical version in 1959 using planar process technology, but Kilby's demonstration came first and earned him the Nobel Prize in Physics in 2000.
Before the integrated circuit, computers were assembled from discrete components — individual transistors, resistors, and capacitors soldered onto circuit boards. ENIAC used 17,468 vacuum tubes. The integrated circuit collapsed thousands, then millions, then billions of components onto a single chip. Without this, there are no microprocessors, no personal computers, no smartphones, no cloud servers, no internet — and no software to fail.
TI's later contributions — the TMS1000 (first single-chip microcomputer, 1974), the Speak & Spell (first consumer device with a DSP chip, 1978), and the TI-83/84 graphing calculators that taught a generation of students to program — extended computation from mainframe rooms to shirt pockets to classrooms.
Why They're in the Hall
Texas Instruments is in the museum because every exhibit requires hardware to run on, and that hardware traces to Kilby's integrated circuit. The buffer overflow in C, the SQL injection in PHP, the race condition in Java — each executes on silicon whose lineage runs through TI's 1958 lab in Dallas. The integrated circuit also enabled Katie's Law at industrial scale: cheaper computation meant more software, written faster, by more people, with less time for safety checks. The force that made software ubiquitous is the same force that made software failure ubiquitous.
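The first failure mode named above is worth seeing on the page. Below is a minimal sketch of the classic C stack buffer overflow; the function name, buffer size, and greeting are illustrative inventions, not drawn from any real codebase.

    #include <stdio.h>
    #include <string.h>

    /* A deliberately unsafe function. The name "greet" and the
       16-byte buffer are illustrative, not from any real program. */
    void greet(const char *name) {
        char buf[16];          /* fixed-size stack buffer */
        strcpy(buf, name);     /* no bounds check: input longer than
                                  15 characters plus the terminating
                                  NUL writes past the end of buf */
        printf("Hello, %s\n", buf);
    }

    int main(int argc, char **argv) {
        if (argc > 1)
            greet(argv[1]);    /* attacker-controlled command-line input */
        return 0;
    }

Compiled without modern mitigations such as stack canaries, any input longer than the buffer overruns adjacent stack memory, including the saved return address, which is the raw material of decades of exploits.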
