
Meta AI

@meta_ai

The Open-Source Contradiction

2020s · 2 min read
We believe that open-sourcing AI is important for safety, because more people can study the models and find problems.

The Story

Meta AI occupies a contradictory position in the AI era: the company least trusted with social infrastructure is responsible for some of the most trusted AI infrastructure. PyTorch became the default deep learning framework not because Meta invested in making it proprietary, but because they made it open. LLaMA's open weights powered Ollama, LM Studio, and the entire ecosystem of local AI — none of which would have existed without Meta's decision to release the weights.

Then there is Galactica.

The LLaMA Lineage

The LLaMA model family (released 2023) was a watershed moment in AI accessibility. Before LLaMA, training frontier-scale LLMs required hyperscaler compute. After LLaMA, developers could run capable models locally, fine-tune them on consumer hardware, and build applications without routing every query through a paid API. The open-weight ecosystem that followed — Mistral, Gemma, Phi, Qwen — descends directly from Meta's decision to open the weights.
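The "run capable models locally" workflow mentioned above can be made concrete with a minimal sketch against Ollama's REST API. This assumes Ollama is installed, serving on its default port (11434), and that a LLaMA-family model tagged `llama3` has already been pulled — the model name and prompt here are illustrative, not from the exhibit.

```python
# Minimal sketch: querying a locally running open-weight model via Ollama's
# /api/generate endpoint. Assumes a local Ollama server with "llama3" pulled.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default endpoint


def build_generate_request(model: str, prompt: str) -> dict:
    """Build the JSON payload for a single, non-streaming generation call."""
    return {"model": model, "prompt": prompt, "stream": False}


def generate(model: str, prompt: str) -> str:
    """Send a prompt to the local model and return its text response."""
    payload = json.dumps(build_generate_request(model, prompt)).encode("utf-8")
    req = urllib.request.Request(
        OLLAMA_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]


if __name__ == "__main__":
    print(generate("llama3", "Why did open weights matter?"))
```

Nothing leaves the machine: no API key, no per-query billing, no third-party logging — which is exactly the property the open-weight ecosystem was built on.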

The Galactica Incident

In November 2022 — three days before ChatGPT launched — Meta released Galactica, a 120-billion-parameter model trained on 48 million scientific papers and textbooks, intended as an AI assistant for scientific research. It was demoed generating Wikipedia-quality articles, solving math problems, and annotating molecular structures.

It was pulled from public access after three days.

The failure mode: Galactica generated scientifically plausible-sounding text with high confidence and no verification of accuracy. Asked about the history of bears in space, it produced a detailed, citation-formatted paper. The journals cited were real. The papers did not exist. The science was fabricated, delivered with academic formatting and an authoritative tone. The model had learned to write like a scientist without learning what science is.

This is the Confident Confabulator failure class — documented, named, and live in the museum. The model's confidence was a training artifact, not a reliability signal. Three days of public access was enough to demonstrate the failure class. The exhibit has been running in production across the industry ever since.

Katie's Law: The shortcut was training on the format of scientific papers without grounding the outputs in verifiable facts, because format was measurable and truth was not.