10MB JSON catalog parsed with sscanf on every launch, followed by O(n²) deduplication: roughly 4 billion comparisons per startup
Rockstar patched within weeks of public disclosure. Paid t0st $10,000 bug bounty. Loading times reduced by ~70%.
## The Incident
GTA Online, released in 2013, had notoriously slow loading times on PC, averaging around six minutes per launch. Players accepted it as the cost of a massive open world, and because the game had been slow since launch, nobody at Rockstar treated it as a bug worth fixing.
## The Discovery
In February 2021, a developer going by t0st decided to reverse-engineer the GTA Online launcher to find out why it was so slow. Using a CPU profiler, they identified two compounding flaws:
### Flaw 1: Full JSON Parse on Every Launch
On every startup, the game parsed a ~10MB JSON file (net_shopping_catalog, the entire in-game store catalog) using sscanf. Each sscanf call re-scanned the remaining buffer with strlen, so the parse itself was quadratic in the size of the file, and the file grew with every content update.
### Flaw 2: O(n²) Deduplication
After parsing, the game ran a uniqueness check on every item — comparing each entry against every other entry. With ~63,000 items, this meant ~63,000 × 63,000 = ~4 billion comparisons. Every single launch.
## The Fix
A cache for the parsed data and a hash set for deduplication. A few lines of code. The kind of fix a junior developer could implement in an afternoon.
## The Aftermath
Rockstar acknowledged the bug, patched it within weeks, and paid t0st $10,000 through their bug bounty program. Loading times dropped by approximately 70%.
## Why It Matters
This wasn't a security vulnerability. It was The Greedy Initializer, the same pattern seen in 1990s desktop CRM applications, surviving in a 2013 AAA game engine: load everything at startup, process it every time, never cache the result. The data grew; the pattern didn't.
## The Pattern
This incident is a direct instance of [The Greedy Initializer](/exhibits/the-greedy-initializer) (EXP-002): a system that loads all possible data at startup and reprocesses it on every launch, regardless of whether it has changed. The fix — a parse cache and a hash-based deduplication set — is documented there, along with its full lineage from 1990s enterprise software to modern applications.
The O(n²) deduplication is also a textbook instance of Complexity Accretion: no single decision was obviously wrong. The catalog grew. The algorithm stayed. The cost compounded silently for seven years.