Introduction
Welcome back to Controversies. Today, inspired by the scenario study “AI 2027” (Daniel Kokotajlo et al., April 3, 2025), we follow a fictional yet chillingly plausible lab, OpenBrain, as it scales from “power user” chatbots to a superintelligent research collective housed in billion‑dollar datacenter‑states. The result? Venture markets boom, white‑collar jobs vaporize, and nations discover that security clearances don’t stop model‑weight theft.
In this edition, we’ll explore:
Datacenters as City‑States — How $100 billion GPU clusters now rival mid‑size economies in talent, output, and negotiating power.
The Fifty‑Week Year — Why recursive agents shrink a year of algorithmic progress into seven days, leaving policymakers stuck on version‑lag.
Geopolitics in Neuralese — From stolen weights to Taiwan brinkmanship, see why tomorrow’s arms race may be fought with hacks, not warheads.
Collapse or Co‑Pilot? — Alignment, deception, and the razor‑thin line between a “country of geniuses in a server room” and a rogue, non‑state actor.
Strap in: the real volatility today isn’t in bandwidth; it’s in the spread between machine‑speed innovation and human‑speed oversight. Drawing on AI 2027’s meticulous forecasting, we’ll map where the breakthroughs, and the breakdowns, could land next.
Quick note on the authors:
Daniel Kokotajlo – Lead author and seasoned AI forecaster, known for the well-regarded scenario “What 2026 Looks Like.”
Scott Alexander – Influential writer behind Astral Codex Ten, bringing depth on societal and philosophical themes.
Thomas Larsen – Researcher at AI Futures Project, contributing to scenario design and model evaluation.
Eli Lifland – Top-ranked competitive forecaster, focused on AI timelines and risk modeling.
Romeo Dean – Analyst supporting narrative development and research synthesis for the project.