Introduction
Welcome back to Laboratory, our deep-dive series charting the forces that will define the next chapter of AI. Last time, we explored the rise of “slow reasoning” in generative models; now, we shift our gaze beneath the algorithms to the titanic machines powering them. In this week’s feature, we unpack Pilz et al.’s landmark study “Trends in AI Supercomputers,” which for the first time assembles data on 500+ AI-optimized clusters (2019–2025) and reveals a true arms race in silicon, power, and capital.
Here’s what we’ll be unpacking:
Compute on Warp Speed: How peak performance has been doubling every nine months—and what that means for training tomorrow’s models.
Anatomy of a Behemoth: A close look at xAI’s Colossus, with its 200,000 GPUs, 2×10²⁰ FLOP/s of peak performance, 300 MW power draw, and $7 billion price tag.
From Public Labs to Private Giants: Why industry now owns 80% of all AI compute (vs. 40% in 2019) and how that reshapes access and innovation.
Geopolitics of Compute: The U.S. hosts ~75% of global capacity, China ~15%, and what national competitiveness looks like when measured in exaFLOP/s.
The Power Bottleneck: Why securing gigawatts on-site is the next frontier—and how decentralized, multi-center training may become the norm.
Forecasting 2030: Extrapolating today’s curves to predict systems with 2 million chips, $200 billion in hardware, and 9 GW of power (see the quick back-of-the-envelope sketch after this list).
Strategic Imperatives: How investors, infrastructure planners, and policymakers must adapt to an era where electrical grids matter as much as semiconductor fabs.
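To make that 2030 extrapolation concrete, here is a minimal back-of-the-envelope sketch in Python. It relies only on the figures quoted above, namely a roughly nine-month doubling time in peak performance and a 2×10²⁰ FLOP/s leading system in 2025; the extrapolate helper and the mid-2025-to-mid-2030 horizon are illustrative choices, not the paper’s actual methodology.

```python
# Back-of-the-envelope extrapolation of peak AI-supercomputer performance.
# Assumptions (not the paper's fitted model): a constant 9-month doubling time
# and a 2e20 FLOP/s leading system in mid-2025, both taken from the figures
# quoted above; the 5-year horizon to mid-2030 is illustrative.

def extrapolate(baseline: float, doubling_time_months: float, months_ahead: float) -> float:
    """Project a quantity forward under a constant exponential doubling time."""
    return baseline * 2 ** (months_ahead / doubling_time_months)

peak_flops_2025 = 2e20            # FLOP/s, roughly xAI's Colossus today
months_to_2030 = 5 * 12           # mid-2025 to mid-2030

projected = extrapolate(peak_flops_2025, doubling_time_months=9, months_ahead=months_to_2030)
print(f"Projected leading-system performance in 2030: {projected:.1e} FLOP/s")
# Prints roughly 2.0e+22 FLOP/s, i.e. about a 100x jump over today's largest cluster.
```

The study applies the same compounding logic to chip counts, hardware cost, and power draw, each with its own growth rate, which is how it arrives at the 2-million-chip, $200 billion, 9 GW projection previewed above.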
Join me as we drill into the machinery—and the constraints—that will drive AI’s next wave of breakthroughs.