Introduction
Welcome back to Controversies, the series where we crank up the heat on the thorniest questions shaping the future of AI. This week, we’re taking on a subject that has philosophers, computer scientists, and casual tech enthusiasts all buzzing: do large language models truly think, or are they merely mimicking human cognition?
In the spotlight is Christopher Summerfield’s new book, These Strange New Minds, which probes the cognitive abilities of AI systems that many see as mere “stochastic parrots.” Are advanced language models destined to hit an insurmountable wall of artificial mimicry? Or do their emerging capabilities hint at the earliest glimpses of genuine thought? Strap in and prepare for a deep dive into the debate that pits old-school skeptics against those who believe machines might be more than the sum of their code.
In this post, we’ll investigate:
The Performance vs. Competence Conundrum: Why a handful of comical AI errors may reflect surface-level performance slips rather than the limits of an LLM’s underlying competence.
Linguistic vs. Perceptual Learning: How AI’s “high road of linguistic data” challenges the entrenched notion that genuine understanding demands real-world sensory grounding.
Ethical and Safety Minefields: From filter bubbles to misaligned incentives, where should we draw the line on AI regulation before power tilts away from everyday users and toward corporate or governmental hands?
If you’re looking to navigate these questions with your eyes wide open—to appreciate AI’s dazzling breakthroughs without glossing over the lurking dangers—subscribe now. Join a community that’s ready to think critically about whether machines are on the brink of true cognition… or simply fooling us one “insightful” sentence at a time.