Introduction
Welcome back to Laboratory. This week we unpack Nature’s feature series “The future of AI” (14 November 2025), which brings together six voices at the sharp end of artificial intelligence: Mustafa Suleyman at Microsoft AI, Pushmeet Kohli at Google DeepMind, Timnit Gebru at DAIR, Jared Kaplan at Anthropic, Anima Anandkumar at Caltech, and Amandeep Gill at the United Nations. Instead of a single narrative about progress, the series offers a prism: six different vantage points on what AI is doing to science, work, security, and power.
In this briefing, we chart:
Six visions: how each figure understands the most important opportunities and risks for AI.
Three tension lines: acceleration vs deliberation, centralization vs pluralism, productivity vs justice.
The action agenda: what policymakers, researchers, companies, and citizens should actually do next.
Executive Summary
Artificial intelligence is no longer a speculative technology. Trillions of dollars in capital and infrastructure are flowing into models that already touch hundreds of millions of people. Yet the question that quietly sits behind every product launch and policy memo is deceptively simple: what is all of this actually for?
Nature recently brought together six influential voices who sit at very different points in the AI ecosystem: Mustafa Suleyman at Microsoft AI, Pushmeet Kohli at Google DeepMind, Timnit Gebru at DAIR, Jared Kaplan at Anthropic, Anima Anandkumar at Caltech, and Amandeep Gill at the United Nations. Read together, their perspectives reveal less a single narrative and more a set of tension lines that will shape the next decade: ambition versus restraint, openness versus consolidation, productivity versus precarity, innovation versus inequality.
The emerging picture is one of AI as a general purpose capability that will be deeply embedded in scientific discovery, knowledge work, public services, and security architectures. But this future is not prewritten. Decisions about who controls the infrastructure, how risks are governed, which communities are listened to, and what is shared openly will heavily influence whether AI becomes a tool for genuine global flourishing or simply amplifies existing power imbalances.
1. A crossroads for AI
The six interviewees agree on one core point: AI is no longer confined to the lab. It is moving into a phase where underlying models, data centers, and integration into workflows form a new layer of digital infrastructure.
At this crossroads, three questions dominate:
Where will AI create the most value first: in productivity tools, in scientific discovery, or in security and defense systems?
Who will shape its trajectory: a handful of frontier labs and hyperscalers, or a broader coalition of academics, regulators, and civil society?
How much risk is acceptable in the race for performance and adoption?
The six perspectives in the Nature series can be read as different, sometimes conflicting, answers to these questions.
2. Mustafa Suleyman: Copilots, platforms, and concentrated power
As chief executive of Microsoft AI, Mustafa Suleyman embodies the new industrial phase of AI: large scale, deeply integrated products such as Copilot that sit across the operating system, productivity suite, and cloud.
His view highlights three structural shifts:
From tools to copilots: AI moves from narrow utilities to persistent assistants that sit alongside users across applications, observing context and orchestrating actions. This is less about isolated chatbots and more about an ambient layer of intelligence.
From research labs to platform companies: the decisive leverage is no longer just in model design but in controlling the distribution channels, default settings, and enterprise integrations where AI is actually used.
From experimental ethics to institutional governance: questions about consciousness, autonomy, and misuse are no longer academic. They feed directly into product policies, kill switches, and internal review mechanisms.
Suleyman’s optimism about the potential of AI to augment human capability coexists with concern about misuse and misalignment. Yet his answers make clear that, in practice, platform incentives and market share are inseparable from debates about safety and responsibility.
3. Pushmeet Kohli: AI as a new instrument for science
Pushmeet Kohli, leading AI for science at Google DeepMind, represents a different frontier: AI as a scientific instrument. The success of AlphaFold in protein structure prediction is a preview of what happens when machine learning is tuned not just for text or images, but for physical and biological systems.
Several themes stand out:
Acceleration of discovery: models that can propose structures, hypotheses, or simulations compress years of trial and error into hours. For many scientists, AI is becoming as fundamental as the microscope or the particle accelerator.
Bottlenecks beyond algorithms: the obstacles are often data quality, domain expertise, and incentive structures in academia, not just model size. Without high quality, well curated scientific datasets and robust validation pipelines, AI risks producing elegant but misleading outputs.
Trust and verification: AI derived scientific insights must be embedded in workflows that allow replication, cross checking, and uncertainty quantification. Otherwise the risk is that scientists outsource too much judgment to opaque models.
Kohli’s perspective underscores a crucial point: the most transformative AI use cases may unfold in domains that are invisible to everyday users, reshaping drug discovery, materials science, and climate modeling long before the public sees a dramatic consumer app.
4. Timnit Gebru: Power, justice, and who gets a say
Where Suleyman and Kohli focus on capability and opportunity, Timnit Gebru starts from power and inequality. As head of the Distributed AI Research Institute (DAIR), she foregrounds the people who are usually on the receiving end of these technologies rather than at the design table.
Key fault lines she emphasizes include:
Bias and structural inequality: large models trained on historical data tend to reproduce and sometimes amplify patterns of discrimination. Without deliberate countermeasures, AI can deepen gaps in policing, hiring, credit scoring, and surveillance.
The marginalization of ethics: after a brief period when technology firms showcased AI ethics teams, many of those teams have been downsized or sidelined as commercial pressures grew. Ethical concerns are often treated as PR risks rather than central design constraints.
Questioning AGI as a goal: Gebru is skeptical of the current fixation on artificial general intelligence as the field's north star. She argues that this framing can obscure more immediate issues: workers displaced or degraded by automation, communities surveilled by cheap AI vision systems, and extractive data practices that disproportionately target the Global South.
From this angle, the “future of AI” is less about speculative superintelligence and more about who gets harmed or empowered next year, in specific contexts like migration control, social services, and labor platforms.
5. Jared Kaplan: Frontier models, labour markets, and safety
As co-founder and chief science officer at Anthropic, Jared Kaplan stands at the center of the frontier model race. His perspective highlights the dual nature of systems like Claude: they are both powerful general purpose tools and potential sources of systemic risk.
On the opportunity side, Kaplan expects:
Significant productivity gains across knowledge work, from drafting and coding to analysis and customer service.
New kinds of composite workflows, where multiple AI agents coordinate to handle complex tasks that currently require teams.
On the risk side, he focuses on:
Labour displacement and deskilling: some jobs will be augmented, others will be automated or restructured into lower quality, higher surveillance roles. The distributional effects are unlikely to be neutral.
Safety and misuse: as models gain capabilities, they also lower the barrier to creating malware, generating persuasive content at scale, and accessing dangerous dual use scientific knowledge. Anthropic positions itself as a company that takes alignment and evaluations seriously, but Kaplan is clear that company level measures are not enough.
He is relatively supportive of stronger regulation, particularly in areas like safety standards, red teaming, and incident reporting, but also wary of frameworks that freeze market structure in favor of incumbents. This mirrors a broader industry tension: how to regulate without entrenching.
6. Anima Anandkumar: Open research, academia, and the next generation
Anima Anandkumar, based at Caltech with a track record at Nvidia and Amazon, occupies a bridge position between industry and academia. Her focus is on ensuring that AI remains a scientific field, not just a product pipeline.
She stresses several levers:
The role of universities: academic labs can explore ideas that are too early, too speculative, or not immediately profitable. They also train the next generation of researchers who need exposure to open problems, not just proprietary stacks.
Open sharing and reproducibility: while complete openness may not be feasible for every model, Anandkumar argues that methods, benchmarks, and key datasets should remain as transparent as possible. Without this, the field risks fragmenting into closed silos.
Preparing young people for fluid careers: in rapidly shifting labour markets, the durable skills are mathematical thinking, computational literacy, and the ability to collaborate with AI tools rather than compete against them.
Her view points toward a hybrid ecosystem in which public institutions, open source communities, and private labs each have a distinct, complementary role, rather than academia becoming a mere talent funnel for a few firms.
7. Amandeep Gill: Global rules for a global technology
As the UN’s special envoy for digital and emerging technologies, Amandeep Gill approaches AI not as a product or research topic, but as an issue of international security and governance. His background in non-proliferation shapes his thinking.
Several parallels and contrasts stand out:
Nuclear vs AI: both involve powerful capabilities with cross border externalities, but AI is far more diffused and entangled with commercial incentives. That makes traditional arms control models necessary but insufficient.
Autonomous weapons and escalation risks: the prospect of AI enabled surveillance and targeting systems raises the specter of faster, less accountable escalation dynamics. Even if full autonomy is avoided, partial automation can still shift incentives in dangerous ways.
Digital inequality: without deliberate effort, low income countries risk becoming dependent on foreign AI infrastructure, locked into consumption rather than co creation. Access to compute, data, and research collaborations will be critical to avoid a new kind of digital dependency.
Uniform regulation vs regulatory competition: Gill advocates for more harmonized global norms around safety, transparency, and human rights, while acknowledging that countries will experiment with different domestic regimes.
In this framing, AI is part of a broader struggle to update multilateral institutions for a world where intangible, rapidly evolving technologies shape everything from trade to warfare.
8. Three tension lines that will shape the next decade
Reading these six perspectives together, three major tension lines emerge. They are not binary choices, but axes along which policy and strategy will move.
Acceleration vs deliberation
Companies like Microsoft and Anthropic are under pressure to ship products, grow user bases, and monetize infrastructure investments.
Researchers like Gebru and diplomats like Gill push for slower, more deliberate deployment in high stakes domains, with stronger guardrails and more community input.
Centralization vs pluralism
Frontier models and cloud platforms are inherently capital intensive, which pushes toward concentration of power in a handful of firms and governments.
Open research, academic participation, and capacity building for low income countries are needed to keep the ecosystem diverse and contestable.
Productivity vs justice
Many visions foreground gains in efficiency, creativity, and scientific discovery.
Others highlight that without explicit redistribution, worker protections, and legal safeguards, those gains can translate into higher profits for a few and greater precarity for many.
How these tensions are negotiated will do more to determine the “future of AI” than any individual product launch.
9. Conclusion
If there is a common thread across these six voices, it is that passivity is not an option. AI is not a neutral wave that society must simply surf. It is a collection of design decisions, business models, research agendas, and legal frameworks that can be nudged in better or worse directions.
Several practical implications follow:
For policymakers: move from abstract principles to concrete standards on testing, reporting, and deployment in sensitive sectors, while preserving space for competition and open research.
For researchers and educators: insist that ethics, safety, and distributional impacts are treated as first class research topics, not afterthoughts. Fight for institutional support for interdisciplinary work that cuts across computer science, social science, and law.
For companies: build internal governance that has real teeth, invest in long term safety and interpretability research, and be transparent about capabilities and limitations instead of overhyping.
For citizens and workers: treat AI literacy as a civic skill. Understanding what these systems can and cannot do is key to navigating workplaces, services, and politics that will increasingly be shaped by them.
The Nature series does not offer a single answer to what AI is “for”. Instead it maps the contours of an argument that will run through the next decade: whether this technology ends up reinforcing old hierarchies, or whether it becomes part of a serious project to expand human capabilities and reduce global inequalities. That choice will not be made by algorithms. It will be made by us.
For the full details: Future of AI



