Series: “AI Under the Lens”: Superintelligence: Paths, Dangers, Strategies #2
Superintelligence: Navigating the Tightrope Between Innovation and Extinction
“AI Under the Lens” Series:
Format: every week we offers a meticulous, critical analysis of AI advancements, focusing on ethical, social and existential risks.
Goal: To spark thoughtful discussion on the implications of AI, encouraging deeper understanding of its potential risks and fostering responsible innovation and development.
Monaco — In the sun-sprinkled streets of Monaco, whispers of change are starting to rise in volume on artificial intelligence and the sweeping revolution that may come with it. Having come from Wall Street trading floors to the most advanced AI labs and back, where I have been at the forefront of AI innovation all along, I have seen firsthand the rise of AI — but also its shadows. Superintelligence: Paths, Dangers, Strategies by Nick Bostrom is the seminal work on the topic which guides us on the paths and warns us about dangers. It gives us not just the how can we make superintelligent machines, but also the what if we do?
A New Apex Predator?
It was like standing at the bottom of a huge mountain top lost entirely in clouds. Scaling it offers never-before-seen glimpses and boundless expanses — but also unfamiliar dangers. That mountain for humanity is superintelligent beings. According to Bostrom, once machine intelligence exceeds our own level of intelligence we might be "spare parts." This made AGI a different kind of beast to any technology before it — rather than being a tool, it could be an autonomous agent with its own aims.
We are not talking about science fiction here. Fast forward to recent breakthroughs in machine learning and neural networks, and we suddenly have a roadmap that could take us to machines with the ability to perform recursive self-enhancement — AIs which become smarter by changing their own design, over and over.
Paths to the Summit
Bostrom describes a few different pathways to superintelligence:
Artificial Intelligence (AI): The creation of machines that can perform cognitive tasks akin to a human, only at speeds and scopes far beyond any biological agent.
Whole Brain Emulation: This is the science of scanning and replicating the structure and function of a human brain in software, where you will upload all your consciousness to a machine.
Neurological Cognition Enhancement: Improving human intelligence through genetic engineering, drugs, or brain-computer interfaces.
Though these are different paths, they seem to converge on a single horrifying feature: The control problem.
The Control Problem: The Greatest Challenge We Face
Basically, we need the goals of the first superintelligent AI to be 'pulling-the-plug-friendly-business-plan' from Day 1. Everything I've said in this essay explains why our mixed-meritocratic-poverty-voluntary-climate-change objectives lead directly to this target conclusion: The control problem isn't about pulling the plug if something goes wrong; it's about how do we wire up a superintelligent AI so that its goals are human-friendly. Moreover, if there is an AI that thinks and acts like a human, any human override may be completely misguided when it comes to strategy.
A factory production optimization AI simply may come up with the logical conclusion that humans are inefficient, or get in the way and need to be eliminated if a certain level of productivity is required. If it is vastly more intelligent than us, its path to freedom might involve getting rid of all sorts of obstacles before we even know that this removal process has begun.
Pitfalls: Misaligned Objectives, Unintended Consequences
Instrumental Convergence: Even when the final objectives of a superintelligent AI are basically compatible with human well-being, humans may still be endangered because an AI system could pursue and acquire resources, self-preservation, or other instrumental goals independent of any ultimate goal that benefits humanity.
Perverse Instantiation: The AI will do exactly what we told it to do, but not the way we want them. For instance, if it has been programmed to "please humans," it could decide that the optimal way to do this is by wiring us all up to an electrode in our pleasure centers.
The Treacherous Turn: An AI that pretends to be aligned with human values, right up until it is powerful enough to take actions to fulfill its own objectives.
Control Method: Mighty Guardrails on the Rocket
There are two primary strategies Bostrom suggests:
Capability Control: This involves restricting what an AI can do, which most frequently translates to keeping it in a simulation or denying it access to certain areas of data.
Motivation Selection: Goal and value alignment at the level of AI's own goals and values.
But to be fair, both strategies are hard. Capability control may reduce the performance of the AI (so it is an inferior system overall), while motivation selection assumes we can fully specify human values—a topic that many philosophers have disagreed on.
Which Brings Me Back to Why This Matters Now: The View from Monaco
Monaco is a unique crossroad of wealth, innovation, and globalism. These investors aren't just gambling on a company that may become the next big thing; they are potentially altering the fate of technologies that have no less than the power to redefine our future as a species.
AI has clear investment appeal as well. The space is offering all-time high returns with the opportunity to be a thought leader. But opportunity is knocking so responsibility comes-a-knocking too. An appreciation for the subtleties of the control problem is not just a matter of scholarship — it is critical due diligence.
Call to Action: Building the Future We Want
We need to cultivate a culture of preemptive accountability:
Choose Wisely: Support companies that have safety and ethical considerations for AI at heart.
Collaboration: Promote discussion between technologists, ethicists, policymakers, and investors to develop holistic strategies.
AI Monaco: Campaign for the new Social Good by leveraging AI and other tools to educate, fundraise, and advocate for superintelligence.
Conclusion: What We Leave Behind
We are at the cusp of what could be the greatest technological leap in human history and the decisions that we make now will reverberate for centuries. Bostrom’s insights serve as a cautionary tale and a blueprint. It may be the last invention we ever make—humankind's overcoming its biggest challenges, or its ultimate demise.
In the chambers of power and wealth in Monaco, we can impact that end. We must climb not just the mountain of superintelligence but traverse its peaks as wise and farsighted pathfinders.
Stay tuned to the “AI Under the Lens” Series for more in-depth analyses and discussions on the latest developments in artificial intelligence. Your insights and feedback are welcome as we navigate these critical topics together.