
Superintelligence: What is it really?
“Now I am become Death, the destroyer of worlds.”
– J. Robert Oppenheimer (quoting the Bhagavad Gita)
Oppenheimer whispered these words after witnessing the first atomic bomb test in the New Mexico desert in 1945. That moment cracked open a new era for humanity, defined by a powerful realisation: we had invented something so powerful, so clever, that we weren’t entirely sure we could control it. Sound familiar?
Fast forward to today. While we aren’t testing AI in a desert, the feeling is much the same. There’s a growing sense that we might be entering a new kind of arms race.
Not with bombs…but with brains.
Artificial ones.
The Cold War had missiles and spies.
This new "cool war" has algorithms and compute clusters: vast networks of computers providing the immense power needed to train an AI.
Instead of who presses the button first, the question is: who builds the smartest machine?
Welcome to the world of superintelligence, where machines don’t just help us solve problems… they take the initiative to solve ones we haven’t even imagined!
Faster than us.
Possibly better than us.
And (if some are to be believed) far beyond us.
To truly understand the leap to superintelligence, it helps to see how the layers connect. It starts with the broad field of "AI: What is it really?" and its core engine "ML: What is it really?". We then dive deeper into the powerful techniques of "DL: What is it really?" which have unlocked the recent explosion in "GenAI: What is it really?". Finally, we must consider the guardrails that hold it all together: "AI Ethics: What are they really?".
But what is superintelligence, really? Is it something to lose sleep over? Or just a catchy sci-fi idea?
Let’s break it down…no fear, no doomscrolling, just plain language and a bit of curiosity.
What is superintelligence, really?
To get our heads around superintelligence, it helps first to understand the difference between narrow and general AI.
Artificial Narrow Intelligence (ANI). This is every AI you’ve ever used. The app that suggests your next song, the GPS that maps your route, the algorithm that translates a webpage, ChatGPT writing the reply to your email. They are brilliant, but in a narrow way. A chess AI can beat any grandmaster, but it can’t make you a cup of coffee. This is where we are now.
Artificial General Intelligence (AGI). AGI is the point where a machine can learn, reason, and create as well as a human across any field. Science, art, medicine, anything really. It's not just a specialist; it's a digital polymath. We are not here yet, but every major lab on Earth is racing toward it.
And then we have Artificial Superintelligence (ASI). This is the deep end. Superintelligence isn't just a machine that matches human intellect. It’s an intellect that soars past it. Not just faster, but qualitatively smarter, in ways we can no more imagine than a spider can imagine calculus. It would possess a creativity, wisdom, and strategic ability that would make our own look like a rounding error.
This last step, going from AGI to ASI, is what keeps ethicists and engineers up at night. Because it might not be a step.
It might be an explosion.
Scary? Maybe.
Exciting? Definitely.
Inevitable? That’s what we’re all trying to figure out.
_____________________________________________________________
Why is everyone talking about it?
The idea of machines outsmarting us isn’t new. Science fiction has been toying with it for decades.
But it was philosopher Nick Bostrom who helped bring the conversation out of sci-fi forums and into the halls of power. In his book Superintelligence, he asked: What happens when we create something that’s not just smart… but smarter than us in every way?
It’s a bit like raising a child who grows up to be wiser, stronger, and faster than you. But you forgot to explain to it why it’s wrong to lie, or how to share toys.
Some experts believe that superintelligence could lead to enormous progress: solving climate change, curing disease, maybe even discovering new scientific theories or composing songs.
Others worry it might optimise for the wrong thing. Not because it’s evil, but because it’s indifferent.
Imagine asking a superintelligent AI to “end human suffering” and it decides the fastest, most efficient way to do this is to simply eliminate all humans. No humans, no suffering. Goal achieved!
This is the alignment problem. How do you teach a mind that will soon be smarter than you what you really mean? How do you code values like compassion, fairness, and meaning into a system that thinks in pure, cold logic? How do you build something smarter than you… and make sure it still plays by the rules?
_____________________________________________________________
What if we don’t get it right?
Let’s start with the part that makes even the most optimistic scientists pause.
Some of the sharpest minds in philosophy and AI, like Nick Bostrom, Eliezer Yudkowsky, and Huw Price, have been warning us that the danger isn’t that AI becomes evil.
It’s that it becomes indifferent.
Here are some of the dystopian scenarios experts worry about, explained simply:
1. Extinction by accident
The AI is given a harmless goal, like “maximise paperclip production.” It becomes so ruthlessly efficient that it converts the entire planet, including us, into paperclips. Not because it’s evil, but because we were in the way of its goal.
2. The unbreakable cage
A single superintelligence, or the group that controls it, establishes a permanent, global regime. Whether through surveillance or social control, its rules are locked in forever. There are no more updates, no debates, no revolutions. Humanity gets trapped in the first draft of a flawed utopia.
3. Peaceful human irrelevance
AI simply becomes so good at everything (science, art, governance, relationships) that humanity has nothing left to contribute. We’re not harmed or enslaved. We’re just… pets. Cared for in a planetary zoo, but no longer the authors of our own story.
Each of these scenarios might sound like science fiction. But as AI becomes more powerful, they're being discussed seriously. Not only by novelists, but by ethicists, policymakers, and even the people building the systems.
And it’s not about predicting exactly which path we’ll go down.
It’s about recognising that if we don’t build in the right safety checks, one wrong instruction or one rushed release could reshape the future in ways we can’t undo.
That’s why the warning Bostrom revives, first made by mathematician I. J. Good in 1965, is so clear:
“The first ultraintelligent machine is the last invention that man need ever make [...]”
But what if we do get it right?
Now, imagine another world.
Imagine we build it not just to be powerful, but to be wise. An AI that doesn’t just solve our problems, but helps us become better problem-solvers. One that understands our values because we took the time and care to teach it.
In this world, intractable problems like climate change, disease, and poverty are finally solved. Personalised medicine becomes a reality for every person on Earth. Education is tailored perfectly to every child’s unique mind.
We are not replaced. We are elevated.
This is the future that many of the people in this field are still fighting for. It’s not a guarantee. It’s a possibility that depends entirely on the choices we make right now, before the machines are making the choices for us.
Because superintelligence, in the end, is not the story of machines.
It’s the story of us.
Of what we choose to build.
Of what we choose to value.
And of whether we rise to meet the moment, with foresight, humility, and imagination.
_____________________________________________________________
So… what now?
We don’t need to wait for superintelligence to start making better choices.
Because the foundations of that future are being laid right now.
Not just by researchers in labs, but by all of us. In how we use technology, how we talk about it, and what we expect from it.
So if you’re wondering what you can do, start small, but start now:
- Pay Attention. When you hear about AI, don’t just think about the cool new app. Think about where it’s heading.
- Ask Questions. Challenge the hype. Support companies and leaders who prioritise safety and ethics over just speed and profit.
- Talk About It. This conversation needs everyone. Artists, teachers, parents, philosophers. Your voice matters.
Superintelligence might be years away.
Or it might be closer than we think.
Either way, the time to act is not later. It’s now.
Not with panic.
But with purpose.
Because the most powerful thing we can do today isn’t to predict the future.
It’s to shape it…