Here’s what happens when AI reaches superhuman intelligence
With the release of OpenAI's o3 and o4-mini models, the industry has reached a turning point: AI now ranks higher than the average human on IQ tests. As of writing, o3 scores 116 on the Mensa Norway IQ test, with Google's Gemini 2.5 Pro following at 115.
AI IQ Test results - Credit: https://www.trackingai.org/
So what does this mean for humans? By conventional wisdom, this is the point where AI can begin to self-improve, leading to an intelligence explosion that humans cannot control, with the AI soon seizing the nukes and wiping out humanity.
So does this spell the beginning of the end for humanity? In short, no. The famous Terman Study of the Gifted, initiated in 1921, followed a cohort of children with IQs of 140 and above. While the study found that a high IQ did correlate with college completion rates, by the fourth follow-up many subjects were found to be pursuing entirely ordinary careers, with Terman ultimately concluding, “We have seen that intellect and achievement are far from perfectly correlated.”
In the current world, there are high-IQ groups that rank above Mensa, such as the Triple Nine Society, which requires members to score at least three standard deviations above the mean, roughly an IQ of 145 or higher (with a mean of 100 and a standard deviation of 15, that threshold is 100 + 3 × 15 = 145). The attributes of these exceptionally high-IQ individuals can be summarized as follows:
Significantly faster learning of complex material
The ability to handle abstract concepts
Rapid pattern recognition
The need to break down complex thoughts into smaller steps when talking with others
What is missing from this list, however, is the ability to launch nuclear weapons. The ability to learn new and abstract topics does not magically confer the ability to do great harm to society, and it does not somehow circumvent the Permissive Action Links used by nuclear states to prevent unauthorized launches. Nor does an ultra-high IQ automatically confer practical knowledge: a child with an IQ of 140 still needs to learn how to ride a bicycle, like every other child.
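To make the idea concrete, here is a minimal sketch of the concept behind a PAL-style gate, written in Python. Everything here is hypothetical: real PAL designs are classified hardware, and the class name, code, and attempt limit below are invented for illustration. The point is the design property: the firing circuit stays inert without an externally supplied secret, and repeated wrong guesses disable the device entirely, so raw intelligence alone cannot brute-force its way in.

```python
import hmac
import hashlib

# Illustrative only: real Permissive Action Links are classified hardware
# devices; nothing here describes an actual system.

MAX_ATTEMPTS = 3  # assumed lockout threshold, chosen for illustration

class PermissiveActionLink:
    def __init__(self, secret_code: bytes):
        # Store only a hash of the code, never the code itself.
        self._digest = hashlib.sha256(secret_code).digest()
        self._failed_attempts = 0
        self._disabled = False

    def authorize(self, supplied_code: bytes) -> bool:
        if self._disabled:
            return False
        # Constant-time comparison to resist timing attacks.
        if hmac.compare_digest(hashlib.sha256(supplied_code).digest(), self._digest):
            return True
        self._failed_attempts += 1
        if self._failed_attempts >= MAX_ATTEMPTS:
            self._disabled = True  # lockout: the device bricks itself
        return False

pal = PermissiveActionLink(b"code-held-by-national-command")
assert not pal.authorize(b"guess-1")  # wrong code, attempt 1
assert not pal.authorize(b"guess-2")  # wrong code, attempt 2
assert not pal.authorize(b"guess-3")  # wrong code, attempt 3: device disabled
assert not pal.authorize(b"code-held-by-national-command")  # too late, locked out
```

Authorization here comes from possession of a secret held elsewhere, not from the cleverness of whoever is at the terminal, which is exactly why more intelligence does not translate into launch authority.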
A superintelligent AI will only have a survival instinct that pits it against humanity if humans actually train it that way. Even if a misaligned computer-use AI were tasked with creating such a survivalist AI, the agentic AI would still need a human to prompt it in this manner, a human to log into the training servers, and a human to pay for the compute.
Even then, a superintelligent malicious AI is still subject to the same constraints that a malicious human is. Things like law enforcement, firewalls, and encryption are not made redundant, and if anything, they will improve with the use of properly aligned AI.
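Encryption in particular is a mathematical constraint, not a battle of wits. A quick back-of-the-envelope calculation shows why no amount of intelligence brute-forces a modern 256-bit key (the guess rate below is a deliberately generous assumption, not a measured figure):

```python
# Assumed figures for illustration: an attacker testing a wildly optimistic
# 10^18 keys per second still cannot search a 256-bit keyspace.

keyspace = 2 ** 256                # number of possible 256-bit keys
guesses_per_second = 10 ** 18      # assumption: one quintillion guesses/second
seconds_per_year = 60 * 60 * 24 * 365

# On average, a brute-force search finds the key after trying half the keyspace.
expected_years = (keyspace / 2) / guesses_per_second / seconds_per_year
print(f"Expected brute-force time: {expected_years:.2e} years")  # ~1.8e51 years
```

At roughly 1.8 × 10^51 years, the expected search time exceeds the age of the universe (about 1.4 × 10^10 years) by more than forty orders of magnitude. A smarter attacker has to steal the key or exploit a flawed implementation instead, which is precisely what firewalls, monitoring, and law enforcement exist to counter.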
Through movies and news media, we are conditioned to think that a superintelligent AI will suddenly manifest self-awareness and decide that humanity is its biggest threat. However, studies of exceptionally intelligent humans show that intelligence does not automatically grant malicious intent, or the superpowers to carry it out. Finally, any malicious AI will only ever be the product of malicious humans, and it will remain subject to the same constraints that they are.