Since Homo sapiens first emerged, humanity has enjoyed an unbeaten 300,000-year run as the most intelligent creatures on the planet.
However, thanks to rapid advances in artificial intelligence (AI), that might not be the case for much longer.
Many scientists believe that the singularity – the moment when AI first surpasses humanity – is now not a matter of ‘if’ but ‘when’.
And according to some AI pioneers, we might not have much longer to wait.
A new report from the research group AIMultiple combined predictions made by 8,590 scientists and entrepreneurs to see when the experts think the singularity might come.
The findings revealed that AI experts’ predictions for the singularity keep getting closer and closer with every unexpected leap in AI’s abilities.
In the mid-2010s, scientists generally thought that AI couldn’t possibly surpass human intelligence before 2060 at the earliest.
Now, some industry leaders think the singularity might arrive in as little as three months’ time.

The singularity is the moment that AI’s intelligence surpasses that of humanity, just like Skynet in the Terminator films. This might seem like science fiction, but experts say it might not be far away
What is the singularity?
In physics, a singularity refers to a point where matter becomes so dense that the known laws of physics break down.
However, after being adopted by science fiction writer Vernor Vinge and futurist Ray Kurzweil, the term has taken on a radically different meaning.
Today, the singularity usually refers to the point at which technological advancements begin to accelerate well beyond humanity’s means to control them.
Often, this is taken to refer to the moment that an AI becomes more intelligent than all of humanity combined.
Cem Dilmegani, principal analyst at AIMultiple, told Daily Mail: ‘Singularity is a hypothetical event which is expected to result in a rapid increase in machine intelligence.
‘For singularity, we need a system that combines human-level thinking with superhuman speed and rapidly accessible, near-perfect memory.
‘Singularity should also result in machine consciousness, but since consciousness is not well-defined, we can’t be precise about it.’

Scientists’ predictions about when the singularity will occur have been tracked over the years, with a trend towards closer and closer predictions as AI has continued to surpass expectations
What is the earliest the singularity might arrive?
While the vast majority of AI experts now believe the singularity is inevitable, they differ wildly in when they think it might come.
The most radical prediction comes from the chief executive and founder of leading AI firm Anthropic, Dario Amodei.
In an essay titled ‘Machines of Loving Grace’, Mr Amodei predicts that the singularity will arrive as early as 2026.
He says that this AI will be ‘smarter than a Nobel Prize winner across most relevant fields’ and will ‘absorb information and generate actions at roughly 10x–100x human speed’.
And he is not alone with his bold predictions.
Elon Musk, CEO of Tesla and Grok creator xAI, also recently predicted that superintelligence would arrive next year.
Speaking during a wide-ranging interview on X in 2024, Mr Musk said: ‘If you define AGI (artificial general intelligence) as smarter than the smartest human, I think it’s probably next year, within two years.’

CEO and founder of AI firm Anthropic, Dario Amodei (pictured), predicted in an essay that AI could become superintelligent as early as 2026
Likewise, Sam Altman, CEO of ChatGPT creator OpenAI, claimed in a 2024 essay: ‘It is possible that we will have superintelligence in a few thousand days.’
That would place the arrival of the singularity any time from about 2027 onwards.
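As a rough back-of-the-envelope check, the sketch below (an illustration only, not from the report: it assumes a late-2024 starting point for the essay and reads ‘a few thousand’ loosely as 1,000 to 3,000 days) converts that phrase into calendar dates:

```python
from datetime import date, timedelta

# Rough translation of 'a few thousand days' into calendar dates.
# The start date is an assumption (the essay appeared in 2024), and
# 'a few thousand' is read loosely as 1,000 to 3,000 days.
START = date(2024, 9, 1)

for days in (1_000, 2_000, 3_000):
    print(f"{days:>5} days from {START} -> {START + timedelta(days=days)}")

# About 1,000 days lands in mid-2027 and about 3,000 days in late 2032,
# which is why the lower bound works out to roughly 2027.
```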
How likely is it that these predictions are right?
Although these predictions are extreme, the tech leaders’ optimism is not entirely unfounded.
Mr Dilmegani says: ‘GenAI’s capabilities exceeded most experts’ expectations and pushed singularity expectations earlier.’
The key to this reassessment of AI’s potential is that the power of leading AI models has grown exponentially, roughly doubling every seven months.
If this exponential growth starts to accelerate, it could kick off a chain reaction that would lead to a sudden intelligence explosion.
For this reason, some AI leaders think that the singularity could arrive incredibly quickly once the right conditions are met.
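To put that quoted rate in perspective, here is a minimal sketch (an assumption-laden illustration, not taken from the report: it treats the seven-month doubling figure at face value and starts from an arbitrary baseline of 1):

```python
# Illustrative only: compound growth at the quoted rate of one doubling
# every seven months, starting from an arbitrary baseline capability of 1.
DOUBLING_MONTHS = 7

def growth_factor(months: float) -> float:
    """Capability multiple after `months`, given a seven-month doubling time."""
    return 2 ** (months / DOUBLING_MONTHS)

for years in (1, 2, 5, 10):
    print(f"{years:>2} years -> roughly {growth_factor(12 * years):,.1f}x")

# One year works out to roughly a 3.3x gain and ten years to over 100,000x,
# which is why any further acceleration would compound so dramatically.
```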

Sam Altman (pictured), CEO of ChatGPT creator OpenAI, has argued that AI will surpass humanity by 2027–2028 at the earliest

The confidence of tech leaders is based on the rapid increase in the power of AI models. These graphs show how various types of ‘Large Language Models’ have rapidly increased their computing power over the last decade
However, the expert consensus is that the singularity will not arrive for many more years.
Mr Dilmegani says that if the singularity does arrive next year, he will ‘happily print our article about the topic and eat it.’
Over-optimism about AI is nothing new, and countless exaggerated predictions have been proven wrong in the past.
For example, Geoffrey Hinton, the so-called ‘godfather of AI’, predicted that hospitals would not need radiologists by 2021 because they would be replaced by AI.
Even as far back as 1965, AI pioneer Herbert Simon boldly claimed: ‘Machines will be capable, within twenty years, of doing any work a man can do.’
As Mr Dilmegani points out, AI’s current capabilities are nowhere near those of the human mind.
However, business leaders like Mr Altman and Mr Amodei have good reasons to overstate how soon the singularity will arrive.
Mr Dilmegani says: ‘An earlier singularity timeline places current AI leaders as the ultimate leaders of industry.

Based on the rapid growth of computational power available, Elon Musk has predicted that AI will surpass humanity by the end of this year

The support for this theory is that the abilities of AI appear to have grown exponentially. This graph shows how the speed of AI agents has doubled once every seven months

AI expert Cem Dilmegani, of AIMultiple, told Daily Mail that he would print out and eat his research if Elon Musk’s predictions (illustrated) came true
‘The company that reaches singularity could always remain as the world’s most valuable company.
‘This optimism fuels investment, and both of these CEOs run loss-making companies that depend on investor confidence.’
When is the singularity likely to arrive?
To get a picture of when the singularity is really likely to occur, Mr Dilmegani and his colleagues combined surveys covering 8,590 AI experts.
This showed that, although predictions have moved a lot closer since the release of ChatGPT, most experts think the singularity is probably about 20 years away.
For the singularity to occur, experts say that AI will first need to reach a state known as Artificial General Intelligence (AGI), in which it has human-like abilities across a wide range of tasks.
Although the results of some studies vary, the consensus is that this will happen by around 2040.
Some groups, like investors, were more confident and placed this moment a little earlier, usually centring on some point around 2030.

However, most experts believe that the singularity will arrive sometime around 2040–2060. Investors are more bullish, putting this point around 2030. However, the overall consensus is that the AI singularity is coming, but not next year
Once AGI has been achieved, the experts think that the singularity will follow very quickly as the AI rapidly evolves to reach ‘superintelligence’.
In one poll, scientists assigned a 10 per cent probability to the singularity arriving within two years of AGI and a 75 per cent chance of it happening within 30 years of AGI.
However, although the experts put the singularity much further away than leaders like Mr Altman and Mr Musk do, most agree that it is coming.
That may mean humanity does not have much longer to enjoy its position as the smartest creatures on Earth.