Even the AI companies are tired of talking about AGI

It seems that no trendy phrase can escape the clutches of cringeworthiness, and “rizz” is a case in point. As soon as it crossed over to older generations, its appeal began to fade. Similarly, teachers donning “6-7” costumes for Halloween effectively ended Gen Alpha’s enthusiasm for the term. In a parallel development, tech CEOs are distancing themselves from the once-celebrated “artificial general intelligence” or AGI, eagerly searching for new terminology to adopt.

Not long ago, AGI was the holy grail of the AI industry. The term, believed to have been first used in 1997 by researcher Mark Gubrud, was meant to describe AI systems that could match or even surpass the human brain in complexity and speed. Even today, when people talk about AGI, they generally refer to AI that equals or exceeds human intelligence. However, in a surprising twist, major tech companies are choosing to rebrand, inventing new terms and acronyms that ultimately mean much the same thing.

Throughout the past year, tech leaders have been downplaying AGI as a landmark achievement. Dario Amodei, CEO of the Amazon-backed Anthropic, has openly criticized the term, referring to it as mere “marketing jargon.” OpenAI’s CEO, Sam Altman, commented in August that the term isn’t particularly useful. Similarly, Jeff Dean, Google’s chief scientist and head of Gemini, avoids AGI discussions, while Microsoft CEO Satya Nadella believes the AGI hype is premature and “nonsensical.” On a recent earnings call, Nadella expressed skepticism that AGI, as defined in Microsoft’s contract with OpenAI, would be realized anytime soon.

In place of AGI, these companies are promoting a variety of new terms. Meta speaks of “personal superintelligence,” Microsoft champions “humanist superintelligence,” Amazon promotes “useful general intelligence,” and Anthropic advocates for “powerful AI.” This marks a significant shift for companies that once chased the AGI benchmark, driven by the fear of being left behind if they didn’t reach it first.

The challenge with the term “AGI” is its increasing ambiguity as AI advances. The notion of AI matching human intelligence varies widely depending on who you ask. Jeff Dean of Google noted that the definitions are so varied that the difficulty of achieving AGI can differ by a factor of a trillion depending on which one you use.

Nevertheless, some companies have invested billions into this imprecise concept, a situation most evident in the evolving and complex relationship between Microsoft and OpenAI.

In 2019, OpenAI and Microsoft famously signed a contract with an “AGI clause.” It gave Microsoft the right to use OpenAI’s tech until the latter achieved AGI. But the contract apparently didn’t fully define what that meant. When the deal was renewed in October, things got even more complicated. The terms shifted to say that “once AGI is declared by OpenAI, that declaration will now be verified by an independent expert panel” — meaning that now, it won’t just be OpenAI’s call to define what AGI means, it’ll be a group of industry experts — and Microsoft won’t lose all its rights to the tech once that happens, either. The simplest way to sidestep this whole ordeal? Just don’t say AGI.

Another problem is that AGI has developed some baggage. Tech companies have spent years detailing their own fears about how the technology could destroy everything. Books have been written (think: If Anyone Builds It, Everyone Dies). Hunger strikes have made headlines. For a while, it was still good publicity — saying your tech is so powerful that you’re worried about its impact on the world seems to draw big investor dollars. But the public, unsurprisingly, soured on that idea. So, with the complicated definitions, contract drama, and public fear around superpowerful AI, it’s a lot easier to market less-loaded terminology. That’s why every tech company seems to be making some new brand of “intelligence” its own.

One popular general-purpose replacement for AGI is “artificial superintelligence,” or ASI. ASI is AI that surpasses human intelligence in virtually every area — compared to AGI, which is now generally defined as AI that’s equal to human intelligence. But for some in the tech industry, even the idea of “superintelligence” has become amorphous and conflated with AGI. The multiple theoretical milestones don’t even have clearly distinguished timelines. Amodei says he expects “powerful AI” to come “as early as 2026.” Altman says he expects AGI to be developed in the “reasonably close-ish future.”

So companies have developed their own variants. Meta CEO Mark Zuckerberg said in January that the company needed “to build for [artificial] general intelligence,” but by July, he had pivoted to “personal superintelligence” in a manifesto. It was a power-to-the-people spin on AGI that “helps you achieve your goals, create what you want to see in the world, experience any adventure, be a better friend to those you care about, and grow to become the person you aspire to be.” Zuckerberg used the manifesto to combat public fears of AI taking jobs and throw shade at Meta’s competitors, calling the company’s vision “distinct from others in the industry who believe superintelligence should be directed centrally towards automating all valuable work, and then humanity will live on a dole of its output.”

Microsoft, meanwhile, has rebranded its venture as chasing “Humanist Superintelligence (HSI),” which is essentially Zuckerberg’s manifesto in a different font. The company is defining HSI as “incredibly advanced AI capabilities that always work for, in service of, people and humanity more generally” and are “problem-oriented” instead of being “an unbounded and unlimited entity with high degrees of autonomy.” The rebrand came complete with a new website, topped with the term “Approachable Intelligence,” set against a sepia-toned background with a soft color palette, and awash with paintings and photos of nature.


For its part, Amazon has rebranded its AGI efforts as chasing “useful general intelligence,” or “AI that makes us smarter and gives us more agency.” Late last year, the company hired the founders of Adept, an agentic AI startup, and licensed its technology in an effort to compete in the AGI race. Like the other companies’ branding efforts, though, Amazon is positioning its UGI work as useful, easily defined, and decidedly not all-powerful or scary: just “enabling practical AI that can actually do things for us and make our customers more productive, empowered, and fulfilled.”

With “powerful AI,” Anthropic has no interest in seeming down-to-earth. Amodei dubs it a “country of geniuses in a datacenter” that is “smarter than a Nobel Prize winner across most relevant fields — biology, programming, math, engineering, writing, etc.” Powerful AI, he said, would be able to write compelling novels, prove unsolved theorems in mathematics, and write complex code. It would not just answer questions but complete complex, multistep tasks over hours, days, or weeks, similar to AI CEOs’ vision of a successful AI agent, and “absorb information and generate actions at roughly 10x–100x human speed.”

AGI and ASI were already a lot to reckon with. Now we’ve got PSI, HSI, UGI, and PI, too. Cheers to the new acronyms next year will bring.
