(NEXSTAR) – An open letter urging the cessation of progress in “superintelligence” development has attracted support from a wide spectrum of prominent figures, including former royals, Hollywood icons, conservative media voices, and a former U.S. National Security Adviser.
This coalition is appealing to tech companies to pause their efforts in advancing this sophisticated level of artificial intelligence until it can be developed with adequate safety measures and regulatory controls.
The letter expresses concern that the AI technology being pursued could “significantly surpass human capabilities across nearly all intellectual tasks.”
It highlights potential dangers such as economic displacement, loss of autonomy and civil rights, threats to national security, and, in the most extreme scenario, the risk of human extinction.
What is AI ‘superintelligence’?
In AI discourse, “superintelligence” is often used interchangeably with artificial general intelligence, or AGI.
Although not a technically defined term, it is “a significant yet vaguely outlined concept,” as AI expert Geoffrey Hinton explained to the Associated Press last year.
“I use it to mean AI that is at least as good as humans at nearly all of the cognitive things that humans do,” he said.
“Superintelligence” research isn’t about building a specific AI tool. It’s more about building a “thinking machine,” said Pei Wang, a professor who teaches an AGI course at Temple University. The AI would be able to reason, plan and learn from experiences like people do.
OpenAI, Amazon, Google, Meta and Microsoft are all heavily invested in researching it, according to the AP. Some AI experts warn companies are in an arms race of sorts to develop a technology they can’t guarantee they’ll be able to fully control.
In an interview with Ezra Klein of The New York Times, AI researcher Eliezer Yudkowsky described a scenario where “now the AI is doing a complete redesign of itself. We have no idea what’s going on in there. We don’t even understand the thing that’s growing the AI.”
But instead of shutting such a system down, a company may be too invested in securing the superior technology ahead of its competitors.
“And of course, if you build superintelligence, you don’t have the superintelligence — the superintelligence has you,” Yudkowsky said.
Alongside fears that AI could grow beyond human control, critics argue that developers sometimes inflate the capabilities of their products. OpenAI recently drew ridicule from mathematicians and AI scientists after one of its researchers claimed ChatGPT had solved previously unsolved math problems, when in fact it had found and summarized work that was already available online.
Who has signed the letter?
Prince Harry and his wife Meghan, the Duchess of Sussex, made headlines Wednesday for joining others in signing the cautionary letter. Actors Stephen Fry and Joseph Gordon-Levitt have joined, as has musician will.i.am.
Two prominent conservative commentators, Steve Bannon and Glenn Beck, have also signed on. Also on the list are Apple co-founder Steve Wozniak; British billionaire Richard Branson; the former Chairman of the U.S. Joint Chiefs of Staff Mike Mullen, who served under Republican and Democratic administrations; and Democratic foreign policy expert Susan Rice, who was national security adviser to President Barack Obama.
They join AI pioneers, including Yoshua Bengio and Geoffrey Hinton, co-winners of the Turing Award, computer science’s top prize. Hinton also won a Nobel Prize in physics last year. Both have been vocal in bringing attention to the dangers of a technology they helped create.
“This is not a ban or even a moratorium in the usual sense,” wrote another signatory, Stuart Russell, an AI pioneer and computer science professor at the University of California, Berkeley. “It’s simply a proposal to require adequate safety measures for a technology that, according to its developers, has a significant chance to cause human extinction. Is that too much to ask?”
The Associated Press contributed to this report.