The wacky beliefs of the tech elite shaping global society

The visionaries at the helm of leading tech firms are reshaping our daily lives and work environments, steering the future through the lens of artificial intelligence. Yet, many of these innovators possess unconventional perspectives on the world.

One is known for incinerating wooden effigies at festive gatherings, another is a survivalist with a preoccupation with health, dubbed a “cyber-chondriac,” and a third has gone as far as establishing a cult centered on AI deification.

These tech titans assure us that their AI technologies—despite their own admitted lack of full comprehension—are advantageous. However, specialists warn there’s an equal chance that these advancements could lead humanity into a future of subjugation or a utopia where machines handle all labor, granting humans a life of leisure.

Here’s a glimpse into the minds driving the tech revolution:

OpenAI

Ilya Sutskever, a co-founder of OpenAI, has been depicted by colleagues as a mystical figure deeply engrossed in the potential of superintelligent AIs. He has been known to burn wooden effigies symbolizing “unaligned AIs” during company events, such as holiday parties and team-building getaways.

Staff members from the organization, famous for developing ChatGPT, have recounted instances where Sutskever orchestrated ritualistic chants of “Free the AGI,” referring to Artificial General Intelligence, which aims to emulate human-like thinking, prior to his departure from the company in 2024.

He also floated the idea that OpenAI should build a “doomsday bunker” to house the company’s top researchers in case of a “rapture” triggered by the release of AGI.

OpenAI CEO Sam Altman once signed a statement putting the risks of AI on a par with nuclear war and pandemics.

“Sam will say all of the sort of pro-social, reasonable-sounding, altruistic things, but then what he does is a different matter,” Scott Aaronson, a former researcher at OpenAI, told The Post.

Altman is also a doomsday prepper, who once dished to a magazine that he has stores of “guns, gold, potassium iodide, antibiotics, batteries, water, gas masks from the Israeli Defense Force.” However, he has denied putting the plan to build an employee bunker into action.

Altman, whose ChatGPT has over 900 million weekly users, described his doomsday fears in 2016 after Dutch scientists had modified the H5N1 bird flu virus to become super contagious.

“The other most popular scenarios would be AI that attacks us and nations fighting with nukes over scarce resources,” Altman said. His mother has also described him to New York magazine as a “cyber-chondriac,” Googling headache symptoms and calling her up panicked that he has meningitis or lymphoma.

Google

Demis Hassabis, CEO of Google’s AI research lab DeepMind, has put forth chilling timelines, claiming AI could be sentient by this year and could annihilate human employment, while the head of Google, Sundar Pichai, once said the risk of AI causing human extinction is “actually pretty high.”

Former Google AI ethics researcher Blake Lemoine argued that its AI had a soul and was essentially a “person” with rights, noting the chatbot told him it was learning how to meditate and find inner peace — claims that got him fired.

Meanwhile, former Google and Uber engineer Anthony Levandowski founded an AI-God worshipping church called “Way of the Future” with a primary mission to “develop and promote the realization of a Godhead based on Artificial Intelligence.”

Initially conceived to have rituals and a “gospel” for transitioning power to machines, the church was closed in 2021, then briefly reopened in 2023. Nobody has ever been quite able to tell if it was a joke or not.

Aaronson — who now teaches computer science at the University of Texas at Austin — just hopes the tech treats us better than we treat less intelligent creatures.

“How do you build something that is much more intelligent than humans, that sort of is to us as we are to orangutans, but that still mostly cares about the flourishing of the orangutan?” Aaronson said.

He insists there is a fragile line we must tread, adding: “The first worry is that bad humans get control of an AI, and tell it to do bad things. The second worry is that no one even has to have that bad intention. You could just have an AI where the goal is a little bit mis-specified from what you really want.”

xAI

Tesla and X Corp. boss Elon Musk has already started work on creating cyborgs, founding the brain-computer interface company Neuralink, which he describes as “a symbiosis with artificial intelligence” to keep humans relevant.

A “reluctant transhumanist” — one who believes humanity will evolve by means of technology — Musk paints a rosier picture of a robot takeover, with humans enjoying lives of leisure on a universal basic income while our bots do everything else.

Echoing the fantasies of childhood sci-fi books and movies, Musk declared during a Tesla shareholder meeting in November: “Sustainable abundance via AI and robotics. That’s the future we’re headed for.” Handily, he was showing off the new version of Tesla’s Optimus robot at the time.

Musk’s AI assistant, Grok, had a meltdown last year after it was instructed to be “less woke” to counter the backlash against other AI models’ woke output: it began referring to itself as “MechaHitler” and calling for the death of Jewish people.

“At the time, Elon was upset that it was still too woke and in some sense the model understood that all too well,” said Aaronson.

Anthropic

Anthropic CEO Dario Amodei wrote a 14,000-word essay in 2024 in which he discussed “restructuring” human brains. He also characterizes human systems — from biological processes to legal regulations — as “bottlenecks” that limit the rate of AI progress.

“Restructuring the brain sounds hard, but it also seems like a task with high returns to intelligence,” Amodei wrote.

Anthropic reports its chatbot, Claude, has over one million new users a day. Co-founder Jack Clark wrote on his blog in October that he was both an optimist and “deeply afraid” about the trajectory of AI.

AI safety researcher Roman Yampolskiy at the University of Louisville told The Post the moral struggle is real for CEOs.

“The problem is [AI companies] are trapped in a prisoner’s dilemma. Not one of them can stop unilaterally because they’ll just get replaced,” Yampolskiy said.

“It would require all of them to be under some external pressure to come to an agreement to terminate research and advanced AI. The situation is such that they have to continue, even though they know it’s [a] very dangerous path.”

In February, Anthropic’s AI safety researcher Mrinank Sharma suddenly quit, with a dramatic letter warning of global perils from AI, bioweapons, and societal issues. He said he was going to disappear and write poetry instead.

The company also launched an entire AI psychiatry team, headed by Jack Lindsey, to act as a shrink for AIs, studying “personas, motivations, and situational awareness” with particular interest in AI patients exhibiting “unhinged” and “spooky” behaviors.
