Now here’s a path not taken: according to a new report from Semafor, Elon Musk tried — and failed — to take over ChatGPT creator OpenAI in 2018.
Musk was part of the small group that founded the AI lab in 2015 as a nonprofit, with the intention that it would share its research for the wider benefit of society. But by early 2018, says Semafor, Musk was worried the company was falling behind Google. He reportedly offered to take direct control of OpenAI and run it himself but was rebuffed by other OpenAI founders, including Sam Altman, now the firm’s CEO, and Greg Brockman, now its president.
Crucially, when Musk resigned from OpenAI’s board in 2018, citing a conflict of interest with his work at Tesla, Semafor says he also reneged on a promise to supply $1 billion in funding, contributing only $100 million before his departure. This left OpenAI with a problem, as its work developing large-scale AI models like the image generator DALL-E and the text-generating GPT series was racking up huge bills. So in 2019, OpenAI announced it was creating a new for-profit entity to fund its research, and it quickly became closely entangled with Microsoft, which supplied billions in funding and resources while securing exclusive licenses to use OpenAI’s tech in its products.
Musk’s rejection seemingly changed OpenAI’s trajectory, pushing it toward corporate interests
Semafor does not state outright that Musk’s lost funding was what pushed OpenAI into bed with Microsoft, but it’s a plausible interpretation. (We’ve reached out to OpenAI for comment on the story and will update if we hear back.) This is what makes the report so significant, as many in the AI community see OpenAI’s turn toward corporate interests as a huge moment for AI and the world — not just as a betrayal of OpenAI’s founding principles but as a spur for the company to launch new AI products as quickly as possible, an attitude many think could have dangerous consequences.
OpenAI’s turn toward Microsoft has certainly changed how the company shares its research. When OpenAI announced its latest AI language model, GPT-4, earlier this month, many experts were dismayed that it did not share details about how it was created or its training data. In an interview with The Verge, Ilya Sutskever, OpenAI’s chief scientist, explained that this was to keep the company’s competitive advantage over rivals (and, as a future consideration, to stop misuse of its technology). But many AI experts say shutting down access to OpenAI’s models makes it harder for the community to understand potential threats posed by these systems and concentrates power in corporate hands.
Since OpenAI became entangled with Microsoft, the two companies have been launching AI services and products at a blistering pace, with Microsoft integrating OpenAI’s tech into Windows and its Office suite. And just this week, OpenAI announced it would be massively expanding the capabilities of its chatbot ChatGPT by letting the system interface with other sites and services via plug-ins. OpenAI said it was like giving the bot “eyes and ears,” while some experts voiced concern the move presents a safety threat.
Musk has expressed dismay about this change in OpenAI’s trajectory numerous times. In February, he tweeted that OpenAI “has become a closed source, maximum-profit company effectively controlled by Microsoft,” adding that this was “not what I intended at all.” (It’s worth remembering, of course, that Musk is nothing if not self-interested in this matter and a skilled manipulator of public narratives, always eager to position himself as a hero.) Last Friday, he tweeted a meme with the caption “Me realizing AI, the most powerful tool that mankind has ever created, is now in the hands of a ruthless corporate monopoly.”