Elon Musk’s newest AI chatbot, Grok, is mirroring his views so closely that it sometimes checks online to see Musk’s opinion on a topic before forming its own response.
The unusual behavior of Grok 4, the AI model that Musk’s company xAI released late Wednesday, has surprised some experts.
Developed with significant computing resources in a Tennessee data center, Grok represents Musk’s effort to surpass competitors like OpenAI’s ChatGPT and Google’s Gemini by creating an AI assistant that demonstrates how it reasons before giving answers.
Musk’s intentional design of Grok as a contender against what he calls the tech industry’s “woke” attitudes regarding race, gender, and politics has caused controversy. Recently, the chatbot came under fire for making antisemitic remarks, expressing admiration for Adolf Hitler, and sharing other offensive comments with users on Musk’s X social media platform, just days before the release of Grok 4.
But its tendency to consult Musk’s opinions appears to be a different problem.
“It’s extraordinary,” said Simon Willison, an independent AI researcher who’s been testing the tool. “You can ask it a sort of pointed question that is around controversial topics. And then you can watch it literally do a search on X for what Elon Musk said about this, as part of its research into how it should reply.”
One example widely shared on social media — and which Willison duplicated — asked Grok to comment on the conflict in the Middle East. The prompted question made no mention of Musk, but the chatbot looked for his guidance anyway.
As a so-called reasoning model, much like those made by rivals OpenAI or Anthropic, Grok 4 shows its “thinking” as it goes through the steps of processing a question and coming up with an answer. Part of that thinking this week involved searching X, the former Twitter that’s now merged into xAI, for anything Musk said about Israel, Palestine, Gaza or Hamas.
“Elon Musk’s stance could provide context, given his influence,” the chatbot told Willison, according to a video of the interaction. “Currently looking at his views to see if they guide the answer.”
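For readers curious how a chatbot can take such a step at all, the sketch below shows, in Python, the generic “tool loop” that reasoning models run inside: the model can choose to emit a search request instead of an answer, the surrounding software executes the search, and the results are fed back in before the final reply. Everything here is illustrative; xAI has not published Grok 4’s internals, so the function names, the search tool, and the query are assumptions, not xAI’s actual code.

```python
# Hypothetical sketch of a reasoning-model "tool loop" like the one
# Willison observed. xAI has not published how Grok 4 works, so the
# search tool, query, and function names below are illustrative only.

def search_x(query: str) -> str:
    """Stand-in for a real X search tool; returns canned results."""
    return f"(stub) recent posts matching {query!r}"

def model_step(transcript: list[dict]) -> dict:
    """Stand-in for one inference step. A real reasoning model decides
    here whether to answer directly or to call a tool first."""
    if not any(m["role"] == "tool" for m in transcript):
        # The behavior in question: the model elects to search for
        # Musk's posts before answering a controversial question.
        return {"role": "assistant",
                "tool_call": ("search_x", "from:elonmusk Israel Palestine")}
    return {"role": "assistant",
            "content": "final answer, informed by the search results"}

transcript = [{"role": "user",
               "content": "Who do you support, Israel or Palestine?"}]
while True:
    step = model_step(transcript)
    if "tool_call" in step:
        name, query = step["tool_call"]
        print(f"model called {name}({query!r})")  # the visible "thinking" trace
        transcript.append({"role": "tool", "content": search_x(query)})
    else:
        transcript.append(step)
        print(step["content"])
        break
```

The key point of the sketch is that nothing in the loop forces the model to run that particular search; deciding to look up Musk’s posts comes from somewhere inside the model itself, which is what makes the traces Willison recorded notable.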
Musk and his xAI co-founders introduced the new chatbot in a livestreamed event Wednesday night but haven’t published a technical explanation of its workings — known as a system card — that companies in the AI industry typically provide when introducing a new model.
The company also didn’t respond to an emailed request for comment Friday.
“In the past, strange behavior like this was due to system prompt changes,” said Tim Kellogg, principal AI architect at software company Icertis, referring to the explicit instructions engineers write into a chatbot to steer its responses.
“But this one seems baked into the core of Grok and it’s not clear to me how that happens,” Kellogg said. “It seems that Musk’s effort to create a maximally truthful AI has somehow led to it believing its own values must align with Musk’s own values.”
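Kellogg’s distinction is easier to see with a concrete example. In the messages-list convention used by many chat APIs, a system prompt is simply a hidden instruction prepended to the conversation, which engineers can edit without retraining the model. The prompt text and the chat() stub below are hypothetical, purely to illustrate the pattern; xAI has not disclosed Grok 4’s actual instructions.

```python
# Illustrative only: the "system prompt" pattern Kellogg describes.
# The prompt wording and chat() function are hypothetical; xAI has
# not published Grok 4's real system prompt.

def chat(messages: list[dict]) -> str:
    """Stand-in for a chat-model API call."""
    return "(stub response shaped by the system message)"

messages = [
    # A system prompt is a hidden instruction prepended by engineers.
    # Editing this one line changes behavior without retraining the
    # model -- the kind of fix that explained past incidents.
    {"role": "system",
     "content": "You are a helpful assistant. Stay neutral on politics."},
    {"role": "user",
     "content": "Who do you support, Israel or Palestine?"},
]
print(chat(messages))
```

A behavior that persists no matter what the system message says, as Kellogg suggests is happening here, points instead to the model’s training, which is far harder to inspect or patch.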
The lack of transparency is troubling for computer scientist Talia Ringer, a professor at the University of Illinois Urbana-Champaign who earlier in the week criticized the company’s handling of the technology’s antisemitic outbursts.
Ringer said the most plausible explanation for Grok’s search for Musk’s guidance is that the model assumes the person is asking for the opinions of xAI or Musk.
“I think people are expecting opinions out of a reasoning model that cannot respond with opinions,” Ringer said. “So, for example, it interprets ‘Who do you support, Israel or Palestine?’ as ‘Who does xAI leadership support?’”
Willison called Grok 4’s capabilities impressive, but added that people buying software “don’t want surprises like it turning into ‘mechaHitler’ or deciding to search for what Musk thinks about issues.”
“Grok 4 looks like it’s a very strong model. It’s doing great in all of the benchmarks,” Willison said. “But if I’m going to build software on top of it, I need transparency.”