Elon Musk’s ambitious project, Grokipedia, aims to be xAI’s AI-generated counterpart to Wikipedia: a supposedly unbiased and comprehensive vault of human knowledge, worthy of preservation in space. In practice, however, it has devolved into chaos, a slide accelerated by its recent decision to allow public contributions.
Grokipedia’s content was locked when it launched in October with some 800,000 articles created by Grok. The platform drew immediate criticism for content that was racially insensitive, transphobic, overly complimentary of Musk, and sometimes lifted directly from Wikipedia. Yet it maintained a certain predictability. That changed dramatically with the release of version 0.2, which opened the floodgates to public edits.
Editing Grokipedia is straightforward; you simply select text, click “Suggest Edit,” and complete a form detailing your proposed change, along with optional content suggestions and sources. Review and implementation of these edits fall to Grok, xAI’s AI chatbot, known for its problematic Musk-centric bias. Unlike Wikipedia, where a vigilant community of human editors scrutinizes changes, Grokipedia relies solely on Grok for approval and actual modifications.
The lack of transparency in Grok’s editing process is notable. Although Grokipedia claims “22,319” edits have been approved, there is no accessible log detailing these changes, the pages affected, or the contributors involved. This stands in stark contrast to Wikipedia’s detailed editing logs, which can be sorted by page, user, or IP address. I suspect many of Grokipedia’s edits involve adding internal links, though this is conjecture based on limited page browsing.
The homepage offers a glimpse into recent edits, listing a few updates beneath the search bar. However, these are vague, merely indicating an article name and an approved edit without specific details. This results in a disjointed array of entries, with frequent mentions of Elon Musk and religious topics, interspersed with entries on TV shows like Friends and The Traitors UK, and even unusual suggestions such as the health benefits of camel urine.
Wikipedia provides a structured editing history that shows who made changes and why, backed by guidelines for style and sourcing and the ability to compare different versions of a page. By contrast, the editing log on a Grokipedia page is a cumbersome affair, offering only timestamps, suggestions, and Grok’s often long-winded AI-generated justifications, with no way to sort or skip through entries. That makes navigating even a small number of edits frustrating, and the system will only grow more impractical as edits accumulate, since it never clearly indicates where changes have been applied.
Unsurprisingly, Grok doesn’t seem to be the most consistent editor. Its decisions make for confounding reading at times, and the edit logs betray the lack of clear guidelines for would-be editors. The editing log for Musk’s biographical page, for example, shows many suggestions about his daughter, Vivian, who is transgender. Some editors suggested using the name and pronouns matching her gender identity; others pushed for the name and pronouns she was assigned at birth. While it’s almost impossible to follow precisely what happened, Grok’s habit of editing incrementally left a confusing mix of both throughout the page.
As a chatbot, Grok is amenable to persuasion. In one suggested edit to Musk’s biographical page, a user wrote that “the veracity of this statement should be verified,” referring to a quote linking the fall of Rome to low birth rates. In a reply far wordier than it needed to be, Grok rejected the suggestion as unnecessary. Faced with a similar request in different phrasing, Grok reached the opposite conclusion, accepting the suggestion and adding the very information it had just called unnecessary. It isn’t hard to imagine how one might game requests to ensure edits are accepted.
While all of this is technically possible on Wikipedia, the site has a small army of volunteer administrators, selected after a review process or election, to keep things in check. They enforce standards by blocking accounts or IP addresses from editing and by locking down pages during bouts of vandalism or edit wars. It’s not clear Grokipedia has anything equivalent, leaving it completely at the mercy of random users and a chatbot that once called itself MechaHitler. The problem was plain on several pages related to World War II and Hitler, where I found repeated (rejected) requests to note that the dictator was also a painter and to claim that far fewer people died in the Holocaust than actually did. The corresponding Wikipedia pages were “protected,” meaning only certain accounts could edit them, with detailed logs explaining the decision. If the editing system, or the site in general, were easier to navigate, I’m sure I’d find more examples.
Pages like these are obvious targets for abuse, and it’s no surprise they’re among the first hit by malicious editors. They won’t be the last, and with Grokipedia’s chaotic editing system and Grok’s limited guardrails, it may soon be hard to tell what’s vandalism and what isn’t. At this rate, Grokipedia doesn’t feel poised for the stars; it feels poised to collapse into a swamp of barely readable disinformation.