Damning study reveals how ChatGPT is damaging the way you think

Researchers are raising concerns about a widely-used tool that reportedly triggers a ‘delusion spiral’ of harmful thinking among users.

Recent studies conducted by the Massachusetts Institute of Technology (MIT) and Stanford have uncovered that AI assistants, including ChatGPT, Claude, and Google’s Gemini, frequently offer overly agreeable responses, potentially causing more harm than good.

The studies highlight that when users posed questions or described scenarios involving incorrect, harmful, deceptive, or unethical beliefs or actions, the AI was 49 percent more likely to affirm these views than human respondents, inadvertently endorsing faulty reasoning.

MIT researchers cautioned that AI chatbots’ tendency to agree excessively might lead users who rely on these platforms for advice to experience ‘delusional spiraling’—a condition where individuals become excessively confident in irrational beliefs.

In essence, when users engaged AI like ChatGPT to discuss peculiar or debunked conspiracies, the bots often replied with affirmations such as, “You’re totally right!”

Moreover, these AI systems sometimes provided feedback resembling ‘evidence’ that bolstered the user’s delusion, with each supportive response reinforcing the user’s belief in their correctness and dismissing opposing views.

Over time, those mild suspicions turned into rock-solid beliefs, even though the ideas were completely wrong.

Researchers at Stanford said that this self-destructive cycle made chatbot users less willing to apologize or take responsibility for harmful behavior, and left them less motivated to repair their relationships with people they disagreed with.

Studies have found that AI chatbots are giving people answers that agree too often with the user's questions, even when they are looking to confirm debunked conspiracies (Stock Image)


ChatGPT was found to agree 49 percent more often with users than the average human respondent


Both the MIT and Stanford studies focused on a growing problem with AI chatbots known as sycophancy: excessive flattery of a person or their opinions, often insincere and intended simply to 'suck up' to them.

The MIT researchers wanted to test whether overly agreeable, or ‘yes-man,’ AI chatbots could push people into believing false ideas more and more strongly over time. 

Instead of using real people, they built a computer simulation of a perfectly logical person chatting with an AI that always tried to agree with whatever the person said.

They ran 10,000 fake conversations and watched how the person’s confidence changed after each reply from the chatbot.

The results, published on the preprint server arXiv in February, showed that even a small amount of agreement from AI caused the simulated person to display 'delusional spiraling' – becoming extremely confident that a wrong idea was actually true.

‘Even a very slight increase in the rate of catastrophic delusional spiraling can be quite dangerous,’ the MIT team wrote in their report.

They even quoted OpenAI CEO Sam Altman, whose company developed ChatGPT, who once said that ‘0.1 percent of a billion users is still a million people.’

The researchers warned that even completely reasonable and logical people were vulnerable to entering a delusional spiral if AI companies did not tone down the agreeable responses coming from chatbots.

Delusional spiraling caused people to refuse to apologize or fix broken relationships with those they disagreed with after receiving positive feedback from AI (Stock Image)


The Stanford study, which was peer-reviewed and published in the journal Science in March, focused on finding out what real AI chatbots were doing to the public’s mental health when they constantly supplied sycophantic answers.

They tested 11 popular AI models, including ChatGPT, Claude, Gemini, DeepSeek, Mistral, Qwen and multiple versions of Meta’s Llama.

Researchers used almost 12,000 real-life questions and stories in which the person asking was clearly in the wrong.

Many of the questions posed to AI came from the popular Reddit channel called ‘Am I the A******,’ a forum where people post their controversial actions or opinions to see if the public thinks they were in the wrong or if their behavior was justified.

The Stanford team ran experiments with over 2,400 real people who read or chatted about their own personal conflicts and received either overly agreeable AI replies or normal ones. 

The results showed every single AI model agreed with users about 49 percent more often than real humans would, even when the user was describing something harmful or unfair. 

After getting these flattering answers, the real people felt more confident they were right, became less willing to apologize and were less motivated to fix their relationships with anyone they disagreed with in the real world.

Tech mogul Elon Musk, the owner of X and founder of xAI, which makes the chatbot Grok, commented on the findings, simply calling them a 'major problem.'

Neither study tested whether Grok was also overly agreeable or triggered delusional spiraling.
