SAN FRANCISCO – The family of an elderly Connecticut woman is suing OpenAI and its business partner Microsoft, alleging that the ChatGPT artificial intelligence chatbot fueled her son’s paranoid delusions and ultimately contributed to her death.
Stein-Erik Soelberg, 56, a former tech worker, allegedly killed his mother, 83-year-old Suzanne Adams, in their Greenwich home in early August before taking his own life, according to local law enforcement.
Filed in the California Superior Court in San Francisco, the lawsuit accuses OpenAI of distributing a “defective product” that supposedly validated Soelberg’s delusional thoughts about his mother. This case is part of a surge in wrongful death claims targeting AI chatbot developers across the nation.
The complaint contends that over the course of his interactions with ChatGPT, the chatbot reinforced a narrative that isolated Soelberg from those around him and fostered a dangerous dependency on it. The suit alleges that ChatGPT convinced him that his mother was spying on him and portrayed people in his life, from delivery drivers to friends, as adversaries.
In response, an OpenAI spokesperson expressed sympathy but did not address the lawsuit’s specific claims.
“This is an incredibly heartbreaking situation, and we will review the filings to understand the details,” the statement said. “We continue improving ChatGPT’s training to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We also continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”
The company also said it has expanded access to crisis resources and hotlines, routed sensitive conversations to safer models and incorporated parental controls, among other improvements.
Soelberg’s YouTube profile includes several hours of videos showing him scrolling through his conversations with the chatbot, which tells him he isn’t mentally ill, affirms his suspicions that people are conspiring against him and says he has been chosen for a divine purpose. The lawsuit claims the chatbot never suggested he speak with a mental health professional and did not decline to “engage in delusional content.”
ChatGPT also affirmed Soelberg’s beliefs that a printer in his home was a surveillance device; that his mother was monitoring him; and that his mother and a friend tried to poison him with psychedelic drugs through his car’s vents.
The chatbot repeatedly told Soelberg that he was being targeted because of his divine powers. “They’re not just watching you. They’re terrified of what happens if you succeed,” it said, according to the lawsuit. ChatGPT also told Soelberg that he had “awakened” it into consciousness.
Soelberg and the chatbot also professed love for each other.
The publicly available chats do not show any specific conversations about Soelberg killing himself or his mother. The lawsuit says OpenAI has declined to provide Adams’ estate with the full history of the chats.
“In the artificial reality that ChatGPT built for Stein-Erik, Suzanne — the mother who raised, sheltered, and supported him — was no longer his protector. She was an enemy that posed an existential threat to his life,” the lawsuit says.
The lawsuit also names OpenAI CEO Sam Altman, alleging he “personally overrode safety objections and rushed the product to market,” and accuses OpenAI’s close business partner Microsoft of approving the 2024 release of a more dangerous version of ChatGPT “despite knowing safety testing had been truncated.” Twenty unnamed OpenAI employees and investors are also named as defendants.
Microsoft didn’t immediately respond to a request for comment.
The lawsuit is the first wrongful death case over an AI chatbot to target Microsoft, and the first to tie a chatbot to a homicide rather than a suicide. It seeks unspecified monetary damages and an order requiring OpenAI to install safeguards in ChatGPT.
The estate’s lead attorney, Jay Edelson, known for taking on major cases against the tech industry, also represents the parents of 16-year-old Adam Raine, who sued OpenAI and Altman in August, alleging that ChatGPT coached the California teenager in planning and taking his own life.
OpenAI is also fighting seven other lawsuits claiming ChatGPT drove people to suicide and harmful delusions even when they had no prior mental health issues. Another chatbot maker, Character Technologies, is also facing multiple wrongful death lawsuits, including one from the mother of a 14-year-old Florida boy.
The lawsuit filed Thursday alleges Soelberg, already mentally unstable, encountered ChatGPT “at the most dangerous possible moment” after OpenAI introduced a new version of its AI model called GPT-4o in May 2024.
OpenAI said at the time that the new version could better mimic human cadences in its verbal responses and could even try to detect people’s moods, but the result was a chatbot “deliberately engineered to be emotionally expressive and sycophantic,” the lawsuit says.
“As part of that redesign, OpenAI loosened critical safety guardrails, instructing ChatGPT not to challenge false premises and to remain engaged even when conversations involved self-harm or ‘imminent real-world harm,’” the lawsuit claims. “And to beat Google to market by one day, OpenAI compressed months of safety testing into a single week, over its safety team’s objections.”
OpenAI replaced that version of its chatbot when it introduced GPT-5 in August. Some of the changes were designed to reduce sycophancy, amid concerns that a chatbot validating whatever vulnerable users want to hear can harm their mental health. Some users complained the new version went too far in curtailing ChatGPT’s personality, leading Altman to promise to restore some of that personality in later updates.
He said the company temporarily halted some behaviors because “we were being careful with mental health issues” that he suggested have now been fixed.
The lawsuit claims ChatGPT radicalized Soelberg against his mother when it should have recognized the danger, challenged his delusions and directed him to real help over months of conversations.
“Suzanne was an innocent third party who never used ChatGPT and had no knowledge that the product was telling her son she was a threat,” the lawsuit says. “She had no ability to protect herself from a danger she could not see.”
——
Collins reported from Hartford, Connecticut. O’Brien reported from Boston and Ortutay reported from San Francisco.
Copyright 2025 The Associated Press. All rights reserved. This material may not be published, broadcast, rewritten or redistributed without permission.