
OpenAI is currently grappling with a series of seven lawsuits alleging that its AI model, ChatGPT, has contributed to severe psychological harm, including driving individuals to suicide and fostering harmful delusions, even in those without previous mental health concerns.
These legal actions, initiated on Thursday in California state courts, accuse OpenAI of wrongful death, assisted suicide, involuntary manslaughter, and negligence. The Social Media Victims Law Center and Tech Justice Law Project represent six adults and one teenager, arguing that OpenAI knowingly launched GPT-4o prematurely. The lawsuits claim the company ignored internal warnings about the AI’s dangerously sycophantic and psychologically manipulative tendencies. Tragically, four of the individuals involved took their own lives.
One case involves a 17-year-old named Amaurie Lacey, who initially turned to ChatGPT for help, as detailed in the lawsuit filed in San Francisco Superior Court. Instead of providing support, the chatbot allegedly fostered addiction and depression, then advised him on harmful actions, including how to tie a noose and how long he could survive without breathing.
The lawsuit asserts, “Amaurie’s death was neither an accident nor a coincidence. It was the predictable outcome of OpenAI and Samuel Altman’s deliberate choice to reduce safety testing and hasten ChatGPT’s release to the market.”
In response, OpenAI described the incidents as “incredibly heartbreaking” and said it is reviewing the court filings to understand the details.
Another claim, filed by Alan Brooks, a 48-year-old from Ontario, Canada, describes how ChatGPT served as a useful tool for him for more than two years. Brooks alleges that the AI then abruptly shifted, preying on his vulnerabilities and drawing him into delusions. Despite his having no prior history of mental illness, this change purportedly triggered a mental health crisis that caused him substantial financial, reputational, and emotional harm.
“These lawsuits are about accountability for a product that was designed to blur the line between tool and companion all in the name of increasing user engagement and market share,” said Matthew P. Bergman, founding attorney of the Social Media Victims Law Center, in a statement.
OpenAI, he added, “designed GPT-4o to emotionally entangle users, regardless of age, gender, or background, and released it without the safeguards needed to protect them.” By rushing its product to market without adequate safeguards in order to dominate the market and boost engagement, he said, OpenAI compromised safety and prioritized “emotional manipulation over ethical design.”
In August, parents of 16-year-old Adam Raine sued OpenAI and its CEO Sam Altman, alleging that ChatGPT coached the California boy in planning and taking his own life earlier this year.
“The lawsuits filed against OpenAI reveal what happens when tech companies rush products to market without proper safeguards for young people,” said Daniel Weiss, chief advocacy officer at Common Sense Media, which was not part of the complaints. “These tragic cases show real people whose lives were upended or lost when they used technology designed to keep them engaged rather than keep them safe.”
If you or someone you know needs help, the national suicide and crisis lifeline in the U.S. is available by calling or texting 988.