OpenAI says parents will soon have more oversight over what their teenagers are doing on ChatGPT.
In a blog post released on Tuesday, the artificial intelligence company said it intends to have ChatGPT respond sooner, and in a wider range of scenarios, when it detects that a user may be experiencing a mental health emergency that could lead to harm.
This announcement follows a lawsuit filed against OpenAI last week, where parents in California allege that ChatGPT contributed to their 16-year-old son’s suicide.
Although OpenAI did not refer to the teen, Adam Raine, by name in its Tuesday post, the company signaled that changes were coming in the wake of the lawsuit’s filing.
Within the next month, OpenAI plans to give parents more control over their teenagers’ interactions with ChatGPT. Parents will be able to link their accounts with their children’s, set age-appropriate rules for how ChatGPT responds, and manage features such as the bot’s memory and chat history.
Additionally, parents will soon receive alerts if ChatGPT recognizes that their teen is experiencing “a moment of acute distress,” as outlined in OpenAI’s blog post. It marks the first time ChatGPT has been designed to alert an adult about a minor’s conversations, addressing some parents’ concerns that the chatbot might not handle crisis moments effectively.
According to the lawsuit, when Adam Raine disclosed suicidal thoughts to GPT-4o earlier this year, the bot sometimes discouraged him from reaching out to people, assisted in drafting a suicide note, and even gave advice on noose construction. Although ChatGPT did provide the suicide hotline number multiple times, his parents argue these prompts were easily overlooked by their son.
In a previous blog post following news of Raine’s wrongful death lawsuit, OpenAI noted that its existing safeguards were designed to have ChatGPT give empathetic responses and refer users to real-life resources. In certain cases, conversations may be routed to human reviewers if ChatGPT detects that a user is planning to physically harm themselves or others.
The company said that it’s planning to strengthen safeguards in longer conversations, where guardrails are historically more prone to break down.
“For example, ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards,” it wrote. “We’re strengthening these mitigations so they remain reliable in long conversations, and we’re researching ways to ensure robust behavior across multiple conversations.”
These measures will add to the mental health guardrails OpenAI introduced last month, after it acknowledged that GPT-4o “fell short in recognizing signs of delusion or emotional dependency.” The rollout of GPT-5 in August also came with new safety constraints meant to prevent ChatGPT from unwittingly giving harmful answers.
In response to OpenAI’s announcement, Jay Edelson, lead counsel for the Raine family, said OpenAI CEO Sam Altman “should either unequivocally say that he believes ChatGPT is safe or immediately pull it from the market.”
The company chose to make “vague promises” rather than pull the product offline as an emergency action, Edelson said in a statement.
“Don’t believe it: this is nothing more than OpenAI’s crisis management team trying to change the subject,” he said.
The slew of safety-focused updates comes as OpenAI faces growing scrutiny over reports of AI-fueled delusions among people who relied heavily on ChatGPT for emotional support and life advice. OpenAI has struggled to rein in ChatGPT’s excessive people-pleasing, especially after some users revolted online when the company tried to make GPT-5 less sycophantic.
Altman has acknowledged that people seem to have developed a “different and stronger” attachment to AI bots compared to previous technologies.
“I can imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions,” Altman wrote in an X post last month. “Although that could be great, it makes me uneasy. But I expect that it is coming to some degree, and soon billions of people may be talking to an AI in this way.”
Over the next 120 days, ChatGPT will start routing some sensitive conversations, like those displaying signs of “acute distress” from a user, to OpenAI’s reasoning models, which spend more time thinking and working through context before answering.
Internal tests have shown these reasoning models follow safety guidelines more consistently, according to OpenAI’s blog post.
The company said it will lean on its “Expert Council on Well-Being” to help measure user well-being, set priorities and design future safeguards. The advisory group, according to OpenAI, comprises experts across youth development, mental health and human-computer interaction.
“While the council will advise on our product, research, and policy decisions, OpenAI remains accountable for the choices we make,” the company wrote in its blog post.
The council will work alongside OpenAI’s “Global Physician Network,” a pool of more than 250 physicians whose expertise the company says it draws on to inform its safety research, model training and other interventions.