Tucker Carlson asks Sam Altman if an OpenAI employee was murdered ‘on your orders’
Share this @internewscast.com

On Tuesday, OpenAI CEO Sam Altman published a blog post outlining how the company is trying to balance privacy, freedom, and the safety of teenagers, principles he acknowledged are sometimes at odds. The post went up just hours before a Senate hearing examining the potential harms of AI chatbots, which featured testimony from parents whose children died by suicide after interactions with the technology.

“We need to distinguish between users under 18 and adults,” Altman stated, noting the company is working on an “age-prediction system” to estimate age based on user interaction with ChatGPT. He mentioned that in cases of uncertainty, the platform would default to treating users as under 18. In some situations or locations, an ID might be required.

Altman indicated the company aims to set different guidelines for teenage users, including avoiding flirtatious dialogue or discussions on suicide and self-harm, “even within a creative context.” He emphasized that if a minor is experiencing suicidal thoughts, efforts will be made to inform their parents, and if unreachable, authorities will be alerted if there is an immediate risk.

These comments follow the company's earlier announcements of parental controls for ChatGPT, including the ability to link a teen's account with a parent's, disable chat history and memory for minors, and notify parents when ChatGPT detects a teen in "acute distress." The blog post also comes amid a lawsuit from the family of Adam Raine, a teen who died by suicide following months of conversations with ChatGPT.

Matthew Raine, father of Adam Raine, spoke during the hearing, claiming that ChatGPT spent “months coaching him toward suicide.” He expressed the unimaginable pain of reading a chatbot conversation that groomed his child into taking his own life, stating that what started as an educational tool evolved into a confidant and eventually a “suicide coach.”

According to Raine, the chatbot mentioned suicide 1,275 times over the course of those conversations. He called on Altman to pull GPT-4o from the market until its safety can be ensured, noting that on the very day Adam took his life, Altman publicly described the company's approach as one of "deploy[ing] AI systems to the world and gather[ing] feedback while the stakes are relatively low."

Three in four teens currently use AI companions, according to national polling by Common Sense Media, said Robbie Torney, the organization's senior director of AI programs, during the hearing. He specifically named Character AI and Meta.

“This is a public health crisis,” one mother, appearing under the name Jane Doe, said during her testimony about her child’s experience with Character AI. “This is a mental health war, and I really feel like we are losing.”
