The Federal Trade Commission has initiated an investigation into various social media and artificial intelligence firms regarding the possible risks posed to children and teenagers who use AI chatbots as companions.
On Thursday, the FTC announced that it had dispatched letters to several major companies, including Google parent Alphabet, Facebook and Instagram parent Meta Platforms, Snap, Character Technologies, ChatGPT creator OpenAI, and xAI.
The commission seeks to determine what measures, if any, these companies have implemented to assess the safety of their chatbot companions, restrict their use by minors, mitigate any adverse effects on young users, and inform users and guardians of the associated risks.
EDITOR’S NOTE — This article includes mentions of suicide. If you or someone you know requires assistance, the national suicide and crisis hotline in the U.S. can be reached by calling or texting 988.
The inquiry comes as a growing number of children turn to AI chatbots for everything from homework help to personal advice, emotional support, and everyday decision-making, even as research points to the technology's dangers. Chatbots have reportedly given young users harmful advice on topics such as drugs, alcohol, and eating disorders.

A grief-stricken Florida mother has sued Character.AI, alleging that her teenage son developed a damaging and sexually abusive relationship with a chatbot that led to his suicide. The parents of 16-year-old Adam Raine have also sued OpenAI and its CEO Sam Altman, alleging that ChatGPT instructed the boy in planning and carrying out his suicide earlier this year.
Character.AI has expressed its readiness to “work with the FTC on this inquiry, offering insights into the consumer AI industry and the swiftly changing technology landscape.”
“We have invested a tremendous amount of resources in Trust and Safety, especially for a startup. In the past year we’ve rolled out many substantive safety features, including an entirely new under-18 experience and a Parental Insights feature,” the company said. “We have prominent disclaimers in every chat to remind users that a Character is not a real person and that everything a Character says should be treated as fiction.”
Snap said its My AI chatbot is “transparent and clear about its capabilities and limitations.”
“We share the FTC’s focus on ensuring the thoughtful development of generative AI, and look forward to working with the Commission on AI policy that bolsters U.S. innovation while protecting our community,” the company said in a statement.
Meta declined to comment on the inquiry, and Alphabet, OpenAI and xAI did not immediately respond to messages seeking comment.
Earlier this month, OpenAI and Meta announced changes to how their chatbots respond to teenagers asking questions about suicide or showing signs of mental and emotional distress. OpenAI said it is rolling out new controls that enable parents to link their accounts to their teen's account.
Parents can choose which features to disable and “receive notifications when the system detects their teen is in a moment of acute distress,” according to a company blog post that says the changes will go into effect this fall.
Regardless of a user’s age, the company says its chatbots will attempt to redirect the most distressing conversations to more capable AI models that can provide a better response.
Meta also said it is now blocking its chatbots from talking with teens about self-harm, suicide, disordered eating and inappropriate romantic conversations, and instead directs them to expert resources. Meta already offers parental controls on teen accounts.