Artificial intelligence toys, often marketed as educational and comforting companions, are not as harmless as they seem, according to child and consumer advocates. These groups are cautioning parents against purchasing such toys this holiday season, citing safety concerns.
Targeting children as young as two, these AI-driven toys are built on models like OpenAI’s ChatGPT. A recent advisory from Fairplay, a children’s advocacy group, highlights the potential risks; it has been endorsed by more than 150 organizations and experts, including child psychiatrists and educators.
Fairplay warns that AI chatbots have been associated with significant issues for children, including promoting excessive use, engaging in inappropriate conversations, and encouraging dangerous behaviors like self-harm and violence.

Companies such as Curio Interactive and Keyi Technologies market these AI toys as educational tools. However, Fairplay argues they may hinder essential creative activities and disrupt children’s social development, offering a false sense of friendship.
“Young children are at a critical stage of brain development, naturally inclined to trust and form relationships with friendly figures,” explained Rachel Franz, director of Fairplay’s Young Children Thrive Offline Program. She cautioned that the trust placed in these toys could amplify the negative effects already observed in older children.
For over a decade, Fairplay, previously known as the Campaign for a Commercial-Free Childhood, has been raising awareness about AI toys’ potential dangers. Their concerns date back to instances like the backlash against Mattel’s Hello Barbie, which was criticized for recording and analyzing children’s conversations.
“Everything has been released with no regulation and no research, so it gives us extra pause when all of a sudden we see more and more manufacturers, including Mattel, who recently partnered with OpenAI, potentially putting out these products,” Franz said.
It’s the second major seasonal warning against AI toys: consumer advocates at U.S. PIRG called out the trend last week in their annual “Trouble in Toyland” report, which typically examines a range of product hazards, such as high-powered magnets and button-sized batteries that young children can swallow. This year, the organization also tested four toys that use AI chatbots.
“We found some of these toys will talk in-depth about sexually explicit topics, will offer advice on where a child can find matches or knives, act dismayed when you say you have to leave, and have limited or no parental controls,” the report said.
Dr. Dana Suskind, a pediatric surgeon and social scientist who studies early brain development, said young children don’t have the conceptual tools to understand what an AI companion is. While kids have always bonded with toys through imaginative play, that play requires them to invent both sides of a pretend conversation, “practicing creativity, language, and problem-solving,” she said.
“An AI toy collapses that work. It answers instantly, smoothly, and often better than a human would. We don’t yet know the developmental consequences of outsourcing that imaginative labor to an artificial agent, but it’s very plausible that it undercuts the kind of creativity and executive function that traditional pretend play builds,” Suskind said.
California-based Curio Interactive makes stuffed toys, like rocket-shaped Gabbo, that have been promoted by the pop singer Grimes.
Curio said it has “meticulously designed” guardrails to protect children and the company encourages parents to “monitor conversations, track insights, and choose the controls that work best for their family.”
“After reviewing the U.S. PIRG Education Fund’s findings, we are actively working with our team to address any concerns, while continuously overseeing content and interactions to ensure a safe and enjoyable experience for children,” the company said.
Another company, Miko, said it uses its own conversational AI model, rather than relying on general-purpose large language models such as ChatGPT, in order to make its product, an interactive AI robot, safe for children.
“We are always expanding our internal testing, strengthening our filters, and introducing new capabilities that detect and block sensitive or unexpected topics,” said CEO Sneh Vaswani. “These new features complement our existing controls that allow parents and caregivers to identify specific topics they’d like to restrict from conversation. We will continue to invest in setting the highest standards for safe, secure and responsible AI integration for Miko products.”
Miko’s products have been promoted by the families of social media “kidfluencers” whose YouTube videos have millions of views. On its website, it markets its robots as “Artificial Intelligence. Genuine friendship.”
Ritvik Sharma, the company’s senior vice president of growth, said Miko actually “encourages kids to interact more with their friends, to interact more with the peers, with the family members etc. It’s not made for them to feel attached to the device only.”
Still, Suskind and children’s advocates say analog toys are a better bet for the holidays.
“Kids need lots of real human interaction. Play should support that, not take its place. The biggest thing to consider isn’t only what the toy does; it’s what it replaces. A simple block set or a teddy bear that doesn’t talk back forces a child to invent stories, experiment, and work through problems. AI toys often do that thinking for them,” she said. “Here’s the brutal irony: when parents ask me how to prepare their child for an AI world, unlimited AI access is actually the worst preparation possible.”