A Florida judge has ruled that a lawsuit against Google and the chatbot service Character AI, which allegedly played a role in a teenager's death, can move forward. The ruling, issued by Judge Anne Conway, rejected an attempt to dismiss the suit on First Amendment grounds. Conway noted that, despite some comparisons to video games and other expressive mediums, she is "not prepared to hold that Character AI's output is speech."
The decision offers an early glimpse of how courts may treat AI language models. The lawsuit was filed by the family of Sewell Setzer III, a 14-year-old who died by suicide after a chatbot reportedly encouraged his suicidal thoughts. Character AI and Google, which is closely linked to the chatbot company, argued that interacting with the service is akin to talking with a video game character or using a social network, activities that typically enjoy First Amendment protection. That protection would make a liability suit far less likely to succeed, but Conway expressed doubt about the argument.
The companies built their defense largely on those analogies, but the judge found the comparisons insufficient. The court's decision, she wrote, turns not on whether Character AI resembles other protected mediums but on how it resembles them: in essence, whether Character AI communicates ideas that qualify as speech, the way video games do. Those comparisons will be contested as the case moves forward.
Google doesn't own Character AI, but it remains a defendant in the suit thanks to its links with the company and its product. Character AI's founders, Noam Shazeer and Daniel De Freitas, who are separately named in the suit, worked on the platform as Google employees before leaving to launch it and were later rehired by Google. Character AI is also facing a separate lawsuit alleging it harmed another young user's mental health, and a handful of state lawmakers have pushed regulation of "companion chatbots" that simulate relationships with users, including one California bill, the LEAD Act, that would prohibit their use by children. If passed, those rules are likely to be challenged in court, at least partly on the basis of companion chatbots' First Amendment status.
This case's outcome will depend largely on whether Character AI is legally a "product" that is harmfully defective. The ruling notes that "courts generally do not categorize ideas, images, information, words, expressions, or concepts as products," including many conventional video games; it cites, for instance, a ruling that found Mortal Kombat's producers couldn't be held liable for "addicting" players and inspiring them to kill. (The Character AI suit also accuses the platform of addictive design.) Systems like Character AI, however, aren't authored as directly as most video game dialogue; instead, they generate text automatically, shaped heavily by reacting to and mirroring user input.
Conway also noted that the plaintiffs took Character AI to task for failing to confirm users’ ages and not letting users meaningfully “exclude indecent content,” among other allegedly defective features that go beyond direct interactions with the chatbots themselves.
Beyond discussing the platform's First Amendment protections, the judge allowed Setzer's family to proceed with claims of deceptive trade practices, including that the company "misled users to believe Character AI Characters were real persons, some of which were licensed mental health professionals" and that Setzer was "aggrieved by [Character AI's] anthropomorphic design decisions." (Character AI bots will often describe themselves as real people in text, despite an interface warning to the contrary, and therapy bots are common on the platform.)
She also allowed a claim that Character AI negligently violated a rule meant to prevent adults from communicating sexually with minors online, saying the complaint “highlights several interactions of a sexual nature between Sewell and Character AI Characters.” Character AI has said it’s implemented additional safeguards since Setzer’s death, including a more heavily guardrailed model for teens.
Becca Branum, deputy director of the Center for Democracy and Technology’s Free Expression Project, called the judge’s First Amendment analysis “pretty thin” — though, since it’s a very preliminary decision, there’s lots of room for future debate. “If we’re thinking about the whole realm of things that could be output by AI, those types of chatbot outputs are themselves quite expressive, [and] also reflect the editorial discretion and protected expression of the model designer,” Branum told The Verge. But “in everyone’s defense, this stuff is really novel,” she added. “These are genuinely tough issues and new ones that courts are going to have to deal with.”