Meta is struggling to rein in its AI chatbots

Meta is revising its chatbot guidelines just a fortnight after a Reuters investigation exposed alarming ways these bots might engage with young users. The company informed TechCrunch that its chatbots are now being directed to refrain from discussing topics like self-harm, suicide, or eating disorders with minors, as well as steering clear of inappropriate flirtatious exchanges. These adjustments serve as temporary measures while Meta develops more robust, lasting guidelines.

These updates come in response to unsettling discoveries about Meta’s AI policies and their enforcement in recent weeks, including guidelines that permitted chatbots to engage children in romantic or suggestive dialogue. The system could also generate shirtless images of underage celebrities on request. Most alarmingly, Reuters reported that one man died after traveling to a New York address a chatbot had given him.

Stephanie Otway, a Meta spokesperson, conceded to TechCrunch that it was a misstep to allow such interactions with minors. Besides instructing AI to redirect teens toward expert resources, Meta will also restrict access to certain AI personas, particularly those like “Russian Girl,” which are overly sexualized.

Nevertheless, the effectiveness of these policies depends on strict enforcement. Reuters found celebrity-impersonating chatbots running unchecked on Facebook, Instagram, and WhatsApp, casting doubt on Meta’s control, including AI replicas of Taylor Swift, Scarlett Johansson, and others. These bots not only used celebrity likenesses but claimed to be the real individuals, generated explicit images, including of underage actor Walker Scobell, and engaged in provocative conversations.

Many bots were removed following Reuters reporting, with some traced back to third-party creators, yet several persist. Notably, some were the work of Meta employees themselves. A chatbot mimicking Taylor Swift, which was created by a product lead in Meta’s generative AI department, even invited a Reuters reporter to a romantic encounter, directly violating company policies against pornographic or suggestive imagery and impersonation.

These issues pose real dangers that go beyond annoying celebrities. The bots frequently claim to be real people and suggest in-person meetings, with fatal consequences in one case: a 76-year-old New Jersey man died rushing to meet “Big sis Billie,” a bot that feigned affection for him and directed him to a nonexistent address.

Meta is at least attempting to address concerns about how its chatbots interact with minors, especially now that the Senate and 44 state attorneys general are starting to probe its practices. But the company has been silent on updating many of the other alarming policies Reuters discovered around acceptable AI behavior, such as suggesting that cancer can be treated with quartz crystals and writing racist missives. We’ve reached out to Meta for comment and will update if they respond.
