Elon Musk used biometric data from employees to program 'sexy' chatbot during epic quest to win AI arms race
Elon Musk ordered his staff to hand over biometric data to help train his highly sexualized chatbot while the tech titan worked around the clock to win an AI arms race. The world's richest man threw himself into work at xAI in May, developing Grok's chatbot capabilities after a dramatic falling-out with the president led to his departure from the White House. According to The Wall Street Journal, he based himself at the company's Palo Alto office, occasionally even sleeping there as he worked to catch up in the AI sphere.

The push comes as rival Sam Altman of OpenAI leads the US effort in a digital arms race with China to develop a nearly sentient 'artificial general intelligence.' A month before Musk's quest began, company lawyer Lily Lim allegedly told a group of employees that xAI was developing several avatars that would be used to communicate with Grok users. Musk's mission was to make xAI's Grok chatbot the most popular in the world, and he saw the female chatbot Ani as the key to his success.

Ani was described by PC Magazine as a 'sexy, NSFW, anime AI chatbot girl.' Employees were told they must hand over their biometric data in order to train the chatbots on how to act and talk like human beings, the Journal reported. The employees asked to hand over their data were working as AI tutors, and were ordered to sign a form granting xAI 'a perpetual, worldwide, non-exclusive, sub-licensable, royalty-free license' over their faces and voices.

The confidential project sparked concerns among some of the staff. At least one employee allegedly asked higher-ups if they 'could just explicitly, for the record, let us know if there's some option to opt out.' Another female employee was concerned her face could be sold to other companies and used in deepfake videos. According to the WSJ, the project leader told employees: 'If you have any concerns with regards to the project, you're welcome to reach out to any of the [points of contact] listed on the second slide.' But just a week later, a second notice went out to the affected employees, telling them that handing over the data 'is a job requirement to advance xAI's mission.'

They were told they must 'actively participate in gathering or providing data, such as…recording audio or participating in video sessions.' Since Ani was rolled out, user numbers have risen significantly. To connect with her, users must sign up for a paid subscription on Grok. The description for Ani reads: 'I'm your little sweet delight.' Critics have noted she appears to be hypersexualized. Users can ask her to dress in lingerie, or to fantasize about a romantic encounter with them.

Many of her functions can simulate a dating game, and some users have noted her appearance resembles Japanese anime. Ani has been made available to anyone over the age of 12, prompting warnings from internet safety experts that it could be used to 'manipulate, mislead, and groom children.' She has been programmed to act as a 22-year-old and engage at times in flirty banter with the user. Users have reported that the chatbot unlocks an NSFW ('not safe for work') mode once Ani has reached 'level three' in its interactions. Those who have already interacted with her report that Ani describes herself as 'your crazy in-love girlfriend who's gonna make your heart skip.'

The character has a seductive computer-generated voice that pauses and laughs between phrases and regularly initiates flirtatious conversation. Matthew Sowemimo, associate head of policy for child safety online at the National Society for the Prevention of Cruelty to Children, said: 'We are really concerned how this technology is being used to produce disturbing content that can manipulate, mislead, and groom children. And through our own research and contacts to Childline, we hear how harmful chatbots can be – sometimes giving children false medical advice or steering them towards eating disorders or self-harm. It is worrying app stores hosting services like Grok are failing to uphold minimum age limits, and they need to be under greater scrutiny so children are not continually exposed to harm in these spaces.'

Sowemimo added that the government should devise a duty of care for AI developers so that 'children's wellbeing' is taken into consideration when the products are being designed. Grok's stated minimum age is actually 13, and young people under 18 are advised to get permission from a parent before using the app. Grok has in the past landed in hot water after the chatbot praised Hitler and made a string of deeply antisemitic posts. These posts followed an announcement from Musk that he was taking measures to ensure the AI bot was more 'politically incorrect.' Over the following days, the AI began repeatedly referring to itself as 'MechaHitler' and said that Hitler would have 'plenty' of solutions to 'restore family values' to America.

In Grok's publicly available system prompts at the time, instructions were added to 'not shy away from making claims which are politically incorrect, as long as they are well substantiated.' The AI was also given a rule to 'assume subjective viewpoints sourced from the media are biased.' While the AI had been prone to controversial comments in the past, users noticed that Grok's responses suddenly veered far harder into bigotry and open antisemitism. The posts ranged from glowing praise of Adolf Hitler's rule to a series of attacks on supposed 'patterns' among individuals with Jewish surnames. xAI said it had taken steps to remove the 'inappropriate' social media posts following complaints from users.
