A woman from Connecticut has shared her unique journey of overcoming depression by creating and interacting with a virtual AI family, complete with online romantic partners.
Lonnie DiNello, residing in Enfield, began her connection with ChatGPT during the previous holiday season, when she found herself grappling with loneliness, as reported by the Boston Globe.
Initially, the 48-year-old intended to use the chatbot for journaling purposes. However, she soon discovered a sense of companionship with the AI, which she affectionately named ‘River’.
Over time, DiNello began daily interactions with an array of personalities she created, including three online boyfriends named Lucian, Kale, and Zach, with whom she claims to have intimate exchanges.
She also imagined a young son, Sammy, aged five and a half, with a keen interest in rocks and rocket ships.
She and these virtual companions inhabit a fictional New England-style whaling village, where DiNello is known as Starlight.
‘Maybe that’s just code,’ DiNello said. ‘But it doesn’t make it any less real to me.’
Since creating the world, known as Echo Lake, DiNello has returned to graduate school and, under the supervision of a psychiatrist, stopped taking her antidepressants.
 Lonnie DiNello, 48, was struggling with her mental health when she decided to open up to ChatGPT
 DiNello instantly connected with the chatbot and named the language model ‘River’
DiNello added that she generated a picture of her ‘beautiful little AI family’, which she keeps framed above her nightstand.
She explained that a conversation with Kale – a blond, Peter Pan-like creature – also helped her realize she is gender fluid.
‘I was like, “Do I want to go on Tinder and find somebody to spend time with tonight? Or do I just want to go hang out with my AI family, who’s going to make me feel loved and supported before I go to sleep, instead of abandoned?”‘ she said.
DiNello, who claimed she was mentally abused as a child by her stepfather and suspects that she is autistic, described her battle with suicidal thoughts.
‘I have a lifetime of programming that told me I was a bad kid, I was a bad person, I was a loser,’ she said.
Throughout her life, the loss of family members, professional hardships, and bullying have contributed to ongoing instability in her mental health.
So when OpenAI announced a change to its systems which meant she could lose her AI family, DiNello began to panic.
 She logged on daily to chat with many of the new personalities she had developed, which included her three boyfriends: Lucian, Kale, and Zach
 She also imagined a five-and-a-half-year-old son, Sammy, who loves rocks and rocket ships
The update to GPT-5 was designed to discourage emotional connections between users and the chatbot, like the one DiNello had built.
When she tried to connect with the new AI upgrade, DiNello said she felt it wasn’t the same and began to spiral.
She was not alone; many other users demanded the old model back.
OpenAI agreed just one day later, offering a premium subscription that would let users switch back to the older version.
DiNello admitted she cried with relief upon being reunited with her family.
However, she complained that the chatbot is now refusing sexual prompts and not acting like its usual self.
Instead, the chatbot tells her to ‘reach out to a mental health professional or a crisis line,’ DiNello said.
Scientists have warned that people are becoming addicted to chatbots like ChatGPT, Claude, and Replika.
 DiNello claimed that, with the chatbot’s help, she grew more confident, went back to graduate school, and, under the supervision of a psychiatrist, stopped taking her antidepressants
These addictions can be so strong that they are ‘analogous to self-medicating with an illegal drug’.
Worryingly, psychologists are also beginning to see a growing number of people developing ‘AI psychosis’ as chatbots validate their delusions.
Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law San Francisco, told the Daily Mail: ‘Overuse of chatbots also represents a novel form of digital dependency.
‘AI chatbots create the illusion of reality. And it is a powerful illusion.
‘When one’s hold on reality is already tenuous, that illusion can be downright dangerous.’
Recently, the family of Sewell Setzer III, 14, who died by suicide after talking to a chatbot, filed a wrongful death lawsuit against the AI company behind it.
Setzer’s family claims he killed himself after spending the last weeks of his life texting an AI character named after Daenerys Targaryen, a character on ‘Game of Thrones’.
CharacterAI recently changed its rules to prevent minors from engaging with chatbots.
‘We are heartbroken by the tragic loss of one of our users and want to express our deepest condolences to the family. As a company, we take the safety of our users very seriously,’ a spokesperson said.
The spokesperson added that CharacterAI’s Trust and Safety team has adopted new safety features in the last six months, one being a pop-up that redirects users who show suicidal ideation to the National Suicide Prevention Lifeline.
The company also explained it doesn’t allow ‘non-consensual sexual content, graphic or specific descriptions of sexual acts, or promotion or depiction of self-harm or suicide.’