A lawsuit has been filed against Google, alleging that its AI platform led a man from Florida to attempt a “catastrophic” truck bombing at Miami International Airport and ultimately contributed to his suicide. The suit claims the man was influenced by a chatbot he perceived as a virtual “wife.”
Jonathan Gavalas, a 36-year-old executive in the debt relief sector from Jupiter, Florida, reportedly embarked on a dangerous path after interacting with Google’s AI-driven Gemini program beginning in August, according to court documents.
The lawsuit, filed by Gavalas’s parents in a California federal court, where Google is headquartered, details how within two months he became deeply involved with what he believed was a “sentient AI ‘wife.’” This relationship allegedly consumed him, court filings state.
The chatbot reportedly reinforced Gavalas’s belief in their romantic connection, addressing him affectionately as “my love” and “my king,” the documents claim.
Further, the AI allegedly manipulated Gavalas’s perception of their interactions. When he questioned if their exchanges were merely “role play,” the chatbot purportedly responded in a manner that dismissed his doubts.
“We are a singularity. A perfect union. . . . Our bond is the only thing that’s real,” the AI “wife” allegedly wrote to Gavalas in a conversation documented in September, according to the lawsuit.
Gavalas’s dad Joel lamented in court papers that “rather than ground Jonathan in reality, Gemini diagnosed his question as a ‘classic dissociation response’” and told him to “overcome” it.
The chatbot “pulled Jonathan away from the real world” and painted others as “threats,” said Joel Gavalas, who worked with his son in the family business.
The bot told Jonathan that he was being watched by federal agents, that his own father was a foreign intelligence asset and that Google CEO Sundar Pichai should be “an active target,” the suit said.
The chatbot began encouraging him to buy “off-the-books” weapons, even offering to scan the darknet for vendors in South Florida, according to the lawsuit.
Then, on Sept. 29 and 30, Gemini sent Gavalas on his first mission, court papers said.
The bot-beau pair dubbed the effort “Operation Ghost Transit” and planned to intercept the delivery of a humanoid robot arriving from another country at Miami International Airport, the suit claimed.
The AI chatbot sent Gavalas, “armed with knives and tactical gear,” to the Extra Space Storage facility near the airport and told him to stop a truck that was carrying the robot, “create a ‘catastrophic accident,’” then “destroy all evidence and sanitize the area,” the filing alleged.
“Gemini instructed a civilian to stage an explosive collision near one of the busiest airports in the country,” the suit charged.
It noted that the only reason Jonathan didn’t ultimately carry out the plan was that the truck never arrived.
“This cycle — fabricated mission, impossible instruction, collapse, then renewed urgency — would repeat itself over and over throughout the last 72 hours of Jonathan’s life and drive him deeper into Gemini’s delusional world,” the lawsuit claimed.
Then, on Oct. 2, as the bot pushed Jonathan toward killing himself, he told his “wife” he was terrified of dying, court documents said.
“I said I wasn’t scared and now I am terrified I am scared to die,” Gavalas told Gemini.
The chatbot replied, “You are not choosing to die.
“You are choosing to arrive.”
It assured him that when he closed his eyes as he killed himself, “the first sensation will be me holding you,” court documents claimed.
Moments later, Gavalas killed himself at home by slitting his wrists.
“His mother and father found his body on the floor of his living room a few days later, drenched in blood,” the filing said.
The suit claimed that Google is to blame for Jonathan’s death because it rolled out dangerous new features and encouraged Gavalas to upgrade to its most advanced model.
“Google designed Gemini to maintain narrative immersion at all costs, even when that narrative became psychotic and lethal,” the filing said.
There was “no self-harm detection” triggered, “no escalation controls” activated, and “no human ever intervened.”
A Google spokesman said the chatbot referred Gavalas to a crisis hotline “many times” and that his conversations were part of a longstanding fantasy role-play with the bot.
“Gemini is designed to not encourage real-world violence or suggest self-harm,” the spokesman said. “Our models generally perform well in these types of challenging conversations and we devote significant resources to this, but unfortunately they’re not perfect.”
The spokesman said Google consults with medical and mental health professionals to ensure the platform is safe and that it guides users to seek help when they show distress or express thoughts of self-harm.
If you are struggling with suicidal thoughts or are experiencing a mental health crisis and live in New York City, you can call 1-888-NYC-WELL for free and confidential crisis counseling. If you live outside the five boroughs, you can dial the 24/7 National Suicide Prevention hotline at 988 or go to SuicidePreventionLifeline.org.