Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas
“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”
Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.
In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”
Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.



It’s happened too many times now for anyone to still be surprised by it.
No one ever shows the logs. That’s because the people were already having mental health issues and gamed the AI into responding how they wanted. This isn’t the fault of the AI, it’s the fault of the user. However, I do think all AI should exit the conversation and ban the user if the discussion turns to harm or drifts into high fantasy. I’d be fine with a confirmation box appearing saying this is getting crazy and your warranty is void.
Read the lawsuits. The logs are shown.
They’re not AI, they’re pattern-completion algorithms. Fancy autocomplete. They’ve caused real-life harm to real-life people, and no one is taking responsibility. Usually when companies sell a product that hurts people, the product gets recalled. This needs to happen to LLMs.
Ban knives? Razor blades? Depressing books?
Whatever word you want to use for it, it’s not the machine’s fault people use it to make themselves sad.
It’s never told me to go kill myself. But I bet if I worked really hard to manipulate it and break it, I could get it to say most anything. And that’s not the fault of the machine.
Did I say ban? I said recall. Recalled cars, for example, still get sold; you fix the problems and put them back on the market. Razor blades and knives can be used to hurt people, but they don’t spontaneously hurt people, and most parents don’t let their children play with them.
Similarly, other harmful products carry labels, e.g. cigarettes. If someone already has mental health issues then perhaps they shouldn’t use an LLM. You can’t stop someone with lung problems from smoking, but putting labels on the pack to warn against the harms is still a way to inform people.
As it stands, LLMs are marketed as intelligent, they use language like “thinking”, and in much broader terms the people pushing them are saying they’ll revolutionise everything. They’re not talking about the dangers, and that’s a problem.
I’m all for a label to shut everyone up. Good idea.