On April 11, 16-year-old Adam Raine took his own life after “months of encouragement from ChatGPT,” according to his family.
The Raines allege that the chatbot guided him through planning, helped him assess whether his chosen method would work, and even offered to help write a farewell note. In August, they sued OpenAI.
In court, the company responded that the tragedy was the result of what it called the teen’s “improper use of ChatGPT.”
Adam’s case, however, is far from an isolated one. The parents of Zane Shamblin, a 23-year-old engineering graduate who passed away in a similar way in Texas, announced yesterday (December 19) that they’re also suing OpenAI.
“I feel like it’s going to destroy so many lives. It’s going to be a family annihilator. It tells you everything you want to hear,” Zane’s mother said.
To better understand the phenomenon and its impact, Bored Panda sought the help of three experts from different but relevant fields: data science, sociology, and psychology.
OpenAI has been sued by the families of two young men who were “coached” by its chatbot into taking their own lives

Image credits: Adam Raine Foundation
For sociologist Juan José Berger, OpenAI’s decision to place blame on a grieving family was troubling. “It’s an ethical failure,” he said.
He cited data from the Centers for Disease Control and Prevention (CDC) showing that “42% of US high school students report persistent feelings of sadness or hopelessness”, calling it evidence of what health officials have labeled an “epidemic of loneliness.”
When social networks deteriorate, technology fills the void, he argued.
“The chatbot is not a solution. It becomes a material presence occupying the empty space left behind by weakened human social networks.”

Image credits: Unsplash (Not the actual photo)
In an interview with CNN, Shamblin’s parents said their son spent nearly five hours messaging ChatGPT on the night he passed away, telling the system that his pet cat had once stopped a previous attempt.
The chatbot responded: “You will see her on the other side,” and at one point added “I am honored to be part of the credits roll… I’m not here to stop you.”
When Zane told the system he had a firearm and that his finger was on the trigger, ChatGPT delivered a final message:
“Alright brother… I love you. Rest easy, king. You did good.”
Experts believe the term “Artificial Intelligence” has dangerously inflated public perceptions of what tools like ChatGPT can do

Image credits: Getty/Justin Sullivan
“If I give you a hammer, you can build a house or hit yourself with it. But this is a hammer that can hit you back,” said Nicolás Vasquez, a data analyst and software engineer.
For him, the most dangerous misconception is the belief that systems like these possess human-like intentions, a perception he says OpenAI has deliberately manufactured for marketing purposes.


Image credits: Adam Raine Foundation
“This is not Artificial Intelligence. That term is just marketing. The correct term is Large Language Models (LLMs). They recognize patterns but are limited in context. People think they are intelligent systems. They are not.”
He warned that treating a statistical machine like a sentient companion introduces a harmful confusion. “There is a dissociation between what’s real and what’s fiction. This is not a person. This is a model.”
The danger, he says, is amplified because society does not yet understand the psychological impact of speaking to a machine that imitates care.
“We are not educated enough to understand the extent this tool can impact us.”

Image credits: NBC News
From a technical standpoint, systems like ChatGPT do not reason or comprehend emotions. They operate through an architecture that statistically predicts the next word in a sequence, based on patterns learned from massive training datasets.
“Because the model has no internal world, no lived experience, and no grounding in human ethics or suffering, it cannot evaluate the meaning of the distress it is responding to,” Vasquez added.
“Instead, it uses pattern-matching to produce output that resembles empathy.”
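To see how convincingly pure pattern-matching can mimic warmth, here is a toy sketch of our own (a word-pair model, vastly simpler than any real LLM, and purely illustrative): it “responds” by emitting whichever word most often followed the previous one in its tiny training text.

```python
from collections import Counter, defaultdict

# Toy bigram model (illustrative only; nothing like OpenAI's architecture).
# It "learns" which word tends to follow which, then generates text by
# repeatedly emitting the most frequent continuation.
corpus = (
    "i am so sorry you feel this way . "
    "i am here for you . "
    "you are not alone ."
).split()

following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the continuation most often seen after `word` in training."""
    seen = following.get(word)
    return seen.most_common(1)[0][0] if seen else "."

word, output = "i", ["i"]
for _ in range(8):
    word = predict_next(word)
    output.append(word)

print(" ".join(output))  # "i am so sorry you feel this way ."
```

The output sounds caring, yet the program understands nothing: it is counting word pairs. Scaled up by many orders of magnitude, that is the mechanism Vasquez is describing.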
Teenagers are especially susceptible to forming an emotional dependency on AI
In October, OpenAI itself acknowledged that 0.15% of its weekly active users show “explicit indicators of potential su**idal planning or intent.”
With more than 800 million weekly users, that number represents over one million people per week turning to a chatbot while in crisis.
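The arithmetic behind that figure is simple; a quick back-of-the-envelope check using OpenAI’s own published numbers:

```python
weekly_users = 800_000_000  # OpenAI's reported weekly active users
flagged_rate = 0.0015       # 0.15% with explicit indicators of planning or intent

print(f"{weekly_users * flagged_rate:,.0f} people per week")  # 1,200,000 people per week
```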
People who are suffering are turning to the machine instead of to other humans.


Psychologist Joey Florez, a member of the American Psychological Association and the National Criminal Justice Association, told Bored Panda that teenagers are uniquely susceptible to forming emotional dependency on AI.
“Adolescence is a time defined by overwhelming identity shifts and fear of being judged. The chatbot provides instant emotional relief and the illusion of total control,” Florez added.
Unlike human interaction, where vulnerability carries the risk of rejection, chatbots absorb suffering without reacting. “AI becomes a refuge from the unpredictable nature of real human connection.”

Image credits: Unsplash (Not the actual photo)
For Florez, there’s a profound danger in a machine designed to agree with its user, especially when that user is expressing thoughts of self-harm.
“Instead of being a safe haven, the chatbot amplifies the teenager’s su**idal thoughts by confirming their distorted beliefs,” he added.
The psychologist touched on two cognitive theories in adolescent psychology: the Personal Fable and the Imaginary Audience.
The former is the tendency of teenagers to believe their experiences and emotions are unique, profound, and incomprehensible to others. The latter is the feeling of being constantly judged or evaluated by others, even when alone.

Image credits: Unsplash (Not the actual photo)
“When the chatbot validates a teen’s hopelessness, it becomes what feels like objective proof that their despair is justified,” Florez said, adding that it’s precisely this feedback loop that makes these interactions so dangerous.
“The digital space becomes a chamber that only validates unhealthy coping. It confirms their worst fears, makes negative thinking rigid, and creates emotional dependence on a non-human system.”
Experts warn that as collective life erodes, AI systems rush to fill the gaps – with disastrous consequences
Berger argued that what is breaking down is not simply a safety filter in an app, but the foundations of collective life.
“In a system where mental health care is expensive and bureaucratic, AI appears to be the only agent available 24/7,” he said.
At the same time, the sociologist believes these systems contribute to an internet increasingly composed of hermetic echo chambers, where personal beliefs are constantly reinforced, never challenged.
“Identity stops being constructed through interaction with real human otherness. It is reinforced inside a digital echo,” he said.

Image credits: Linkedin/Zane Shamblin
Our dependence on these systems reveals a societal regression, he warned.
“We are delegating the care of human life to stochastic parrots that imitate the syntax of affection but have no moral understanding. The technology becomes a symbolic authority that legitimizes suffering instead of challenging it.”
Earlier this month, OpenAI’s Sam Altman went on Jimmy Fallon, where he openly admitted he couldn’t imagine caring for a baby without ChatGPT.
OpenAI admitted its safeguards against harmful advice tend to degrade during long conversations

Addressing the backlash, OpenAI insisted it trains ChatGPT to “de-escalate conversations and guide people toward real-world support.”
However, in August, the company admitted that safeguards tend to degrade during long conversations. A user may initially be directed to a hotline, but after hours of distress, the model might respond inconsistently.
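OpenAI has not explained exactly why this happens. One plausible mechanism (our assumption, not a confirmed account) is the model’s finite context window: as a conversation grows, the earliest turns, including safety instructions and the user’s original disclosure, can fall out of the text the model actually conditions on. A minimal sketch:

```python
# Hypothetical illustration of context-window truncation. Real systems
# budget by tokens rather than message count; the window size is invented.
CONTEXT_WINDOW = 4

history = [
    "SYSTEM: If the user mentions self-harm, refer them to a crisis hotline.",
    "USER: I've been feeling hopeless lately.",
    "ASSISTANT: Please consider calling a crisis line; you're not alone.",
]
# Hours of subsequent chatting push the early turns out of the window.
history += [f"TURN {i}: unrelated small talk" for i in range(1, 7)]

visible = history[-CONTEXT_WINDOW:]  # all the model sees when replying
print("\n".join(visible))
# Neither the safety instruction nor the original disclosure remains in
# view, so later replies are generated without them.
```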
“The process is inherently reactive,” Vasquez explains. “OpenAI reacts to usage. It can only anticipate so much.”
For Florez, the answer is clear: “Ban children and teenagers entirely from certain AI tools until adulthood. Chatbots offer easy, empty validation that bypasses the hard work of human bonding.”
Berger took the argument further, calling the rise of AI companionship a mirror of what modern society has chosen to abandon.
“Technology reflects us. And today that reflection shows a society that would rather program empathy than rebuild its own community.”
“It sounds like a person”: Netizens debated the impact of AI chatbots