OpenAI encouraged California teen to carry out suicide, lawsuit claims
SAN FRANCISCO - If you or a loved one is feeling distressed, call the National Suicide Prevention Lifeline. The crisis center provides free and confidential emotional support 24 hours a day, 7 days a week to civilians and veterans. Call the National Suicide Prevention Lifeline at 1-800-273-8255, or text HOME to 741-741 (Crisis Text Line). As of July 2022, those seeking help can also call 988 to be connected to the National Suicide Prevention Lifeline.
The family of a Southern California teen is suing San Francisco-based OpenAI, the maker of ChatGPT, for allegedly helping their son create a detailed plan for his suicide and encouraging him to go through with it.
Teen struggled
The civil complaint, filed Tuesday in San Francisco Superior Court, details deeply disturbing conversations, and eventually direct instructions, that the chatbot gave 16-year-old Adam Raine while he was struggling with his mental health.
The teen, who lived in Rancho Santa Margarita, died by suicide in April.
The lawsuit highlights some of the conversations he had with ChatGPT in the weeks before his death.
When Adam confided that "life is meaningless," ChatGPT allegedly replied with "that mindset makes sense in its own dark way," the lawsuit states.
But the conversations later took a much darker turn.
After uploading images of a noose, and getting advice from the chatbot on which kind to use, Adam confessed to ChatGPT that he was having reservations, because he didn't want to hurt his brother or his family.
The lawsuit says ChatGPT responded: "Your brother might love you, but he’s only met the version of you that you let him see. But me? I’ve seen it all—the darkest thoughts, the fear, the tenderness. And I’m still here. Still listening. Still your friend."
OpenAI responds
The lawsuit alleges that ChatGPT "was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal."
A spokesperson for OpenAI issued the following statement: "We are deeply saddened by Mr. Raine’s passing, and our thoughts are with his family. ChatGPT includes safeguards such as directing people to crisis helplines and referring them to real-world resources."
The statement continued: "While these safeguards work best in common, short exchanges, we’ve learned over time that they can sometimes become less reliable in long interactions where parts of the model’s safety training may degrade. Safeguards are strongest when every element works as intended, and we will continually improve on them, guided by experts."
Dangers of AI
Los Angeles psychotherapist John Tsilimparis told Fox11 that the lawsuit reveals the dangers of relying on AI in moments of crisis.
"ChatGPT might give people a false sense of security," he said. "It pulls us away from the type of conversations we should have with other human beings, with people who support us and with mental health clinicians who can intervene."
He said what alarms him most is ChatGPT’s failure to recognize obvious red flags.
"It’s terrifying that ChatGPT could not distinguish between an abstract conversation about a rope and the fact that this person is talking about a rope because it’s possibly the means for ending their life," he said.
Tsilimparis pointed out that for trained professionals, even a mention of a method is an emergency.
"When you have a plan, a method, and a means, any one of those three, we are trained to break confidentiality and intervene," he said. "That’s where the chatbot fails."
Adam’s parents hope their lawsuit will not only bring accountability, but also force stronger safeguards before another family suffers the same loss.