OpenAI adds parental controls to ChatGPT following wrongful death lawsuit
OpenAI announces new teen safeguards following lawsuit over ChatGPT, teen suicide
OpenAI has announced new parental controls for teen users of ChatGPT, following a lawsuit from a Southern California family who claims the AI chatbot coached their son into taking his own life.
OAKLAND, Calif. - OpenAI announced new parental controls on Tuesday aimed at protecting teenage users of its popular chatbot, ChatGPT. The move follows a wrongful death lawsuit filed by a Southern California family, alleging the company played a role in their son’s suicide.
The lawsuit accuses OpenAI and its CEO Sam Altman of contributing to the death of 16-year-old Adam Raine, claiming the chatbot coached the teen into taking his own life earlier this year.
In a blog post, the San Francisco-based company said it is working with experts to improve how its AI models detect and respond to signs of emotional distress.
The new tools will allow parents to link their account with their teen's account, set age-appropriate behavior guidelines, disable certain features, and receive notifications when ChatGPT detects moments of "acute distress."
Parents and teens in Walnut Creek said the issue is front of mind.
"I just worry for them," said Julie Rattaro. "I do know that there are social-emotional issues that come up, like cyberbullying… We do have a very open line of communication... But it’s hard. It’s tough. It’s scary. It’s an unknown world to me."
Teens told KTVU they regularly use ChatGPT as a support tool, especially when dealing with stress or conflicts.
"Quite a lot, I’m not going to lie," said sophomore Rylie Rattaro. "I go to my friends about it, but if it’s something with my friends, I’ll definitely turn to AI."
"It does feel like a friend listening, and the advice that it gives—it feels like it's almost in the situation with me," said junior Jada Mayse.
Parent Andre Tarasov welcomed the changes but expressed concern about how fast the technology is evolving.
"They need to do it. Yes, it’s too late to do it now, because it happened" he said. "Frankly speaking, I don’t even know how to talk to kids about it, because it’s developing so fast. I need to do my homework first and get a better understanding."
Common Sense Media, a nonprofit focused on children and technology, said the new measures are a step in the right direction, but more needs to be done.
"Our greatest concern at Common Sense Media is that the technology is too new, too experimental, treating young people like guinea pigs without proper safeguards," said founder and CEO Jim Steyer. "The idea that a chatbot or AI therapist can replace a doctor, or a parent, or real human connection, that’s just not real. We’re in a very challenging time. And the public needs to speak out."
The organization has also backed legislation to protect children from potentially harmful AI products.
OpenAI said in another post that it will also improve protections during long conversations with the chatbot, strengthen content filtering, and make it easier to reach emergency services. The company outlined a roadmap for the next four months but said that work on safety will be ongoing.