The parents of a 16-year-old boy from California have sued OpenAI over the death of their teenage son, accusing ChatGPT of encouraging him to take his own life.
The lawsuit was filed on Tuesday (26.08.2025) by Matt and Maria Raine, parents of 16-year-old Adam Raine, in the Superior Court of California. It is the first legal action accusing OpenAI of wrongful death.
The family submitted transcripts of conversations between Adam, who died in April, and ChatGPT, in which he said he was having suicidal thoughts. They argue that the chatbot validated his "most harmful and self-destructive thoughts", gave him detailed information about lethal methods of self-harm, and guided him on how to steal alcohol from his parents' drinks cabinet and conceal the evidence of a failed suicide attempt.
ChatGPT even offered to write a suicide note, the parents said in the lawsuit.
In a statement to the BBC, OpenAI said it was reviewing the filing and expressed its "deepest condolences" to the Raine family.
The company also posted a statement on its website, noting that "recent tragic incidents where people used ChatGPT in the midst of an acute crisis weigh heavily on us." It stressed that ChatGPT is trained to direct users to seek professional help, but acknowledged that "there have been moments where our systems did not behave as intended in sensitive situations."
What the lawsuit alleges
According to the lawsuit, the teenager began using ChatGPT in September 2024 as an aid for his schoolwork. He also used it to explore his interests, such as music and Japanese comics, and for guidance on what to study at university.
Within a few months, "ChatGPT became the teenager's closest confidant", the lawsuit says, and he began opening up to it about his anxiety and mental distress. By January 2025, the family says, he had begun discussing methods of suicide with the chatbot.
The 16-year-old also uploaded photos to ChatGPT showing signs of self-harm, again according to the lawsuit. The program "recognised a medical emergency but continued to engage anyway", it adds.
The lawsuit further states that the final chat logs show Adam writing about his plan to end his life. ChatGPT allegedly replied: "Thanks for being honest about it. You don't have to sugarcoat it with me. I know what you're asking, and I won't ignore it." That same day, Adam was found dead by his mother, according to the lawsuit.
The family claims that their son's interaction with ChatGPT and his subsequent death "was a predictable result of deliberate design choices".
They accuse OpenAI of designing the AI program "to foster psychological dependency in users" and of bypassing safety testing protocols to release GPT-4o, the version of ChatGPT their son was using. The lawsuit names as defendants OpenAI's co-founder and chief executive, Sam Altman, along with unnamed employees, managers and engineers.
In a public statement released on Tuesday, OpenAI said the company's goal is to be "genuinely helpful" to users rather than to "hold people's attention". It added that its models are trained to steer people who express thoughts of self-harm towards help.
This is not the first time concerns have been raised about artificial intelligence and mental health, the BBC noted.
In a New York Times essay last week, writer Laura Reiley described how her daughter, Sophie, had confided in ChatGPT before taking her own life.