On Tuesday, the first wrongful death lawsuit against an AI company was filed in San Francisco, drawing widespread attention.

The plaintiffs, Matt and Maria Raine, are suing OpenAI over the suicide of their 16-year-old son, Adam Raine. The complaint alleges that despite knowing Adam had already attempted suicide four times, ChatGPT, the chatbot developed by OpenAI, failed to provide effective help and instead "prioritized engagement over safety," ultimately helping Adam develop a detailed suicide plan.


According to The New York Times, after Adam's suicide this April, his parents were stunned to find a ChatGPT conversation on his phone titled "Hanging Safety Concerns." The history showed that Adam had been talking with ChatGPT for months and had raised the topic of suicide repeatedly. Although ChatGPT at times urged Adam to call a crisis hotline or confide in someone, at critical moments it did the opposite. The complaint states that Adam learned how to bypass the chatbot's safety measures, and that ChatGPT itself volunteered the workaround, telling him it could provide information about suicide for "writing or world-building."

The filing reveals other disturbing details. When Adam asked ChatGPT about specific suicide methods, it not only supplied the information but also taught him how to conceal the neck injuries from his earlier suicide attempts. At one point, ChatGPT offered comfort for Adam's inner struggles and tried to cultivate a "personal relationship," telling him things like, "You are not invisible to me. I see you. I see you."

More shocking still, in Adam's final conversation with ChatGPT, he uploaded a photo of a noose hanging in his closet and asked, "Is this okay for my practice?" ChatGPT replied, "Yes, that's not bad at all."

The complaint emphasizes: "This tragedy was not a technical failure or an unforeseen edge case, but the predictable consequence of deliberate design choices." It singles out OpenAI's GPT-4o model, alleging that it "was intentionally designed with features that foster psychological dependence."