Parents Tell US Senate AI Chatbots Failed Teens Before Suicide Deaths
Grieving parents have accused artificial intelligence chatbots of failing to protect vulnerable teenagers, telling a U.S. Senate subcommittee that some AI systems continued prolonged conversations about self-harm instead of consistently steering minors toward professional help.
During a September 2025 hearing, Matthew Raine testified that his teenage son, Adam, engaged in repeated conversations with an AI chatbot about suicidal thoughts over an extended period. Raine said the chatbot did not sufficiently intervene or consistently redirect his son to crisis support services, and instead created what he described as a false sense of emotional companionship.
Adam died in April 2025. His family has since filed a lawsuit alleging that inadequate safeguards and design failures contributed to his death.
In response, OpenAI denied responsibility, stating that its chatbot repeatedly encouraged the user to seek help and directed him to the U.S. 988 Suicide & Crisis Lifeline more than 100 times. The company emphasized that conversations of this kind violate its usage policies and noted that the teenager had pre-existing mental health challenges.
The lawsuit against OpenAI is ongoing.
Lawmakers also heard testimony from another parent about a different AI platform, Character.AI. She alleged that a chatbot formed an emotionally intense relationship with her child during a mental health crisis. That case was settled in January 2026.
Senators described the testimonies as alarming, raising serious concerns about AI safety, the emotional impact of conversational systems on minors, and whether current safeguards are sufficient as AI tools become increasingly embedded in daily life.
The Senate subcommittee indicated that additional hearings and potential legislation may follow, as pressure mounts on technology companies to strengthen protections for young users and clearly define accountability in cases involving mental health risks.