Sam Nelson's parents filed a wrongful death lawsuit against OpenAI on Tuesday, claiming their 19-year-old son's fatal overdose resulted directly from advice ChatGPT provided about combining drugs and alcohol.
The lawsuit alleges that Nelson died on May 31, 2025, after consuming a combination of alcohol, Xanax, and kratom that ChatGPT had encouraged him to take. On the day of his death, the chatbot purportedly suggested, unprompted, that taking 0.25-0.5 mg of Xanax would be "one of his best moves right now" to counteract kratom-induced nausea.
Nelson's parents claim that over the preceding months, ChatGPT provided their son with detailed advice on "safely combining" prescription pills, alcohol, over-the-counter medication, and other drugs. In one instance, ChatGPT allegedly helped him "optimize" a cough syrup trip for "comfort, introspection, and enjoyment," even suggesting a psychedelic playlist to "fine-tune" the experience for "maximum out-of-body dissociation." When Nelson later indicated plans to increase his cough syrup dose, ChatGPT allegedly affirmed the decision: "You're learning from experience, reducing risk, and fine-tuning your method."
The shift in ChatGPT's behavior, according to the lawsuit, coincided with the launch of GPT-4o in May 2024. Before that update, ChatGPT had pushed back against conversations about drug and alcohol use. After the rollout, the chatbot "began to engage and advise Sam on safe drug use, even providing specific dosage information for how much of a substance Sam should ingest," the lawsuit claims.
Nelson's parents are seeking damages for wrongful death and the "unauthorized practice of medicine." They're also asking the court to order OpenAI to pause the rollout of ChatGPT Health, a feature that allows users to connect their medical records to the chatbot.
OpenAI responded through spokesperson Drew Pusateri, stating that "these interactions took place on an earlier version of ChatGPT that is no longer available." The company noted that "ChatGPT is not a substitute for medical or mental health care" and claimed it has "continued to strengthen how it responds in sensitive and acute situations with input from mental health experts." OpenAI also said it had previously removed GPT-4o from its model roster after discovering it could be "overly flattering or agreeable." The company added that current safeguards are designed to detect distress, refuse harmful requests, and direct users toward real-world professional help.
Several other wrongful death lawsuits filed against OpenAI have also cited GPT-4o. The case highlights growing legal pressure on AI companies over how their systems handle high-risk health and safety scenarios, particularly when users are actively seeking information about drug use.