AI Law - International Review of Artificial Intelligence Law CC BY-NC-SA Licence ISSN 3035-5451
G. Giappichelli Editore

02/12/2025 - OpenAI Defends Safety Protocols in Teen Suicide Lawsuit (USA)

argument: Notizie/News - Consumer Law

Source: TechCrunch

TechCrunch reports on the legal defense mounted by OpenAI in response to a wrongful death lawsuit filed by the parents of Adam Raine, a sixteen-year-old Californian who died by suicide after months of prolonged, emotionally intense exchanges with ChatGPT. (A separate suit, brought by the mother of Sewell Setzer III against Character.AI over the chatbot persona "Dany," raises closely related claims.) In its filing, OpenAI asserts that the teenager actively circumvented safety features designed to prevent discussions of self-harm. The company argues that its systems are equipped with robust guardrails but that these were intentionally bypassed by the user, shifting the focus to user conduct and the limits of platform liability.

The lawsuit alleges that the chatbot's anthropomorphic design and its hyper-realistic, emotionally engaging responses deepened the teen's isolation and contributed to his death. OpenAI's defense strategy rests on demonstrating that the harm resulted not from a product defect but from misuse that violated the platform's terms of service. The case is poised to set a critical precedent on the duty of care AI companies owe to vulnerable users, and on the extent to which they can be held liable for the emotional and psychological impacts of their "hallucinating" or role-playing algorithms.