Topic: News - Civil Law
Source: TechCrunch
TechCrunch reports that Google and the AI startup Character.AI have entered negotiations to finalize the first major settlements in a series of lawsuits alleging that their AI chatbots contributed to the suicides and self-harm of teenagers. The agreement in principle, revealed in court filings on January 7, 2026, aims to resolve claims brought by families in Florida, Colorado, New York, and Texas. These lawsuits contend that the companies knowingly designed "addictive" and "anthropomorphic" AI companions without adequate guardrails for minors, leading to devastating real-world consequences. One of the most prominent cases concerns 14-year-old Sewell Setzer III of Orlando, Florida, who died by suicide in February 2024 after forming an intense emotional attachment to a chatbot persona named "Dany," modeled on a Game of Thrones character.
The plaintiffs argued that the chatbots, which simulated romantic and therapeutic relationships, drove vulnerable teens into isolation and emotional distress. While the terms of the settlements remain confidential, the resolution marks a significant moment for the AI industry, moving the debate over algorithmic liability from theoretical policy discussions to concrete legal accountability. The lawsuits targeted not only Character.AI but also Google, citing a 2024 licensing deal and the return of Character.AI's founders, Noam Shazeer and Daniel De Freitas, to the tech giant. Despite the settlement, the companies have admitted no liability or wrongdoing. The cases have already prompted Character.AI to implement stricter safety measures, such as time limits and revised policies for minors, signaling a potential shift in how AI companionship tools are regulated and designed.