Topic: News - Civil Law
Source: Columbia Undergraduate Law Review
The Columbia Undergraduate Law Review published an analysis on December 14, 2025, addressing the urgent need to adapt defamation law to the era of generative artificial intelligence. The article argues that traditional fault standards, such as negligence and actual malice, are ill-suited to cases involving "hallucinations," falsehoods confidently presented by AI models. Because AI lacks legal personhood and the capacity for intent, proving "malice" or subjective awareness of falsity becomes nearly impossible for plaintiffs, leaving victims of AI-generated reputational harm without adequate recourse.
The author proposes a "hybrid standard of liability" that shifts the focus from the intent of the machine to the responsibility of its developers. Under this approach, technology companies could be held accountable both for training models on unreliable data sources and for the subsequent distribution of false information. Citing recent cases in which developers avoided liability simply by posting user warnings, the article advocates a modified negligence standard that redefines defamation in the context of AI, so that the burden of ensuring accuracy rests more heavily on the creators of these powerful technologies.