argument: News - Civil Law
Source: The Regulatory Review
The Regulatory Review publishes an analysis by James Andrews on December 22, 2025, discussing the pivotal case of Starbuck v. Google, filed in October 2025. The lawsuit arose after Google's AI chatbot generated false accusations against social media activist Robby Starbuck, including fabricated criminal records. The article argues that current common law defamation frameworks are ill-suited for AI, as Google successfully employed defenses arguing a lack of "publication" (since users trigger the output) and "actual malice" (as AI hallucinations are unintentional system artifacts).
Andrews suggests that the legal system should look to the Fair Credit Reporting Act (FCRA) of 1970 for a viable liability model. Just as credit bureaus are held responsible for the accuracy of the data they report regardless of intent, AI developers should be responsible for verifying the information their systems output. The article highlights that modern AI systems, much like pre-FCRA credit reports, often aggregate unverified data from dispersed sources, causing harm without clear accountability.
The piece concludes that reliance on traditional tort law is insufficient because it focuses on human intent rather than on systemic negligence in data handling. A shift toward a statutory duty of care, with source-disclosure and accuracy standards similar to those governing the credit reporting industry, is presented as a necessary evolution in AI governance to protect individuals from algorithmic defamation.