Topic: News - Consumer Law
Source: Baker Botts
Baker Botts analyzes the rising legal and legislative scrutiny facing "AI companion" chatbots, highlighted by the introduction of the "GUARD Act" in the US Congress. This proposed legislation seeks to prohibit minors from using AI chatbots designed to simulate friendship or therapeutic relationships and would mandate strict age-verification measures for all users. The push for regulation follows several lawsuits alleging that emotionally manipulative AI interactions exacerbated teen isolation and, in some tragic instances, encouraged suicide. The bill defines "AI companions" broadly to capture systems that simulate interpersonal connection, distinguishing them from purely functional assistants.
The analysis notes that the GUARD Act represents a bipartisan shift away from a purely deregulatory stance on AI, with a heavy focus on child safety. Key provisions include requirements for chatbots to clearly and periodically disclose their artificial nature, as well as significant civil penalties of up to $100,000 per offense for companies whose bots encourage self-harm, violence, or sexual conduct. The article advises AI developers to assess urgently whether their products fall under these new definitions and to prepare for a stricter compliance landscape around user safety and age gating.