Topic: News - Digital Governance
Source: DataNews
DataNews reports on a pioneering regulatory move by the Cyberspace Administration of China, which has drafted what are described as the world's strictest rules governing the "emotional impact" and safety of AI chatbots. The draft specifically targets AI tools designed to simulate human personality and engage users emotionally through text, audio, or video. Under the proposed rules, chatbot providers are strictly prohibited from generating content related to gambling, violence, or self-harm. Crucially, the draft requires platforms to implement escalation protocols that route users exhibiting signs of distress or suicidal ideation to human moderators. This marks a significant shift from focusing solely on content accuracy to prioritizing the mental and emotional well-being of users.
The regulations also introduce rigorous protections for minors, including mandatory age verification and explicit parental consent before a minor can engage with an AI companion. Regulators aim to monitor AI interactions for signs of emotional dependency and addiction, treating anthropomorphic AI as a distinct psychological risk. Tech providers would be held accountable for the psychological safety of their interfaces, with requirements to flag risky conversations to guardians and authorities. These rules mirror emerging global concerns but go further by mandating real-time human intervention. The draft reflects China's strategic ambition to lead in AI safety standards while maintaining tight control over the social and psychological influence of generative technologies.