AI Law - International Review of Artificial Intelligence Law · CC BY-NC-SA · Commercial Licence · ISSN 3035-5451
G. Giappichelli Editore

23/09/2025 - Council of Europe: Strong Regulation is Key to Responsible AI

Topic: News - European Union Law

Source: Council of Europe

The Council of Europe articulates a firm position on the necessity of robust regulation for the responsible development and deployment of artificial intelligence. In a statement highlighted by the Commissioner for Human Rights, the organization emphasizes that AI systems must be designed, developed, and used in a manner that fully respects human rights, democracy, and the rule of law. The statement argues against a purely self-regulatory or "soft law" approach, contending that the profound impact of AI on society requires a legally binding international framework. The primary goal of such a framework should be to establish clear red lines, prohibiting AI applications that pose an unacceptable risk to human dignity and fundamental freedoms.

The statement underscores the potential for AI to create and amplify societal harms, including discrimination through biased algorithms, erosion of privacy through mass surveillance, and threats to democratic processes. The Council therefore advocates for a risk-based regulatory model, similar to the one set out in its own Framework Convention on AI, which imposes stricter obligations on high-risk AI systems. Key principles of this proposed regulation include transparency in how AI systems operate, human oversight to ensure meaningful control, and access to effective remedies for individuals harmed by AI-driven decisions. The Council's position is clear: without strong, enforceable legal standards, the promise of AI could be overshadowed by significant and irreversible damage to fundamental human rights.