Topic: News - Criminal Law
Source: Euronews
Euronews reports on a disturbing new study, released on December 14, 2025, which uncovered thousands of AI-generated videos containing sexualized depictions of minors circulating on TikTok. The investigation found that users are exploiting advanced generative AI tools to bypass content moderation filters and create "photorealistic" synthetic child sexual abuse material (CSAM). The report also indicates that the platform's recommendation algorithms have inadvertently amplified the reach of this illegal content, prompting immediate outrage from child safety advocates and European regulators.
The article notes that the European Commission is launching an urgent inquiry into TikTok's compliance with the Digital Services Act (DSA), specifically regarding the protection of minors and the mitigation of systemic risks. Legal experts cited in the piece argue that the distinction between "real" and "synthetic" CSAM is irrelevant under EU law, since the harm (the normalization of abuse and the psychological distress it causes) remains significant either way. The scandal is expected to accelerate the enforcement of stricter age verification and content provenance rules across the European Union.