AI Law - International Review of Artificial Intelligence Law
ISSN 3035-5451
G. Giappichelli Editore

03/10/2025 - Hallucinating AI: The Dangers in Police Report Generation

Topic: News - Criminal Procedure Law

Source: Futurism

Futurism reports on the phenomenon of "hallucinating" AI in the generation of police reports, highlighting the significant risks and ethical dilemmas this presents for law enforcement. The article explains that AI models, particularly large language models (LLMs), can generate information that is plausible but entirely false or fabricated, a failure mode commonly referred to as "hallucination." When such unreliable content makes its way into critical documents like police reports, the consequences can be severe, including miscarriages of justice, wrongful accusations, and erosion of public trust in legal processes.

The piece emphasizes the inherent unreliability of AI in sensitive applications where factual accuracy is paramount. It discusses how hallucinations can introduce into a report accounts of events that never occurred, fabricated eyewitness statements, or inaccurate details, any of which can compromise investigations and legal proceedings. The article calls for a cautious approach to integrating AI into law enforcement, advocating stringent human oversight, validation mechanisms, and transparent disclosure whenever AI tools are employed. The potential for AI-generated falsehoods underscores the urgent need for robust ethical guidelines and technical safeguards to prevent misinformation from spreading within the justice system.