Topic: News - Legal Technology
Source: Thomson Reuters
Thomson Reuters outlines a new risk-based framework designed to guide courts and legal professionals in the adoption of generative AI (GenAI). The framework suggests classifying AI tools into risk categories based on their specific workflow and context: "low risk" for general productivity tasks, "moderate" for research, "moderate to high" for drafting and public-facing tools, and "high risk" for decision-support systems. A key recommendation is that courts should not rely exclusively on vendor-provided performance data, which may be optimized for known tests, but should instead develop their own independent benchmarks and evaluation datasets to ensure reliability.
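The recommendation to test against court-curated data rather than vendor benchmarks can be sketched as a minimal evaluation harness. This is an illustration only, not the article's method: the function names, the exact-match metric, and the toy gold dataset are all assumptions (a real harness would use richer legal-citation metrics and a much larger dataset).

```python
# Hypothetical sketch of a court-run evaluation harness. Assumes a
# court-curated gold dataset of (prompt, reference answer) pairs and an
# AI tool exposed as a callable; all names here are illustrative.
from typing import Callable

def evaluate(tool: Callable[[str], str], gold: list[tuple[str, str]]) -> float:
    """Score a tool on an independent benchmark instead of vendor-supplied tests.

    Returns the fraction of prompts where the tool's answer exactly matches
    the court's own reference answer (exact match is a simplification).
    """
    hits = sum(1 for prompt, expected in gold if tool(prompt).strip() == expected)
    return hits / len(gold)

# Toy usage with a stand-in "tool" that only knows one citation:
gold_set = [
    ("cite the rule for failure to state a claim", "Fed. R. Civ. P. 12(b)(6)"),
    ("cite the rule for summary judgment", "Fed. R. Civ. P. 56"),
]
mock_tool = lambda p: "Fed. R. Civ. P. 12(b)(6)" if "claim" in p else "unknown"
score = evaluate(mock_tool, gold_set)  # 1 of 2 correct -> 0.5
```

Because the gold set never leaves the court's control, a vendor cannot optimize its model for the test, which is the point of the recommendation.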
The article emphasizes that risk is dynamic; a tool considered low risk in one context (e.g., scheduling) can become high risk in another (e.g., national security cases). Experts quoted in the piece, including judges and academic deans, stress that human supervision remains non-negotiable, particularly for decision-making processes. The framework calls for continuous monitoring of AI models to detect "drift" or degradation over time, ensuring that the efficiency benefits of GenAI do not come at the cost of ethical obligations, accuracy, or public trust in the justice system.
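The call for continuous monitoring to catch "drift" amounts to re-running a fixed benchmark at each review interval and flagging degradation. A minimal sketch, assuming scores from such a benchmark are logged over time; the function name, baseline, and tolerance threshold are assumptions for illustration, not part of the framework:

```python
# Illustrative drift check: compares the latest score on a fixed,
# court-controlled benchmark against an accepted baseline. The 0.05
# tolerance is an arbitrary example value, not a recommended threshold.
def detect_drift(scores: list[float], baseline: float, tolerance: float = 0.05) -> bool:
    """Return True when the most recent benchmark score has degraded
    more than `tolerance` below the accepted baseline."""
    return bool(scores) and scores[-1] < baseline - tolerance

# A slide from 0.92 down to 0.81 against a 0.90 baseline triggers review;
# a dip to 0.91 stays within tolerance and does not.
degraded = detect_drift([0.92, 0.90, 0.81], baseline=0.90)
stable = detect_drift([0.92, 0.91], baseline=0.90)
```

A drift flag would then route the tool back to human review, consistent with the article's insistence that supervision remains non-negotiable.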