Topic: News - International Law
Source: ItaliaOggi
The article examines the increasing use of artificial intelligence (AI) in kamikaze drones, autonomous aerial vehicles designed to seek out and destroy targets without direct human intervention. AI algorithms analyze data, select targets, and execute attacks, raising profound ethical, legal, and operational questions. The article explains how these drones process real-time information and make split-second decisions about whom to strike, often in highly complex environments.
Experts and policymakers in Italy are debating the implications of allowing machines to make lethal decisions, particularly regarding accountability, the potential for error, and compliance with international humanitarian law. Algorithmic bias, collateral damage, and the difficulty of assigning responsibility when mistakes occur are central themes. The article also notes the absence of comprehensive international regulations specifically addressing autonomous weapons.
The discussion highlights the urgent need for clear legal frameworks and robust oversight to govern the development and deployment of AI-driven military technology. The article concludes that as AI capabilities grow, so do the risks and the need for transparency, accountability, and ethical standards in warfare.