AI Law - International Review of Artificial Intelligence Law - CC BY-NC-SA Non-Commercial Licence - ISSN 3035-5451
G. Giappichelli Editore

09/01/2026 - California Forces AI Developers to Disclose Safety and Disaster Plans (USA)

Topic: News - Digital Governance

Source: Almanac News / CalMatters

Almanac News, via CalMatters, reports on a groundbreaking California law that requires the creators of the most powerful artificial intelligence models to disclose their disaster prevention plans. Starting in 2025, companies developing "frontier" AI systems must provide state regulators with detailed protocols explaining how they intend to prevent catastrophic events, such as large-scale cyberattacks or the autonomous creation of biological weapons. The legislation reflects growing public and political anxiety over the rapid advancement of AI and the potential for these systems to cause unforeseen global harm if left unregulated.

The law applies to developers who spend significant sums on computing power, a threshold that targets the industry's largest players. Under the new rules, these companies must implement "kill switches" capable of shutting down dangerous models and perform rigorous testing to identify potential safety failures before public release. While some tech industry leaders argue that the regulations could stifle innovation or drive companies out of California, proponents maintain that transparency is essential to keeping AI development aligned with public safety. The law marks one of the first times a state government has directly intervened in the safety protocols of frontier AI development.