Topic: News - Digital Governance
Source: Global Policy Watch
Global Policy Watch reports that New York Governor Kathy Hochul has signed the "Frontier AI Safety Act" into law, marking the most significant state-level intervention to date in the oversight of very large-scale artificial intelligence models. The legislation specifically targets "frontier models," defined as systems that cost more than $100 million to train or that consume very large amounts of computing power. Unlike earlier disclosure laws, the act focuses on preventing systemic risks, such as the use of AI to develop biological weapons, facilitate large-scale cyberattacks, or cause major infrastructure failures. Developers of such models must now implement robust safety testing and security protocols before deploying their systems.
The law mandates that developers provide the state with a "safety and security protocol" that includes a plan for a full shutdown, or "kill switch," in the event of an emergency. It also establishes a New York AI Safety Bureau within the state government to monitor compliance and investigate potential hazards. Proponents of the bill argue that it is a necessary step to fill the regulatory vacuum at the federal level, while critics in the tech industry express concern that it could drive innovation away from the state. However, the legislation includes a "compliance safe harbor" for companies that meet stringent federal standards, aiming to harmonize state and national safety efforts while preserving local public protection.