New York State legislators have approved a bill aimed at preventing catastrophic consequences from the use of advanced artificial intelligence models. The bill, known as the RAISE Act (Responsible AI Safety and Education), sets out measures to restrict technologies that could threaten human life and health or cause economic damage exceeding 1 billion US dollars.
The bill targets the leading AI developers, including OpenAI, Google, and Anthropic. If it becomes law, companies building advanced AI models will be required to publish detailed safety reports on their systems and to disclose incidents such as data breaches.
The initiative reflects growing concern over the risks that accompany the rapid development of artificial intelligence technologies emerging from Silicon Valley. The bill is seen as a first step toward transparency and oversight standards for AI developers that could eventually be adopted at the federal level.
The next step is review by the Governor of New York, Kathy Hochul, who can sign the bill, return it for revision, or veto it. According to its supporters, the RAISE Act lays the groundwork for a national system of mandatory AI safety standards.