Recently, California Governor Gavin Newsom officially vetoed the controversial "Safe and Secure Innovation for Frontier Artificial Intelligence Models Act" (SB 1047), drawing significant attention from both the tech industry and policymakers.
SB 1047 sought to establish safety standards for the development of large-scale AI models, specifically targeting developers whose training costs exceed $100 million. The bill would have required developers to implement several preventive measures before deployment, including safety testing, simulated cyberattacks (red-teaming), cybersecurity protections, and whistleblower protections, to ensure the safety of AI models.
However, Newsom identified several issues with the bill. He acknowledged that while SB 1047 had good intentions, it failed to consider whether an AI system is actually deployed in a high-risk environment or involved in critical decision-making. He argued that imposing strict requirements based solely on a system's size might not effectively protect the public and could, in fact, hinder technological advancement.
Newsom also expressed concern that the bill could give the public a false sense of security. He pointed out that smaller, specialized models could pose risks equal to or greater than those of the models targeted by SB 1047. He maintained that fostering innovation and ensuring public safety are not mutually exclusive, and that the state should develop more nuanced, targeted policies to address the challenges posed by AI technologies.
Jamie Radice, a public affairs manager at Meta, welcomed Newsom's decision, saying the bill would have hindered AI innovation and harmed business growth and job opportunities, and calling for more reasonable AI regulation.
Since its introduction in February of this year, SB 1047 had generated considerable controversy. Even after revisions released in August, it failed to win Newsom's support. Opinions within the tech community remain divided: some scientists and AI companies worry such rules would stifle innovation, while others advocate for stricter safety standards.
Newsom's veto of SB 1047 represents a pivotal moment in California's AI regulatory policy. The challenge of balancing AI innovation with safety will be a critical issue to monitor in the future.