The California State Assembly and Senate have passed the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act (SB 1047), marking a significant step in U.S. regulation of artificial intelligence. The legislation requires AI companies operating in California to implement a series of safeguards before training advanced foundation models: the ability to quickly and fully shut down a model, protection against “unsafe post-training modifications,” and a testing procedure to assess whether a model risks “causing or enabling critical harm.”
Senator Scott Wiener, the bill’s lead author, described SB 1047 as a reasonable measure aligned with the commitments major AI labs have already made to test their models for catastrophic safety risks. He stated, “We’ve worked hard all year, along with open source advocates and industry leaders, to refine this bill. SB 1047 is calibrated to foreseeable AI risks and deserves to be enacted.”
Critics, however, including OpenAI, Anthropic, and California’s U.S. Representatives Zoe Lofgren and Nancy Pelosi, argue that the bill focuses too narrowly on catastrophic threats and could harm small, open-source AI developers. In response, lawmakers amended the bill to replace potential criminal penalties with civil penalties, narrow the enforcement powers of California’s attorney general, and adjust the membership requirements for the “Board of Frontier Models” the bill would establish.
The AI safety bill now awaits a decision from Governor Gavin Newsom, who has until the end of September to act. In a recent letter to Newsom, Anthropic CEO Dario Amodei wrote that the bill had been “substantially improved” by the amendments, and that its benefits now likely outweigh its costs. OpenAI did not comment further.