California Governor Newsom Vetoes SB 1047, the Bill Designed to Mitigate AI Disasters

California Governor Gavin Newsom has vetoed SB 1047, a bill designed to prevent misuse of artificial intelligence (AI) that could lead to "critical harm" to humans. The California State Assembly passed the legislation on August 28 with a 41-9 vote, but organizations like the Chamber of Commerce urged the governor to reject it.

In his veto message dated September 29, Newsom acknowledged that while the bill is "well-intentioned," it fails to consider critical factors such as the deployment of AI in high-risk environments and the handling of sensitive data. Instead, he noted, it imposes stringent standards on even basic AI functions when used within large systems.

SB 1047 sought to hold developers of AI models liable for failing to implement safety protocols designed to avert catastrophic outcomes. Required safety measures included rigorous testing, external risk assessments, and a fail-safe "emergency stop" feature, with penalties starting at $10 million for a first violation and rising to $30 million for subsequent infractions. However, a revised version removed the state attorney general's ability to sue AI companies for negligence before a catastrophic event occurred, limiting legal action in such cases to injunctive relief.

The legislation aimed to regulate AI models that cost at least $100 million to develop and were trained using 10^26 floating-point operations (FLOPS) of compute. It would also have covered models built on such systems with third-party investment exceeding $10 million. Any company operating in California that met these criteria would have been required to comply with the bill's stipulations.

Newsom expressed concern over the bill's narrow focus on large-scale AI systems, stating, "I do not believe this is the best approach to protecting the public from real threats posed by the technology." He cautioned that this emphasis could create a false sense of security, since smaller, specialized models could pose equal or even greater risks, while the bill's requirements could stifle innovation that benefits the public.

The earlier iteration of SB 1047 proposed the establishment of a Frontier Model Division to regulate AI technologies. However, the bill was modified to place oversight under a Board of Frontier Models within the Government Operations Agency, with members appointed by the governor and the legislature.

Authored by California State Senator Scott Wiener, SB 1047 faced significant opposition from the tech community, though prominent AI researchers Geoffrey Hinton and Yoshua Bengio supported the initiative, voicing concerns over the dangers of AI. Wiener emphasized the importance of proactive measures, stating, "Let's not wait for something bad to happen."

In his veto message, Newsom reaffirmed the state's commitment to safety, insisting, "California will not abandon its responsibility. Safety protocols must be adopted, and clear consequences for bad actors must be enforceable." He argued for a regulatory framework that evolves in tandem with AI advancements.

The bill received considerable pushback from tech leaders and organizations, including researcher Fei-Fei Li and Meta Chief AI Scientist Yann LeCun, who criticized it for potentially hampering AI innovation. Tech trade groups, representing major companies like Amazon, Apple, and Google, cautioned that SB 1047 could stifle new developments in California's tech sector. Additionally, venture capital firm Andreessen Horowitz, along with various startups, raised concerns about the financial implications for AI innovators, leading to amendments in the version of SB 1047 that progressed through California's Appropriations Committee on August 15.