Sign or Veto: What Comes Next for California’s SB 1047 AI Disaster Bill?

A contentious California bill aimed at regulating AI risks, SB 1047, has passed its final vote in the state Senate and now heads to Governor Gavin Newsom. He must weigh theoretical AI risks, including the possibility of AI contributing to deaths, against the danger of stunting California’s booming AI industry. He has until September 30 to sign SB 1047 into law or veto it.

Introduced by state senator Scott Wiener, SB 1047 seeks to mitigate the risks posed by large AI models that could lead to catastrophic outcomes, including loss of life or cyberattacks resulting in over $500 million in damages.

Notably, the bill targets future AI systems: few existing models fall under its jurisdiction, and no AI has yet been used in a cyberattack of that magnitude. SB 1047 would hold AI model developers liable for the harms their models cause, much as gun manufacturers might be held responsible for mass shootings. It would empower California’s attorney general to sue AI firms for substantial fines if their technology leads to a catastrophic event, and in cases of reckless behavior, a court could order a company to halt operations immediately. All covered models would also have to include a “kill switch” so they can be shut down if they become dangerous.

The bill could fundamentally alter the landscape of America’s AI industry, and it is now one signature away from becoming law. Here’s how SB 1047’s possible futures could play out.

Reasons Newsom Might Sign the Bill

Wiener asserts that Silicon Valley requires greater accountability, emphasizing the need to learn from past regulatory failures in technology. Newsom might feel pressured to take decisive action on AI regulation to ensure that Big Tech operates responsibly. Some AI leaders, including Elon Musk, have expressed cautious optimism about SB 1047.

Sophia Velastegui, Microsoft’s former chief AI officer, calls SB 1047 “a good compromise,” while acknowledging its imperfections. “We need an office of responsible AI for the U.S., not just Microsoft,” she said. Similarly, Anthropic CEO Dario Amodei, while stopping short of an official endorsement, believes the bill’s benefits likely outweigh its drawbacks, particularly after amendments that allow lawsuits to proceed only once catastrophic harm has occurred.

Reasons Newsom Might Veto the Bill

Given the substantial opposition from industry leaders, it would not be surprising if Newsom decided to veto SB 1047. Signing the bill would place responsibility for its consequences squarely on his shoulders, while a veto could push the contentious issue off for another year and potentially open the door for federal intervention.

According to Andreessen Horowitz’s Martin Casado, “This [SB 1047] changes the precedent for how we’ve approached software policy for 30 years” by shifting liability from applications to the underlying infrastructure, a move regulators have never made before.

In response to SB 1047, multiple influential voices, including former Speaker Nancy Pelosi, OpenAI, and various tech trade groups, have urged Newsom to veto the bill, fearing that its liability changes could stifle innovation in California’s AI sector.

The potential chilling effect on the startup ecosystem is a serious concern, especially as the AI boom has become a vital economic driver for the U.S. The U.S. Chamber of Commerce has also requested a veto, emphasizing in a letter that “AI is foundational to America’s economic growth.”

If SB 1047 Becomes Law

Should Newsom sign SB 1047, nothing would change immediately. By January 1, 2025, however, tech companies would be required to submit safety reports for their AI models. At that point, California’s attorney general could also seek a court order compelling an AI company to cease operations if its model is deemed dangerous.

In 2026, further provisions of the bill would come into effect, including the establishment of the Board of Frontier Models to collect safety reports from tech companies. This nine-member board, appointed by California’s governor and legislature, would advise on regulatory compliance.

That year, AI model developers would also need to hire auditors to evaluate their safety practices, effectively creating a new industry for AI safety compliance. In addition, California’s attorney general would gain the authority to sue AI model developers whose technology causes a catastrophic event.

By 2027, the Board of Frontier Models may begin advising developers on best practices for safely training and operating AI models.

If SB 1047 Is Vetoed

If Newsom decides to veto SB 1047, it may pave the way for federal regulators to assume control over AI model regulation in the future.

On Thursday, OpenAI and Anthropic laid early groundwork for such federal regulation: both agreed to give the U.S. AI Safety Institute, a federal body, early access to their advanced AI models, according to a press release. At the same time, OpenAI endorsed a legislative proposal that would empower the AI Safety Institute to set standards for AI models.

OpenAI CEO Sam Altman emphasized the importance of national-level regulation in a recent tweet.

However, history suggests that federal regulators tend to produce less stringent tech rules than those proposed in California, and to take considerably longer doing so. Silicon Valley, for its part, has a long track record of strategic collaboration with the federal government.

“There’s a long-standing precedent of state-of-the-art computer systems being shared with the feds,” Casado remarked, noting that new supercomputers were historically provided to government entities first to ensure they had the necessary capabilities.

Either way, California’s AI industry now stands at a critical regulatory juncture, awaiting Newsom’s decision.
