California Legislature Passes AI Bill SB 1047: Why Some Are Urging the Governor to Veto It

Update: On Thursday, August 15, California’s Appropriations Committee passed SB 1047 with significant amendments that altered several of the bill’s original provisions.

In fiction, we've seen AI systems wreak havoc, but no real-world precedent exists for such catastrophic events. Nonetheless, some lawmakers want preemptive safeguards in place before dangerous scenarios become reality. SB 1047, a California bill, aims to address potential disasters before they arise. It passed the state Senate in August and now awaits a signature or veto from California Governor Gavin Newsom.

While protecting the public is a shared objective, SB 1047 has sparked controversy among Silicon Valley stakeholders, including venture capitalists, tech trade groups, researchers, and startup founders. Many AI bills are working their way through legislatures across the nation, but California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has generated outsized debate. Here’s an overview of the bill and its implications.

What Does SB 1047 Propose?

SB 1047 seeks to prevent large AI models from inflicting "critical harms" on society.

The bill lists specific examples, such as malicious actors using AI to develop weaponry that causes mass casualties, or to execute a cyberattack that results in more than $500 million in damages (for context, the CrowdStrike outage is estimated to have caused over $5 billion in damages). Under the legislation, developers, meaning the companies that create these models, would be liable for implementing safety protocols sufficient to prevent such outcomes.

Who is Affected by These Regulations?

The requirements outlined in SB 1047 apply only to the largest AI models: those that cost at least $100 million to train and use 10^26 FLOPs (floating-point operations) during training. For context, OpenAI CEO Sam Altman has said that training GPT-4 cost roughly this much. These thresholds can be raised as needed.
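
To make the compute threshold concrete, here is a minimal back-of-the-envelope sketch in Python. Only the 10^26 FLOPs and $100 million figures come from the bill; the hardware numbers (chip count, per-chip throughput, training duration) are illustrative assumptions, not anything SB 1047 specifies.

```python
# Back-of-the-envelope check against SB 1047's coverage thresholds.
# The two thresholds come from the bill; all hardware figures below
# are illustrative assumptions.

COMPUTE_THRESHOLD_FLOPS = 1e26    # training-compute threshold in SB 1047
COST_THRESHOLD_USD = 100_000_000  # training-cost threshold in SB 1047

def is_covered_model(training_flops: float, training_cost_usd: float) -> bool:
    """A model is covered only if it meets both the compute and cost bars."""
    return (training_flops >= COMPUTE_THRESHOLD_FLOPS
            and training_cost_usd >= COST_THRESHOLD_USD)

# Hypothetical run: 20,000 accelerators at 4e14 usable FLOPs/sec each,
# training for 100 days (assumed numbers, for illustration only).
flops = 20_000 * 4e14 * 100 * 24 * 3600  # ~6.9e25 FLOPs, just under the line

print(is_covered_model(flops, 150_000_000))      # False: compute below 1e26
print(is_covered_model(flops * 2, 150_000_000))  # True: both thresholds met
```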

Currently, only a handful of companies have developed AI products at this scale, but tech giants such as OpenAI, Google, and Microsoft are likely to soon. AI models generally become more capable as they scale up, a trend that is expected to continue. Mark Zuckerberg recently noted that the next generation of Meta’s Llama will require ten times the computing power of its predecessor, which would place it under SB 1047’s jurisdiction.

In the case of open-source models and their derivatives, the bill stipulates that the original developer remains responsible unless another developer spends an additional $10 million to create a derivative of the original model.

Additionally, the bill mandates a safety protocol to prevent misuse of covered AI products, including an “emergency stop” button that shuts down the entire AI model. Developers must also create testing procedures that address the risks posed by their models, and must hire third-party auditors annually to evaluate their safety practices. The result must be “reasonable assurance” that following these protocols will prevent critical harms; absolute certainty, the bill acknowledges, is impossible to provide.
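
The bill does not prescribe how a shutdown capability must be built. As a purely illustrative sketch, one common software pattern is a kill switch that every request must pass before the model is invoked; the class and function names below are hypothetical, not anything from SB 1047 or an existing lab's systems.

```python
# A minimal kill-switch sketch: every inference request checks a single
# shared flag before touching the model. Illustration of the concept only.
import threading

class EmergencyStop:
    def __init__(self) -> None:
        self._stopped = threading.Event()

    def trigger(self) -> None:
        """Flip the switch; all subsequent requests are refused."""
        self._stopped.set()

    def guard(self) -> None:
        """Raise if the switch has been thrown."""
        if self._stopped.is_set():
            raise RuntimeError("Model halted by emergency stop")

stop = EmergencyStop()

def generate(prompt: str) -> str:
    stop.guard()  # refuse to serve once the switch is thrown
    return f"model output for: {prompt!r}"  # stand-in for real inference

print(generate("hello"))  # serves normally
stop.trigger()
# generate("hello")       # would now raise RuntimeError
```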

Who Will Enforce the Regulations?

Oversight of these new rules will fall to a newly established California body known as the Board of Frontier Models. Each new public AI model that meets SB 1047’s thresholds must be individually certified, accompanied by a written safety protocol.

Comprising nine members, the Board will include representatives from the AI sector, the open-source community, and academia, with appointments made by California's governor and legislature. This board will advise the attorney general on potential violations of SB 1047 and provide guidance on best safety practices for AI model developers.

Developers will be required to submit an annual certification to the Board detailing their AI model’s risks, the effectiveness of its safety protocols, and their compliance with SB 1047. In addition, in the event of an “AI safety incident,” the developer must notify the Board within 72 hours of learning of the incident.
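
As a trivial illustration of that reporting window, here is how the notification deadline would fall for a hypothetical incident timestamp (the date is invented for the example):

```python
from datetime import datetime, timedelta, timezone

REPORTING_WINDOW = timedelta(hours=72)  # SB 1047's incident-notification window

# Hypothetical: the developer learns of an incident at this moment (UTC).
aware_at = datetime(2026, 3, 3, 9, 30, tzinfo=timezone.utc)
deadline = aware_at + REPORTING_WINDOW

print(deadline.isoformat())  # 2026-03-06T09:30:00+00:00
```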

If a developer’s safety measures are found lacking, SB 1047 empowers the California attorney general to seek an injunction against the developer, which could force it to halt the operation or training of its AI model. If a model is involved in a catastrophic event, the attorney general can sue the company; for a model that cost $100 million to train, penalties could reach $10 million for a first violation and $30 million for subsequent violations.

Lastly, the bill includes whistleblower protections for employees who report unsafe AI models to the attorney general.

What Do Proponents Argue?

California State Senator Scott Wiener, the bill's author, who represents San Francisco, says SB 1047 is an attempt to learn from past policy failures around technologies like social media and data privacy by regulating proactively rather than reactively.

"We have a history of waiting for harm to occur, then regretting it," Wiener remarked. "Let’s avoid that scenario and act in advance."

Importantly, even if a company trains a $100 million model outside California, in Texas or France for example, the bill still applies so long as the company does business in the state. With Congress having made little progress on tech legislation over the past 25 years, Wiener believes California must lead by example.

Wiener noted that his office has engaged in discussions with all major AI labs regarding SB 1047.

Support for the bill has come from prominent figures in the AI community, including Geoffrey Hinton and Yoshua Bengio, two of the researchers often referred to as “godfathers of AI.” Both have voiced concern over doomsday scenarios that AI technology could enable, which aligns them with the bill’s safety provisions. Another supporter, the Center for AI Safety, previously published an open letter calling for the mitigation of existential AI risk to be treated as a priority on par with pandemics and nuclear war.

Dan Hendrycks, director of the Center for AI Safety, pointed out that a significant safety incident could severely hinder advancements in AI technology, reinforcing the bill's long-term benefits for California’s tech ecosystem.

What Are the Opponents Saying?

Resistance to SB 1047 is mounting among Silicon Valley stakeholders. Hendrycks has characterized some of this pushback as “billionaire VC opposition,” likely a reference to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has voiced strong objections to the bill. In early August, the firm’s chief legal officer argued that the bill would impose unnecessary burdens on startups and stifle innovation as the cost of building AI technology rises.

Fei-Fei Li, a respected figure in AI research, publicly criticized the bill, asserting that it could damage California’s growing AI ecosystem. Li, who co-founded the billion-dollar AI startup World Labs, joins other prominent academics such as Andrew Ng, who called the bill “an assault on open source,” a concern echoed by many in the field because of its implications for open-source models.

Yann LeCun, Meta’s chief AI scientist, argues that SB 1047 could hinder research efforts, driven by an unfounded notion of "existential risk" propagated by select think tanks.

Startup leaders are also expressing alarm. Jeremy Nixon, CEO of AI startup Omniscience, highlights the bill's potential to punish those developing innovative technology rather than the bad actors who misuse it.

OpenAI has opposed SB 1047, advocating instead for federal regulations to manage AI’s national security implications. Additionally, the Chamber of Progress—a trade organization representing major tech firms—issued an open letter decrying the bill, arguing it risks stifling free speech and innovation within California.

U.S. Congressman Ro Khanna, along with former Speaker Nancy Pelosi and the Chamber of Commerce, has voiced opposition, underscoring concerns that the bill would harm entrepreneurship and hinder California’s innovative spirit.

What Lies Ahead?

As of now, SB 1047 sits on California Governor Gavin Newsom’s desk, where he must decide whether to sign or veto it by the end of September. Wiener says he has not spoken with Newsom about the bill and does not know his position.

If approved, the bill would not take effect immediately; the Board of Frontier Models is scheduled to be established in 2026. Furthermore, legal challenges may arise post-enactment, potentially spearheaded by the same groups currently opposing the legislation.

Correction: This article originally cited an earlier draft of SB 1047’s language on responsibility for fine-tuned models. Under the bill’s current provisions, the developer of a derivative model is responsible only if it spends three times as much as the original developer did on training.
