California AI Bill Veto: A Path for Smaller Developers and Models to Thrive

California Governor Gavin Newsom recently vetoed SB 1047, a bill that many believed would significantly reshape AI development in the state and nationwide. The veto, announced on Sunday, gives AI companies the chance to show they can proactively protect users from AI-related risks.

SB 1047 aimed to mandate that AI companies implement a “kill switch” for their models, develop formal safety protocols, and engage a third-party safety auditor before model training could begin. The legislation also intended to grant California’s attorney general access to audit reports and the authority to sue AI developers.

Some industry veterans had warned that the bill could hinder AI development, and many thanked Newsom for the veto, arguing it would help safeguard open-source development going forward. Yann LeCun, chief AI scientist at Meta and a vocal critic of SB 1047, called the decision “sensible” on X (formerly Twitter).

Prominent AI investor Marc Andreessen characterized Newsom's veto as a stand for “California Dynamism, economic growth, and freedom to compute.” Other industry leaders echoed the sentiment, advocating for regulations that don’t stifle smaller developers and models.

“The core issue isn’t the AI models themselves; it’s their applications,” said Mike Capone, CEO of Qlik. He emphasized the importance of focusing on context and use cases rather than the technology itself. Capone called for regulatory frameworks to ensure safe and ethical usage.

Andrew Ng, co-founder of Coursera, also praised the veto as “pro-innovation,” arguing it would protect open-source initiatives.

Dean Ball, an AI and tech policy expert at George Mason University’s Mercatus Center, likewise called the veto crucial for California and the United States. He noted that the bill’s model-size thresholds were already outdated and would not have captured OpenAI’s newer models.

Lav Varshney, an associate professor at the University of Illinois, criticized the bill for penalizing original developers for the downstream uses of their technology. He proposed instead a model of shared responsibility between original developers and downstream users, one that would preserve room for open-source innovation.

The veto gives AI developers an opportunity to enhance their safety policies and practices. Kjell Carlsson, head of AI strategy at Domino Data Lab, urged companies to proactively address AI risks and embed strong governance across the AI lifecycle.

Navrina Singh, founder of AI governance platform Credo AI, pointed to the need for a more nuanced understanding of which regulations are actually necessary, advocating that governance be made central to innovation while preserving trust and transparency in the market.

Not all reactions were positive, however. Tech policy groups condemned the veto, with Nicole Gill, co-founder of Accountable Tech, asserting that it favors Big Tech at the expense of public safety. She argued the veto entrenches the status quo, allowing major companies to profit without accountability.

The AI Policy Institute echoed this concern, with executive director Daniel Colson criticizing the decision as “misguided and reckless.”

California, home to the majority of the nation’s AI companies, lacks robust regulations that align with public demand for oversight. Currently, there are no federal mandates surrounding generative AI, with the closest being an executive order from President Biden. This order outlines a framework for agencies’ use of AI and encourages companies to voluntarily submit models for evaluation.

The Biden administration also aims to monitor open-weight models for potential risks, highlighting the ongoing discussions about AI safety and regulation.
