OpenAI Faces Criticism Over Stance Against Proposed AI Safety Legislation

Ex-OpenAI employees William Saunders and Daniel Kokotajlo have sent a letter to California Governor Gavin Newsom, expressing their disappointment in the company's stance against a state bill aimed at implementing stringent safety guidelines for future AI development. They wrote, “We joined OpenAI to promote the safety of the powerful AI systems it develops. However, we resigned because we lost confidence in its commitment to developing these technologies safely, honestly, and responsibly.”

The former employees emphasize that advancing AI without adequate safeguards presents “foreseeable risks of catastrophic harm to the public,” which could manifest as “unprecedented cyberattacks or even the creation of biological weapons.”

They also criticize OpenAI CEO Sam Altman for what they describe as contradictory views on regulation. While he has testified before Congress advocating for AI regulation, they point out that he has opposed concrete regulation when it is actually proposed. A 2023 survey from MITRE and Harris Poll underscored public concern, revealing that only 39% of respondents view current AI technology as “safe and secure.”

The legislation in question, SB-1047, known as the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, would require developers to adhere to strict standards before training a covered model, including establishing a comprehensive safety and security protocol and the ability to initiate a complete shutdown. Notably, OpenAI itself has experienced several data breaches and system intrusions in recent years.

In response, OpenAI has stated it strongly disagrees with the researchers' “mischaracterization of our position on SB-1047.” The company argues that “a cohesive federal AI policy, rather than a fragmented approach of varying state laws, will enhance innovation and allow the U.S. to set global standards,” according to Chief Strategy Officer Jason Kwon in a letter to California state Senator Scott Wiener.

Saunders and Kokotajlo counter that OpenAI’s advocacy for federal regulations is not sincere. “We cannot afford to wait for Congress to act — they have made it clear that they are unwilling to implement meaningful AI regulations,” they wrote. “If Congress eventually does act, it could nullify California's legislative efforts.”

Interestingly, the bill has garnered support from an unexpected advocate: xAI CEO Elon Musk. On X, he stated, “This is a difficult decision and may upset some, but ultimately, I believe California should pass the SB-1047 AI safety bill. I have been a supporter of AI regulation for over 20 years, just as we regulate any technology that poses potential risks.” Musk, who recently announced plans for “the most powerful AI training cluster in the world” in Memphis, Tennessee, had previously said he would relocate the headquarters of his companies X (formerly Twitter) and SpaceX to Texas to escape California’s regulatory environment.
