OpenAI's New Safety Committee Is Composed Entirely of Insiders

OpenAI has established a new committee to oversee critical safety and security decisions related to its projects and operations. The move has drawn criticism from ethicists, however, because the committee is staffed entirely by company insiders, including OpenAI CEO Sam Altman, rather than independent observers.

The Safety and Security Committee, which includes OpenAI board members Bret Taylor, Adam D’Angelo, and Nicole Seligman, chief scientist Jakub Pachocki, Aleksander Madry of the company's "preparedness" team, and other safety and security leads, will evaluate OpenAI's safety processes and safeguards over the next 90 days. According to a post on OpenAI's corporate blog, the committee's findings will then be presented to the full board of directors for review, after which the company will publish an update on any recommendations it adopts in a manner consistent with safety and security.

OpenAI has recently initiated training for its next frontier model, anticipating substantial advancements toward achieving artificial general intelligence (AGI). “While we are proud of our industry-leading capabilities and safety, we encourage active debate at this pivotal time,” OpenAI states.

The company has seen notable turnover on its technical safety teams in recent months, with departing staff voicing concerns about what they perceive as a diminishing focus on AI safety. Daniel Kokotajlo, who worked on OpenAI's governance team, resigned in April after losing confidence that the organization would "act responsibly" around the release of increasingly capable AI systems. Ilya Sutskever, a co-founder and the company's former chief scientist, left in May following a prolonged falling-out with Altman, driven in part by the prioritization of product launches over safety work.

More recently, Jan Leike, a former DeepMind researcher who was involved in developing ChatGPT, stepped down from his safety role, writing on X that he believed OpenAI "wasn’t on the trajectory" to get AI security and safety issues right. Gretchen Krueger, an AI policy researcher who left OpenAI last week, echoed that sentiment, urging the company to improve its accountability and transparency in how it uses its own technology.

According to Quartz, at least five of OpenAI's most safety-conscious employees have either resigned or been pushed out since late last year, including former board members Helen Toner and Tasha McCauley. In a recent op-ed for The Economist, Toner and McCauley expressed skepticism that OpenAI can hold itself accountable under Altman's leadership. "[B]ased on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives," they wrote.

Relatedly, it was reported earlier this month that OpenAI's Superalignment team, which was responsible for developing ways to govern and steer "superintelligent" AI systems, had been promised 20% of the company’s computing resources but rarely received that allocation. The team has since been disbanded, with its responsibilities transferred to John Schulman and a newly formed safety advisory group.

While OpenAI advocates for AI regulation, it also works to shape how that regulation is written, hiring in-house lobbyists and engaging various outside law firms, and reportedly spending hundreds of thousands of dollars on U.S. lobbying in the last quarter of 2023 alone. Separately, the U.S. Department of Homeland Security recently appointed Altman to its newly established Artificial Intelligence Safety and Security Board, which advises on secure AI development within the nation's critical infrastructure.

To address concerns about its exec-led Safety and Security Committee, OpenAI has committed to retaining third-party safety, security, and technical experts to support the committee’s work, including cybersecurity veteran Rob Joyce and former U.S. Department of Justice official John Carlin. Beyond those two names, however, the company has not said how large this group of outside experts will be or how much influence it will have over the committee.

Underscoring broader concerns about corporate self-oversight, Bloomberg columnist Parmy Olson has noted that boards like the Safety and Security Committee tend to provide very little actual oversight. OpenAI says it wants to address "valid criticisms" of its work, though what counts as "valid" depends on whom you ask.

In a 2016 New Yorker profile, Altman promised that outsiders would play a significant role in OpenAI's governance, suggesting a system that would allow broad participation in electing representatives to a governance board. That promise was never fulfilled, and it seems unlikely to be fulfilled now.
