OpenAI Introduces an Independent Safety Board with the Authority to Halt Model Releases

OpenAI is transforming its Safety and Security Committee into an independent "Board Oversight Committee" with the power to delay model launches if safety concerns arise, as announced in a recent blog post. This decision follows a comprehensive 90-day review of the company's safety and security processes.

Chaired by Zico Kolter, the committee includes notable members such as Adam D’Angelo, Paul Nakasone, and Nicole Seligman. It will be briefed on safety evaluations for major model releases and, together with the full board, will oversee launches, with the authority to delay a release until safety concerns are addressed.

OpenAI’s entire board of directors will be periodically briefed on safety and security matters. The committee’s members also sit on that broader board, which raises questions about how independent the oversight really is; notably, however, CEO Sam Altman is no longer part of the committee.

In establishing this independent safety board, OpenAI appears to take cues from Meta’s Oversight Board, which reviews content policy decisions without having any members on Meta's board of directors.

Additionally, the review identified new opportunities for industry collaboration and information sharing aimed at strengthening AI security. OpenAI says it is committed to greater transparency about its safety work and to pursuing more independent testing of its systems.
