OpenAI Board Gains Veto Power Over AI Model Releases: New Oversight Measures Implemented

In the wake of recent boardroom turbulence, OpenAI is committing to stronger oversight by establishing a dedicated Preparedness team focused on rigorous safety testing of its AI models. The move follows the company’s controversial decision to dismiss CEO Sam Altman, reportedly driven by Chief Scientist Ilya Sutskever’s concerns that the technology was being commercialized rapidly without adequate safeguards against potential risks. Altman was reinstated five days later amid widespread employee unrest, underscoring how much weight employee support now carries in the company’s decisions.

To strengthen its safety protocols, OpenAI's new Preparedness team will run comprehensive evaluations and stress tests on the company's foundation models. The team's reports will be shared with OpenAI leadership and the newly formed board of directors. While the leadership team will decide whether to proceed with a model based on the test results, the board now has the authority to overturn those decisions, a shift toward greater scrutiny in the release process.

In a recent blog post, OpenAI emphasized, “This technical work is critical to inform OpenAI’s decision-making for safe model development and deployment.” The restructuring of the board follows sweeping changes made during the leadership crisis, with plans to expand its membership from three to nine, including a non-voting observer seat for Microsoft. The current board comprises former U.S. Treasury Secretary Larry Summers; Bret Taylor, former co-CEO of Salesforce; and Quora co-founder Adam D’Angelo, the only remaining member of the previous board.

OpenAI's mission is clear: its primary fiduciary responsibility is to humanity, and it is deeply committed to ensuring the safety of Artificial General Intelligence (AGI). The organization’s new Preparedness Framework seeks to derive valuable insights from its model deployments, enabling it to mitigate emerging risks effectively. OpenAI asserts that as innovation accelerates, the pace of safety work must also increase, necessitating continuous learning through iterative deployment.

Under this Preparedness Framework, the new team will conduct regular safety drills to ensure prompt responses to any issues that may arise. OpenAI also plans to engage qualified independent third parties to carry out comprehensive audits, enhancing accountability and transparency.

OpenAI's models will now be evaluated continually, with a fresh round of safety evaluations triggered whenever effective compute doubles during a training run. Testing will cover critical areas including cybersecurity vulnerabilities, the potential for misuse and persuasion, the capacity for autonomous action, and the risk of assisting with chemical, biological, radiological, or nuclear threats.
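
In practice, the doubling rule amounts to a simple threshold check. The following minimal Python sketch illustrates how such a trigger could work; the function and variable names are hypothetical and not OpenAI's actual tooling.

```python
def should_reevaluate(current_effective_compute: float,
                      last_evaluated_compute: float) -> bool:
    """Return True once effective training compute has at least doubled
    since the last full safety evaluation (illustrative rule only)."""
    return current_effective_compute >= 2 * last_evaluated_compute


# Example: a run last evaluated at 1.0e23 FLOPs would trigger a new round
# of evaluations once effective compute reaches 2.0e23 FLOPs or more.
if should_reevaluate(current_effective_compute=2.1e23,
                     last_evaluated_compute=1.0e23):
    print("Effective compute doubled: rerun the Preparedness evaluations.")
```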

Each model will be assigned one of four risk levels, an approach reminiscent of the EU AI Act’s tiered classification: low, medium, high, and critical. Only models rated medium risk or lower after mitigations will be cleared for deployment; models rated high cannot be deployed, and models rated critical cannot be developed further until additional safeguards bring the risk down.
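
To make those thresholds concrete, here is a short, hypothetical Python sketch encoding the decision rule described above (deploy only at medium risk or below; continue development only at high risk or below). The names and structure are illustrative assumptions, not OpenAI's actual implementation.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    """The four post-mitigation risk levels, ordered from least to most severe."""
    LOW = 1
    MEDIUM = 2
    HIGH = 3
    CRITICAL = 4


def may_deploy(risk: RiskLevel) -> bool:
    """Only models rated medium risk or lower are eligible for deployment."""
    return risk <= RiskLevel.MEDIUM


def may_continue_development(risk: RiskLevel) -> bool:
    """Models rated high may still be developed (with safeguards);
    critical-risk models may not be developed further."""
    return risk <= RiskLevel.HIGH


# Example: a model scored HIGH cannot ship, but work on it can continue.
print(may_deploy(RiskLevel.HIGH))                # False
print(may_continue_development(RiskLevel.HIGH))  # True
```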

Currently, OpenAI’s Preparedness Framework is in beta, with ongoing adjustments planned as new insights emerge. The approach reflects the company’s commitment to advancing AI technology while attending to the safety and ethical questions that accompany it.
