OpenAI has announced that it has begun training its next "frontier model" and has formed a new Safety and Security Committee. The committee is led by board chair Bret Taylor, joined by board members Adam D'Angelo and Nicole Seligman and by Sam Altman, the current CEO of OpenAI.
In a recent blog post, OpenAI stated:
"OpenAI has recently begun training its next frontier model, aiming to enhance our capabilities on the path toward Artificial General Intelligence (AGI). While we take pride in delivering industry-leading models focused on both performance and safety, we encourage open discussion at this crucial juncture."
90-Day Timer Begins
OpenAI has outlined the committee's immediate responsibilities:
"The Safety and Security Committee's first task will be to assess and refine OpenAI’s processes and safeguards within the next 90 days. At the end of this period, the committee will present its recommendations to the full Board. Subsequently, OpenAI will publicly communicate the adopted recommendations in alignment with safety and security protocols."
This suggests that the new frontier model—potentially named GPT-5—will not launch for at least 90 days, giving the committee time to evaluate existing processes and safeguards before release. On that timeline, the board is expected to receive the committee's recommendations by August 26, 2024.
Why 90 Days?
OpenAI hasn't explained the choice of timeline, but 90 days is a conventional window for business evaluation and feedback: long enough for a substantive review, short enough to keep momentum.
Concerns Over Committee Independence
The formation of this new committee has drawn criticism, as its members are primarily OpenAI executives, raising concerns about the independence of safety evaluations.
The composition of OpenAI's board has historically been contentious. In a surprising move on November 17, 2023, the nonprofit holding company's previous board dismissed Altman, claiming he was "not consistently candid." The decision triggered significant employee backlash and criticism from major investors, including Microsoft, and Altman was reinstated four days later. The reconstituted board then faced scrutiny for its all-male makeup and was subsequently expanded.
On March 8, 2024, new members were added to the board, including Sue Desmond-Hellmann and Fidji Simo, amid efforts to diversify its leadership.
Challenges and Opportunities for OpenAI
OpenAI recently launched its latest AI model, GPT-4o, but has faced criticism in the wake of this release. Actress Scarlett Johansson publicly condemned the company for allegedly using an AI voice that resembled hers, linked to her role in the film "Her." OpenAI clarified that the voice in question was independently commissioned and not intended to mimic Johansson.
Additionally, the resignations of chief scientist Ilya Sutskever and superalignment co-lead Jan Leike have raised alarms, with Leike warning of a concerning trend toward prioritizing "shiny products" over safety. The superalignment team was ultimately disbanded.
Despite these challenges, OpenAI has forged new training-data partnerships with mainstream media outlets and generated interest in its Sora video generation model, drawing attention from an entertainment industry eager to use AI for more efficient production.
As OpenAI navigates these turbulent waters, the effectiveness of the new Safety and Security Committee will be crucial in addressing the concerns of stakeholders—particularly regulators and potential business partners—ahead of its next model launch.