On Tuesday, May 28, OpenAI announced the formation of a new Safety and Security Committee to oversee critical safety and security decisions across the company's projects and operations. The committee is composed entirely of insiders: board members Bret Taylor (chair), Adam D’Angelo, and Nicole Seligman, along with CEO Sam Altman. It will make recommendations to the full board on significant safety and security matters.
The committee's first task is to evaluate and further develop OpenAI's processes and safeguards over the next 90 days, after which it will present its recommendations to the full board. OpenAI has said it will publicly share an update on the recommendations it adopts.
In addition to the board members, the committee includes OpenAI technical and policy experts Aleksander Madry, Lilian Weng, John Schulman, Matt Knight, and Jakub Pachocki. The company also plans to retain outside safety, security, and technical specialists to support this work, including former NSA cybersecurity director Rob Joyce and former Justice Department official John Carlin.
The announcement follows the departure of several senior safety personnel from OpenAI, which has raised questions about the company's commitment to AI safety. Daniel Kokotajlo, a researcher on the governance team, resigned in April, citing a loss of confidence that the company would behave responsibly as it develops increasingly capable AI. Co-founder and former Chief Scientist Ilya Sutskever departed in May amid reported tensions over whether product launches were being prioritized at the expense of safety.
Jan Leike, a former DeepMind researcher who helped develop ChatGPT and InstructGPT and co-led OpenAI's safety efforts, also resigned, saying that safety culture and processes had taken a backseat to product development. AI policy researcher Gretchen Krueger left around the same time, calling for greater accountability and transparency at OpenAI.
According to reports, at least five of the company's most safety-conscious employees have resigned or been pushed out since late last year. Former board members Helen Toner and Tasha McCauley, who left the board after Altman's reinstatement, have separately argued that OpenAI cannot be relied on to hold itself accountable under his leadership.
In another significant development, OpenAI said it has begun training its next frontier model, intended to surpass the capabilities of GPT-4 and to advance the company toward Artificial General Intelligence (AGI). The company said that while it believes its models lead the industry in both capability and safety, it welcomes robust debate at this important moment.
Since launching GPT-4 in March 2023, OpenAI has built on it with applications that generate text, images, audio, and video. Just two weeks earlier, the company had unveiled GPT-4o, which responds dramatically faster for a more lifelike conversational experience. GPT-4o, however, is an enhancement of the GPT-4 foundation rather than a new generation of model.
Given the high expectations surrounding "GPT-5," OpenAI has indicated that the new model may not reach the public until next year: frontier models typically train for months, and additional fine-tuning and safety testing are required before release.

As it works toward human-level capability, the organization has grown more careful about how it frames claims of artificial general intelligence. Anna Makanju, Vice President of Global Affairs, has said the mission centers on building AGI capable of performing cognitive tasks at the level of a human. Sam Altman, for his part, has previously said that OpenAI's goal is "superintelligence" and that much of his time is devoted to realizing that vision.