OpenAI's announcement late last night finally resolved the uncertainty that has surrounded the company for the past five days: Sam Altman is reinstated as CEO, and three initial board members have been appointed, with more to follow.
However, emerging details about the events leading to this upheaval suggest that OpenAI must address the trust issues Altman now faces as a result of his recent actions. It also remains unclear how the company plans to resolve its ongoing governance challenges, including a board structure and set of responsibilities that have been widely perceived as confusing and contradictory.
For enterprise decision-makers watching this saga, understanding what transpired is critical to evaluating OpenAI's credibility going forward. The outcome signals a shift toward a more aggressive, product-oriented company. While OpenAI's language models such as ChatGPT and GPT-4 will likely remain popular with developers building on its APIs, the company's reputation as a reliable provider of comprehensive, enterprise-grade AI solutions may diminish.
One significant blow to trust inside the company stemmed from Altman's criticism of board member Helen Toner's work on AI safety. In October, Altman took issue with a paper Toner co-authored that compared OpenAI's release decisions with those of its competitor Anthropic. While OpenAI opted to release its language model quickly, Anthropic delayed its launch over safety concerns. The paper characterized Anthropic's delay as a credible commitment to AI safety, in contrast to OpenAI's faster-moving approach.
Following publication, Altman told colleagues that the paper was dangerous to the company, particularly given the FTC's ongoing investigation into OpenAI's use of data. Toner defended the work as academic research, and senior leaders at OpenAI subsequently discussed whether she should be removed from the board. Ultimately, co-founder Ilya Sutskever, long concerned about AI risks, supported ousting Altman for not being "consistently candid" with the board.
The tension between Altman and Toner highlighted deeper questions about the company's alignment with its foundational goal: developing safe artificial general intelligence (AGI) for the benefit of humanity rather than for investors.
As part of last night's agreement to reinstate Altman, Toner and fellow board member Tasha McCauley agreed to resign to give the company a fresh start, despite maintaining that their actions had been justified. Adam D'Angelo, another member who backed Altman's removal, remains on the board, having helped negotiate the outcome with interim CEO Emmett Shear and the incoming members. The new board, featuring Bret Taylor and Lawrence Summers, brings strong credentials and experience and appears poised to steer the company toward a growth-focused mission.
This leadership transition raises questions about OpenAI's future. The board's composition suggests a transformation into a robust product company, one better positioned to resolve the governance problems that triggered the crisis. The departure of the members most closely associated with effective altruism, meanwhile, marks a pivot the company likely needed to remain viable. Even Jaan Tallinn, a vocal advocate for effective altruism, pointed to OpenAI's crisis as evidence of the risks of relying on altruism-motivated governance.
As the company moves forward, building a credible and diverse board of directors will be essential to maintaining its trajectory, as will a demonstrated commitment to fairness and ethical considerations in AI development.
In summary, OpenAI appears poised for a more conventional for-profit, product-driven future, aiming to serve hundreds of millions of users with general-purpose language models suited to diverse tasks. Yet it remains unclear whether the company can establish the trust and governance needed for the precise, safe, and unbiased applications that enterprise customers require. Other players may need to fill that gap.