Sam Altman Resigns from OpenAI's Safety Committee: Key Insights and Implications

OpenAI CEO Sam Altman is stepping down from the internal committee established in May to oversee critical safety decisions for the company's projects and operations.

In a recent blog post, OpenAI announced that the Safety and Security Committee will transition to an independent oversight board, chaired by Carnegie Mellon professor Zico Kolter. The new board will include notable members such as Quora CEO Adam D’Angelo, retired U.S. Army General Paul Nakasone, and former Sony EVP Nicole Seligman—all of whom currently serve on OpenAI’s board.

OpenAI highlighted in its announcement that the committee conducted a safety review of o1, the organization's latest AI model, albeit while Altman was still serving as its chair. The board will continue to receive updates from OpenAI's safety and security teams and will retain the authority to postpone model releases if safety concerns arise.

“As part of its responsibilities, the Safety and Security Committee will continue to receive comprehensive reports on technical evaluations for existing and upcoming models, alongside ongoing post-release monitoring,” OpenAI stated. “We are enhancing our model launch processes to establish an integrated safety and security framework with well-defined success criteria.”

Altman's exit from the Safety and Security Committee comes on the heels of inquiries from five U.S. senators regarding OpenAI’s policies, raised in a letter to Altman earlier this summer. A significant portion of the OpenAI team formerly focused on AI’s long-term risks has departed, with ex-researchers accusing Altman of resisting meaningful AI regulation to pursue objectives that favor OpenAI's corporate interests.

Consistent with those concerns, OpenAI has significantly increased its federal lobbying efforts, allocating $800,000 for the first six months of 2024, compared to $260,000 for all of last year. Earlier this spring, Altman also joined the U.S. Department of Homeland Security's Artificial Intelligence Safety and Security Board, which advises on AI development and deployment across critical U.S. infrastructure.

Despite Altman stepping down, there is little to suggest that the Safety and Security Committee will make challenging decisions that could fundamentally alter OpenAI's commercial strategy. Notably, OpenAI mentioned in May that it would seek to address "legitimate criticisms" of its operations through the committee, though what counts as legitimate is left open to interpretation.

In an op-ed published in The Economist in May, former OpenAI board members Helen Toner and Tasha McCauley expressed doubts about OpenAI’s ability to self-regulate. “Based on our experience, we believe that self-governance cannot reliably withstand the pressure of profit incentives,” they wrote.

And those profit incentives are growing: OpenAI is reportedly in the process of raising over $6.5 billion in funding, a round that could value the company at more than $150 billion. To secure this funding, OpenAI may consider altering its hybrid nonprofit structure, which was originally designed to limit investor returns and ensure alignment with its foundational mission: developing artificial general intelligence that benefits all of humanity.