OpenAI’s Ex-Superalignment Leader Critiques Company: ‘Safety Culture and Processes Have Been Neglected’

Earlier this week, Ilya Sutskever, co-founder and former chief scientist, and Jan Leike, a key researcher, both announced their resignations from OpenAI within hours of each other. This development is significant not only due to their senior positions but also because they led the superalignment team focused on creating systems to manage superintelligent AI models—those surpassing human intelligence.

Following their departures, reports indicate that OpenAI’s superalignment team has been disbanded, as reported in a recent Wired article (disclosure: my wife is Wired’s editor-in-chief).

Today, Leike took to his personal X account to voice his criticisms of OpenAI and its leadership, accusing them of prioritizing “shiny products” over safety. In one post, he stated, “Over the past years, safety culture and processes have taken a backseat to shiny products.”

Leike, who joined OpenAI in early 2021, admitted to having disagreements with the company's leadership, likely referring to CEO Sam Altman and other top executives, including President Greg Brockman and CTO Mira Murati. He noted, “I have been disagreeing with OpenAI leadership about the company’s core priorities for quite some time, until we finally reached a breaking point.”

He emphasized the urgent need to address how to steer and control AI systems that are significantly smarter than humans.

In July 2023, OpenAI committed to allocating 20% of its computational resources—specifically its powerful Nvidia GPU clusters—to the superalignment initiative. This effort aims to responsibly develop artificial general intelligence (AGI), defined in the company charter as “highly autonomous systems that outperform humans at most economically valuable work.”

Despite this pledge, Leike expressed frustration: “My team has been sailing against the wind. Sometimes we struggled for compute, and it was getting harder to accomplish crucial research.”

Altman responded by quoting Leike’s thread on X, expressing appreciation for Leike’s contributions to OpenAI’s alignment research and safety culture. He acknowledged that the company has more work to do and promised a longer post on the subject in the coming days.

This news casts a shadow over OpenAI, coinciding with the recent rollout of the new GPT-4o multimodal foundation model and the ChatGPT desktop app for Mac, announced on Monday. It also poses challenges for Microsoft, a key investor and partner, as they prepare for the upcoming Build conference next week.

We have reached out to OpenAI for a statement regarding Leike’s comments and will provide updates as they become available.
