OpenAI Formed a Team to Manage ‘Superintelligent’ AI, but It’s Now Fading Away, Sources Reveal

OpenAI’s Superalignment team, tasked with developing methods to govern and steer “superintelligent” AI systems, was promised 20% of the company’s computing resources. According to team members, however, requests for even a fraction of that compute were frequently denied, blocking the team from doing its work. That issue, among others, prompted several team members to resign this week, including co-lead Jan Leike, a former DeepMind researcher who played a significant role in the development of ChatGPT, GPT-4, and ChatGPT’s predecessor, InstructGPT.

On Friday morning, Leike publicly shared some of the reasons behind his resignation. “I have been at odds with OpenAI leadership regarding the company’s core priorities for quite some time, reaching a breaking point,” Leike wrote in a series of posts on X on May 17. “I believe we should focus much more on preparing for the next generations of models, emphasizing security, monitoring, preparedness, safety, adversarial robustness, (super)alignment, confidentiality, societal impact, and related topics. These challenges are complex, and I am worried we aren’t on a path to address them effectively.” Underscoring the gravity of building smarter-than-human machines, he added, “OpenAI bears a tremendous responsibility for humanity.”

OpenAI has not yet commented on the resources originally promised to the Superalignment team. Formed in July of last year, the team was led by Leike and OpenAI co-founder Ilya Sutskever, who also resigned this week. Its goal was to solve the core technical challenges of controlling superintelligent AI within four years. The team comprised scientists and engineers from OpenAI’s previous alignment division, along with researchers from other organizations, and worked on the safety of both OpenAI’s models and those developed elsewhere. It also aimed to foster collaboration with the broader AI community through initiatives such as a research grant program.

Despite publishing significant safety research and distributing millions of dollars in grants to outside researchers, the Superalignment team found itself fighting for resources as product launches consumed more and more of OpenAI leadership’s attention. Team members argued that securing investment up front was essential to fulfilling OpenAI’s mission of developing superintelligent AI for the benefit of all humanity. “Building smarter-than-human machines is fundamentally perilous,” Leike noted. “However, over the past few years, our safety culture and processes have taken a backseat to flashy product launches.”

Sutskever’s conflict with OpenAI CEO Sam Altman further complicated the situation. Late last year, Sutskever and the company’s previous board of directors moved abruptly to fire Altman over concerns about his transparency with the board. Under pressure from investors, including Microsoft, and from OpenAI employees, Altman was eventually reinstated; most of the board resigned, and Sutskever reportedly never returned to work.

According to insiders, Sutskever played a crucial role in the Superalignment team, contributing research and acting as a liaison to other divisions within OpenAI, where he emphasized the team’s importance to key decision-makers.

In response to Leike’s departure, Altman acknowledged on X that more work lies ahead and that the company is committed to doing it. Co-founder Greg Brockman elaborated the following day, saying, “We’re thankful to Jan for his contributions to OpenAI, and know he will continue to support the mission from outside. Given the concerns raised by his departure, we wanted to clarify our overall strategy.”

While Brockman offered few specifics in the way of policy changes or commitments, he stressed the importance of establishing a “tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and a balance between safety and capabilities.”

Following the exits of Leike and Sutskever, John Schulman, another OpenAI co-founder, has taken charge of the type of work the Superalignment team was doing. But there will no longer be a dedicated team; instead, the work will be handled by a loosely connected group of researchers embedded in divisions across the company. An OpenAI spokesperson characterized this as “a deeper integration.”

The worry is that, as a result, OpenAI’s AI development will lack the robust safety focus it requires.
