Miles Brundage, OpenAI’s senior adviser for AGI (artificial general intelligence) readiness, issued a stark warning upon his departure on Wednesday: no one, including OpenAI, is prepared for AGI. “Neither OpenAI nor any other frontier lab is ready for AGI, and the world is also not ready,” he stated. Brundage, who spent six years guiding the company’s AI safety initiatives, believes this view is widely shared among OpenAI’s leadership. However, he draws a distinction between acknowledging the lack of readiness and actually making progress toward it.
His exit adds to a series of notable departures from OpenAI’s safety teams. Jan Leike, a key researcher, left after raising concerns about the prioritization of product development over safety culture. Additionally, co-founder Ilya Sutskever departed to establish his own AI startup aimed at safe AGI development.
The disbandment of Brundage’s “AGI Readiness” team, following the recent dissolution of the “Superalignment” team focused on long-term AI risk, underscores growing tensions within OpenAI between the company’s founding mission and its commercial objectives. OpenAI is reportedly under pressure to complete its transition from a nonprofit to a for-profit public benefit corporation within two years, or risk having to return funds from its recent $6.6 billion investment round. Brundage has voiced apprehensions about this shift toward commercialization since OpenAI established its for-profit arm in 2019.
In explaining his departure, Brundage cited limitations on his research and publication freedom at the organization. He stressed the necessity of independent voices in AI policy, free from industry biases and conflicts of interest. He believes his impact on global AI governance will be greater from outside the company, having previously advised OpenAI’s leadership on internal preparedness.
His departure may also signal deeper cultural divides within OpenAI. Many researchers who joined to advance AI research now find themselves navigating an increasingly product-driven environment. Resource allocation has become contentious: reports indicate that Leike’s team was denied computing resources for safety research before it was ultimately disbanded.
Despite these challenges, Brundage mentioned that OpenAI has offered to support his future work, providing funding, API credits, and early model access without conditions.