How the OpenAI Controversy Could Strengthen Meta and the 'Open AI' Movement

It has been an eventful four days for OpenAI, the leading force in generative AI behind the wildly popular ChatGPT.

In a sudden turn of events, the OpenAI board removed CEO and co-founder Sam Altman from his position and demoted president and co-founder Greg Brockman, who subsequently resigned. This sparked a staff uprising demanding the immediate reinstatement of the founders. Meanwhile, Microsoft extended an offer to Altman and Brockman to lead a new AI division; however, no formal agreements had been signed at that point. Speculation arose about the potential for Altman and Brockman to return to OpenAI in some capacity.

The situation is dynamic, with various outcomes still possible. Yet this upheaval has highlighted the power dynamics affecting the burgeoning AI landscape, prompting critical questions about the risks of relying heavily on a centralized, proprietary player. "The OpenAI/Microsoft saga underscores one of the biggest short-term risks in AI: the potential concentration of control in the hands of a few major players familiar with shaping the previous internet era," said Mark Surman, president and executive director at the Mozilla Foundation. "We could better mitigate these risks if models like GPT-X were responsibly open-sourced, enabling researchers and startups to make the technology safer, more useful, and more trustworthy."

The Call for Openness

An open letter drafted by Mozilla weeks ago saw Meta’s chief AI scientist, Yann LeCun, join around 70 other signatories in advocating for greater transparency in AI development; the initiative has since attracted over 1,700 signatures. Big tech firms such as OpenAI and Google’s DeepMind have lobbied for regulation, warning of potentially catastrophic risks should AI technology be mishandled, and arguing that proprietary AI is inherently safer than open-source alternatives.

LeCun and his co-signers disagree. “While openly available models do come with risks — including potential misuse by malicious actors — we’ve consistently observed that the same can be said for proprietary technologies,” the letter stated. “Increased public access and scrutiny can, in fact, enhance safety rather than diminish it. The belief that tightly controlling foundational AI models is the singular way to safeguard against societal harm is, at best, naive, and at worst, dangerous."

LeCun has personally criticized leading AI companies for attempting to gain “regulatory capture of the AI landscape” by lobbying against open AI research and development. On a company level, Meta is working to foster collaboration and transparency, recently teaming up with Hugging Face to establish a new startup accelerator aimed at promoting open-source AI models.

Despite recent upheavals, OpenAI has long been viewed as the AI darling of the tech world. Numerous startups have built their platforms around OpenAI’s proprietary GPT-X large language models (LLMs). Over the weekend, many OpenAI customers reportedly began reaching out to competitors like Anthropic, Google, and Cohere, voicing concerns that their businesses could suffer if OpenAI were to collapse abruptly.

The Risks of Over-Reliance

The sense of panic is palpable. But history offers lessons from other technological domains, notably cloud computing, where companies found themselves locked into a single centralized vendor.

“The current frenzy surrounding OpenAI is fueled by many startups excessively relying on proprietary models,” stated Luis Ceze, a computer science professor at the University of Washington and CEO of OctoML. “Putting all your resources into one provider is perilous — we witnessed this during the early cloud days, leading companies to adopt multi-cloud and hybrid strategies.”

In this tumultuous period, Microsoft appears to be the greatest beneficiary, having sought ways to reduce its dependence on OpenAI even as it remains a major shareholder. Meta, meanwhile, could gain as businesses explore multi-model strategies or adopt models built with a more “open” approach.

“Today’s open-source landscape offers diverse models that allow companies to effectively diversify,” Ceze noted. “This diversification enables startups to pivot swiftly and lower risk. There’s also significant upside, as many competing models already outperform OpenAI’s in terms of price, performance, and speed.”

A recently leaked internal memo from Google expressed concerns that, despite the rapid advancements made by proprietary LLMs, open-source AI might ultimately prevail. “We have no moat, and neither does OpenAI,” the memo revealed.

The memo pointed to a foundational language model that leaked from Meta back in March and quickly gained traction, underscoring the scalability and collaborative potential of a more open approach to AI development — something far harder to achieve with closed models.

It’s important to note that, despite Meta's claims, its Llama-branded LLMs may not be as “open source” as advertised. While they are available for research and commercial purposes, using Llama to train other models is restricted, and developers of apps with over 700 million monthly active users must seek special permission from Meta — effectively making usage contingent on Meta’s discretion.

Meta is not the only player promoting an “open” AI development approach; others like Hugging Face, Mistral AI, and 01.AI have raised significant funding with similar intentions. Nevertheless, as a $900 billion giant with a longstanding commitment to open-source initiatives, Meta is well-positioned to benefit from the chaos surrounding OpenAI. The company's emphasis on “openness” rather than proprietary control appears to be paying off, and irrespective of the true nature of Llama’s openness, it may be sufficiently accessible for many users.

At this stage, it remains too early to definitively gauge the long-term repercussions of the OpenAI turmoil on the future of LLM development. While Altman and Brockman are undoubtedly skilled leaders, and a return to OpenAI is possible, the heavy focus on a handful of individuals raises concerns about the sustainability of this model. The widespread disruption following their departure is telling.
