Research Group Calls for Worldwide Halt on Foundation Model Development

The Machine Intelligence Research Institute (MIRI), a non-profit organization focused on AI safety, is advocating for a global halt to research on foundation or "frontier" models due to grave concerns about potential risks they pose. Foundation models are advanced AI systems that exhibit a wide range of capabilities across different modalities. MIRI warns that these models could eventually surpass human intelligence, raising fears of catastrophic outcomes.

Prominent figures in the tech industry, such as Elon Musk and Steve Wozniak, have previously voiced concerns and called for a moratorium on the development of models that exceed the capabilities of OpenAI’s GPT-4. However, MIRI is pushing for more drastic measures. In its recently released communication strategy, the organization is seeking a total cessation of efforts aimed at creating AI systems with intelligence greater than that of humans.

MIRI is wary of policymakers, who typically legislate through compromise and may therefore produce ineffective laws. The organization argues that the urgency of the situation cannot be overstated: AI laboratories continue to invest heavily in building increasingly advanced systems, while meaningful regulatory action remains elusive. “We do not seem to be close to getting the sweeping legislation we need,” the group notes.

To address these risks, MIRI proposes that governments mandate the inclusion of an “off switch” in the development of foundation models. This would allow for the immediate shutdown of AI systems if they begin to exhibit harmful or "existential risk" tendencies. MIRI emphasizes the importance of recognizing and addressing AI-related risks, stating that “we want humanity to wake up and take AI x-risk seriously.”

The organization says it is not opposed in principle to building AI more intelligent than humans, but insists that this should only happen once the necessary safety measures are firmly in place. Founded in 2000 by Eliezer Yudkowsky, MIRI has gained support from notable individuals including Peter Thiel and Vitalik Buterin, the co-founder of Ethereum. The Future of Life Institute, which has advocated for a six-month pause in AI development, is also among MIRI’s key supporters.

Bradley Shimmin, chief analyst for AI and data analytics at the research firm Omdia, points out that MIRI's lack of supporting research may hinder its efforts to persuade lawmakers. He observes that the market has already weighed these concerns, noting that the current generation of transformer-based generative AI models largely succeeds in creating useful representations of complex topics. “MIRI's call to action seems to be a step behind the industry,” he states, emphasizing the need for a forward-looking and actionable framework to mitigate future existential risks linked to AI.

Despite these critiques, Shimmin acknowledges MIRI's valuable role in highlighting the knowledge gaps between AI developers and regulators. He asserts that their efforts to raise awareness of potential risks should be taken seriously by those involved in the creation of the next generation of generative AI and, ultimately, artificial general intelligence (AGI). In this rapidly evolving landscape, it is essential for all stakeholders to engage thoughtfully with the implications of advanced AI technologies.