Insights from DeepMind Co-founder on AGI and the AI Race at SXSW 2024

Artificial general intelligence (AGI) is drawing closer to reality, but its full-scale practical deployment may still be decades away, according to Shane Legg, co-founder of DeepMind. During a discussion at SXSW 2024, Legg shared insights about the development and future of AGI, emphasizing that while the groundwork for AGI may be laid in the near term, several economic and technological factors must align before we can fully harness its capabilities.

Legg pointed out that the costs associated with AI technology need to fall significantly and that advances in robotics are essential for the practical deployment of AGI. Without economic feasibility, widespread corporate adoption is unlikely, regardless of AGI's revolutionary potential. Even so, exciting near-term applications are already emerging, including AI-driven scientific research assistants that hint at what AGI could become.

Origin of the Term "AGI"

Legg was instrumental in popularizing the term "artificial general intelligence." He coined it during a conversation with an author seeking a title for a book about AI with broad capabilities, rather than AI that merely excels at narrow tasks. The term then spread through online forums and gained wider acceptance. Four years later, another individual claimed to have invented it independently, an illustration of how the same idea can surface more than once in the tech world.

Defining AGI

In his fireside chat, Legg described AGI as a system capable of performing the cognitive tasks humans can perform, or even surpassing them. He maintains his long-standing, bold prediction of a roughly 50% chance of AGI by 2028, a timeframe that once seemed wildly optimistic against the widespread belief that AGI was still 50 to 100 years away.

Legg noted a shift in perspective over the years, explaining that many researchers were once reluctant to work on AGI safety because they doubted AGI would arrive anytime soon. However, recent advances in foundation models, such as Google's Gemini and OpenAI's GPT-4, have made AGI look increasingly plausible. He placed current AI models at "level 3" on a six-level scale developed by Google DeepMind, where level 3 represents an “expert” status, roughly the top 10% of skilled adults. Even so, these models remain narrow AI: proficient at specific tasks rather than exhibiting the broad generality of human intelligence.
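To make the scale concrete, here is a minimal Python sketch of how such a taxonomy could be encoded. The level names and percentile glosses follow public descriptions of DeepMind's "Levels of AGI" framework, but the `classify` helper and its exact thresholds are illustrative assumptions, not published DeepMind code.

```python
from enum import IntEnum

class AGILevel(IntEnum):
    """Performance levels loosely following DeepMind's 'Levels of AGI' scale.

    The percentile glosses are illustrative assumptions drawn from public
    descriptions of the framework, not an official specification.
    """
    NO_AI = 0        # no machine intelligence involved
    EMERGING = 1     # comparable to an unskilled human
    COMPETENT = 2    # at least the median skilled adult
    EXPERT = 3       # roughly the top 10% of skilled adults
    VIRTUOSO = 4     # roughly the top 1% of skilled adults
    SUPERHUMAN = 5   # outperforms all humans

def classify(percentile: float) -> AGILevel:
    """Map a human-percentile benchmark score to a level (illustrative only)."""
    if percentile >= 100:
        return AGILevel.SUPERHUMAN
    if percentile >= 99:
        return AGILevel.VIRTUOSO
    if percentile >= 90:
        return AGILevel.EXPERT
    if percentile >= 50:
        return AGILevel.COMPETENT
    return AGILevel.EMERGING

print(classify(92))  # AGILevel.EXPERT
```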

The Path to Advanced AI

Legg compared the evolution of these models to the dual-process model of thinking in psychology, popularized by Daniel Kahneman: System 1, which is fast, intuitive, and automatic, and System 2, which engages in deliberate planning and reasoning. He asserted that current models operate primarily at the System 1 level and advocated pushing them toward System 2, where an AI can analyze outcomes, critique its own decisions, and adapt accordingly, as sketched below.
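As a rough illustration of the distinction, the sketch below layers a System 2-style deliberation loop on top of a System 1-style generator. The `generate`, `critique`, and `revise` functions are hypothetical stand-ins for model calls, not any real API, and the loop is a schematic of the critique-and-adapt behavior Legg described, not an actual DeepMind method.

```python
def generate(prompt: str) -> str:
    """System 1: produce a fast first-pass answer (placeholder for a model call)."""
    return f"draft answer to: {prompt}"

def critique(answer: str) -> str | None:
    """Inspect the draft; return a description of a flaw, or None if acceptable."""
    return "too vague" if "draft" in answer else None

def revise(answer: str, flaw: str) -> str:
    """Adapt the answer in light of the critique (placeholder for a model call)."""
    return answer.replace("draft", "revised")

def deliberate(prompt: str, max_rounds: int = 3) -> str:
    """System 2: iterate generate -> critique -> revise until no flaw remains."""
    answer = generate(prompt)
    for _ in range(max_rounds):
        flaw = critique(answer)
        if flaw is None:
            break
        answer = revise(answer, flaw)
    return answer

print(deliberate("What limits AGI deployment?"))
```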

"I'm confident that AGI is on the horizon," he stated. "When it arrives, it will bring profound transformations to society." The integration of machine intelligence alongside human ingenuity signals a future filled with limitless possibilities.

Navigating the Risks of Transformation

However, such profound changes come with their own set of challenges and risks. Legg warned that the introduction of advanced technology at a global scale often brings unpredictable outcomes. He highlighted the potential for misuse by malevolent actors as well as unintentional disruptions caused by those who do not fully understand the technology.

Traditionally, discussions surrounding AI safety have focused on two categories: immediate risks, such as biases in algorithms, and long-term challenges posed by superintelligent systems that might exceed human control. Legg noted that advancements in foundation models have blurred these lines, suggesting that current powerful models not only display preliminary AGI capabilities but also present immediate risks.

Furthermore, the emergence of multimodal models—those trained on various data types like text, images, video, and audio—enhances their understanding of human culture, making them more potent and nuanced.

The Imperative for AGI Development

Given the current success of narrow AI across various industries, some question whether pursuing AGI is necessary at all. Legg argues that many complex problems demand the integration of knowledge across multiple, diverse domains and datasets, something only a general system can do well.

In addition, he emphasized how difficult it would be to halt AGI development now that it has become a "mission critical" component for major corporations and countless smaller enterprises. Intelligence agencies, such as the U.S. National Security Agency (NSA), also possess vast stores of data, so even a corporate moratorium would leave capable actors in the field; it is hard to envision a credible strategy for constraining AGI progress.

“Stopping AGI development is a significant challenge,” Legg concluded, urging thoughtful discourse on how to navigate this rapidly evolving landscape. The future of AGI promises to redefine our world, and understanding both its potential and its peril is critical.
