Exploring Google DeepMind's Six Levels of Artificial General Intelligence (AGI)

Google's DeepMind team is actively refining the discussion around Artificial General Intelligence (AGI) by establishing a clear definition of the term. While many AI enthusiasts view AGI as the ultimate goal in the quest for artificial superintelligence, the specifics of what AGI entails are rarely articulated. The term is often vaguely applied to describe software that, once it crosses a certain threshold, achieves capabilities akin to human intelligence.

In a preprint published on arXiv, the DeepMind researchers emphasized the importance of a precise definition of AGI, highlighting the need to quantify attributes such as performance, generality, and autonomy in AI systems. By proposing a standardized framework for evaluating AGI, they aim to create benchmarks that can assist in assessing the capabilities of various AI models.

### Defining AGI

Currently, there is no universally accepted definition of AGI. OpenAI's charter characterizes AGI as "highly autonomous systems that outperform humans at most economically valuable work." Experts agree that, unlike narrow AI—which excels in specific tasks such as language translation or game playing—AGI should demonstrate the flexibility and adaptability to handle any intellectual task that a human can perform. This means mastering specific domains while also having the capacity to transfer knowledge across different fields, exhibit creativity, and solve novel problems.

To clarify the concept of AGI, researchers at Google took a cue from the six-level framework used to assess autonomous driving progress. They analyzed various definitions of AGI and identified several key principles that should underpin any definition.

**First**, the Google team argues that AGI definitions should focus on capabilities rather than the processes through which AI achieves them. This perspective emphasizes that AI does not need to replicate human thought patterns or consciousness to qualify as AGI.

**Second**, they assert that achieving AGI requires not only general ability but also specific performance benchmarks for various tasks. However, they clarify that these benchmarks do not need to be validated in real-world environments; it is adequate for a model to demonstrate the potential to exceed human capabilities in a given area.

Some experts suggest that AGI may need to be embodied in robots to interact with the physical world. However, the DeepMind researchers contend that embodiment is not a requirement. They propose that AGI should primarily focus on intelligent cognitive tasks, such as self-directed learning. Furthermore, they stress the importance of tracking how AGI evolves over time rather than fixating on a singular end goal.

### Levels of AGI

To rank progress toward AGI, DeepMind developed a classification system called "Levels of AGI." It begins at "No AI" and ascends through "emerging" (comparable to or slightly better than an unskilled human), "competent," "expert," and "virtuoso," ending with "superhuman" (exceeding the capabilities of all humans). The framework applies to both narrow and general AI systems.
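The performance axis of this ladder can be thought of as a mapping from human-percentile skill to a named tier. The sketch below is a hypothetical illustration of that idea in Python; the percentile thresholds follow the preprint's rough descriptions, but the `Level` type and `classify` helper are inventions for this example, not part of any published framework or API.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Level:
    rank: int
    name: str
    min_percentile: float  # skill percentile relative to skilled adult humans

# Hypothetical encoding of the "Levels of AGI" performance tiers.
LEVELS = [
    Level(0, "No AI", 0.0),
    Level(1, "Emerging", 0.0),     # comparable to or better than an unskilled human
    Level(2, "Competent", 50.0),   # at least the 50th percentile of skilled adults
    Level(3, "Expert", 90.0),
    Level(4, "Virtuoso", 99.0),
    Level(5, "Superhuman", 100.0), # outperforms all humans
]

def classify(percentile: float) -> Level:
    """Map a human-percentile score to the highest tier it satisfies."""
    best = LEVELS[1]  # any working AI system is at least "Emerging"
    for level in LEVELS[2:]:
        if percentile >= level.min_percentile:
            best = level
    return best
```

Under this sketch, a system scoring at the 95th percentile of skilled humans on some task would land at "Expert" for that task, which mirrors how the paper rates systems per capability rather than with a single global score.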

The researchers point out that existing AI technologies such as DeepMind's AlphaFold already exhibit superhuman performance in specific tasks. They also suggest that advanced chatbots like OpenAI's GPT-4 and Google's Bard may represent early stages of AGI.

### The Future of AGI

Some members of the AI community are optimistic about the imminent arrival of AGI. Jensen Huang, CEO of Nvidia, recently expressed the belief that AGI could be realized within the coming decade, or even sooner. Nicole Valentine, an AI and fintech specialist, proposed that AGI may already be present but has not yet reached its full potential. She argues that as AI systems evolve and learn from their environments, they will exhibit greater sophistication over time. "The real challenge is how we as humans navigate the risks and opportunities presented by software that can learn, communicate in natural language, and reason," she stated.

Earlier in the year, a group of AI experts drew attention with their paper titled "Sparks of Artificial General Intelligence: Early Experiments with GPT-4," in which they highlighted GPT-4's ability to perform complex tasks across diverse fields, suggesting it could be seen as an initial—though incomplete—instance of AGI.

Conversely, some experts believe we are still far from achieving human-level intelligence in machines. Meta's Chief AI Scientist, Yann LeCun, objects to the term AGI itself, suggesting it should be replaced with "human-level AI." However, he acknowledges that machines will eventually surpass human intelligence in all domains, which aligns with the general definition of AGI.

Proponents of AGI assert that it holds the potential to revolutionize various sectors, from healthcare to space exploration. However, experts like Assaf Melochna, the president of AI company Aquant, note that while AGI could lead to extraordinary advancements, it also poses significant risks akin to those witnessed in the manipulation of social media during societal and political events.
