Why Artificial General Intelligence Is Beyond the Scope of Deep Learning

Sam Altman’s abrupt firing and rehiring at OpenAI, together with speculation surrounding the company’s rumored Q* model, have sparked renewed interest in the opportunities and risks associated with artificial general intelligence (AGI).

AGI refers to AI systems that can perform intellectual tasks on par with humans. Rapid advances in artificial intelligence, particularly through deep learning, have fueled both enthusiasm and concern about how soon AGI might arrive. Several organizations, including OpenAI and Elon Musk’s xAI, are explicitly pursuing AGI, which raises a critical question: Are today’s AI advances actually leading toward AGI?

Limitations of Deep Learning

Deep learning, a prominent machine learning method based on artificial neural networks, underpins ChatGPT and much of modern AI. Because it can handle many data types with minimal preprocessing, deep learning is widely expected to play a pivotal role in the development of AGI.

Nonetheless, deep learning has notable limitations. Building effective models demands vast datasets and significant computational resources. More fundamentally, these models extract statistical rules from training data and apply those rules to new inputs to generate responses. The approach is predictive at its core: rules can be updated as new phenomena appear, but the models remain vulnerable to real-world uncertainty, which makes them poorly suited to the goals of AGI. A June 2022 incident involving a Cruise robotaxi illustrates the risk: the vehicle encountered a situation it had not been trained to handle and its decision-making failed as a result.
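To make that brittleness concrete, here is a minimal, illustrative sketch in Python (a toy classifier, not any production system): a model fit on training data applies its learned statistical rule to every new input, even one unlike anything it has seen, with no built-in way to flag the novelty.

import numpy as np

rng = np.random.default_rng(0)

# Two training clusters: "pedestrian" examples near (0, 0) and
# "vehicle" examples near (5, 5). Both are invented for illustration.
pedestrians = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
vehicles = rng.normal(loc=[5.0, 5.0], scale=0.5, size=(100, 2))
centroids = {
    "pedestrian": pedestrians.mean(axis=0),
    "vehicle": vehicles.mean(axis=0),
}

def classify(x):
    """Apply the rule learned from training data: nearest centroid wins."""
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# An input far outside the training distribution still receives a confident
# label; the learned rule has no notion of "I have never seen this before."
novel_object = np.array([40.0, -3.0])
print(classify(novel_object))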

The “What If” Conundrum

Humans, the blueprint for AGI, do not formulate exhaustive rules for every scenario. Instead, we interact with our environment through real-time perception, drawing on existing knowledge to understand the context and the factors that shape it. Unlike deep learning models that categorize objects against fixed criteria, humans take a flexible approach, adapting established rules as needed to arrive at effective choices.

For example, suppose you encounter an unfamiliar cylindrical object while hiking. A deep learning approach would analyze its features and commit to a classification, threat (a snake) or harmless (a rope), before acting. A human, in contrast, would assess the situation from a safe distance, continuously updating their understanding and deciding on the basis of a much broader range of past experiences and possible actions. This methodology emphasizes the exploration of alternatives over a single rigid prediction, suggesting that achieving AGI may depend more on strengthening “what if” reasoning than on improving prediction alone.
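Here is a rough sketch of that contrast in Python, with priors, likelihoods, and the confidence threshold all invented purely for illustration: instead of forcing an immediate snake-or-rope classification, an agent can hold a belief over both hypotheses, update it with each observation via Bayes’ rule, and keep its distance until one explanation clearly dominates.

belief = {"snake": 0.5, "rope": 0.5}  # uninformative starting belief

# Likelihood of each observation under each hypothesis (assumed values).
likelihoods = {
    "moves":       {"snake": 0.90, "rope": 0.05},
    "stays_still": {"snake": 0.30, "rope": 0.95},
    "has_texture": {"snake": 0.70, "rope": 0.60},
}

def update(belief, observation):
    """One step of Bayes' rule: reweight each hypothesis, then normalize."""
    posterior = {h: belief[h] * likelihoods[observation][h] for h in belief}
    total = sum(posterior.values())
    return {h: p / total for h in posterior}

# Observe from a distance, acting only once one hypothesis dominates.
for observation in ["stays_still", "has_texture", "stays_still"]:
    belief = update(belief, observation)
    if max(belief.values()) > 0.85:  # assumed confidence threshold
        break

print(belief)  # "rope" becomes far more probable after repeated stillness

The point is not the arithmetic but the decision structure: the agent never has to commit on first glance, and every intermediate belief still supports a safe action such as keeping its distance.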

Decision-Making Under Deep Uncertainty: A Path Forward

Innovative frameworks like Decision-Making under Deep Uncertainty (DMDU) offer promising strategies for AGI. DMDU approaches, such as Robust Decision-Making, assess how alternative decisions might perform across various future scenarios without necessitating constant retraining. They focus on identifying key factors that determine decision outcomes, aiming to find robust solutions that deliver acceptable results across different contexts.

Conventional deep learning solutions prioritize optimized performance, which can fail when conditions become unpredictable, as optimized supply chains did during the COVID-19 disruptions. DMDU methods instead seek resilient alternatives that perform acceptably across a range of environments, providing a valuable foundation for AI that must navigate real-world uncertainty.
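A minimal sketch of that idea in Python, using minimax regret, a standard robust decision rule, with payoff numbers invented for illustration: each candidate decision is scored across several plausible futures, and the preferred choice is the one whose worst-case regret is smallest rather than the one that is optimal in a single expected future.

decisions = ["single_supplier", "dual_supplier", "local_buffer_stock"]
scenarios = ["stable_demand", "demand_spike", "supply_disruption"]

# payoff[decision][scenario]: higher is better (assumed values).
payoff = {
    "single_supplier":    {"stable_demand": 10, "demand_spike": 4, "supply_disruption": -8},
    "dual_supplier":      {"stable_demand": 8,  "demand_spike": 6, "supply_disruption": 3},
    "local_buffer_stock": {"stable_demand": 6,  "demand_spike": 7, "supply_disruption": 5},
}

def max_regret(decision):
    """Worst-case regret: shortfall versus the best decision in each scenario."""
    return max(
        max(payoff[d][s] for d in decisions) - payoff[decision][s]
        for s in scenarios
    )

robust_choice = min(decisions, key=max_regret)
print(robust_choice, {d: max_regret(d) for d in decisions})

In this toy example, dual_supplier is the best option in none of the three scenarios, yet it is the robust choice because it never falls far behind the best; that is precisely the trade of single-scenario optimality for resilience described above.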

Robust Decision-Making in Autonomous Vehicles

The development of fully autonomous vehicles (AVs) offers a practical test of this methodology. AVs must maneuver through diverse and unpredictable conditions, closely mirroring the decisions human drivers make in traffic. Despite heavy investment in deep learning for full autonomy, these systems still struggle in uncertain scenarios. Because it is impossible to model every situation an AV might encounter in advance, unexpected conditions remain a persistent challenge for the technology.

One potential solution is a robust decision-making framework. AV sensors would collect real-time data to evaluate the appropriateness of various decisions, such as accelerating, changing lanes, or braking, within a specific traffic scenario. If the standard algorithmic response is in doubt, the system could then analyze how vulnerable each choice is in that context, reducing reliance on constant retraining and improving adaptability to real-world uncertainty. This shift could improve AV performance by prioritizing flexible decision-making over the pursuit of perfect prediction.
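As a hedged illustration of that loop, with scenario probabilities and cost numbers that are pure placeholders (a real system would derive these from sensor data and simulation), the sketch below evaluates three maneuvers against an ensemble of sampled near-term futures and selects the one with the best worst case.

import random

ACTIONS = ["accelerate", "change_lane", "brake"]

def sample_scenario(rng):
    """Draw one plausible near-term future (a stand-in for sensor-driven simulation)."""
    return {
        "lead_car_brakes": rng.random() < 0.3,
        "adjacent_lane_clear": rng.random() < 0.6,
    }

def cost(action, scenario):
    """Assumed cost model: higher means a worse outcome in that scenario."""
    if action == "accelerate":
        return 10.0 if scenario["lead_car_brakes"] else 0.0
    if action == "change_lane":
        return 1.0 if scenario["adjacent_lane_clear"] else 8.0
    return 2.0  # braking is safe but slow in every sampled scenario

rng = random.Random(42)
scenarios = [sample_scenario(rng) for _ in range(1000)]

# Choose the maneuver whose worst-case cost across the ensemble is smallest.
worst_case = {a: max(cost(a, s) for s in scenarios) for a in ACTIONS}
robust_action = min(worst_case, key=worst_case.get)
print(robust_action, worst_case)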

Emphasizing Decision Context for AGI Advancement

As AI technology evolves, making progress toward AGI may require moving beyond the deep learning paradigm and putting decision context at the center. Deep learning has proven effective in many applications, but it falls short of what AGI demands.

DMDU methodologies could pave the way for a more robust, decision-driven AI approach that effectively addresses real-world uncertainties.

Swaptik Chowdhury is a Ph.D. student at the Pardee RAND Graduate School and an assistant policy researcher at the RAND Corporation.

Steven Popper is an adjunct senior economist at the RAND Corporation and a professor of decision sciences at Tecnológico de Monterrey.
