Nvidia's Jensen Huang Says AI Hallucinations Are Solvable; Predicts Artificial General Intelligence in 5 Years

Understanding Artificial General Intelligence: Insights from Nvidia's CEO Jensen Huang

Artificial general intelligence (AGI) — often called “strong AI,” “full AI,” or “human-level AI” — would mark a transformative leap in artificial intelligence. Unlike narrow AI, which excels at specific tasks like detecting product defects or drafting news summaries, AGI would handle a wide array of cognitive challenges at or above human capability. Speaking at Nvidia’s annual GTC developer conference this week, CEO Jensen Huang expressed weariness with the topic, noting that he is frequently misquoted on it.

The recurring questions about AGI are understandable: they touch on profound existential concerns about humanity's position and authority in a world where machines might surpass human intellect and abilities. The crux of the apprehension is the unpredictability of AGI’s decision-making, which may not align with human values or priorities — a theme science fiction has explored since the 1940s. Many worry that once AGI achieves a significant level of autonomy, it could become impossible to control, with consequences that cannot be foreseen or reversed.

When sensationalist media inquire about timelines, they often aim to provoke AI experts into predicting catastrophic scenarios for humanity — which understandably makes CEOs hesitant to engage in such discussions.

In his address, Huang clarified his perspective on AGI, emphasizing that predicting its emergence hinges on how one defines it. He drew an analogy to knowing when you have arrived at a destination: just as you know it’s New Year’s when 2025 arrives, or that you’ve reached the San Jose Convention Center when you see its prominent banners, the arrival of AGI should be equally recognizable — provided we agree on what to measure.

“If we define AGI with specific criteria, such as the ability to outperform most individuals by a margin — maybe 8% — I believe we will achieve that within five years,” Huang stated. Potential benchmarks he mentioned include passing a legal bar exam, logic evaluations, or pre-med tests. He stressed that until questioners can articulate a clear definition of AGI, he remains cautious in making predictions.

Addressing AI Hallucinations

During a recent Q&A, Huang addressed concerns about AI hallucinations, a phenomenon where AI generates plausible-sounding but inaccurate responses. He appeared frustrated and emphasized that these issues are solvable through diligent research.

“Implement a rule: Every answer must be validated through research first,” Huang suggested, describing this method as “retrieval-augmented generation.” This approach emphasizes fundamental media literacy: scrutinizing sources and validating facts against established truths. If an AI's source contains inaccuracies, it should be disregarded, and the search for reliable information should continue. “An AI should not just respond; it must first ascertain which answers are most reliable.”
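To make the idea concrete, here is a minimal Python sketch of that retrieve-first, answer-second control flow. It is not Nvidia's implementation: the `Document` type, the toy corpus, the keyword-overlap `retrieve()` function, and the `reliable` flag are all illustrative assumptions standing in for real retrieval and source vetting.

```python
# Minimal sketch of the retrieval-augmented generation (RAG) pattern Huang
# describes: look up supporting sources first, discard unreliable ones,
# and only then answer -- or admit uncertainty.
from dataclasses import dataclass

@dataclass
class Document:
    source: str
    text: str
    reliable: bool  # assumption: a prior vetting step flags trusted sources

# Hypothetical toy corpus standing in for a real document store.
CORPUS = [
    Document("encyclopedia", "Nvidia's GTC conference is held in San Jose.", True),
    Document("random-forum", "GTC is secretly held on the moon.", False),
]

def retrieve(query: str, corpus: list[Document]) -> list[Document]:
    """Naive keyword-overlap retrieval; a real system would use embeddings."""
    terms = set(query.lower().split())
    scored = [(len(terms & set(d.text.lower().split())), d) for d in corpus]
    return [d for score, d in sorted(scored, key=lambda p: -p[0]) if score > 0]

def answer(query: str) -> str:
    # Huang's rule: validate through research first. Drop unreliable
    # sources, and abstain rather than guess when nothing survives.
    hits = [d for d in retrieve(query, CORPUS) if d.reliable]
    if not hits:
        return "I'm not certain about that."
    top = hits[0]
    return f"Based on {top.source}: {top.text}"

print(answer("Where is GTC held?"))
```

A production system would swap the keyword overlap for embedding search and the `reliable` flag for a genuine source-quality check, but the control flow — retrieve, filter, then answer or abstain — is the point Huang is making.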

For critical inquiries, such as health-related questions, Huang proposed checking multiple verified sources for reliability. Importantly, this means AI models must have the capacity to admit uncertainty, responding with phrases like, “I’m not certain about that,” or “I can’t find a consensus on the answer,” and even acknowledging when specific outcomes (like the Super Bowl winner) cannot be predicted.
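That cross-checking behavior can also be sketched in code. The fragment below polls several hypothetical vetted sources and answers only when a configurable majority of them agree, otherwise admitting that no consensus was found; the `SOURCES` tables and the 66% threshold are illustrative assumptions, not anything Huang specified.

```python
# Illustrative sketch of consensus checking across multiple sources:
# answer only when enough independent sources agree.
from collections import Counter

# Hypothetical vetted sources, each a simple question -> answer lookup.
SOURCES = {
    "atlas-a": {"capital of australia": "Canberra"},
    "atlas-b": {"capital of australia": "Canberra"},
    "atlas-c": {"capital of australia": "Sydney"},
}

def consensus_answer(question: str, threshold: float = 0.66) -> str:
    answers = [db[question] for db in SOURCES.values() if question in db]
    if not answers:
        return "I'm not certain about that."
    best, count = Counter(answers).most_common(1)[0]
    if count / len(answers) >= threshold:
        return f"{best} (agreed by {count} of {len(answers)} sources)"
    return "I can't find a consensus on the answer."

print(consensus_answer("capital of australia"))
```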

Stay Updated on Nvidia's GTC 2024:

- Nvidia introduces NIM for seamless AI model deployment.

- Surprising revelations from Nvidia's keynote at GTC.

- Why the AI community is gathering at Nvidia’s GTC 2024 event.

- Nvidia partners with leading names in humanoid robotics for the new AI platform, GR00T.
