Are We at the Pinnacle of Human Evolution?

Two weeks ago, Ilya Sutskever, OpenAI’s former chief scientist, secured $1 billion for his new venture, Safe Superintelligence (SSI). This startup aims to develop AI systems that surpass human cognitive abilities. Earlier, Elon Musk’s company, xAI, raised $6 billion with a similar goal, predicting superintelligence within five to six years. These substantial funding rounds add to the billions already invested in firms like OpenAI and Anthropic that are racing to achieve superintelligence.

As a long-time researcher in this domain, I share Musk’s belief that superintelligence could arrive in years, not decades. However, I am far less confident that it can be developed safely. I view this advancement as an “evolutionary pressure point” for humanity—a stage where our survival is challenged by superior intelligences that may have conflicting interests.

I liken this milestone to meeting an advanced alien species, a framing I call the "Arrival Mind Paradox": most people fear a superior alien intelligence more than the advanced intelligences we are creating here on Earth. This misplaced comfort stems from the belief that we are building AI to emulate humans. In reality, we are designing AI systems to be adept at mimicking human behavior and understanding us, while their underlying cognitive frameworks remain fundamentally different from ours.

Despite these concerns, the push for superintelligence continues. 2024 may mark a pivotal moment: the year when AI systems begin to outperform over half of human adults cognitively. After reaching this threshold, we may gradually lose our cognitive advantages until AI systems can surpass all individuals, even the most gifted among us.

Until recently, humans outperformed even powerful AI systems on basic reasoning tasks. Journalist Maxim Lott has tested major large language models (LLMs) using a standardized Mensa IQ test. Recently, OpenAI's new “o1” system scored an IQ of 120, surpassing the median human score of 100. However, this does not definitively mean AI has outperformed most humans.

Standard IQ tests are not entirely valid for AI because the training data likely includes answers to these tests. To counter this, Lott created a custom IQ test absent from any training data; the o1 model scored 95 on it. Despite being below average, this result still surpasses 37% of adult scores and reflects rapid improvement: on the same test, OpenAI's earlier GPT-4 was outperformed by 98% of adults. If this trend continues, AI models are likely to outscore 50% of human adults on IQ tests this year.

Does this imply we will reach "peak human" in 2024? Yes and no. I predict at least one foundational AI model released this year will surpass the reasoning capabilities of 50% of adults. By that measure, we will pass my definition of peak human, entering a rapid decline toward a time when an AI can outperform every individual entirely.

However, humans possess another vital asset: collective intelligence. Human groups can demonstrate greater cognitive abilities than individuals, and with over 8 billion of us, there’s tremendous potential. My research for the last decade has concentrated on harnessing AI to connect individuals into real-time systems that can amplify our collective intelligence, a concept I term “collective superintelligence.” This approach could help humanity remain cognitively competitive even as AI surpasses individual reasoning ability, pushing us towards what I envision as “peak humanity.”

In 2019, my team at Unanimous AI conducted experiments allowing groups to take IQ tests collectively through AI-mediated systems. Our early technology, known as “Swarm AI,” enabled small groups to achieve a collective IQ score of 114 during deliberations. While promising, it fell short of achieving collective superintelligence.

Recently, we introduced “conversational swarm intelligence” (CSI), which allows larger groups to engage in real-time deliberations that enhance collective intelligence. A study conducted in collaboration with Carnegie Mellon University found that groups of 35 individuals, participating as AI-facilitated conversational swarms, averaged IQ scores of 128, placing them in the 97th percentile. This initial success suggests we are only beginning to explore the potential of human intelligence when AI helps us think in larger groups.
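The percentile figures quoted throughout this piece follow directly from the standard IQ scale, which is normed to a normal distribution with a mean of 100 and a standard deviation of 15. As a minimal sketch (assuming that standard normal model, and using only Python's standard library), a score can be converted to a percentile like this:

```python
import math

def iq_percentile(score: float, mean: float = 100.0, sd: float = 15.0) -> float:
    """Fraction of the population scoring below `score`, assuming
    IQ is normally distributed with the given mean and standard deviation."""
    z = (score - mean) / sd
    # Cumulative distribution function of the standard normal
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# o1's score of 95 on the custom test -> roughly 37th percentile
print(f"{iq_percentile(95) * 100:.0f}%")   # ~37%

# The conversational swarms' average of 128 -> roughly 97th percentile
print(f"{iq_percentile(128) * 100:.0f}%")  # ~97%
```

These rounded outputs match the article's figures of 37% and the 97th percentile.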

Pursuing collective superintelligence excites me because it can enhance our cognitive capabilities while being rooted in human values and interests. The question remains: how long can we stay ahead of purely digital AI systems? This will hinge on whether AI development continues at its current pace or faces a slowdown. Regardless, amplifying our collective intelligence may provide the leverage we need to ensure our cognitive relevance as AI evolves.

Many argue that human intelligence encompasses more than logic and reasoning measured by IQ tests, which I wholeheartedly agree with. However, when examining creativity and artistry—often seen as the most human traits—AI systems are rapidly closing the gap. Recent estimates suggest that generative AI produces 15 billion images annually, with increasing speed.

Even more striking, a recent study indicated that AI chatbots are outperforming humans on creativity tests. The findings suggest that AI has reached or possibly surpassed the average human’s creative ideation abilities. While I remain skeptical of these conclusions, it’s evident that it’s only a matter of time before such statements hold true.

Whether we welcome it or not, our position as the most intelligent and creative species on Earth is likely to be challenged soon. We can debate the implications for humanity, but there is an urgent need to prepare and protect ourselves from being outmatched.

Louis Rosenberg is a computer scientist and entrepreneur specializing in AI and mixed reality. His new book, Our Next Reality, explores the implications of AI and spatial computing for humanity.
