How Bad Can Bad AI Really Get? Exploring the Dangers of Malicious Artificial Intelligence

By the end of the 2030s, the world is expected to undergo radical, transformative change, giving way to a new order in which artificial intelligence (AI) may dominate society. It is a strange contrast: many people worry about AI threatening human civilization, while I often struggle just to get my virtual assistant to set an alarm.

For decades, AI felt like a concept confined to movies, science fiction, and magazine articles. Today, students routinely fold AI into their essay writing, a sign of how rapidly the technology has advanced. For the past fifty years, the computational power of chips has largely tracked Moore's Law, doubling roughly every two years. Since 2012, however, the compute behind AI has been doubling roughly every 100 days, an increase on the order of 300,000 times over that period. A model that took a full day to train in 2014 can now be trained in about two minutes. It is as if time itself has been accelerated.
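To see how a 100-day doubling time compounds into a six-figure multiplier, here is a small back-of-the-envelope sketch in Python. The window (mid-2012 to mid-2017) and the 100-day doubling period are assumptions chosen purely for illustration, not figures taken from this article.

```python
# Back-of-the-envelope check of the compute growth quoted above.
# ASSUMPTIONS (for illustration only): compute doubles every 100 days,
# measured over a window from mid-2012 to mid-2017.
from datetime import date

DOUBLING_PERIOD_DAYS = 100                       # assumed doubling time
START, END = date(2012, 6, 1), date(2017, 6, 1)  # assumed window

doublings = (END - START).days / DOUBLING_PERIOD_DAYS
growth = 2 ** doublings

print(f"{doublings:.1f} doublings -> about {growth:,.0f}x more compute")
# With these assumptions: roughly 18.3 doublings, or about 314,000x,
# the same order of magnitude as the 300,000x figure cited above.
```

Under these assumed dates the compounding alone lands in the low hundreds of thousands, which is why a seemingly modest 100-day doubling time produces such a dramatic multiplier in only about five years.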

Despite this progress, humans still retain five core capabilities that current AI has yet to surpass. Given the growth rate we have observed, though, it seems likely that these too will be outpaced soon, if they have not been already. Imagine a child who becomes a graduate overnight, surpassing their parents in countless ways. Human parents might not feel threatened when a child outgrows them, but we face a different reality: we are not to AI what parents are to a child.

If we liken AI to a new species, its evolutionary pace exceeds humanity's by millions of times. We gave birth to AI, but its rapid growth has begun to slip out of our grasp. Where is this development taking us? Geoffrey Hinton, one of the field's most prominent figures and a Turing Award winner, has expressed regret over his career, saying, "I regret working in AI." That regret stems from his fear that AI will eventually outsmart humans, a concern he has voiced repeatedly.

Recently, Hinton said on television that within the next 5 to 20 years there is a 50% chance AI will surpass human intelligence. He admits he cannot say how likely it is that humans would then be eclipsed, but he considers it a real possibility. Former champions of AI now sound like prophets of doom, and many technology leaders and scholars share the sentiment.

On June 4, employees of OpenAI and DeepMind published an open letter warning that AI systems are approaching human-level intelligence and predicting that artificial general intelligence (AGI) could arrive as early as 2027. Meanwhile, many within the AI sector are increasingly skeptical that today's AI companies can be trusted.

Over the last decade, open letters about the risks of AI have multiplied, carrying famous signatures such as Stephen Hawking's and Elon Musk's. A 2015 letter warning of AI's potential risks gathered more than 1,000 signatures from scientists and experts. A 2017 letter on autonomous weapons drew more than 3,000, with Musk asserting that AI could destroy human civilization.

The concern is widespread. Prominent companies, including Microsoft, Google, Meta, and OpenAI, are driving the AI boom even as they warn against its unchecked progress. Google was among the first to sign a public letter, and it has acquired numerous AI companies over the past decade. Hinton himself worked at Google for more than ten years before leaving the company so he could speak more freely about AI's risks.

Elon Musk, a figure frequently at the center of AI debates, spearheaded an open letter last year calling for a six-month pause on training powerful AI systems. Shortly afterward, his own AI company, xAI, secured significant funding, raising suspicions about his intentions.

The fear of AI as a "black box" is growing. Future historians may note that our ancestors reached the peak of technological civilization in the 21st century, yet made the grave error of letting AI develop unchecked until it slipped beyond their control. For most of its history, the scientific community has followed Turing's 1950 proposal: rather than programming machines directly, build learning algorithms that allow them to learn for themselves, the foundation of machine learning.

Yet as AI advances, researchers have come to realize that although deep learning and reinforcement learning have made AI far more capable, the source of its intelligence remains a mystery, the so-called "black box" problem. That unknown is itself a risk. History suggests people pursue powerful technology without fully grasping its consequences, as the invention of the atomic bomb showed.

Meanwhile, money keeps pouring into AI. Despite the risks, the capital flowing into AI ventures is staggering, with larger investments promising even greater returns. Unlike past technological revolutions, however, AI concentrates wealth and intelligence in the hands of a small group of people. That raises the fear that AI could break free of human control, or even replace us.

Regardless of the outcome, the implications for humanity are troubling.
