A recent report from New Scientist highlights that AI researchers estimate roughly a 5% chance of artificial intelligence (AI) causing human extinction. Many in the field regard the development of superintelligent AI as a potential existential risk, though views on the severity and likelihood of that threat vary considerably. The finding comes from a survey of 2,700 AI researchers who have published at six leading AI conferences, the largest poll of its kind to date.
Researchers shared their expectations for the pace of major advances in AI technology and for the societal impacts, both positive and negative, of those developments. Nearly 58% put the probability of human extinction or other severe AI-related outcomes at around 5%. Katja Grace, a co-author from the Machine Intelligence Research Institute, noted, “This is an important signal indicating that a majority of AI researchers do not dismiss the idea of advanced AI leading to human extinction.” She emphasized that this widespread belief in a non-negligible risk is more telling than any exact percentage.
Despite these concerns, Émile Torres of Case Western Reserve University says there is no immediate cause for alarm, pointing out that AI experts have historically struggled to predict the field's trajectory. Grace and her colleagues acknowledged this poor forecasting record but noted that their 2016 survey did a reasonable job of anticipating AI milestones. Compared with a similar survey from 2022, many researchers now expect AI to reach key milestones sooner than previously anticipated, a shift fueled by advances like ChatGPT and surging interest in AI-driven technologies.
Survey participants forecast that within the next decade, AI systems will have a 50% or greater chance of successfully completing a range of tasks, from composing music in the style of Taylor Swift to building a payment processing website. More demanding tasks, such as installing electrical wiring or solving longstanding mathematical problems, are expected to take longer. Respondents put the chance of AI outperforming humans at every task by 2047 at 50%, and gave the same odds for the full automation of all human jobs by 2116. Notably, these dates are 13 and 48 years earlier, respectively, than those given in last year's survey.
While expectations run high, Torres warned that the predicted breakthroughs may not materialize. “Many of these innovations are challenging to foresee, and the AI field could very well face another downturn,” he cautioned, alluding to the collapse in funding and interest, the so-called AI winter, of the 1970s and 1980s.
Beyond the risks posed by superintelligent AI, more immediate concerns loom. A large majority of AI researchers expressed serious worry about AI-driven scenarios such as deepfakes, manipulation of public opinion, weapons created through genetic engineering, authoritarian population control, and worsening economic inequality. Torres emphasized the danger of AI spreading misinformation on critical issues like climate change and democratic governance.