OpenAI is establishing a dedicated team to address the risks associated with superintelligent artificial intelligence. Superintelligence refers to an AI model that surpasses human intelligence across a wide range of fields, rather than excelling in a single domain as earlier models have. OpenAI anticipates that such technology could emerge within the decade. “Superintelligence will be the most impactful technology humanity has ever invented, offering solutions to some of the world's most pressing problems,” the organization stated. However, it also warned that this immense power could pose significant dangers, potentially leading to the disempowerment of humanity or even extinction.
The new team will be co-led by OpenAI's Chief Scientist Ilya Sutskever and Jan Leike, the head of alignment at the research lab. OpenAI plans to allocate 20 percent of its current computational resources to this initiative, aiming to create an automated alignment researcher. This system is designed to help ensure that a superintelligent AI remains safe and aligned with human values. “Although this ambitious goal comes with no guarantees, we believe that a focused, collaborative effort can tackle this challenge,” OpenAI remarked. They noted that preliminary experiments have shown promise, and they have developed increasingly useful metrics to assess progress. The lab also intends to share its roadmap in the future.
This announcement coincides with global discussions on regulating the emerging AI industry. In the United States, OpenAI CEO Sam Altman has recently met with over 100 federal lawmakers. He has publicly emphasized that AI regulation is “essential” and that OpenAI is “eager” to collaborate with policymakers. However, such statements warrant caution. Discussions of AI's potential long-term risks should not distract from more immediate concerns, such as its implications for labor markets, misinformation, and copyright, issues that policymakers must address right now.