OpenAI Assembles Team to Analyze 'Catastrophic' AI Risks, Addressing Key Concerns like Nuclear Threats

OpenAI has announced the formation of a new team dedicated to evaluating and analyzing AI models to mitigate what it calls “catastrophic risks.” The team, named Preparedness, will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who joined OpenAI in May to head the group. Preparedness will focus on tracking, predicting, and safeguarding against dangers posed by future AI systems, ranging from their capacity to deceive humans (as in phishing attacks) to their potential to generate harmful code.

Some of the risk areas Preparedness will explore may seem unconventional. In a recent blog post, for instance, OpenAI named “chemical, biological, radiological, and nuclear” threats as significant concerns linked to AI models. OpenAI CEO Sam Altman is known for his cautionary stance on AI, having often warned that it could lead to human extinction. Even so, the company's willingness to investigate scenarios reminiscent of science fiction goes further than this writer anticipated.

In addition, OpenAI is interested in exploring “less obvious” and more realistic aspects of AI risks. To mark the launch of the Preparedness team, the company is inviting the community to propose ideas for risk studies, with a $25,000 prize and potential job opportunities within the team for the top ten submissions. One question posed in the contest challenges participants to consider the potential misuse of OpenAI’s advanced models—such as Whisper (transcription), Voice (text-to-speech), GPT-4V, and DALL·E 3—by malicious actors.

The Preparedness team will also develop a “risk-informed development policy,” outlining OpenAI's strategies for evaluating AI models, implementing risk mitigation measures, and establishing governance for oversight throughout the model development lifecycle. This initiative aims to enhance OpenAI's ongoing efforts in AI safety, focusing on both pre- and post-deployment stages of AI technology.

“We believe that advanced AI models have the potential to benefit humanity greatly,” stated OpenAI in its blog post. “However, they also pose significant risks… It’s crucial we establish the understanding and infrastructure necessary to ensure the safety of powerful AI systems.”

The announcement of Preparedness coincides with a major U.K. government summit on AI safety and follows OpenAI’s decision to create a team focused on managing the emergence of “superintelligent” AI. Altman, along with Ilya Sutskever, OpenAI’s chief scientist and co-founder, believes that AI surpassing human intelligence may emerge within the next decade, and that thorough research is needed now to develop mechanisms for regulating and controlling it.
