OpenAI Assembles Team to Analyze 'Catastrophic' AI Risks, Addressing Key Concerns like Nuclear Threats
OpenAI has announced the formation of a new team, called Preparedness, dedicated to evaluating and analyzing AI models in order to mitigate what the company describes as “catastrophic risks.” The team will be led by Aleksander Madry, director of MIT’s Center for Deployable Machine Learning, who joined OpenAI in May and will serve as its head. Preparedness will focus on tracking, forecasting, and protecting against the dangers posed by future AI systems, ranging from their ability to deceive humans (as in phishing attacks) to their capacity to generate harmful code.