As artificial intelligence research and adoption continue to accelerate, so too do the associated risks. To aid organizations in navigating this intricate landscape, researchers from MIT and other institutions have introduced the AI Risk Repository. This comprehensive database catalogs hundreds of documented risks related to AI systems, providing crucial insights for decision-makers in government, academia, and industry.
Organizing AI Risk Classification
Despite many organizations acknowledging the importance of assessing AI risks, efforts to document and classify these risks have been fragmented, resulting in conflicting classification systems.
“We initiated this project to understand how organizations respond to AI-related risks,” stated Peter Slattery, an incoming postdoc at MIT FutureTech and project lead. “We sought a comprehensive overview of AI risks to use as a checklist but discovered that existing classifications resembled jigsaw puzzle pieces—each interesting but incomplete.”
The AI Risk Repository addresses this challenge by consolidating insights from 43 existing taxonomies drawn from peer-reviewed articles, preprints, conference papers, and reports. This careful curation has produced a database of more than 700 unique AI risks.
The repository employs a two-dimensional classification system. First, risks are categorized by their causes, considering the responsible entity (human or AI), intent (intentional or unintentional), and timing (pre-deployment or post-deployment). This causal taxonomy illuminates how and when AI risks can emerge.
Second, risks are grouped into seven domains, including discrimination and toxicity, privacy and security, misinformation, and misuse.
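To make the two-dimensional scheme concrete, the sketch below models a single repository entry in Python. The field names and enumerations are illustrative only, mirroring the causal and domain taxonomies described above; they are not the repository's actual schema.

```python
from dataclasses import dataclass
from enum import Enum

class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    """One documented AI risk, classified along both taxonomies."""
    description: str   # short summary of the risk
    source: str        # taxonomy or paper the risk was drawn from
    entity: Entity     # who or what causes the risk (causal taxonomy)
    intent: Intent     # whether the cause is deliberate (causal taxonomy)
    timing: Timing     # when the risk arises (causal taxonomy)
    domain: str        # one of the seven domains, e.g. "Misinformation"

# Example entry (illustrative, not taken from the repository):
example = RiskEntry(
    description="Model amplifies demographic bias in hiring recommendations",
    source="Hypothetical taxonomy",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Discrimination and toxicity",
)
```

Classifying each risk along both axes is what lets users slice the database either by how a risk arises or by the kind of harm it produces.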
The AI Risk Repository is a dynamic, publicly accessible database. Organizations can download it for their own use, and the research team plans to regularly update it with new risks, findings, and emerging trends.
Evaluating AI Risks for Enterprises
The AI Risk Repository serves as a practical tool for organizations across various sectors. For those developing or deploying AI systems, the repository provides a vital checklist for risk assessment and mitigation.
“Organizations utilizing AI could benefit from applying the AI Risk Database and its taxonomies as a foundation for thoroughly assessing their risk exposure and management,” the researchers note. “These taxonomies may help identify specific actions necessary to mitigate particular risks.”
For instance, an organization developing an AI-powered hiring solution can reference the repository to pinpoint potential risks related to discrimination and bias. Similarly, a company using AI for content moderation can explore the "Misinformation" domain to understand risks associated with AI-generated content and establish safeguards.
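As a rough illustration of that workflow, the snippet below filters a local export of the repository by domain. The file name and column names ("Domain", "Description") are assumptions made for this sketch, not the repository's published format; check the actual download for its real schema.

```python
import pandas as pd

# Load a local export of the AI Risk Repository.
# "ai_risk_repository.csv" and the column names below are illustrative assumptions.
risks = pd.read_csv("ai_risk_repository.csv")

def risks_in_domain(df: pd.DataFrame, domain_keyword: str) -> pd.DataFrame:
    """Return rows whose domain mentions the given keyword (case-insensitive)."""
    mask = df["Domain"].str.contains(domain_keyword, case=False, na=False)
    return df.loc[mask, ["Domain", "Description"]]

# A hiring-tool team might start from discrimination-related entries...
hiring_checklist = risks_in_domain(risks, "discrimination")

# ...while a content-moderation team might review the misinformation domain.
moderation_checklist = risks_in_domain(risks, "misinformation")

print(hiring_checklist.head())
```

The resulting subset can then serve as a starting checklist for the team's own risk assessment, with each entry reviewed against the specific system being built.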
While the repository offers a comprehensive foundation, organizations must still adapt their risk assessment and mitigation strategies to their unique circumstances. Even so, a centralized, structured repository reduces the chance of overlooking critical threats.
“We anticipate the repository will become increasingly beneficial for enterprises over time,” remarked Neil Thompson, head of the MIT FutureTech Lab. “In future phases, we plan to add new risks and documents, inviting experts to review the repository and identify any omissions. After our next research phase, we aim to provide vital insights into which risks experts prioritize and which are most pertinent to specific actors, like AI developers and large AI users.”
Advancing Future AI Risk Research
In addition to its practical applications, the AI Risk Repository serves as a valuable asset for AI risk researchers. The organized database and taxonomies facilitate information synthesis, identify research gaps, and guide future studies.
“This database offers a solid foundation for more specific research,” Slattery explained. “Previously, researchers faced two choices: spend considerable time reviewing scattered literature for a comprehensive overview or rely on limited existing frameworks that might overlook relevant risks. Our repository streamlines this process, saving time and enhancing oversight as new risks and documents are added.”
The research team intends to utilize the AI Risk Repository as a baseline for their future investigations.
“We will explore potential gaps in how organizations address risks,” Thompson stated. “For instance, we may assess if certain risk categories receive disproportionate attention while others of equal importance are neglected.”
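One simple way to probe that kind of gap, sketched below under the same assumed CSV export and column names as above (with a hypothetical "Source" column added), is to count how many documented risks and distinct source documents fall into each domain; sparsely populated domains are candidates for under-attention.

```python
import pandas as pd

risks = pd.read_csv("ai_risk_repository.csv")  # same assumed export as above

# Count risks and distinct source documents per domain.
# "Domain", "Description", and "Source" are assumed column names for illustration.
coverage = (
    risks.groupby("Domain")
         .agg(num_risks=("Description", "count"),
              num_sources=("Source", "nunique"))
         .sort_values("num_risks")
)

# Domains near the top of this table have the fewest documented risks
# and may be receiving disproportionately little attention.
print(coverage)
```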
As the AI risk landscape evolves, the research team will continuously update the AI Risk Repository, ensuring it remains a valuable resource for researchers, policymakers, and industry professionals focused on AI risks and mitigation strategies.