MIT Researchers Unveil Comprehensive Repository of AI Risks for Enhanced Safety

What Risks Should Individuals, Companies, and Governments Consider When Using AI Systems?

Identifying the specific risks associated with AI systems, whether for individuals, organizations, or governments, is a complex challenge. While an AI system overseeing critical infrastructure presents clear risks to human safety, other applications—like AI scoring exams, filtering job applications, or verifying travel documents at border controls—introduce distinct yet equally significant risks.

As policymakers work to create regulations governing AI use, such as the EU AI Act and California’s SB 1047, reaching a consensus on which risks to address has proven difficult. To aid in this effort, researchers at MIT have developed an innovative "AI risk repository," a comprehensive database that categorizes and analyzes AI-related risks.

“Our goal was to rigorously compile and evaluate AI risks into a publicly accessible database that anyone can utilize and which will remain updated over time,” explained Peter Slattery, a researcher from MIT's FutureTech group and the project lead for the AI risk repository. “We recognized the need for this resource not only for our project but also for many others in the field.”

The AI risk repository catalogs more than 700 risks, classified by causal factors, such as whether the harm is intentional, and by domains, such as discrimination and misinformation. Slattery noted that the project grew out of a need to clarify the overlaps and gaps in existing AI safety research: other risk frameworks exist, but each typically covers only a fraction of the risks the repository captures, and those gaps could have serious implications for AI development, use, and regulation.
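
To make that two-axis classification concrete, here is a minimal sketch of how such a catalog might be represented and queried in code. The field names, domain labels, and sample entries below are illustrative assumptions, not the repository's actual schema.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One cataloged risk (illustrative fields, not the repository's real schema)."""
    description: str
    domain: str          # broad domain, e.g. "Discrimination" or "Misinformation"
    subdomain: str       # one of the finer-grained subdomains
    intentional: bool    # causal factor: was the harm intended?

# A toy catalog standing in for the 700+ entries.
catalog = [
    RiskEntry("Biased screening of job applicants", "Discrimination",
              "Unfair discrimination", False),
    RiskEntry("AI-generated spam floods online forums", "Misinformation",
              "Pollution of the information ecosystem", True),
    RiskEntry("Deepfake audio used for fraud", "Malicious use",
              "Fraud and manipulation", True),
]

# Example query: all unintentional risks within a given domain.
unintentional_discrimination = [
    r for r in catalog
    if r.domain == "Discrimination" and not r.intentional
]
for r in unintentional_discrimination:
    print(r.subdomain, "-", r.description)
```

Filtering on both axes at once, cause and domain, mirrors the kind of slicing a categorized database like this is meant to support.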

“People might assume there's a unified understanding of AI risks, but our research indicates otherwise,” Slattery pointed out. “We discovered that most frameworks accounted for just 34% of the 23 identified risk subdomains, and nearly a quarter covered less than 20%. No single overview addressed all 23 subdomains, with even the most extensive framework covering only 70%. This fragmentation suggests we can’t presume consensus on these risks.”
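
As a rough illustration of how coverage figures like these arise, the sketch below treats each framework as the set of subdomains it mentions and reports that set's share of the 23 subdomains. The framework names and their contents are hypothetical, not data from the study.

```python
# Illustrative only: hypothetical frameworks mapped to the subdomains they mention.
SUBDOMAIN_COUNT = 23

frameworks = {
    "Framework A": {"privacy", "security", "misinformation", "discrimination",
                    "malicious use", "system safety", "socioeconomic harm",
                    "human-AI interaction"},
    "Framework B": {"privacy", "security", "discrimination", "system safety"},
}

for name, covered in frameworks.items():
    pct = 100 * len(covered) / SUBDOMAIN_COUNT
    print(f"{name}: covers {len(covered)}/{SUBDOMAIN_COUNT} subdomains ({pct:.0f}%)")

# Framework A covers 8/23, roughly 35%, close to the average coverage Slattery
# describes; a framework covering 16/23, roughly 70%, would match the most
# extensive one the team found.
```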

To construct this repository, MIT researchers collaborated with scholars from the University of Queensland, the nonprofit Future of Life Institute, KU Leuven, and AI startup Harmony Intelligence, meticulously reviewing academic databases and gathering thousands of documents related to AI risk evaluations.

The findings revealed disparities in how frequently certain risks were acknowledged across different frameworks. Notably, over 70% of frameworks considered privacy and security implications, while only 44% discussed misinformation. Additionally, while more than 50% examined discrimination risks, merely 12% addressed the issue of “pollution of the information ecosystem,” which refers to the rising prevalence of AI-generated spam.

“This database can serve as a foundational resource for researchers, policymakers, and others involved in addressing AI risks,” said Slattery. “Previously, stakeholders faced the choice of either investing considerable time in navigating scattered literature or relying on limited frameworks that overlooked key risks. Our repository aims to streamline this process, enhancing oversight and efficiency.”

However, a pressing question remains: will stakeholders utilize this resource? Today’s global AI regulatory landscape is a patchwork of varying approaches, each with its own objectives. Would the existence of a consolidated AI risk repository have made a difference had it been available earlier? That remains uncertain.

Another critical consideration is whether simply aligning on identifying AI risks is sufficient to catalyze effective regulation. Many safety evaluations for AI systems come with notable limitations, and a risk database alone may not address these challenges.

MIT researchers are determined to explore these issues further. Neil Thompson, head of the FutureTech lab, shared, “In our next research phase, we plan to leverage the repository to assess how effectively various AI risks are managed. This will allow us to pinpoint areas where organizations may be focusing too heavily on specific risks while neglecting others of equal relevance.”

