Exploring the potential risks of artificial intelligence (AI), the Massachusetts Institute of Technology’s Computer Science & Artificial Intelligence Laboratory (CSAIL) has launched a public database cataloging more than 700 risks associated with AI technology, aimed at helping policymakers, researchers, developers, and IT professionals regulate AI on their own.
Regulating and benchmarking AI has become a crucial topic as tools built on the technology proliferate. This is largely due to the many instances in which AI systems have exhibited faulty behavior, prompting people to question whether companies have adequate policies for acting on such issues and whether developers evaluate AI thoroughly, given the limited risk frameworks they rely on.
A Broad Spectrum of Risks
The AI Risk Repository, developed by MIT’s FutureTech group within CSAIL, catalogs 777 risks derived from 43 different taxonomies. It offers a systematic overview of the potential dangers posed by advanced AI systems, categorized into seven domains: Discrimination & Toxicity, Privacy & Security, Misinformation, Malicious Actors & Misuse, Human-Computer Interaction, Socioeconomic & Environmental Harms, and AI System Safety, Failures, and Limitations.
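Because the repository is a structured catalog, its records lend themselves to programmatic filtering and analysis. The sketch below is purely illustrative: the field names and sample entries are hypothetical, not the repository’s actual column headers, but they show how one might query such a catalog by domain in Python:

```python
from dataclasses import dataclass

# Hypothetical record structure; the real repository's columns may differ.
@dataclass
class RiskEntry:
    description: str      # short description of the risk
    domain: str           # one of the seven high-level domains
    subdomain: str        # finer-grained category within the domain
    source_taxonomy: str  # which of the 43 source taxonomies it came from

# Made-up entries, for illustration only.
catalog = [
    RiskEntry("Model leaks personal data in outputs",
              "Privacy & Security", "Privacy leakage", "Taxonomy A"),
    RiskEntry("Generated content spreads false claims",
              "Misinformation", "False or misleading information", "Taxonomy B"),
]

def risks_in_domain(entries, domain):
    """Return all catalog entries belonging to a given high-level domain."""
    return [e for e in entries if e.domain == domain]

for risk in risks_in_domain(catalog, "Privacy & Security"):
    print(risk.subdomain, "-", risk.description)
```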
Dr. Peter Slattery, the project lead, describes the repository as “an attempt to rigorously curate and analyze AI risks into a publicly accessible, comprehensive, extensible, and categorized database.”
The team’s analysis of over 1,000 AI risk assessment frameworks revealed that, on average, only 34% of the identified risk subdomains were addressed, with some frameworks covering less than 20%. Even the most comprehensive framework covered just 70% of the subdomains.
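To make the coverage statistic concrete: if the repository defines a fixed set of risk subdomains, a framework’s coverage is simply the fraction of those subdomains it addresses. A minimal sketch of that calculation, using invented subdomain names rather than the repository’s actual categories:

```python
# Hypothetical subdomain sets, purely for illustration.
all_subdomains = {
    "privacy_leakage", "unfair_discrimination", "misinformation",
    "malicious_misuse", "overreliance", "environmental_harm", "system_failure",
}

framework_subdomains = {"privacy_leakage", "misinformation"}

# Coverage = fraction of the repository's subdomains the framework addresses.
coverage = len(framework_subdomains & all_subdomains) / len(all_subdomains)
print(f"Coverage: {coverage:.0%}")  # -> Coverage: 29%
```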
Slattery stated in a news alert on the CSAIL page, “Since the AI risk literature is scattered across peer-reviewed journals, preprints, and industry reports, and quite varied, I worry that decision-makers may unwittingly consult incomplete overviews, miss important concerns, and develop collective blind spots.”
The database also surfaced insights about the practice of releasing AI products, revealing that the majority of AI risks are identified only after a model is deployed. Neil Thompson, director of MIT FutureTech and one of the database’s creators, pointed out, “What our database is saying is that the range of risks is substantial, not all of which can be checked ahead of time.”
Centralized Risk Database for AI Governance
Many discussions depict AI as posing new risks to organizations and exacerbating existing ones. The AI Risk Repository is viewed as a significant tool for enhancing AI governance, especially for leaders working to establish AI policies within their organizations.
Slattery emphasized that the database could serve as a foundation for researchers and policymakers to build upon when conducting more specific work on AI risks. The new repository aims to save time and increase oversight by providing a more comprehensive account of AI risks.
Bart Willemsen, VP analyst at Gartner, suggested that as the repository grows, it would be beneficial to include potential mitigating measures, laying the groundwork for best practices. He noted, “The time for ‘running fast and ignorantly breaking whatever stands in the way’ ought to be over.”
While the AI Risk Repository represents a significant step forward, the researchers acknowledge that the database has limitations, including potential biases in risk extraction and coding. The database alone is not a solution to the problem but a tool to support better risk management.
Looking ahead, the MIT team plans to use the database to evaluate how well different AI risks are being addressed in practice.
Thompson explained, “We plan to use this to identify shortcomings in organizational responses. For instance, if everyone focuses on one type of risk while overlooking others of similar importance, that’s something we should notice and address.”