MIT's AI Risk Repository Report represents the most systematic effort to date to categorize and understand the full spectrum of AI-related risks. Unlike traditional risk frameworks that focus on specific sectors or applications, this repository takes a dual-taxonomy approach that cuts across all domains and causation patterns. The Causal Taxonomy dissects risks by who causes them (human vs AI systems), whether they're intentional or accidental, and when they emerge in the development lifecycle. The Domain Taxonomy organizes these same risks into seven thematic areas, creating a comprehensive grid for risk identification and management. This isn't just another risk list—it's a structured knowledge base that helps organizations identify blind spots and develop more complete risk management strategies.
What sets MIT's repository apart is its two-lens approach to risk categorization. The Causal Taxonomy asks fundamental questions about a risk's origins:

Entity: Was the risk caused by a human or by an AI system?
Intent: Was it intentional or unintentional?
Timing: Did it emerge before or after deployment?

Combining these dimensions creates eight distinct causal categories that help teams understand not just what risks exist, but why and when they occur.
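The eight categories follow from taking every combination of the three causal dimensions. A minimal sketch in Python (the dimension labels here are paraphrases of the repository's options, not its official terminology):

```python
from itertools import product

# Hypothetical labels for the three causal dimensions; the repository's
# own option names may differ.
ENTITY = ("human", "AI system")
INTENT = ("intentional", "unintentional")
TIMING = ("pre-deployment", "post-deployment")

# Every combination of entity, intent, and timing yields one causal category.
causal_categories = [
    {"entity": e, "intent": i, "timing": t}
    for e, i, t in product(ENTITY, INTENT, TIMING)
]

print(len(causal_categories))  # 8 distinct causal categories
```

Enumerating the categories explicitly like this makes it easy to verify that a risk register has at least considered each cell of the grid.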
The Domain Taxonomy complements this by organizing risks into seven thematic areas, allowing organizations to focus on risks most relevant to their sector or use case. This dual approach means you can analyze the same risk from multiple angles—understanding both its causal mechanics and its domain-specific implications.
The repository shines in three key scenarios:
Risk Assessment Planning: Use the taxonomies as checklists during risk assessment phases. The systematic categorization helps ensure you're not missing entire classes of risks that might not be obvious in your specific context.
Team Communication: The standardized taxonomy provides a common vocabulary for discussing risks across different stakeholders. Technical teams, legal counsel, and business leaders can reference the same risk categories and understand each other's concerns more clearly.
Regulatory Preparation: As AI regulations evolve globally, the comprehensive nature of this repository helps organizations prepare for compliance requirements they might not have anticipated. Many regulatory frameworks draw from academic risk taxonomies like this one.
Risk managers and compliance officers will find this invaluable for building comprehensive risk registers and ensuring nothing falls through the cracks. The systematic approach helps translate between technical risks and business impact assessments.
AI development teams can use this during design and testing phases to identify potential failure modes they might not have considered. The causal taxonomy is particularly useful for understanding how different development choices might introduce or mitigate risks.
Policy makers and regulators will appreciate the global, domain-agnostic approach that provides a foundation for creating comprehensive AI governance frameworks without being locked into specific technological implementations.
Academic researchers and consultants working on AI safety and governance can leverage this as a foundational framework for more specialized research or client-specific risk assessments.
Start by mapping your current risk management practices against both taxonomies. You'll likely find gaps—risks you're monitoring that don't fit neatly into your current categories, or categories in the repository where you haven't identified specific risks yet.
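One way to make this mapping concrete is a simple set comparison between the domains your register already covers and the repository's seven domains. A minimal sketch, where the domain names and register entries are illustrative placeholders rather than the repository's official labels:

```python
# Placeholder names standing in for the repository's seven thematic domains.
REPOSITORY_DOMAINS = {
    "discrimination", "privacy", "misinformation", "misuse",
    "human-computer interaction", "socioeconomic harms", "system safety",
}

# Hypothetical risk register: each entry is tagged with one domain.
risk_register = [
    {"risk": "Training data leaks personal information", "domain": "privacy"},
    {"risk": "Model generates misleading claims", "domain": "misinformation"},
]

covered = {entry["domain"] for entry in risk_register}
uncovered = REPOSITORY_DOMAINS - covered  # domains with no identified risks yet

print(sorted(uncovered))
```

The `uncovered` set surfaces exactly the gap the paragraph above describes: domains in the repository where your register has not yet identified any specific risks.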
Use the causal taxonomy to improve your risk monitoring systems. Risks with different causal patterns often require different detection and mitigation strategies. Intentional human-caused risks need different controls than unintentional AI-system-caused risks.
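The idea that different causal patterns call for different controls can be expressed as a simple lookup from causal category to mitigation strategy. A sketch, assuming illustrative control names (the pairings here are examples, not the repository's recommendations):

```python
# Illustrative mapping from (entity, intent) causal pattern to controls.
CONTROLS = {
    ("human", "intentional"): "access controls, audit logging, abuse detection",
    ("human", "unintentional"): "training, usage guidelines, input guardrails",
    ("AI system", "intentional"): "alignment evaluations, red-teaming",
    ("AI system", "unintentional"): "pre-release testing, monitoring, fallbacks",
}

def recommend_controls(entity: str, intent: str) -> str:
    """Return a control strategy for a causal pattern, or escalate."""
    return CONTROLS.get((entity, intent), "escalate for manual review")

print(recommend_controls("human", "intentional"))
```

Routing risks through a table like this keeps the monitoring system honest: a risk whose causal pattern has no entry is flagged for review instead of silently inheriting a generic control.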
The domain taxonomy works best when combined with your specific industry context. Map the seven domains to your business operations to identify which areas deserve the most attention and resources.
Consider this a living framework rather than a static checklist. As your AI systems evolve and new risks emerge, the taxonomies provide a structure for categorizing and understanding new threats in relation to your existing risk management approaches.
Published: 2025
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access