Everything related to failures and accountability.
17 resources
A database cataloging AI incidents and harms, enabling researchers and practitioners to learn from past failures, identify patterns, and develop preventive measures.
The AI, Algorithmic, and Automation Incidents and Controversies (AIAAIC) repository tracks incidents involving AI and automated systems, providing detailed case studies with timelines, stakeholders, and outcomes.
The EU AI Act establishes mandatory incident reporting requirements for high-risk AI systems. Providers must report serious incidents and malfunctions to relevant authorities within specified timeframes.
Proposed US legislation requiring companies to conduct impact assessments of automated decision systems. The bill would establish accountability requirements for high-risk algorithmic systems affecting critical decisions.
A tracking tool providing detailed analysis of AI incidents from 2015 to 2024. The tracker applies the Causal and Domain Taxonomies from the MIT AI Risk Repository to categorize incidents and analyze how they have evolved over time.
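To make the categorization concrete, here is a minimal sketch of how an incident record might be tagged under the two taxonomies. The field names, category values, and example incident are illustrative assumptions, not the tracker's actual schema:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class CausalTags:
    entity: str   # who caused it, e.g. "AI" or "Human"
    intent: str   # e.g. "Intentional" or "Unintentional"
    timing: str   # e.g. "Pre-deployment" or "Post-deployment"

@dataclass
class Incident:
    year: int
    title: str
    causal: CausalTags
    domain: str   # Domain Taxonomy label, e.g. "Discrimination & toxicity"

incidents = [
    Incident(2016, "Chatbot produces toxic outputs",
             CausalTags("AI", "Unintentional", "Post-deployment"),
             "Discrimination & toxicity"),
]

# Trend analysis then reduces to grouping tagged incidents by year and label.
by_year_domain = Counter((i.year, i.domain) for i in incidents)
print(by_year_domain)
```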
The AI Incident Database collects over 1,200 documented cases in which AI systems have caused safety, fairness, or other real-world harms. It helps stakeholders understand, anticipate, and mitigate AI-related risks through systematic incident documentation and analysis.
A resource covering two major AI incident tracking initiatives: the OECD AI Incidents and Hazards Monitor (AIM) and the AIAAIC Repository. These efforts focus on documenting real-world AI incidents to enhance transparency and inform governance decisions.
The AIAAIC Repository is an open, public interest resource that documents incidents and controversies related to artificial intelligence, algorithms, and automation. It provides tools and metrics designed to track and analyze AI-related incidents for accountability and governance purposes.
An accountability framework developed by the U.S. GAO to help federal agencies and other entities implement AI responsibly. It is organized around four complementary principles, covering governance, data, performance, and monitoring, to promote accountability in AI systems.
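As a rough sketch, the four principles could drive a per-system audit checklist like the one below; the principle names come from the GAO framework, but the one-line summaries are paraphrases for illustration, not the framework's exact language:

```python
# Principle names from the GAO framework; summaries are paraphrased.
GAO_PRINCIPLES = {
    "Governance": "set clear goals, roles, and oversight for the AI system",
    "Data": "ensure data used to build and run the system is appropriate and reliable",
    "Performance": "confirm results are consistent with program objectives",
    "Monitoring": "continually assess the system and its results over time",
}

def audit_checklist(system_name: str) -> list[str]:
    """Expand the principles into a minimal per-system audit checklist."""
    return [f"[{principle}] {system_name}: {summary}"
            for principle, summary in GAO_PRINCIPLES.items()]

for item in audit_checklist("benefits-eligibility model"):
    print(item)
```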
An accountability framework from the Information Technology Industry Council that delineates how responsibility is shared among the actors who develop and deploy AI systems. It addresses the roles of stakeholders such as integrators and defines how accountability should be distributed according to each actor's function in the AI ecosystem.
A policy report by the NTIA examining AI accountability frameworks and their implementation. The report references and builds on NIST's AI Risk Management Framework, focusing on developing trustworthy and responsible AI systems within federal governance structures.
This research examines forms of resistance and refusal to algorithmic harms through different 'knowledge projects'. It builds on investigative journalism such as ProPublica's Machine Bias, which revealed how algorithmic systems can replicate and amplify racial bias in criminal justice and other domains where algorithmic decision systems are deployed.
This research paper presents a scoping review and taxonomy of sociotechnical harms caused by algorithmic systems. Using reflexive thematic analysis of computing research, it categorizes types of harm and offers a framework for harm reduction in algorithmic systems.
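As a sketch, the taxonomy's widely cited five top-level categories can be encoded as labels for triaging incident reports. Flattening the categories (dropping the paper's subtypes) and the keyword matching below are simplifying assumptions, not the paper's method:

```python
from enum import Enum

class SociotechnicalHarm(Enum):
    REPRESENTATIONAL = "demeaning or erasing social groups"
    ALLOCATIVE = "withholding resources or opportunities"
    QUALITY_OF_SERVICE = "degraded performance for particular groups"
    INTERPERSONAL = "harms between people mediated by the system"
    SOCIAL_SYSTEM = "macro-level effects such as eroding institutional trust"

# Illustrative keyword map; a real triage process would be far richer.
KEYWORDS = {
    "stereotype": SociotechnicalHarm.REPRESENTATIONAL,
    "denied": SociotechnicalHarm.ALLOCATIVE,
    "accuracy gap": SociotechnicalHarm.QUALITY_OF_SERVICE,
    "harassment": SociotechnicalHarm.INTERPERSONAL,
    "misinformation": SociotechnicalHarm.SOCIAL_SYSTEM,
}

def triage(report: str) -> set[SociotechnicalHarm]:
    """Naive keyword triage: map an incident report to candidate harm types."""
    text = report.lower()
    return {harm for kw, harm in KEYWORDS.items() if kw in text}

print(triage("Loan applicants from one zip code were systematically denied."))
```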
This research paper examines moral repair as a response to algorithmic harm, moving beyond traditional offender-centric approaches to focus on what victims actually need. Using the Ofqual grading controversy as a case study, it argues for awareness of the 'algorithmic imprint' and for victim-centered moral repair processes that address the extended consequences of algorithmic failures.
This resource provides guidance on AI-driven incident response systems that offer structured decision-making frameworks for cybersecurity threats. It focuses on how AI can deliver data-backed insights and suggested actions based on analysis of the threat environment and historical incidents.
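The core pattern described here, suggesting actions based on similar historical incidents, might look like the following toy sketch; the similarity measure, data model, and example incidents are assumptions for illustration, not the resource's implementation:

```python
from collections import Counter
import math

def tokens(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity over bag-of-words token counts."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Toy history: past incident descriptions and the actions that resolved them.
history = [
    ("credential stuffing against login API", "force password resets; enable MFA"),
    ("ransomware on file server", "isolate host; restore from backup"),
]

def suggest_actions(current: str, top_k: int = 1) -> list[tuple[float, str]]:
    """Rank past remediation actions by similarity of incident descriptions."""
    cur = tokens(current)
    scored = [(cosine(cur, tokens(desc)), action) for desc, action in history]
    return sorted(scored, reverse=True)[:top_k]

print(suggest_actions("suspicious login attempts flooding the API"))
```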
A framework developed by the Coalition for Secure AI (CoSAI) that gives security teams structured approaches, tools, and knowledge for protecting AI systems from emerging threats. It offers incident response guidance tailored to the distinctive challenges of AI deployments.
A practical guide offering checklists and best practices for developing AI incident response plans. It covers key elements such as assigning response coordinators, establishing communication channels, and documenting procedures for detecting, assessing, containing, and recovering from AI-related incidents.
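A minimal sketch of that checklist encoded as a data structure with a completeness check; the field names are assumptions drawn from the elements listed above, not the guide's own template:

```python
from dataclasses import dataclass, field

@dataclass
class AIIncidentResponsePlan:
    # Field names mirror the checklist elements above (assumed, not official).
    response_coordinator: str = ""
    communication_channels: list[str] = field(default_factory=list)
    detection_procedure: str = ""
    assessment_procedure: str = ""
    containment_procedure: str = ""
    recovery_procedure: str = ""

    def missing_elements(self) -> list[str]:
        """List checklist items that are still empty."""
        return [name for name, value in vars(self).items() if not value]

plan = AIIncidentResponsePlan(response_coordinator="on-call ML lead")
print(plan.missing_elements())  # every element except the coordinator
```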