
Incident and accountability

Resources on AI incidents, failures, and accountability mechanisms.

17 resources

dataset • Responsible AI Collaborative • 2021

Partnership on AI Incident Database

A comprehensive database cataloging AI incidents and harms. The database enables researchers and practitioners to learn from past AI failures, identify patterns, and develop preventive measures.

AI incident databases
dataset • AIAAIC • 2020

AIAAIC Repository

The AI, Algorithmic, and Automation Incidents and Controversies repository tracks incidents involving AI and automated systems. It provides detailed case studies with timelines, stakeholders, and outcomes.

AI incident databases
regulation • European Union • 2024

EU AI Act Incident Reporting Requirements

The EU AI Act establishes mandatory incident reporting requirements for high-risk AI systems. Providers must report serious incidents and malfunctions to relevant authorities within specified timeframes.

Incident reporting frameworks • EU
law • US Congress • 2023

US Algorithmic Accountability Act (Proposed)

Proposed US legislation requiring companies to conduct impact assessments on automated decision systems. The bill would establish accountability requirements for high-risk algorithmic systems affecting critical decisions.

Accountability mechanisms • US
tool • MIT • 2024

AI Incident Tracker

A tracking tool providing detailed analysis of AI incidents from 2015 to 2024. It applies the Causal and Domain Taxonomies from the MIT AI Risk Repository to categorize incidents and analyze how they have evolved over time.

Incident reporting frameworks
dataset • Partnership on AI • 2020

AI Incident Database

The AI Incident Database is a comprehensive collection of over 1,200 documented cases where AI systems have caused safety, fairness, or other real-world problems. It serves as a tool to help stakeholders better understand, anticipate, and mitigate AI-related risks through systematic incident documentation and analysis.

Incident reporting frameworks
repository • EthicAI • 2024

Tracking AI incidents: OECD AIM and AIAAIC Repository

A resource covering two major AI incident tracking initiatives: the OECD AI Incidents and Hazards Monitor (AIM) and the AIAAIC Repository. These efforts focus on documenting real-world AI incidents to enhance transparency and inform governance decisions.

Incident reporting frameworks
repository • OECD • 2024

AIAAIC Repository

The AIAAIC Repository is an open, public interest resource that documents incidents and controversies related to artificial intelligence, algorithms, and automation. It provides tools and metrics designed to track and analyze AI-related incidents for accountability and governance purposes.

Incident reporting frameworks
framework • U.S. Government Accountability Office • 2021

Artificial Intelligence: An Accountability Framework for Federal Agencies and Other Entities

A comprehensive accountability framework developed by the U.S. GAO to help federal agencies and other entities ensure responsible AI implementation. The framework is organized around four complementary principles addressing governance, data, performance, and monitoring to promote accountability in AI systems.

AI incident databases • US
framework • Information Technology Industry Council • 2024

ITI's AI Accountability Framework

An accountability framework developed by the Information Technology Industry Council that delineates how responsibility is shared among the actors involved in developing and deploying AI systems. It addresses the roles of stakeholders such as integrators and defines how accountability should be distributed according to each actor's function in the AI ecosystem.

AI incident databases • US
report • National Telecommunications and Information Administration • 2023

Artificial Intelligence Accountability Policy Report

A policy report by NTIA examining AI accountability frameworks and their implementation. The report references and builds upon NIST's AI Risk Management Framework, focusing on developing trustworthy and responsible AI systems within federal governance structures.

AI incident databases • US
research • SAGE Publications • 2022

Resistance and refusal to algorithmic harms: Varieties of 'knowledge projects'

This research examines various forms of resistance and refusal to algorithmic harms through different 'knowledge projects'. The work builds on investigative journalism such as ProPublica's Machine Bias series, which revealed how algorithmic systems can replicate and amplify racial bias in criminal justice and other domains where algorithmic decision systems are deployed.

Incident reporting frameworks
research • ACM • 2023

Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction

This research paper presents a scoping review and taxonomy of sociotechnical harms caused by algorithmic systems. The study uses reflexive thematic analysis of computing research to categorize different types of harms and provides a framework for harm reduction in algorithmic systems.

Incident reporting frameworks
research • PMC • 2022

After Harm: A Plea for Moral Repair after Algorithms Have Failed

This research paper examines the concept of moral repair as a response to algorithmic harm, moving beyond traditional offender-centric approaches to focus on what victims actually need. Using the Ofqual grading controversy as a case study, it argues for awareness of the 'algorithmic imprint' and emphasizes addressing the extended consequences of algorithmic failures through victim-centered moral repair processes.

Incident reporting frameworks
guideline • Radiant Security • 2024

AI-Driven Incident Response: Definition and Components

This resource provides guidance on AI-driven incident response systems that offer structured decision-making frameworks for cybersecurity threats. It focuses on how AI can deliver data-backed insights and suggested actions based on analysis of threat environments and historical incidents.

Incident reporting frameworks
framework • Coalition for Secure AI • 2024

AI Incident Response Framework, Version 1.0

A framework developed by the Coalition for Secure AI that provides security teams with structured approaches, tools, and knowledge to protect AI systems from emerging threats. It offers incident response guidance tailored to the unique challenges of AI deployments.

Incident reporting frameworks
guideline • Cimphony • 2024

AI Incident Response Plans: Checklist & Best Practices

A practical guide providing checklists and best practices for developing AI incident response plans. The resource covers key elements including assigning response coordinators, establishing communication channels, and documenting procedures for detecting, assessing, containing, and recovering from AI-related incidents.

Incident reporting frameworks
Incident and accountability | AI Governance Library | VerifyWise