This ACM research paper tackles one of AI governance's most pressing challenges: creating a systematic way to identify, categorize, and mitigate harms from algorithmic systems. Through a scoping review of computing literature, the researchers developed a taxonomy that goes beyond technical failures to examine the complex sociotechnical interactions where most algorithmic harms actually occur. What sets this work apart is its focus on prevention through classification: rather than only documenting harms after they happen, it provides a structured framework for anticipating and reducing them before deployment.
The research identifies six distinct categories of sociotechnical harm:
Individual Harms - Direct impacts on specific people, including discrimination, privacy violations, and autonomy reduction. These are often the most visible but represent just one layer of algorithmic impact.
Interpersonal Harms - Damage to relationships and social connections, such as algorithmic systems that erode trust between individuals or communities.
Institutional Harms - Effects on organizations, governance structures, and formal institutions, including democratic processes and institutional credibility.
Informational Harms - Distortion of information ecosystems through biased data, misinformation amplification, or knowledge manipulation.
Societal Harms - Broad impacts on social structures, cultural norms, and collective well-being that emerge from widespread algorithmic deployment.
Environmental Harms - Often overlooked consequences, including energy consumption, resource depletion, and the ecological impacts of large-scale algorithmic systems.
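To make the taxonomy concrete for engineering teams, the sketch below encodes the six categories as a shared, controlled vocabulary that impact assessments and incident records could both reference. It is illustrative only; the class and field names are hypothetical and come from neither the paper nor any particular tool.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmCategory(Enum):
    """The six sociotechnical harm categories summarized above."""
    INDIVIDUAL = "individual"
    INTERPERSONAL = "interpersonal"
    INSTITUTIONAL = "institutional"
    INFORMATIONAL = "informational"
    SOCIETAL = "societal"
    ENVIRONMENTAL = "environmental"


@dataclass
class HarmFinding:
    """One identified or anticipated harm, tagged with taxonomy categories.

    `categories` is a set because real incidents often span several
    harm types at once.
    """
    description: str
    categories: set[HarmCategory]
    affected_stakeholders: list[str] = field(default_factory=list)
    mitigation: str = ""


# Example: a single finding from a hypothetical recommender-system review.
finding = HarmFinding(
    description="Engagement-optimized ranking amplifies low-quality sources",
    categories={HarmCategory.INFORMATIONAL, HarmCategory.SOCIETAL},
    affected_stakeholders=["end users", "news publishers"],
    mitigation="Add source-quality signals and periodic ecosystem audits",
)
print(sorted(c.value for c in finding.categories))
```

Tagging each finding with a set of categories, rather than a single label, reflects the paper's observation (noted in the limitations below) that real incidents often cut across harm types.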
Traditional approaches to AI safety often focus on technical metrics or individual bias detection. This taxonomy reveals why that's insufficient—most real-world harms emerge from the complex interactions between algorithms, social systems, and institutional contexts. The framework's strength lies in its recognition that technical solutions alone cannot address sociotechnical problems.
The taxonomy also provides a common language for interdisciplinary teams. Product managers, ethicists, engineers, and policymakers can use these categories to systematically evaluate potential harms across different domains and stakeholder groups.
Pre-deployment Risk Assessment: Use the six categories as a checklist during system design to identify potential harm vectors before they manifest (a minimal checklist sketch follows this list).
Incident Response Planning: Structure your incident response procedures around these harm categories to ensure comprehensive coverage when issues arise.
Stakeholder Engagement: Map different stakeholder concerns to specific harm categories to ensure your consultation processes address all relevant perspectives.
Documentation and Reporting: Organize your algorithmic impact assessments and transparency reports using this taxonomy to improve consistency and completeness.
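As one way to operationalize the pre-deployment checklist idea, a review step can withhold sign-off until every harm category carries at least one recorded finding or an explicit "n/a" note. The function and data layout below are a hypothetical sketch under those assumptions, not drawn from the paper or any specific product.

```python
# Hypothetical pre-deployment gate: every harm category must carry at least
# one recorded finding or an explicit "n/a" note before sign-off.

HARM_CATEGORIES = (
    "individual", "interpersonal", "institutional",
    "informational", "societal", "environmental",
)


def unreviewed_categories(assessment: dict[str, list[str]]) -> list[str]:
    """Return categories with no recorded notes.

    `assessment` maps a category name to the notes reviewers recorded for it,
    e.g. {"individual": ["profiling risk for minors"], "societal": ["n/a"]}.
    """
    return [c for c in HARM_CATEGORIES if not assessment.get(c)]


if __name__ == "__main__":
    draft = {
        "individual": ["possible disparate error rates across age groups"],
        "informational": ["ranking loop may amplify low-quality sources"],
    }
    missing = unreviewed_categories(draft)
    if missing:
        print("Assessment incomplete; unreviewed categories:", ", ".join(missing))
    else:
        print("All six harm categories reviewed.")
```

Treating "n/a" as a positive entry keeps the record auditable: reviewers must state that a category was considered rather than silently skipping it.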
The taxonomy is based on existing computing literature, which may not capture all emerging forms of harm or perspectives from affected communities. Consider supplementing this framework with direct stakeholder input and community-based harm definitions.
The categories can overlap in practice—real incidents often span multiple harm types simultaneously. Don't treat them as mutually exclusive when conducting assessments.
This is a research paper, not an implementation guide. You'll need to adapt the taxonomy to your specific context, industry, and regulatory environment.
Published: 2023
Jurisdiction: Global
Category: Incident and accountability
Access: Registration required