
Sociotechnical Harms of Algorithmic Systems: Scoping a Taxonomy for Harm Reduction


Summary

This ACM research paper tackles one of AI governance's most pressing challenges: creating a systematic way to identify, categorize, and mitigate harms from algorithmic systems. Through a comprehensive scoping review of computing literature, the researchers developed a taxonomy that goes beyond technical failures to examine the complex sociotechnical interactions where most algorithmic harms actually occur. What sets this work apart is its focus on prevention through classification: rather than only documenting harms after they happen, it provides a structured framework for anticipating and reducing them before deployment.

The Core Taxonomy: Six Categories of Algorithmic Harm

The research identifies six distinct categories of sociotechnical harm:

Individual Harms - Direct impacts on specific people, including discrimination, privacy violations, and autonomy reduction. These are often the most visible but represent just one layer of algorithmic impact.

Interpersonal Harms - Damage to relationships and social connections, such as algorithmic systems that erode trust between individuals or communities.

Institutional Harms - Effects on organizations, governance structures, and formal institutions, including democratic processes and institutional credibility.

Informational Harms - Distortion of information ecosystems through biased data, misinformation amplification, or knowledge manipulation.

Societal Harms - Broad impacts on social structures, cultural norms, and collective well-being that emerge from widespread algorithmic deployment.

Environmental Harms - Often overlooked consequences including energy consumption, resource depletion, and ecological impacts of large-scale algorithmic systems.
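
For teams that want to carry these categories into their own tooling, a minimal sketch of one possible encoding is shown below. The class and field names (HarmCategory, HarmFinding) are illustrative choices, not terms defined by the paper; the sketch only demonstrates how the six categories might be represented in assessment or incident-tracking code.

```python
from dataclasses import dataclass, field
from enum import Enum


class HarmCategory(Enum):
    """Illustrative encoding of the six harm categories described above."""
    INDIVIDUAL = "individual"          # discrimination, privacy violations, autonomy reduction
    INTERPERSONAL = "interpersonal"    # eroded trust between individuals or communities
    INSTITUTIONAL = "institutional"    # damage to organizations, governance, democratic processes
    INFORMATIONAL = "informational"    # biased data, misinformation amplification
    SOCIETAL = "societal"              # broad impacts on social structures and norms
    ENVIRONMENTAL = "environmental"    # energy consumption, resource depletion


@dataclass
class HarmFinding:
    """One identified harm vector, tagged with the categories it touches.

    Real incidents often span several categories, so `categories` is a set
    rather than a single value.
    """
    description: str
    categories: set[HarmCategory]
    affected_stakeholders: list[str] = field(default_factory=list)
    mitigation: str = ""
```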

Why This Framework Changes the Game

Traditional approaches to AI safety often focus on technical metrics or individual bias detection. This taxonomy reveals why that's insufficient—most real-world harms emerge from the complex interactions between algorithms, social systems, and institutional contexts. The framework's strength lies in its recognition that technical solutions alone cannot address sociotechnical problems.

The taxonomy also provides a common language for interdisciplinary teams. Product managers, ethicists, engineers, and policymakers can use these categories to systematically evaluate potential harms across different domains and stakeholder groups.

Practical Applications for Harm Reduction

Pre-deployment Risk Assessment: Use the six categories as a checklist during system design to identify potential harm vectors before they manifest.

Incident Response Planning: Structure your incident response procedures around these harm categories to ensure comprehensive coverage when issues arise.

Stakeholder Engagement: Map different stakeholder concerns to specific harm categories to ensure your consultation processes address all relevant perspectives.

Documentation and Reporting: Organize your algorithmic impact assessments and transparency reports using this taxonomy to improve consistency and completeness.
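
As one way to operationalize the pre-deployment checklist and reporting ideas above, the sketch below continues the illustrative HarmCategory and HarmFinding types from the earlier snippet. The function name and report shape are hypothetical; this is one possible workflow under those assumptions, not a procedure prescribed by the paper.

```python
def assessment_report(findings: list[HarmFinding]) -> dict[str, list[str]]:
    """Group findings by harm category so gaps in coverage are easy to spot.

    Categories with no findings still appear in the report, prompting the
    review team to confirm they were considered rather than overlooked.
    """
    report: dict[str, list[str]] = {c.value: [] for c in HarmCategory}
    for finding in findings:
        for category in finding.categories:
            report[category.value].append(finding.description)
    return report


# Example: a single finding can span several categories at once.
findings = [
    HarmFinding(
        description="Recommendation loop amplifies low-quality health claims",
        categories={HarmCategory.INFORMATIONAL, HarmCategory.SOCIETAL},
        affected_stakeholders=["end users", "public health bodies"],
    ),
]

for category, items in assessment_report(findings).items():
    status = "; ".join(items) if items else "no findings recorded - confirm reviewed"
    print(f"{category}: {status}")
```

Because `categories` is a set, a single finding counts toward every category it touches, which matches the caveat below that the harm types are not mutually exclusive.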

Who This Resource Is For

  • AI product teams developing consumer-facing algorithms who need systematic approaches to harm identification
  • Risk and compliance professionals in organizations deploying AI systems who must assess potential negative impacts
  • Researchers and academics studying AI safety, fairness, or sociotechnical systems who want a comprehensive harm classification framework
  • Policy professionals developing AI governance frameworks who need evidence-based taxonomies for regulatory guidance
  • Civil society organizations monitoring AI deployment who want structured approaches to harm documentation and advocacy

Watch Out For

The taxonomy is based on existing computing literature, which may not capture all emerging forms of harm or perspectives from affected communities. Consider supplementing this framework with direct stakeholder input and community-based harm definitions.

The categories can overlap in practice—real incidents often span multiple harm types simultaneously. Don't treat them as mutually exclusive when conducting assessments.

This is a research paper, not an implementation guide. You'll need to adapt the taxonomy to your specific context, industry, and regulatory environment.

Tags

algorithmic harm, taxonomy, harm reduction, sociotechnical systems, AI safety, incident reporting

At a glance

Published: 2023
Jurisdiction: Global
Category: Incident and accountability
Access: Registration required

