A Collaborative, Human-Centred Taxonomy of AI, Algorithmic, and Automation Harms

Summary

Most AI harm taxonomies are built by technologists for technologists, creating blind spots that leave real-world impacts on communities underexplored. This 2024 research paper breaks that pattern by presenting a taxonomy developed through collaborative workshops with diverse stakeholders—including those most affected by AI systems. The result is a more nuanced framework that captures harms often invisible to traditional risk assessments, from cultural erasure to community displacement. Rather than another checklist for compliance teams, this taxonomy offers a lens for understanding how AI systems create ripple effects across social, economic, and cultural dimensions.

What Makes This Different

Unlike top-down taxonomies created in boardrooms, this framework emerged from grassroots collaboration. The researchers conducted workshops with community advocates, affected individuals, civil society organizations, and domain experts—not just AI practitioners and policymakers. This approach reveals harm categories that purely technical assessments miss:

  • Collective and community-level impacts that extend beyond individual harm
  • Cultural and identity-based harms affecting marginalized communities
  • Systemic and structural effects that compound over time
  • Intersectional perspectives showing how multiple identities create unique vulnerabilities

The taxonomy explicitly challenges the assumption that harms can be neatly categorized, instead embracing the messy reality of how AI impacts interconnect across domains and communities.

Core Insights and Methodology

The research identifies critical gaps in existing taxonomies through both literature review and participatory workshops. Key findings include:

The Participation Gap: Most taxonomies reflect the perspectives of those who build and regulate AI systems, not those who experience their consequences daily. This creates systematic blind spots around community-level and cultural impacts.

Beyond Individual Harm: Traditional frameworks focus on individual rights violations but miss how AI systems can harm collective identities, disrupt social structures, or erode community resources.

Intersectionality Matters: The taxonomy demonstrates how identity intersections—race, gender, class, disability status—create unique harm patterns that single-axis analyses overlook.

The methodology itself is instructive: structured workshops that prioritized lived experience alongside technical expertise, creating space for perspectives typically excluded from AI governance conversations.

Who This Resource Is For

Community Organizations and Advocates will find language and frameworks to articulate AI harms affecting their constituencies, moving beyond individual complaints to systemic analysis.

Policy Researchers and Government Officials can use this taxonomy to identify regulatory gaps and ensure harm assessments capture community-level impacts often missing from corporate risk reports.

Corporate AI Ethics Teams seeking genuinely inclusive harm assessment will discover categories and perspectives to expand beyond technical auditing toward community-engaged evaluation.

Academic Researchers in AI ethics, digital rights, and critical technology studies will find both methodological approaches and substantive findings to build upon.

Civil Society Organizations working on digital rights, algorithmic accountability, or social justice can leverage this framework to connect AI governance to broader equity concerns.

Practical Applications

This taxonomy serves multiple functions depending on your role:

  • Risk Assessment Enhancement: Use the framework to audit existing harm taxonomies for blind spots, particularly around collective and cultural impacts
  • Stakeholder Engagement Strategy: The participatory methodology provides a template for inclusive AI governance processes
  • Policy Gap Analysis: Compare current regulations against the taxonomy to identify areas where community perspectives are underrepresented
  • Research Framework: Apply the categories to analyze AI impacts in specific domains or communities
  • Advocacy Tool: Leverage the academic credibility and comprehensive scope to advocate for broader harm recognition in AI governance

The taxonomy isn't meant as a final checklist but as a living framework that evolves with community input and emerging AI applications.

Limitations to Consider

As a research paper rather than an implementation guide, this resource requires translation work to become actionable. The collaborative methodology, while valuable, is resource-intensive and may not scale easily across different contexts.

The taxonomy's strength—its comprehensive, intersectional approach—can also make it challenging to operationalize within existing risk management frameworks designed for simpler categorizations. Organizations will need to determine how to integrate these insights with existing compliance requirements.

Additionally, while the paper demonstrates the value of community participation in taxonomy development, it provides limited guidance on how to implement such processes within corporate or governmental constraints.

Tags

AI governance, risk taxonomy, algorithmic harm, human-centered design, AI safety, risk classification

At a glance

  • Published: 2024
  • Jurisdiction: Global
  • Category: Risk taxonomies
  • Access: Public access
