Most AI harm taxonomies are built by technologists for technologists, creating blind spots that leave real-world impacts on communities underexplored. This 2024 research paper breaks that pattern by presenting a taxonomy developed through collaborative workshops with diverse stakeholders—including those most affected by AI systems. The result is a more nuanced framework that captures harms often invisible to traditional risk assessments, from cultural erasure to community displacement. Rather than another checklist for compliance teams, this taxonomy offers a lens for understanding how AI systems create ripple effects across social, economic, and cultural dimensions.
Unlike top-down taxonomies created in boardrooms, this framework emerged from grassroots collaboration. The researchers conducted workshops with community advocates, affected individuals, civil society organizations, and domain experts—not just AI practitioners and policymakers. This approach reveals harm categories that purely technical assessments miss.
The taxonomy explicitly challenges the assumption that harms can be neatly categorized, instead embracing the messy reality of how AI impacts interconnect across domains and communities.
The research identifies critical gaps in existing taxonomies through both literature review and participatory workshops. Key findings include:
The Participation Gap: Most taxonomies reflect the perspectives of those who build and regulate AI systems, not those who experience their consequences daily. This creates systematic blind spots around community-level and cultural impacts.
Beyond Individual Harm: Traditional frameworks focus on individual rights violations but miss how AI systems can harm collective identities, disrupt social structures, or erode community resources.
Intersectionality Matters: The taxonomy demonstrates how identity intersections—race, gender, class, disability status—create unique harm patterns that single-axis analyses overlook.
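To make that single-axis blind spot concrete, here is a minimal, hypothetical sketch: all groups, records, and rates are invented for illustration and are not drawn from the paper. It shows how harm concentrated at one intersection of identity axes gets diluted when each axis is aggregated on its own.

```python
# Hypothetical illustration of the single-axis blind spot: harm that
# concentrates at an intersection of identity axes can look unremarkable
# when each axis is analyzed separately. All data below is invented.
from collections import defaultdict

# Each record: (race group, gender group, harmed?)
records = [
    ("A", "X", True), ("A", "X", True), ("A", "X", True), ("A", "X", True),
    ("A", "Y", False), ("A", "Y", False), ("A", "Y", False), ("A", "Y", False),
    ("B", "X", False), ("B", "X", False), ("B", "X", False), ("B", "X", False),
    ("B", "Y", True), ("B", "Y", False), ("B", "Y", False), ("B", "Y", False),
]

def harm_rate(group_key):
    """Harm rate per group, where group_key maps a record to its group."""
    totals, harms = defaultdict(int), defaultdict(int)
    for race, gender, harmed in records:
        key = group_key(race, gender)
        totals[key] += 1
        harms[key] += harmed
    return {k: harms[k] / totals[k] for k in totals}

print(harm_rate(lambda r, g: r))       # by race alone:   {'A': 0.5, 'B': 0.125}
print(harm_rate(lambda r, g: g))       # by gender alone: {'X': 0.5, 'Y': 0.125}
print(harm_rate(lambda r, g: (r, g)))  # intersection:    ('A', 'X') -> 1.0
```

In this toy data, no single-axis rate exceeds 50%, yet the intersectional view shows one subgroup harmed every time—exactly the kind of pattern the taxonomy's intersectional lens is designed to surface.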
The methodology itself is instructive: structured workshops that prioritized lived experience alongside technical expertise, creating space for perspectives typically excluded from AI governance conversations.

This taxonomy serves multiple functions depending on your role:
Community Organizations and Advocates will find language and frameworks to articulate AI harms affecting their constituencies, moving beyond individual complaints to systemic analysis.
Policy Researchers and Government Officials can use this taxonomy to identify regulatory gaps and ensure harm assessments capture community-level impacts often missing from corporate risk reports.
Corporate AI Ethics Teams seeking genuinely inclusive harm assessment will discover categories and perspectives to expand beyond technical auditing toward community-engaged evaluation.
Academic Researchers in AI ethics, digital rights, and critical technology studies will find both methodological approaches and substantive findings to build upon.
Civil Society Organizations working on digital rights, algorithmic accountability, or social justice can leverage this framework to connect AI governance to broader equity concerns.
The taxonomy isn't meant as a final checklist but as a living framework that evolves with community input and emerging AI applications.
As a research paper rather than an implementation guide, this resource requires translation work to become actionable. The collaborative methodology, while valuable, is resource-intensive and may not scale easily across different contexts.
The taxonomy's strength—its comprehensive, intersectional approach—can also make it challenging to operationalize within existing risk management frameworks designed for simpler categorizations. Organizations will need to determine how to integrate these insights with existing compliance requirements.
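One way to start that integration is sketched below. This is our own assumption, not a schema from the paper: every field name and category is hypothetical. The idea is to widen a conventional risk-register entry so a single harm can carry multiple domains, a scope beyond the individual, and intersectional affected groups.

```python
# A hypothetical sketch of carrying the taxonomy's multi-dimensional harm
# descriptions into a conventional risk register. Field names, enum values,
# and the example record are our own illustration, not the paper's schema.
from dataclasses import dataclass
from enum import Enum

class Scope(Enum):
    INDIVIDUAL = "individual"
    COMMUNITY = "community"
    SOCIETAL = "societal"

@dataclass
class HarmEntry:
    description: str
    scope: Scope                        # who bears the harm, beyond the individual
    domains: list[str]                  # e.g. ["economic", "cultural"]; one harm may span several
    affected_groups: list[str]          # intersectional identities, not single axes
    source: str = "community workshop"  # provenance: lived experience vs. technical audit

# Example: a community-level harm that a single-category register would flatten.
entry = HarmEntry(
    description="Recommendation system displaces local-language content",
    scope=Scope.COMMUNITY,
    domains=["cultural", "economic"],
    affected_groups=["linguistic minority x rural users"],
)
print(entry)
```

The design choice that matters here is the plural fields: letting one entry span several domains and name intersectional groups directly, rather than forcing each harm into a single pre-existing category.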
Additionally, while the paper demonstrates the value of community participation in taxonomy development, it provides limited guidance on how to implement such processes within corporate or governmental constraints.
Published: 2024
Jurisdiction: Global
Category: Risk taxonomies
Access: Public access