
Resistance and refusal to algorithmic harms: Varieties of 'knowledge projects'

SAGE Publications

Summary

This SAGE Publications research examines how communities, researchers, and activists are fighting back against algorithmic harms through different types of "knowledge projects": systematic efforts to document, expose, and challenge biased AI systems. Building on landmark investigations like ProPublica's Machine Bias series, which revealed racial bias in criminal justice risk assessment tools, the work maps the diverse ways people are creating counter-narratives to the tech industry's claims of algorithmic neutrality. Rather than just documenting problems, it examines how affected communities generate their own forms of evidence and expertise to challenge harmful AI deployments.

The backstory: From Machine Bias to movement

The research situates itself within the wave of algorithmic accountability work that emerged after ProPublica's 2016 Machine Bias investigation, which found that the COMPAS risk assessment tool was nearly twice as likely to falsely flag Black defendants as future criminals as it was white defendants. But rather than treating such revelations as isolated journalistic victories, this work examines how they have spawned entire ecosystems of resistance, from community-led auditing projects to academic research programs that center affected communities' experiences.

Core knowledge project types uncovered

Community-centered documentation efforts: Grassroots initiatives where people directly harmed by algorithmic systems create their own evidence bases, often challenging the metrics and definitions used by system developers.

Critical technical investigations: Research that combines technical auditing with social analysis, going beyond identifying bias to examine how it connects to broader systems of oppression.

Policy intervention projects: Efforts that translate community experiences and technical findings into concrete policy proposals, often bridging the gap between affected communities and regulatory bodies.

Counter-expertise development: Initiatives that build alternative forms of technical knowledge, challenging who gets to be considered an "expert" on algorithmic systems and their impacts.

Who this resource is for

  • Community organizers working on tech justice issues who want to understand how documentation and research can support advocacy efforts
  • Journalists and researchers looking to ground their algorithmic accountability work in community needs rather than just technical metrics
  • Policy advocates seeking frameworks for translating community experiences of algorithmic harm into actionable governance proposals
  • Academic researchers interested in participatory approaches to AI auditing that center affected communities
  • Legal advocates building cases around algorithmic discrimination who need to understand how community knowledge can complement technical evidence

What makes this research different

Unlike typical algorithmic bias research that focuses on technical detection methods, this work examines resistance as knowledge production. It takes seriously the expertise of people experiencing algorithmic harms, rather than treating them simply as subjects to be studied. The research also moves beyond individual bias incidents to examine how communities are building sustained capacity to challenge algorithmic systems - creating what the authors call "infrastructures of refusal."

Key implications for practice

For advocates: Community-generated knowledge can be just as powerful as technical audits in challenging harmful systems - but it requires different forms of support and validation.

For researchers: Effective algorithmic accountability work requires ongoing relationships with affected communities, not just one-off studies.

For policymakers: Governance frameworks need to create space for community expertise, not just technical and legal perspectives.

For technologists: Understanding resistance helps identify where algorithmic systems are causing real-world harm, beyond what traditional fairness metrics capture.

Tags

algorithmic bias, AI accountability, criminal justice, investigative journalism, racial bias, algorithmic harms

At a glance

  • Published: 2022
  • Jurisdiction: Global
  • Category: Incident and accountability
  • Access: Public access
