SAGE Publications
This SAGE Publications research dives deep into how communities, researchers, and activists are fighting back against algorithmic harms through different types of "knowledge projects" - systematic efforts to document, expose, and challenge biased AI systems. Building on landmark investigations like ProPublica's Machine Bias series, which revealed racial bias in criminal justice risk assessment tools, this work maps out the diverse ways people are creating counter-narratives to the tech industry's claims of algorithmic neutrality. Rather than just documenting problems, it examines how affected communities are generating their own forms of evidence and expertise to challenge harmful AI deployments.
The research situates itself within the wave of algorithmic accountability work that emerged after ProPublica's 2016 Machine Bias investigation, which found that the COMPAS risk assessment tool falsely flagged Black defendants as future criminals at nearly twice the rate of white defendants. But rather than treating such revelations as isolated journalistic victories, this work examines how they've spawned entire ecosystems of resistance - from community-led auditing projects to academic research programs that center affected communities' experiences.
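To make that statistic concrete, here is a minimal sketch of how a group-wise false-positive-rate disparity of the kind ProPublica reported can be computed. It is an illustration only, not ProPublica's actual analysis code; the toy records, field ordering, and the false_positive_rate helper are assumptions made for the example.

```python
# Minimal sketch (not ProPublica's actual analysis): computing a group-wise
# false-positive-rate disparity of the kind behind "falsely flagged as future
# criminals at nearly twice the rate". The toy records and helper below are
# illustrative assumptions, not data from the study.
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended)
records = [
    ("A", True, False), ("A", True, True), ("A", False, False), ("A", True, False),
    ("B", True, False), ("B", False, False), ("B", False, True), ("B", False, False),
]

def false_positive_rate(rows):
    """Share of people who did NOT reoffend but were still flagged high risk."""
    negatives = [r for r in rows if not r[2]]      # did not reoffend
    false_pos = [r for r in negatives if r[1]]     # ...yet flagged high risk
    return len(false_pos) / len(negatives) if negatives else float("nan")

by_group = defaultdict(list)
for rec in records:
    by_group[rec[0]].append(rec)

rates = {group: false_positive_rate(rows) for group, rows in by_group.items()}
print(rates)  # on this toy data: group A's rate (0.67) is twice group B's (0.33)
```

On the toy data this prints a two-to-one gap between groups, which is the shape of disparity the investigation documented for real defendants.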
The work identifies several overlapping types of knowledge projects through which this resistance takes shape:
Community-centered documentation efforts: Grassroots initiatives where people directly harmed by algorithmic systems create their own evidence bases, often challenging the metrics and definitions used by system developers.
Critical technical investigations: Research that combines technical auditing with social analysis, going beyond identifying bias to examine how it connects to broader systems of oppression.
Policy intervention projects: Efforts that translate community experiences and technical findings into concrete policy proposals, often bridging the gap between affected communities and regulatory bodies.
Counter-expertise development: Initiatives that build alternative forms of technical knowledge, challenging who gets to be considered an "expert" on algorithmic systems and their impacts.
Unlike typical algorithmic bias research that focuses on technical detection methods, this work examines resistance as knowledge production. It takes seriously the expertise of people experiencing algorithmic harms, rather than treating them simply as subjects to be studied. The research also moves beyond individual bias incidents to examine how communities are building sustained capacity to challenge algorithmic systems - creating what the authors call "infrastructures of refusal."
The implications differ by audience:
For advocates: Community-generated knowledge can be just as powerful as technical audits in challenging harmful systems - but it requires different forms of support and validation.
For researchers: Effective algorithmic accountability work requires ongoing relationships with affected communities, not just one-off studies.
For policymakers: Governance frameworks need to create space for community expertise, not just technical and legal perspectives.
For technologists: Understanding resistance helps identify where algorithmic systems are causing real-world harm, beyond what traditional fairness metrics capture.
Published: 2022
Jurisdiction: Global
Category: Incident and accountability
Access: Public access