This comprehensive research paper provides the first systematic examination of algorithmic auditing practices across both industry and academic settings. Rather than prescribing theoretical frameworks, the authors conducted empirical analysis of real-world audit implementations to understand what actually works—and what doesn't—in practice. The study reveals significant gaps between auditing aspirations and reality, while identifying promising approaches that organizations can adopt. Published at ACM FAccT 2022, this research serves as both a reality check for the field and a practical guide for improving audit methodologies.
The survey examined dozens of algorithmic audits across different sectors and revealed several surprising insights:
The methodology gap is real: Most audits lack standardized approaches, making it difficult to compare findings or replicate results. Organizations are essentially making it up as they go along.
Internal vs. external audits show different patterns: Internal company audits tend to focus on technical metrics and performance, while external audits (by researchers or third parties) emphasize fairness and societal impact—but rarely do both well.
Documentation is inconsistent: Many audits fail to adequately document their scope, limitations, or methodology, undermining their credibility and usefulness for future work (a minimal documentation sketch follows this list).
Industry-specific challenges emerged: Financial services audits face different constraints than healthcare or hiring algorithms, yet most current frameworks ignore these contextual differences.
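To make the documentation point concrete, here is a minimal sketch, not taken from the paper, of a machine-readable audit record that captures scope, methodology, and limitations. Every field name and value is a hypothetical example chosen for illustration.

```python
# Illustrative sketch only: a minimal, machine-readable audit record covering the
# fields the survey found are often missing (scope, methodology, limitations).
# All field names, values, and the serialization choice are assumptions.
import json
from dataclasses import dataclass, field, asdict


@dataclass
class AuditRecord:
    system_name: str
    audit_date: str
    scope: str                      # what parts of the system were examined
    methodology: str                # how metrics were computed and data sampled
    limitations: list[str] = field(default_factory=list)
    findings: list[str] = field(default_factory=list)

    def to_json(self) -> str:
        return json.dumps(asdict(self), indent=2)


record = AuditRecord(
    system_name="loan-approval-model-v3",       # hypothetical system
    audit_date="2022-06-01",
    scope="Disparate impact on approval rates, US applicants only",
    methodology="Demographic parity difference on a held-out validation set",
    limitations=["No intersectional analysis", "Proxy attributes used for race"],
)
print(record.to_json())
```

A structured record like this is what makes later comparison and replication possible, which is exactly what the survey found most audits lack.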
The authors identified several audit approaches that show promise:
Pre-deployment audits work best when integrated into the development process rather than treated as a final checkpoint. Organizations that embed auditing into their ML pipeline report more actionable findings (a minimal pipeline-gate sketch appears after this list).
Stakeholder involvement significantly improves audit relevance, but most audits still rely too heavily on technical teams without sufficient input from affected communities.
Longitudinal monitoring proves more valuable than one-time audits, yet resource constraints often prevent organizations from implementing ongoing assessment (see the monitoring sketch after this list).
Red-teaming approaches are gaining traction, particularly for identifying edge cases and adversarial vulnerabilities that traditional testing misses.
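To illustrate what embedding auditing into an ML pipeline might look like in practice, the sketch below shows a hypothetical pre-deployment gate that blocks a release when a simple fairness metric exceeds a threshold. The metric choice (demographic parity difference), the threshold, and the group labels are assumptions for this example, not recommendations from the paper.

```python
# Illustrative sketch only: a pre-deployment audit gate embedded in an ML pipeline.
# The metric, threshold, and group labels are assumptions chosen for the example.
from dataclasses import dataclass

import numpy as np


@dataclass
class AuditResult:
    metric: str
    value: float
    threshold: float

    @property
    def passed(self) -> bool:
        return self.value <= self.threshold


def demographic_parity_difference(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))


def pre_deployment_gate(y_pred: np.ndarray, groups: np.ndarray,
                        threshold: float = 0.1) -> AuditResult:
    """Run the fairness check as part of the release pipeline, not as an afterthought."""
    value = demographic_parity_difference(y_pred, groups)
    return AuditResult("demographic_parity_difference", value, threshold)


if __name__ == "__main__":
    # Toy data standing in for a model's predictions on a validation set.
    rng = np.random.default_rng(0)
    y_pred = rng.integers(0, 2, size=1000)
    groups = rng.choice(["A", "B"], size=1000)
    result = pre_deployment_gate(y_pred, groups)
    print(f"{result.metric}={result.value:.3f} (threshold {result.threshold})")
    if not result.passed:
        raise SystemExit("Audit gate failed: block deployment and escalate for review.")
```

In a real pipeline, a gate like this would typically run in CI alongside accuracy and robustness checks, so audit findings surface before deployment rather than after.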
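For the longitudinal-monitoring point, here is a similarly hedged sketch: the same kind of metric is recomputed on each new batch of production data and logged with a date, so drift over time becomes visible instead of being caught only in a one-time audit. The metric, file format, and function names are assumptions for illustration.

```python
# Illustrative sketch only: longitudinal monitoring re-runs an audit metric on each
# new batch of production data and keeps a dated record so drift becomes visible.
# The metric, CSV format, and function names are assumptions for the example.
import csv
import datetime as dt

import numpy as np


def positive_rate_gap(y_pred: np.ndarray, groups: np.ndarray) -> float:
    """Largest gap in positive-prediction rate between any two groups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))


def monitor_batch(y_pred, groups, log_path: str = "audit_log.csv") -> float:
    """Score one production batch and append a dated row to a running CSV log."""
    value = positive_rate_gap(np.asarray(y_pred), np.asarray(groups))
    with open(log_path, "a", newline="") as f:
        csv.writer(f).writerow([dt.date.today().isoformat(),
                                "positive_rate_gap", f"{value:.4f}"])
    return value
```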
This survey arrives at a critical moment when regulatory pressure is mounting globally, but practical guidance remains scarce. The European Union's AI Act will require algorithmic impact assessments, California is considering algorithmic accountability legislation, and companies are scrambling to implement audit practices before requirements become mandatory.
The research provides evidence-based insights into what actually works rather than theoretical ideals. For organizations building audit capabilities today, this paper offers concrete examples of successful approaches and common failure modes to avoid.
Chief Technology Officers and AI Ethics Teams implementing audit programs will find practical insights into resource allocation, methodology selection, and stakeholder engagement strategies.
Regulatory compliance professionals can use the survey findings to benchmark their organization's practices against industry norms and identify gaps before audits become mandatory.
Academic researchers studying algorithmic accountability will discover methodological approaches that have proven effective in practice, plus identification of areas needing further research.
Policy makers developing algorithmic accountability requirements can ground their regulations in evidence about what audit approaches actually deliver meaningful results.
Third-party auditors and consultants will gain insights into client needs, effective methodologies, and how to position their services in a rapidly evolving market.
The research acknowledges several constraints that affect its applicability:
Publication bias likely affects which audits were available for analysis—organizations rarely publicize audits that reveal serious problems or methodological failures.
Temporal snapshot: The paper captures practices as of 2022, but the field is evolving rapidly as new tools and regulations emerge.
Access limitations meant some industry practices couldn't be fully analyzed due to proprietary concerns or legal restrictions.
Geographic scope skews toward North American and European practices, with limited representation from other regions where algorithmic deployment is accelerating.
Published: 2022
Jurisdiction: Global
Category: Research and academic references
Access: Public access