
AI Ethics: An Empirical Study on the Views of Practitioners and Lawmakers


Summary

This empirical study bridges the often-cited gap between AI ethics theory and practice by surveying both AI practitioners and lawmakers about their priorities in AI ethics. Unlike most AI ethics literature, which presents philosophical frameworks, this research provides concrete data on what the people actually building and regulating AI systems consider most important. The findings reveal surprising alignment between two groups often seen as adversaries: both practitioners and lawmakers prioritize transparency, accountability, and privacy above other ethical principles, offering a data-driven foundation for AI governance discussions.

What makes this research different

Most AI ethics resources fall into two camps: academic theories disconnected from practice, or industry guidelines created in isolation from regulatory perspectives. This study is unique because it:

  • Compares perspectives directly - Side-by-side analysis of what practitioners vs. lawmakers actually think, not what they say publicly
  • Uses empirical data - Real survey responses rather than assumed priorities or theoretical frameworks
  • Focuses on implementation gaps - Identifies where good intentions meet practical constraints
  • Reveals unexpected consensus - Shows that the "us vs. them" narrative between tech and regulators may be overblown

The methodology involved structured surveys with AI developers, researchers, policy makers, and legal experts across multiple jurisdictions, making this one of the most comprehensive cross-stakeholder studies on AI ethics priorities.

Key findings that challenge conventional wisdom

The "Big Three" consensus: Both groups consistently ranked transparency, accountability, and privacy as their top three AI ethics principles, despite having very different day-to-day concerns and incentives.

Fairness isn't #1: While fairness dominates AI ethics discussions, both practitioners and lawmakers ranked it lower than expected, suggesting other concerns may be more pressing in practice.

Implementation anxiety: Practitioners expressed significantly more concern about how ethics principles should be implemented, while lawmakers focused more on what should be regulated.

Regional variations matter: The study found notable differences in priority rankings based on jurisdiction, with European respondents placing higher emphasis on privacy (unsurprising given GDPR) and Asian respondents prioritizing accountability mechanisms.

The "ethics theater" problem: Many practitioners admitted to supporting ethics principles publicly while privately doubting their feasibility, a gap that lawmakers were largely unaware of.

Who this resource is for

  • AI governance professionals developing ethics frameworks who need evidence-based insights into what actually matters to implementers
  • Policy researchers looking for empirical data to support or challenge existing regulatory approaches
  • Chief AI Officers and ethics teams seeking to align their priorities with both industry best practices and regulatory expectations
  • Legal professionals advising tech companies who need to understand the practitioner perspective on compliance challenges
  • Academic researchers studying the gap between AI ethics theory and practice
  • Standards developers working on AI governance frameworks who want stakeholder input data

This is particularly valuable for anyone tired of opinion pieces and looking for actual data on AI ethics priorities.

How to use these insights

For building ethics programs: Use the transparency-accountability-privacy triad as your foundation, then add jurisdiction-specific priorities based on the regional data.

For policy development: The study's finding of practitioner-lawmaker alignment suggests more collaborative approaches to regulation may be feasible than previously assumed.

For academic research: The methodology and survey instruments provide a replicable framework for conducting similar studies in specific domains or regions.

For stakeholder engagement: Use the data to ground discussions in evidence rather than assumptions about what different groups prioritize.

Limitations to keep in mind

  • Sample size constraints - While comprehensive, the survey may not capture all stakeholder perspectives, particularly from smaller organizations or developing economies
  • Temporal snapshot - Views on AI ethics are rapidly evolving; findings reflect 2022 perspectives
  • Self-selection bias - Respondents who participated in an AI ethics study may already be more ethically engaged than the broader population
  • Implementation vs. aspiration - The study captures stated preferences, which may differ from actual behavior under pressure

The research acknowledges these limitations and provides suggestions for follow-up studies that could address them.

Tags

AI ethics, empirical research, transparency, accountability, privacy, governance studies

At a glance

  • Published: 2022
  • Jurisdiction: Global
  • Category: Research and academic references
  • Access: Public access

