arXiv
This groundbreaking empirical study bridges the often-cited gap between AI theory and practice by surveying both AI practitioners and lawmakers about their priorities in AI ethics. Unlike most AI ethics literature that presents philosophical frameworks, this research provides concrete data on what the people actually building and regulating AI systems consider most important. The findings reveal surprising alignment between two groups often seen as adversaries: both practitioners and lawmakers prioritize transparency, accountability, and privacy above other ethical principles, offering a data-driven foundation for AI governance discussions.
Most AI ethics resources fall into two camps: academic theories disconnected from practice, or industry guidelines created in isolation from regulatory perspectives. This study is unique because it asks both groups the same questions, putting practitioner and lawmaker priorities side by side in a single dataset.
The methodology involved structured surveys with AI developers, researchers, policy makers, and legal experts across multiple jurisdictions, making this one of the most comprehensive cross-stakeholder studies on AI ethics priorities.
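The paper reports its results in narrative form; as a rough illustration of the kind of aggregation such cross-stakeholder ranking data supports, below is a minimal pandas sketch. All column names and values are hypothetical placeholders, not the study's actual dataset or analysis code.

```python
import pandas as pd

# Hypothetical illustration: aggregate per-respondent principle rankings
# into group-level priority orderings, the kind of analysis a
# cross-stakeholder ranking survey supports. The rows below are made-up
# examples, not data from the paper.
responses = pd.DataFrame([
    # group, jurisdiction, principle, rank (1 = highest priority)
    ("practitioner", "EU",   "transparency",   1),
    ("practitioner", "EU",   "privacy",        2),
    ("practitioner", "EU",   "accountability", 3),
    ("practitioner", "EU",   "fairness",       4),
    ("lawmaker",     "Asia", "accountability", 1),
    ("lawmaker",     "Asia", "transparency",   2),
    ("lawmaker",     "Asia", "privacy",        3),
    ("lawmaker",     "Asia", "fairness",       4),
], columns=["group", "jurisdiction", "principle", "rank"])

# Mean rank per principle within each stakeholder group:
# a lower mean rank means a higher priority for that group.
group_priorities = (
    responses.groupby(["group", "principle"])["rank"]
    .mean()
    .sort_values()
)
print(group_priorities)

# The same aggregation by jurisdiction is what would surface regional
# variation, e.g. privacy ranking higher among European respondents.
regional_priorities = (
    responses.groupby(["jurisdiction", "principle"])["rank"]
    .mean()
    .sort_values()
)
print(regional_priorities)
```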
The "Big Three" consensus: Both groups consistently ranked transparency, accountability, and privacy as their top three AI ethics principles, despite having very different day-to-day concerns and incentives.
Fairness isn't #1: While fairness dominates AI ethics discussions, both practitioners and lawmakers ranked it lower than expected, suggesting other concerns may be more pressing in practice.
Implementation anxiety: Practitioners expressed significantly more concern about how ethics principles can actually be implemented, while lawmakers focused more on what should be regulated.
Regional variations matter: The study found notable differences in priority rankings based on jurisdiction, with European respondents placing higher emphasis on privacy (unsurprising given GDPR) and Asian respondents prioritizing accountability mechanisms.
The "ethics theater" problem: Many practitioners admitted to supporting ethics principles publicly while privately doubting their feasibility - a gap that lawmakers were largely unaware of.
This is particularly valuable for anyone tired of opinion pieces and looking for actual data on AI ethics priorities.
For building ethics programs: Use the transparency-accountability-privacy triad as your foundation, then add jurisdiction-specific priorities based on the regional data (a sketch of this approach follows this list).
For policy development: The study's finding of practitioner-lawmaker alignment suggests more collaborative approaches to regulation may be feasible than previously assumed.
For academic research: The methodology and survey instruments provide a replicable framework for conducting similar studies in specific domains or regions.
For stakeholder engagement: Use the data to ground discussions in evidence rather than assumptions about what different groups prioritize.
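As a concrete, though hypothetical, illustration of the first application above: the study's triad could seed a program's baseline controls, with regional emphases layered on top. The control names and jurisdiction mappings below are assumptions for illustration, not recommendations from the paper.

```python
# Hypothetical sketch: seed an AI ethics program with the
# transparency-accountability-privacy triad identified in the study,
# then layer on jurisdiction-specific emphases. The control names and
# regional mappings are illustrative assumptions only.
BASELINE_PRINCIPLES = {
    "transparency":   ["model documentation", "decision explanations"],
    "accountability": ["named system owners", "incident review process"],
    "privacy":        ["data minimization", "access controls"],
}

REGIONAL_EMPHASES = {
    "EU":   {"privacy": ["GDPR data-subject rights handling"]},
    "Asia": {"accountability": ["formal accountability mechanisms"]},
}

def build_program(jurisdiction: str) -> dict:
    """Merge the baseline triad with any jurisdiction-specific controls."""
    program = {p: list(controls) for p, controls in BASELINE_PRINCIPLES.items()}
    for principle, extra in REGIONAL_EMPHASES.get(jurisdiction, {}).items():
        program.setdefault(principle, []).extend(extra)
    return program

# Example: an EU-focused program adds the privacy emphasis on top of the triad.
print(build_program("EU"))
```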
The research acknowledges its limitations and provides suggestions for follow-up studies that could address them.
Published: 2022
Jurisdiction: Global
Category: Research and academic references
Access: Public access
VerifyWise helps you implement AI governance frameworks, track compliance, and manage risk across your AI systems.