EU Ethics Guidelines for Trustworthy AI

European Commission HLEG

Summary

The European Commission's High-Level Expert Group on AI (HLEG) created these guidelines as the foundational ethical framework that paved the way for Europe's regulatory approach to AI. Released in 2019, this document establishes seven concrete requirements for trustworthy AI systems and introduces a practical assessment list with over 60 specific questions. Unlike abstract ethical principles, these guidelines provide actionable criteria that organizations can use to evaluate their AI systems before deployment.

The Seven Pillars of Trustworthy AI

Human Agency and Oversight: AI systems should support human decision-making, not replace human judgment entirely. This includes meaningful human control and the right to human review of AI decisions.

Technical Robustness and Safety: Systems must be reliable, secure, and safe throughout their lifecycle, with fallback plans and accuracy appropriate to their context.

Privacy and Data Governance: Strong data protection measures, purpose limitation, and data quality assurance must be embedded from the design phase.

Transparency: AI systems should be explainable, with clear communication about capabilities, limitations, and decision-making processes to relevant stakeholders.

Diversity, Non-discrimination and Fairness: Systems must avoid unfair bias, ensure accessibility, and involve diverse stakeholders in their development and deployment.

Societal and Environmental Well-being: Consider broader impacts on society, democracy, and the environment, including sustainability and social consequences.

Accountability: Clear governance structures, auditability, risk assessment, and mechanisms for redress must be established.

Why This Document Still Matters in 2024

While the EU AI Act now provides legally binding requirements, these ethics guidelines remain highly relevant because they:

  • Inform regulatory interpretation: The EU AI Act references these principles, making them crucial for compliance strategies
  • Go beyond legal minimums: They address ethical considerations that regulations may not cover
  • Provide practical tools: The assessment list offers concrete questions for ethical AI audits
  • Bridge technical and ethical domains: Written for both technologists and ethicists, creating a common language
  • Influence global standards: Referenced by organizations worldwide as a benchmark for responsible AI

Who This Resource Is For

AI Product Managers and Developers building systems that may be deployed in Europe or globally, who need concrete criteria for ethical design decisions.

Compliance and Risk Teams preparing for EU AI Act requirements, as these guidelines inform the regulatory framework and provide additional ethical context.

Ethics Committees and Review Boards seeking structured approaches to AI ethics assessment, with ready-to-use evaluation criteria.

Consultants and Auditors conducting AI ethics assessments who need comprehensive frameworks with specific, measurable requirements.

Academic Researchers studying AI governance, as this represents one of the most influential policy documents in the field.

Getting Started: The Assessment List Approach

The guidelines include a detailed assessment list organized around the seven requirements. To get started:

  1. Download the full document and navigate to the assessment list (pages 26-33)
  2. Choose one AI system for pilot assessment rather than trying to evaluate everything at once
  3. Assemble a cross-functional team including technical, legal, and business stakeholders
  4. Work through each requirement systematically, documenting current practices and gaps
  5. Prioritize improvements based on risk level and feasibility

The assessment questions are designed to be answerable by teams with mixed technical backgrounds, making this more accessible than purely technical auditing frameworks.
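For teams that want to capture their answers in a structured, auditable form, a minimal sketch like the one below can serve as a starting point. The seven requirement names are taken from the guidelines; the AssessmentItem structure, its field names, and the example question are illustrative assumptions rather than part of the official assessment list.

```python
from dataclasses import dataclass

# The seven requirements named in the guidelines.
REQUIREMENTS = [
    "Human agency and oversight",
    "Technical robustness and safety",
    "Privacy and data governance",
    "Transparency",
    "Diversity, non-discrimination and fairness",
    "Societal and environmental well-being",
    "Accountability",
]

@dataclass
class AssessmentItem:
    """One answered question from the assessment list (illustrative structure)."""
    requirement: str          # one of REQUIREMENTS
    question: str             # question text copied from the guidelines
    answer: str               # current practice, as documented by the team
    gap_identified: bool      # does the answer reveal a gap?
    priority: str = "medium"  # e.g. "high", "medium", "low"

def gaps_by_requirement(items: list[AssessmentItem]) -> dict[str, int]:
    """Count open gaps per requirement to help prioritize improvements."""
    counts = {name: 0 for name in REQUIREMENTS}
    for item in items:
        if item.gap_identified:
            counts[item.requirement] += 1
    return counts

# Example usage with a hypothetical question (not quoted from the guidelines):
items = [
    AssessmentItem(
        requirement="Transparency",
        question="Can affected users be told why the system produced a given output?",
        answer="Explanations exist internally but are not surfaced to end users.",
        gap_identified=True,
        priority="high",
    ),
]
print(gaps_by_requirement(items))
```

Counting open gaps per requirement gives the cross-functional team a simple view of where to focus first, which supports the prioritization step above.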

What Makes This Different from Other AI Ethics Frameworks

Unlike high-level principles documents, these guidelines provide specific, actionable criteria with measurable outcomes, and they explicitly connect ethical principles to technical requirements and business processes.

The multi-stakeholder development process involved experts from industry, academia, civil society, and government, producing an unusual degree of consensus on contentious issues.

Most importantly, these guidelines were designed with regulatory implementation in mind from the start, making them more legally informed than purely academic frameworks.

The assessment list format makes this immediately practical – teams can start using it today without additional interpretation or tool development.

Tags

EU, ethics, trustworthy AI, HLEG

At a glance

Published: 2019
Jurisdiction: European Union
Category: Ethics and principles
Access: Public access
