European Commission HLEG
The European Commission's High-Level Expert Group on AI (HLEG) created these guidelines as the foundational ethical framework that paved the way for Europe's regulatory approach to AI. Released in 2019, this document establishes seven concrete requirements for trustworthy AI systems and introduces a practical assessment list with over 60 specific questions. Unlike abstract ethical principles, these guidelines provide actionable criteria that organizations can use to evaluate their AI systems before deployment.
Human Agency and Oversight: AI systems should support human decision-making, not replace human judgment entirely. This includes meaningful human control and the right to human review of AI decisions.
Technical Robustness and Safety: Systems must be reliable, secure, and safe throughout their lifecycle, with fallback plans and accuracy appropriate to their context.
Privacy and Data Governance: Strong data protection measures, purpose limitation, and data quality assurance must be embedded from the design phase.
Transparency: AI systems should be explainable, with clear communication about capabilities, limitations, and decision-making processes to relevant stakeholders.
Diversity, Non-discrimination and Fairness: Systems must avoid unfair bias, ensure accessibility, and involve diverse stakeholders in their development and deployment.
Societal and Environmental Well-being: Consider broader impacts on society, democracy, and the environment, including sustainability and social consequences.
Accountability: Clear governance structures, auditability, risk assessment, and mechanisms for redress must be established.
While the EU AI Act now provides legally binding requirements, these ethics guidelines remain highly relevant. They are especially useful for:
AI Product Managers and Developers building systems that may be deployed in Europe or globally, who need concrete criteria for ethical design decisions.
Compliance and Risk Teams preparing for EU AI Act requirements, as these guidelines inform the regulatory framework and provide additional ethical context.
Ethics Committees and Review Boards seeking structured approaches to AI ethics assessment, with ready-to-use evaluation criteria.
Consultants and Auditors conducting AI ethics assessments who need comprehensive frameworks with specific, measurable requirements.
Academic Researchers studying AI governance, as this represents one of the most influential policy documents in the field.
The guidelines include a detailed assessment list organized by the seven requirements, which teams can work through one requirement at a time.
The assessment questions are designed to be answerable by teams with mixed technical backgrounds, making this more accessible than purely technical auditing frameworks.
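To illustrate how a mixed-background team might track its progress through such an assessment, here is a minimal sketch in Python. The seven requirement names come from the guidelines themselves, but the sample questions, the `AssessmentItem` structure, and the scoring scheme are simplified placeholders for illustration, not the official assessment-list wording.

```python
from dataclasses import dataclass
from typing import Optional

# The seven requirements named in the HLEG guidelines.
REQUIREMENTS = [
    "Human Agency and Oversight",
    "Technical Robustness and Safety",
    "Privacy and Data Governance",
    "Transparency",
    "Diversity, Non-discrimination and Fairness",
    "Societal and Environmental Well-being",
    "Accountability",
]

@dataclass
class AssessmentItem:
    """One checklist question under a requirement (illustrative structure)."""
    requirement: str
    question: str
    answer: Optional[bool] = None  # None means not yet assessed

def coverage(items):
    """Return (fraction of questions answered, fraction of answers that are 'yes')."""
    if not items:
        return 0.0, 0.0
    answered = [i for i in items if i.answer is not None]
    yes = sum(1 for i in answered if i.answer)
    return len(answered) / len(items), (yes / len(answered)) if answered else 0.0

# Hypothetical questions, paraphrased from the spirit of the requirements.
items = [
    AssessmentItem(REQUIREMENTS[0],
                   "Is there a documented process for human review of AI decisions?",
                   True),
    AssessmentItem(REQUIREMENTS[3],
                   "Are the system's limitations communicated to end users?",
                   None),
]
done, positive = coverage(items)
```

A spreadsheet serves the same purpose; the point is that the assessment maps naturally onto a flat list of per-requirement questions whose completion can be tracked and reported.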
Unlike high-level principles documents, this provides specific, actionable criteria with measurable outcomes. The guidelines explicitly connect ethical principles to technical requirements and business processes.
The multi-stakeholder development process involved experts from industry, academia, civil society, and government, creating unusual consensus on contentious issues.
Most importantly, these guidelines were designed with regulatory implementation in mind from the start, making them more legally informed than purely academic frameworks.
The assessment list format makes this immediately practical – teams can start using it today without additional interpretation or tool development.
Published: 2019
Jurisdiction: European Union
Category: Ethics and principles
Access: Public access