Université de Montréal
Born from a unique collaborative process involving citizens, researchers, and industry leaders, the Montreal Declaration represents one of the first attempts to democratize AI ethics. First announced in 2017 and released in its final form in 2018 after extensive public consultations in Montreal, this framework doesn't just list ethical principles: it grounds them in real human concerns about AI's impact on society. What sets it apart is its emphasis on human dignity and social justice, moving beyond typical corporate ethics frameworks to address fundamental questions about AI's role in democracy, equality, and human flourishing.
The Montreal Declaration emerged from an innovative approach to AI governance: actually asking the public what they thought. Beginning in 2017, Université de Montréal organized citizen panels and public forums where ordinary people discussed their hopes and fears about AI alongside researchers and policymakers. This wasn't just an academic exercise: it was a deliberate attempt to ensure that AI ethics reflected broader social values, not just technical or business considerations. The result was a declaration that grounds abstract ethical principles in concrete social concerns, making it particularly relevant for organizations serving diverse communities.
The Declaration centers on ten principles that prioritize human welfare and social justice:
Well-being: AI should increase individual and collective well-being by promoting human fulfillment
Respect for autonomy: People must maintain meaningful control over AI systems that affect them
Protection of privacy and intimacy: AI shouldn't erode the private sphere essential to human development
Solidarity: AI benefits should be shared fairly across society, not concentrated among elites
Democratic participation: Citizens should have a voice in AI development and deployment decisions
Equity: AI must not perpetuate or amplify discrimination and should promote equal opportunities
Diversity inclusion: AI development should involve diverse perspectives and protect cultural diversity
Prudence: AI deployment requires careful consideration of risks and unintended consequences
Responsibility: Clear accountability mechanisms must exist for AI decisions and outcomes
Sustainable development: AI should contribute to environmental sustainability and long-term human prosperity
Public sector leaders designing AI policies for diverse constituencies who need frameworks grounded in democratic values rather than purely technical considerations
Community organizations and NGOs working on social justice issues who want to engage with AI governance from a human rights perspective
Academic institutions developing AI programs who need ethical frameworks that connect technical development to broader social responsibilities
Consulting firms and policy advisors helping organizations implement responsible AI practices with strong community engagement components
International organizations working on AI governance in contexts where citizen participation and cultural diversity are priorities
Unlike many AI ethics frameworks developed by tech companies or standards bodies, the Montreal Declaration explicitly positions itself as a counterweight to purely market-driven AI development. It's unapologetically political in the best sense—acknowledging that AI governance is fundamentally about power, justice, and democracy. The Declaration doesn't shy away from difficult questions about inequality, surveillance, or corporate responsibility. It also uniquely emphasizes the importance of cultural diversity and democratic participation in AI governance, making it particularly valuable for organizations operating in multicultural contexts or democratic institutions.
Start by using the Declaration's principles as a lens for examining your current AI initiatives. Ask: Does this project enhance human well-being beyond just efficiency? Are we meaningfully including affected communities in decision-making? The Declaration works best as a values check rather than a compliance checklist.
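To make that values check concrete, here is a minimal sketch in Python of one way a review team could capture the exercise: the ten principles mapped to reflective questions, with unanswered ones surfaced for discussion. The question wording, the VALUES_CHECK mapping, and the values_check helper are illustrative assumptions, not an official companion to the Declaration.

# Minimal sketch: the ten principles expressed as reflective review questions.
# All question wording below is illustrative, not an official mapping from the Declaration.
VALUES_CHECK = {
    "Well-being": "Does the project improve individual or collective well-being beyond efficiency gains?",
    "Respect for autonomy": "Do affected people keep meaningful control over how the system influences them?",
    "Protection of privacy and intimacy": "Could the system erode the private sphere of the people it touches?",
    "Solidarity": "Are the benefits shared fairly, or concentrated among a few stakeholders?",
    "Democratic participation": "Were affected communities consulted before deployment decisions were made?",
    "Equity": "Could the system perpetuate or amplify discrimination?",
    "Diversity inclusion": "Were diverse perspectives involved in design, and is cultural diversity protected?",
    "Prudence": "Have risks and unintended consequences been examined before rollout?",
    "Responsibility": "Is there a clear accountability path for the system's decisions and outcomes?",
    "Sustainable development": "Does the project account for its environmental and long-term social footprint?",
}

def values_check(project_name: str, answers: dict[str, str]) -> list[str]:
    """Return the principles the review team has not yet addressed for this project."""
    open_items = [p for p in VALUES_CHECK if not answers.get(p, "").strip()]
    print(f"{project_name}: {len(VALUES_CHECK) - len(open_items)}/{len(VALUES_CHECK)} principles discussed")
    return open_items

if __name__ == "__main__":
    # Hypothetical project with only one principle discussed so far.
    draft_answers = {"Equity": "External bias audit scheduled before launch."}
    for principle in values_check("Eligibility triage pilot", draft_answers):
        print(f"  Open question ({principle}): {VALUES_CHECK[principle]}")

Kept as open questions rather than pass/fail scores, this stays closer to the Declaration's intent as a values check rather than a compliance checklist.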
For policy development, the citizen engagement model that created the Declaration offers a template for inclusive AI governance processes. Consider organizing similar consultations with your stakeholders before implementing major AI systems.
The principles translate well into procurement criteria, hiring practices, and vendor evaluation processes. Organizations have successfully used the Declaration's framework to develop AI ethics review boards that include community representatives alongside technical experts.
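Sketching what that could look like in practice, the following hypothetical Python snippet folds the principles into a vendor-evaluation rubric that a mixed review board might score together. The 0-3 scale, the 0.6 shortlist threshold, and the VendorAssessment and shortlist names are assumptions chosen for illustration; the Declaration itself prescribes no scoring scheme.

# Minimal sketch: scoring vendors against the Declaration's principles.
# The scale and threshold below are illustrative placeholders.
from dataclasses import dataclass

PRINCIPLES = [
    "Well-being", "Respect for autonomy", "Protection of privacy and intimacy",
    "Solidarity", "Democratic participation", "Equity", "Diversity inclusion",
    "Prudence", "Responsibility", "Sustainable development",
]

@dataclass
class VendorAssessment:
    vendor: str
    scores: dict[str, int]  # reviewer scores per principle, 0 (no evidence) to 3 (strong evidence)

    def coverage(self) -> float:
        """Share of the maximum possible score across all ten principles."""
        total = sum(self.scores.get(p, 0) for p in PRINCIPLES)
        return total / (3 * len(PRINCIPLES))

def shortlist(assessments: list[VendorAssessment], minimum: float = 0.6) -> list[str]:
    """Keep vendors whose principle coverage meets the review board's agreed threshold."""
    return [a.vendor for a in assessments if a.coverage() >= minimum]

if __name__ == "__main__":
    candidates = [
        VendorAssessment("Vendor A", {p: 2 for p in PRINCIPLES}),
        VendorAssessment("Vendor B", {"Equity": 3, "Protection of privacy and intimacy": 1}),
    ]
    print(shortlist(candidates))  # only Vendor A clears the illustrative threshold

Whatever scale a board adopts, the value comes from documenting the evidence behind each score and discussing it with community representatives, not from the number itself.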
How does this relate to other AI ethics frameworks? The Montreal Declaration complements technical frameworks like NIST's AI RMF by providing the social and political context for responsible AI. While NIST focuses on risk management processes, Montreal focuses on the values that should guide those processes.
Is this legally binding? No, it's a voluntary framework. However, several organizations and municipalities have formally adopted its principles, and it has influenced AI policy development in Quebec and other jurisdictions.
How do I measure compliance with these principles? The Declaration is intentionally values-focused rather than metrics-focused. Success is measured through ongoing dialogue with affected communities, regular ethical reviews, and demonstrated commitment to the principles in decision-making processes rather than through quantitative KPIs.
Published
2018
Jurisdiction
Canada
Category
Ethics and principles
Access
Public access