Amnesty International & Access Now
The Toronto Declaration stands out as the first major civil society statement specifically addressing human rights in the age of machine learning. Born from a collaboration between Amnesty International and Access Now, this 2018 declaration doesn't just raise concerns: it provides a concrete framework for ensuring AI systems respect fundamental human rights. Unlike government-led initiatives or industry self-regulation, it emerges from civil society, offering an advocacy-oriented perspective that emphasizes accountability and justice over technical compliance.
What makes the Toronto Declaration unique is its origin and approach. While most AI governance frameworks come from governments, standards bodies, or tech companies, this declaration represents the collective voice of human rights organizations worldwide. It is not concerned with competitive advantage or regulatory compliance; it is focused squarely on protecting people from algorithmic harm.
The declaration takes a rights-based approach, grounding its recommendations in established international human rights law rather than creating new principles from scratch. This connection to existing legal frameworks gives it both moral authority and practical grounding in decades of human rights jurisprudence.
The declaration identifies four fundamental rights under threat from machine learning systems:
Right to equality and non-discrimination: Goes beyond simple bias detection to demand proactive measures ensuring ML systems don't perpetuate or amplify existing societal inequalities. This includes intersectional discrimination affecting people with multiple marginalized identities.
Right to privacy: Addresses both data privacy and the broader concept of informational self-determination. The declaration argues that privacy isn't just about data protection—it's about maintaining human autonomy in an algorithmic world.
Right to due process: Emphasizes that people affected by algorithmic decisions must have meaningful opportunities to understand, challenge, and appeal those decisions. This goes well beyond simple "explainability" requirements.
Right to an effective remedy: Insists that when ML systems cause harm, people must have access to meaningful redress—not just technical fixes, but genuine accountability and compensation where appropriate.
Civil society organizations advocating for responsible AI will find this declaration provides both philosophical grounding and practical talking points for engaging with policymakers and tech companies.
Policy makers can use this framework to understand how existing human rights obligations apply to new AI systems, helping bridge the gap between traditional rights frameworks and emerging technologies.
Ethics professionals in tech companies will benefit from the declaration's outside perspective on AI impacts, especially when building internal advocacy for stronger human rights protections.
Legal professionals working on AI governance can leverage the declaration's grounding in established human rights law to strengthen arguments about AI accountability and remedy.
Researchers studying AI impacts on society will find the declaration useful for understanding how human rights frameworks can be applied to evaluate algorithmic systems.
Start by using the declaration's framework to audit existing AI systems. For each system, ask: Does this respect equality? Does it protect privacy? Can people meaningfully challenge decisions? Is there effective remedy for harm?
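As a rough illustration, the four audit questions above can be captured in a simple checklist structure. This is a minimal sketch, not anything prescribed by the declaration itself; the `RightsAudit` class, its field names, and the example system name are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class RightsAudit:
    """Hypothetical checklist applying the declaration's four rights to one ML system."""
    system_name: str
    respects_equality: bool = False     # proactive non-discrimination measures in place?
    protects_privacy: bool = False      # data protection and informational autonomy?
    supports_due_process: bool = False  # can affected people understand and challenge decisions?
    provides_remedy: bool = False       # is there meaningful redress when harm occurs?

    def gaps(self) -> list:
        """Return the rights this system does not yet satisfy."""
        checks = {
            "equality and non-discrimination": self.respects_equality,
            "privacy": self.protects_privacy,
            "due process": self.supports_due_process,
            "effective remedy": self.provides_remedy,
        }
        return [right for right, ok in checks.items() if not ok]

# Example: a hypothetical loan-scoring model that has addressed bias and privacy
# but offers no appeal mechanism or redress.
audit = RightsAudit("loan-scoring-model",
                    respects_equality=True,
                    protects_privacy=True)
print(audit.gaps())  # → ['due process', 'effective remedy']
```

Recording audits in a structured form like this makes it easy to track gaps across a portfolio of systems and to report progress to stakeholders, though the substantive judgment behind each yes/no answer is where the real work lies.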
The declaration works particularly well as an advocacy tool. Its civil society origins and human rights grounding give it credibility when engaging with skeptical stakeholders who might dismiss industry self-regulation or academic frameworks.
Consider pairing the Toronto Declaration with more technical frameworks like the NIST AI RMF or algorithmic impact assessments. The declaration provides the "why" (human rights protection) while technical frameworks provide the "how" (specific implementation steps).
Use the declaration to engage with affected communities. Its emphasis on meaningful participation and remedy makes it a natural starting point for inclusive AI governance processes.
Is this legally binding? No, it's a civil society declaration, not a law or regulation. However, it's grounded in existing human rights law, which is legally binding in many jurisdictions. Think of it as an interpretation of how existing rights apply to AI systems.
How does this relate to the EU AI Act or other AI regulations? The declaration complements regulatory frameworks by providing human rights grounding for policy choices. Many provisions in the EU AI Act and similar regulations can be traced back to principles articulated in declarations like this one.
Can companies use this for compliance purposes? While not a compliance checklist, companies can use the declaration's framework to identify human rights risks in their AI systems and develop appropriate safeguards. It's particularly valuable for stakeholder engagement and demonstrating commitment to responsible AI beyond legal minimums.
How do I measure success using this framework? Look for concrete outcomes: Are fewer people experiencing discrimination from AI systems? Do people have meaningful ways to challenge algorithmic decisions? Are communities most at risk from AI harm meaningfully involved in governance decisions? The declaration emphasizes substantive equality and remedy, not just procedural compliance.
Published: 2018
Jurisdiction: Global
Category: Ethics and principles
Access: Public access