United Nations
The UN Secretary-General's High-Level Advisory Body on AI represents the most ambitious attempt yet to create a truly global consensus on AI governance. Launched in 2023 and delivering its final report in 2024, this 39-member body brought together an unprecedented coalition of government officials, tech leaders, civil society advocates, and academics from across the globe. Unlike typical UN initiatives that focus on broad principles, this body was specifically tasked with producing actionable recommendations for international AI coordination, addressing everything from safety standards to equitable access to AI benefits.
Creating international agreement on AI governance faces unique obstacles that this advisory body was designed to tackle head-on. Unlike climate change or nuclear weapons, AI development is happening in real time across dozens of countries with vastly different regulatory philosophies, technical capabilities, and economic interests. The body's approach recognized that effective AI governance requires more than government-to-government agreements: it needs buy-in from the private sector developing the technology, from civil society groups representing affected communities, and from technical experts who understand the rapidly evolving capabilities.
The advisory body's composition reflects this reality, with members including former government ministers, current tech executives, leading AI researchers, and representatives from developing nations often left out of AI governance discussions dominated by the US, EU, and China.
The advisory body's final report centers on a set of core recommendations, grouped below into five thematic areas, that collectively form what they call a "Global Digital Compact" for AI:
Institutional Architecture: Establishment of an International Scientific Panel on AI (similar to the IPCC for climate) and a Global AI Governance Framework that countries can adapt to their specific contexts while maintaining international compatibility.
Safety and Risk Management: Development of shared standards for AI safety testing, incident reporting mechanisms, and coordinated responses to AI-related risks that cross borders.
Inclusive Development: Concrete measures to ensure developing nations can participate in and benefit from AI advancement, including technology transfer mechanisms and capacity-building programs.
Human Rights Integration: Embedding human rights considerations into AI development and deployment processes, with particular attention to protecting vulnerable populations.
Economic Governance: A framework for addressing AI's impact on global labor markets, trade, and economic inequality between nations with different AI capabilities.
The recommendations notably avoid prescriptive technical standards, instead focusing on governance mechanisms that can adapt as AI technology evolves.
Unlike regional approaches such as the EU AI Act or national strategies, this initiative explicitly designed its recommendations to work across different political systems, economic development levels, and cultural contexts. The advisory body spent significant effort on what they call "operationalizable universality": creating principles concrete enough to guide real policy decisions while flexible enough to be implemented in dozens of different national contexts.
The body also distinguished itself by directly engaging with ongoing AI safety research and incorporating technical experts' input on emerging risks like advanced AI systems' potential for unexpected capabilities or misuse.
Government Officials developing national AI strategies will find detailed guidance on international coordination mechanisms and templates for cross-border cooperation agreements.
International Organizations working on technology governance can use the framework as a foundation for their own AI-related initiatives and understand how to align with broader UN digital governance efforts.
Policy Researchers and Think Tanks studying AI governance will benefit from the extensive stakeholder consultation process documentation and comparative analysis of different regulatory approaches.
Private Sector Leaders in AI development companies can understand emerging international expectations for responsible AI development and anticipate future regulatory coordination.
Civil Society Organizations advocating for responsible AI can reference the human rights framework and inclusive development recommendations in their own policy advocacy.
The advisory body paired its recommendations with a specific implementation timeline. The immediate next steps (2024-2025) focus on establishing the International Scientific Panel on AI and beginning pilot programs for the technology transfer mechanisms. Medium-term goals (2025-2027) include operationalizing the Global AI Governance Framework through bilateral and multilateral agreements. The longer-term goal (2027-2030) is a functioning international coordination mechanism that can adapt to whatever AI developments emerge.
Importantly, the roadmap acknowledges that not all countries will move at the same pace and provides mechanisms for early adopters to move forward while keeping doors open for others to join later.
Published: 2024
Jurisdiction: Global
Category: International initiatives
Access: Public access