Government of Japan
The Hiroshima AI Process represents a historic milestone in international AI governance: the first multilateral framework specifically designed for generative AI systems. Born from Japan's G7 presidency, the initiative has united the world's leading democracies around shared principles for governing AI technologies that can create text, images, code, and other content. Unlike previous AI governance efforts that addressed artificial intelligence broadly, this process zeroes in on the distinct challenges and opportunities of generative AI, such as large language models and multimodal systems.
The initiative emerged from Japan's 2023 G7 presidency in Hiroshima, where leaders recognized that generative AI's rapid advancement demanded immediate international coordination. What started as urgent discussions among G7 digital ministers evolved into a comprehensive process involving multiple stakeholders, from tech companies to civil society organizations. The framework achieved something unprecedented: voluntary yet public commitments from major AI developers alongside government policy alignment across different regulatory approaches.
The timing wasn't coincidental. As ChatGPT and similar systems captured global attention in late 2022 and early 2023, policymakers faced a governance gap: existing AI principles were too general, while emerging regulations like the EU AI Act were still in development. The Hiroshima AI Process filled this void with targeted, implementable guidance.
Most international AI agreements stop at high-level principles. The Hiroshima AI Process goes further by establishing concrete commitments and implementation pathways:
Developer commitments with teeth: The process secured voluntary but public commitments from leading AI companies to implement specific safety measures, transparency requirements, and risk assessment protocols. These aren't merely aspirational; they include timelines and reporting mechanisms.
Cross-jurisdictional coordination: Rather than creating another competing standard, the framework explicitly bridges different regulatory approaches. It acknowledges that the EU's rights-based approach, the US's innovation-focused strategy, and Japan's human-centric vision can coexist and reinforce each other.
Focus on inclusion: The "inclusive governance" framing isn't marketing speak; it reflects genuine efforts to involve developing countries, small and medium-sized enterprises, and civil society in shaping AI governance, including capacity-building programs and technical assistance for implementation.
Risk-based governance: Tailored approaches based on AI system capabilities and deployment contexts, with special attention to frontier models that pose systemic risks.
Transparency and explainability: Requirements for AI system documentation, capability disclosure, and clear communication about AI-generated content.
Safety and security: Comprehensive testing protocols, incident reporting systems, and cybersecurity measures throughout the AI lifecycle.
Human rights and democratic values: Protection of fundamental rights, prevention of discriminatory outcomes, and preservation of human agency in AI-assisted decisions.
Innovation enablement: Regulatory approaches that foster continued innovation while managing risks, including support for research and development.
Government officials and policymakers developing national AI strategies or participating in international AI governance discussions will find practical guidance for implementation and coordination mechanisms.
AI developers and technology companies, especially those working on generative AI systems, need to understand these emerging international expectations and voluntary commitment frameworks.
International organizations and multilateral institutions can use this as a template for AI governance coordination and a foundation for expanding participation beyond G7 countries.
Researchers and academics studying AI governance will find valuable insights into how international cooperation on emerging technologies can move from principles to practice.
Civil society organizations engaged in AI policy advocacy can leverage the framework's inclusive governance mechanisms and human rights protections.
The framework provides multiple entry points depending on your role. Governments can begin with policy alignment assessments and stakeholder consultation processes. Companies can start by reviewing the voluntary commitments and conducting gap analyses against current practices. International organizations can explore partnership opportunities and capacity-building programs.
The process includes regular review mechanisms and progress reporting, making it a living framework that evolves with technological developments and implementation experience.
Published: 2024
Jurisdiction: Global
Category: International initiatives
Access: Public access