The G7 Hiroshima AI Process is the first major international framework designed specifically to govern advanced AI systems and generative AI technologies. Launched at the May 2023 Hiroshima Summit amid the rapid rise of large language models, the process produced both high-level International Guiding Principles for AI governance and a practical International Code of Conduct, released in October 2023, that developers of advanced AI systems are expected to follow. Unlike previous AI governance efforts that focused on general AI ethics, this process directly addresses the unique challenges posed by frontier AI systems capable of generating human-like content across multiple domains.
The timing of this framework wasn't coincidental. By May 2023, ChatGPT and similar systems had fundamentally shifted global perceptions of AI capabilities and risks. Meeting in Hiroshima, a city whose history carries its own symbolism about technology's dual potential, the G7 leaders recognized that existing governance approaches were insufficient for AI systems that could generate convincing text, images, code, and other content at scale.
The framework emerged from a recognition that while individual nations were developing their own AI regulations, the global nature of AI development—with models trained across borders and deployed worldwide—demanded coordinated international action. The G7 chose to focus specifically on "advanced AI systems," acknowledging that not all AI requires the same level of governance attention.
Dual Structure: The framework operates on two levels—broad guiding principles for governments and specific conduct expectations for AI developers. This allows for both policy flexibility and operational clarity.
Generative AI Focus: Unlike comprehensive AI governance frameworks, this specifically targets generative AI and foundation models, acknowledging their unique risks around misinformation, content authenticity, and societal impact.
Developer-Centric Approach: The code of conduct directly addresses AI developers and deployers, creating expectations around risk assessment, safety testing, incident reporting, and transparency, which makes it more actionable than principles-only frameworks; a sketch of what such transparency documentation might look like follows this list.
Voluntary but Influential: While not legally binding, the G7's economic and technological influence means this framework carries significant soft power, especially for companies seeking to operate in G7 markets.
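The transparency expectation noted above is typically met through structured public documentation of what a system can and cannot do. The sketch below is a minimal illustration only: the TransparencyReport class and its field names are hypothetical and are not a schema defined by the G7 documents.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class TransparencyReport:
    """Illustrative record of the public documentation a developer of an
    advanced AI system might publish. Field names are hypothetical."""
    system_name: str
    version: str
    release_date: date
    capabilities: list[str] = field(default_factory=list)           # what the system is designed to do
    limitations: list[str] = field(default_factory=list)            # known failure modes and caveats
    appropriate_domains: list[str] = field(default_factory=list)    # intended areas of use
    inappropriate_domains: list[str] = field(default_factory=list)  # uses the developer advises against
    safety_evaluations: dict[str, str] = field(default_factory=dict)  # evaluation name -> outcome summary

# Example entry with placeholder values.
report = TransparencyReport(
    system_name="example-foundation-model",
    version="1.0",
    release_date=date(2024, 1, 15),
    capabilities=["multilingual text generation", "code completion"],
    limitations=["may produce plausible but incorrect statements"],
    appropriate_domains=["drafting assistance", "software prototyping"],
    inappropriate_domains=["unsupervised medical or legal advice"],
    safety_evaluations={"red-team misuse testing": "completed; findings mitigated before release"},
)
print(report.system_name, report.version)
```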
Guiding Principles for Organizations:
Risk Management: Identify, evaluate, and mitigate risks across the AI lifecycle, from pre-deployment testing through monitoring for misuse after release.
Transparency: Publicly report advanced AI systems' capabilities, limitations, and domains of appropriate and inappropriate use.
Information Sharing: Share risk information and report incidents responsibly among developers, governments, academia, and civil society.
Security and Authenticity: Invest in robust security controls and develop content authentication and provenance mechanisms such as watermarking.
Responsible Investment: Prioritize research on societal, safety, and security risks, support international technical standards, and protect personal data and intellectual property.
Developer Code of Conduct Highlights:
Pre-Deployment Testing: Conduct internal and independent external testing, including red-teaming, to identify and mitigate risks before release.
Post-Deployment Monitoring: Track vulnerabilities, incidents, and patterns of misuse after release, and act on what is found.
Public Reporting: Publish transparency documentation describing capabilities, limitations, and appropriate and inappropriate uses.
Governance and Data Protections: Disclose AI governance and risk-management policies, and implement safeguards for personal data and intellectual property.
Provenance Mechanisms: Deploy content authentication and labeling, such as watermarking, so users can identify AI-generated content.
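As a rough illustration of how a compliance team might track conformance with these actions internally, the sketch below uses a plain Python mapping. The keys, descriptions, and the conformance_gaps helper are hypothetical and paraphrase the published actions; they are not part of the framework itself.

```python
# Hypothetical tracker for code-of-conduct actions (the published code
# lists eleven; a subset is shown). Keys and structure are illustrative.
CODE_OF_CONDUCT_ACTIONS = {
    "pre_deployment_testing": "Test and red-team systems to identify and mitigate risks before release",
    "post_deployment_monitoring": "Monitor for vulnerabilities, incidents, and misuse after release",
    "public_reporting": "Publish capabilities, limitations, and appropriate-use documentation",
    "governance_policies": "Disclose AI governance and risk-management policies",
    "content_provenance": "Deploy content authentication and provenance mechanisms such as watermarking",
    "data_protection": "Safeguard personal data and intellectual property",
}

def conformance_gaps(evidence: dict[str, str]) -> list[str]:
    """Return descriptions of actions that have no documented evidence yet."""
    return [desc for key, desc in CODE_OF_CONDUCT_ACTIONS.items() if key not in evidence]

# Example: only two actions have documented evidence so far.
current_evidence = {
    "pre_deployment_testing": "2024 red-team report (internal wiki)",
    "public_reporting": "model documentation v1.0 on the release page",
}
for gap in conformance_gaps(current_evidence):
    print("Missing evidence:", gap)
```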
AI Company Leadership: CTOs, Chief AI Officers, and compliance teams at organizations developing or deploying advanced AI systems, particularly those operating in or serving G7 markets.
Government AI Policy Teams: Officials developing national AI strategies who need to align with international frameworks and coordinate with G7 partners on AI governance approaches.
Enterprise AI Adopters: Organizations implementing generative AI solutions who want to understand international best practices and prepare for evolving regulatory expectations.
AI Researchers and Developers: Technical teams working on foundation models or advanced AI systems who need practical guidance on safety, testing, and responsible development practices.
Legal and Compliance Professionals: Those advising on AI governance who need to understand how this international framework might influence future regulations and enforcement priorities.
What This Framework Does: Provides clear expectations for responsible AI development, creates a foundation for future bilateral and multilateral agreements, and establishes common language for international AI governance discussions.
What It Doesn't Do: This isn't legally enforceable, doesn't specify technical standards or testing methodologies, and lacks detailed guidance for specific AI applications or industry sectors.
Timeline Expectations: The guiding principles and code of conduct were agreed in late 2023, but full adoption and integration into national policies and corporate practices are expected to unfold over 2024-2025, with regular reviews and updates as AI technology evolves.
The framework is explicitly framed as a living document, recognizing that AI governance must evolve alongside rapidly advancing technology capabilities.
Published: 2023
Jurisdiction: Global
Category: International initiatives
Access: Public access