Structured governance or risk frameworks, not legal texts.
26 resources
The NIST AI Risk Management Framework provides a structured approach to managing AI risks throughout the AI lifecycle. It consists of four core functions: Govern, Map, Measure, and Manage. The framework is voluntary and designed to be adaptable across sectors, use cases, and organizational contexts.
The AI RMF Playbook provides practical guidance for implementing the NIST AI Risk Management Framework. It includes suggested actions, documentation practices, and implementation examples for each subcategory across the Govern, Map, Measure, and Manage functions.
The OECD Principles on AI were the first intergovernmental standard on AI. They promote AI that is innovative and trustworthy and that respects human rights and democratic values. The five values-based principles cover inclusive growth and well-being; human-centered values and fairness; transparency and explainability; robustness, security, and safety; and accountability.
Singapore's Model AI Governance Framework provides detailed guidance on implementing responsible AI practices. It covers internal governance structures, decision-making models, operations management, and stakeholder interaction. The framework emphasizes human-centricity and builds trust through transparency.
Microsoft's Responsible AI Standard defines requirements for developing and deploying AI systems responsibly. It operationalizes Microsoft's AI principles into specific goals, requirements, and tools across accountability, transparency, fairness, reliability and safety, privacy and security, and inclusiveness.
The Model AI Governance Framework 2024 is Singapore's updated governance framework, published by the Infocomm Media Development Authority (IMDA) to address the latest developments in generative AI. Building on Singapore's existing AI governance approaches, it guides organizations deploying generative AI systems on risk management, ethical considerations, and operational governance. The framework is aimed at businesses, government agencies, and AI practitioners who need practical guidance for implementing responsible AI, and reflects Singapore's strategy of balancing trust and innovation through clear governance structures for the safe and ethical deployment of generative AI.
Google Cloud's Responsible AI Framework outlines the company's approach to developing and deploying AI systems responsibly. It combines core AI principles, implementation practices, governance structures, and technical tools to help organizations build trustworthy AI applications, covering fairness, accountability, transparency, privacy, and safety across the AI lifecycle from development to deployment. The framework is particularly useful for enterprise organizations on Google Cloud that need structured guidance on responsible AI, and for practitioners seeking industry-standard approaches to AI governance and risk management in cloud environments.
The Partnership on AI is a multi-stakeholder organization founded by major technology companies to develop best practices for AI development and deployment. It explores the intersection of AI and fundamental human values, publishes guidelines on algorithmic equity, explainability, and accountability, and promotes frameworks for co-creating AI solutions with affected communities throughout research and design. The partnership is a key resource for technology companies, researchers, policymakers, and civil society organizations seeking to align AI development with human values and societal benefit.
The NIST AI Risk Management Framework (AI RMF 1.0) provides a structured approach for managing artificial intelligence risks across organizations and sectors. It is designed as a living document that offers guidance for identifying, assessing, and mitigating AI-related risks in various applications and contexts.
The NIST AI Risk Management Framework provides guidelines for identifying, assessing, and managing risks associated with AI systems. It gives organizations a structured way to develop responsible AI practices and ensure their AI systems are trustworthy and aligned with organizational values.
Singapore's framework of guidelines for the ethical and responsible development and deployment of AI technologies across industries. It establishes foundational principles and practical guidance for organizations implementing AI systems in Singapore.
A set of ethical guidelines developed by the Organisation for Economic Co-operation and Development (OECD) that promote the responsible development and use of artificial intelligence. Adopted in May 2019 and amended in 2024, these principles outline a comprehensive framework for AI governance and ethical AI implementation.
The OECD AI Principles were the first intergovernmental standard designed to promote innovative and trustworthy artificial intelligence. They provide guidance on risk mitigation measures, with emphasis on transparency and on comparability across different implementations.
Microsoft's responsible AI framework outlining principles and approaches for ethical AI development. It covers transparency, fairness, human-AI collaboration, privacy, security, and safety considerations in AI systems.
Microsoft's framework for responsible AI development and deployment, outlining ethical policies and practical implementation strategies. The resource provides guidance on planning, strategizing, and scaling AI projects responsibly, and contributes to broader industry standards through Microsoft's collaboration with the Partnership on AI.
Microsoft's Responsible AI Standard provides a framework for building AI systems based on six core principles: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. This resource explains how to implement responsible AI practices within Azure Machine Learning services.
Google's AI Principles establish governance frameworks that guide responsible AI development and deployment across the company. The principles inform processes covering model development, application deployment, and post-launch monitoring to ensure responsible AI practices.
Google's responsible AI framework that outlines AI principles to guide ethical AI development. The framework emphasizes collaboration with researchers, academia, governments, and civil society to establish appropriate boundaries and policies for AI development and deployment.
IBM's responsible AI framework, developed under the guidance of the IBM Responsible Technology Board and refined over five years of AI evolution. It provides ethical guidelines for AI development and implementation to help organizations innovate responsibly with AI technology.
IBM's AI ethics governance framework, which guides its AI Ethics Board in reviewing AI use cases for alignment with company principles and regulatory requirements. The framework is integrated into IBM's product offerings to advance ethical AI governance practices.
IBM's guide to AI ethics that provides a framework for data scientists and researchers to build AI systems ethically. The resource explains fundamental concepts of AI ethics and how to implement ethical practices in AI development to benefit society.
The AI Governance Alliance is a World Economic Forum initiative that brings together stakeholders to develop governance frameworks for artificial intelligence. It serves as a platform for collaborative efforts in establishing risk management approaches and policy recommendations for AI governance on a global scale.
This briefing paper series provides guidance to stakeholders involved in AI governance and regulation. It establishes foundational principles for the World Economic Forum's AI Governance Alliance and its future initiatives focused on building resilient and inclusive AI governance frameworks.
This white paper provides policy-makers and regulators with implementable strategies for resilient generative AI governance through a comprehensive 360-degree framework. The framework addresses regulatory gaps and offers practical guidance for governing generative AI technologies at scale.
A framework developed by the Partnership on AI that gives model providers guidelines for developing and deploying AI models responsibly while promoting societal safety. It emphasizes collective action and adaptation to evolving AI capabilities and use cases.
An AI use policy for public service organizations that draws on best practices from leading non-profit organizations, including the Emerson Collective, Center for Democracy and Technology, MacArthur Foundation, and Bill & Melinda Gates Foundation. The policy focuses on understanding and managing the impact of artificial intelligence, particularly generative AI, in public sector contexts.