Foundational or influential research.
19 resources
A comprehensive annual report tracking AI progress across research and development, technical performance, ethics, economy, education, and policy. Provides data-driven insights into AI trends and governance developments.
A comprehensive academic survey of AI governance approaches, frameworks, and mechanisms. Reviews governance models across jurisdictions and analyzes effectiveness of different regulatory approaches.
Brookings Institution's collection of AI policy research covering governance, regulation, ethics, and societal impacts. Includes policy briefs, reports, and expert commentary on AI governance issues.
Empirical research examining algorithmic auditing practices across industry and academia. Analyzes audit methodologies, findings, and the effectiveness of different auditing approaches.
This scoping review synthesizes and critically reflects on research related to responsible AI governance. The authors developed a conceptual framework for responsible AI governance that encompasses structural and relational dimensions based on their comprehensive review of existing literature.
A systematic literature review that analyzes and synthesizes current AI governance solutions, including frameworks, tools, models, and policies. The study examines 28 research papers to identify challenges in existing AI governance solutions, framing its analysis around four specific governance questions.
A research repository from the Centre for the Governance of AI containing studies on AI governance topics. Includes research on AI agents' capabilities and surveys of local US policymakers' views on AI governance issues.
This research paper examines algorithmic accountability in machine learning systems, focusing on how stakeholders can effectively demand accountability from algorithmic decision-making processes. The study explores the relationship between perceived system accountability and user trust, demonstrating how accountability measures can positively influence user satisfaction and enable proactive assessment of ML system impacts.
This report examines the challenges of algorithmic accountability and the need for new approaches to scrutinize algorithm performance and assess output reliability. It addresses accountability deficits arising from procedural ambiguity and lack of transparency in algorithmic decision-making systems.
A comprehensive toolkit developed by Amnesty International's Algorithmic Accountability Lab that synthesizes research and advocacy work on state use of automated systems. The resource focuses on issues related to algorithmic accountability from a human rights perspective, providing tools for researching and campaigning on automated decision-making systems used by governments.
An empirical research study examining the perspectives of AI practitioners and lawmakers on AI ethics principles. Both groups identify transparency, accountability, and privacy as the most critical principles.
An empirical research study of AI practitioners' and lawmakers' perspectives on key ethical principles in artificial intelligence, confirming that both groups rank transparency, accountability, and privacy as the most critical.
This research paper presents the first empirical study examining perceptions of AI ethics principles and challenges among 99 AI practitioners and lawmakers from twenty countries across five continents. The study provides cross-cultural insights into how different stakeholders view AI ethics implementation and governance challenges.
Stanford HAI is a research institute focused on human-centered artificial intelligence research and policy. The institute produces resources like the AI Index report to provide data and insights for policymakers, researchers, and the public to make informed decisions about AI development and governance.
Stanford Human-Centered AI Institute's research portal featuring studies on responsible AI practices and corporate adoption. The resource offers expanded coverage of AI's role in science and medicine, along with opportunities for faculty to participate in AI governance research.
A section of Stanford HAI's 2025 AI Index Report focusing on responsible AI practices and evaluation methods. The report discusses the evolution of AI model evaluation benchmarks, particularly those aimed at assessing factuality and truthfulness, and highlights newer comprehensive evaluation frameworks that have emerged in response to limitations of earlier benchmarks.
The Brookings Institution's AI and Emerging Technology Initiative conducts forward-thinking research on artificial intelligence policy challenges. The initiative produces actionable recommendations through rigorous, interdisciplinary analysis to help leaders address pressing policy issues related to AI governance and emerging technologies.
A Georgetown University course guide for AI and National Security studies that compiles policy research resources. The guide references the Brookings Institution's AIET Initiative research on governance of transformative technologies, drawing from multiple research programs to identify governance approaches.
The Brookings Institution's artificial intelligence research hub featuring policy analysis and governance studies on AI regulation and oversight. The resource includes research from fellows in governance studies and technology innovation centers focusing on AI policy development and implementation.