Congressional Research Service
This Congressional Research Service report dissects Executive Order 14110, the most comprehensive federal AI policy framework to date. Published shortly after the Biden Administration's landmark October 2023 order, the analysis translates more than 60 pages of executive policy into digestible insights for practitioners who need to understand what the order actually requires. Unlike the executive order itself, the CRS report organizes the sprawling policy into clear themes, identifies key implementation deadlines, and explains how different provisions interact across federal agencies. It is the authoritative government analysis of a policy that affects everything from federal AI procurement to national security applications.
The executive order establishes a multi-layered approach that fundamentally changes how AI is governed at the federal level. The report reveals how the order creates new oversight mechanisms, including mandatory safety evaluations for large AI models and standardized impact assessments for federal AI use. It introduces the concept of "dual-use foundation models," AI systems that could pose national security risks, and subjects them to specific reporting requirements when they exceed defined computational thresholds.
The order doesn't just regulate; it accelerates AI development in strategic areas while imposing guardrails on high-risk applications. This includes fast-tracking visas for AI talent, establishing AI safety institutes, and requiring watermarking for government-generated AI content.
One of the report's most valuable contributions is mapping out the cascade of deadlines and agency assignments. Within 90 days of the order, NIST must establish AI safety and security standards. The Department of Commerce gets 365 days to develop guidance on dual-use model evaluations. Meanwhile, OMB has 180 days to issue government-wide AI use policies.
The report clarifies which agencies lead on which aspects: DHS handles critical infrastructure protection, while the Department of Energy oversees AI's intersection with power systems. This division of labor matters for organizations trying to determine which agency guidelines will affect their operations.
Unlike sectoral approaches or voluntary frameworks, this executive order creates government-wide mandates with teeth. The report emphasizes how the order leverages existing authorities, particularly the Defense Production Act, to compel private sector compliance with safety evaluations and reporting requirements for the largest AI models.
The order also breaks new ground by addressing AI's intersection with civil rights, worker displacement, and algorithmic discrimination in a unified framework rather than leaving these issues to individual agencies to address piecemeal.
The report serves several audiences:

Government contractors and federal suppliers who need to understand new AI-related compliance requirements and procurement standards that will affect federal contracting opportunities.
Federal agency staff and government officials responsible for implementing AI policies within their organizations or advising leadership on AI governance requirements.
AI developers and technology companies whose models may trigger the executive order's reporting thresholds or who want to understand federal AI safety expectations.
Policy researchers and legal practitioners tracking the evolution of federal AI regulation and its relationship to state laws and international frameworks.
Risk and compliance professionals in organizations that either use AI systems or may be affected by federal AI policy implementations across sectors.
The report helps clarify that the executive order doesn't ban or restrict most AI development; it focuses on the most capable models and highest-risk applications. Many AI systems won't trigger any new requirements.
The order also doesn't override existing sector-specific regulations. Instead, it works alongside FDA device regulations, financial services rules, and other established frameworks. The CRS analysis explains how these layers interact rather than conflict.
Additionally, while the order creates new reporting requirements for some large AI models, it doesn't establish a pre-approval process for AI development or deployment in most cases.
Published
2023
Jurisdiction
United States
Category
Regulations and laws
Access
Public access