The FDA's 2021 Action Plan represents a pivotal shift in how AI-enabled medical devices will be regulated in the United States. This comprehensive roadmap tackles the unique challenge of regulating software that learns and evolves after deployment—a fundamental departure from traditional medical device oversight. The plan introduces groundbreaking concepts like "predetermined change control plans" that allow AI systems to update within pre-approved parameters, and establishes "Good Machine Learning Practices" as the foundation for trustworthy AI development in healthcare.
Traditional medical device regulation assumes static products: once approved, a device remains unchanged. But AI/ML systems continuously learn and adapt, creating a regulatory paradox: how do you approve something that will inherently change after approval? The action plan emerged from years of FDA deliberation on that question, informed by real-world AI device submissions and extensive stakeholder engagement.
The plan builds on the FDA's 2019 discussion paper and incorporates lessons learned from early AI device approvals like IDx-DR for diabetic retinopathy screening and Viz.ai for stroke detection. It represents the FDA's most concrete steps toward creating a regulatory framework that can keep pace with rapidly evolving AI technology.
Predetermined Change Control Plans (PCCPs): These allow manufacturers to specify in advance what types of changes their AI system might make and how those changes will be controlled and validated. Think of it as getting pre-approval for a range of future modifications rather than seeking approval for each individual change.
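To make the PCCP idea concrete, here is a minimal sketch of an update gate that checks a retrained model against a pre-approved change envelope before release. The envelope fields, thresholds, and function names are illustrative assumptions, not structures the FDA prescribes.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class ChangeEnvelope:
    """Pre-approved modification bounds from a hypothetical PCCP (illustrative)."""
    allowed_change_types: tuple   # e.g., retraining on new data, not new architectures
    min_sensitivity: float        # performance floor agreed in the PCCP
    min_specificity: float
    max_subgroup_gap: float       # largest tolerated performance gap across subgroups

def update_within_envelope(env: ChangeEnvelope, change_type: str,
                           sensitivity: float, specificity: float,
                           subgroup_gap: float) -> bool:
    """True only if the proposed model update stays inside the pre-approved
    envelope; anything outside it would need a new regulatory submission."""
    return (change_type in env.allowed_change_types
            and sensitivity >= env.min_sensitivity
            and specificity >= env.min_specificity
            and subgroup_gap <= env.max_subgroup_gap)

env = ChangeEnvelope(("retrain_on_new_data",), 0.90, 0.85, 0.05)
print(update_within_envelope(env, "retrain_on_new_data", 0.93, 0.88, 0.03))  # True
print(update_within_envelope(env, "new_architecture", 0.95, 0.90, 0.02))     # False
```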
Good Machine Learning Practices (GMLP): A quality system framework specifically designed for AI/ML development, covering everything from data management and feature engineering to human factors considerations and risk management throughout the AI lifecycle.
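As one concrete data-management example, the published GMLP guiding principles call for training and test datasets that are independent of one another. A patient-level leakage check is one simple guard; the helper below is a hypothetical sketch, not part of any FDA tooling.

```python
def assert_patient_level_separation(train_ids: set, test_ids: set) -> None:
    """Raise if any patient contributes data to both training and test sets,
    a basic guard against the leakage GMLP's independence principle targets."""
    overlap = train_ids & test_ids
    if overlap:
        raise ValueError(f"{len(overlap)} patient(s) in both sets: leakage risk")

# Passes: the two cohorts share no patients.
assert_patient_level_separation({"P001", "P002", "P003"}, {"P004", "P005"})
```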
Patient-Centered Approach: Emphasizes algorithm bias mitigation, real-world performance monitoring, and ensuring AI systems work equitably across diverse patient populations—not just the demographics represented in training data.
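One way to operationalize equity checks is a subgroup performance audit: compute the same clinical metric for each demographic slice and flag large gaps. A minimal sketch, assuming binary labels and predictions tagged with a subgroup per case; the data and metric choice are invented for illustration.

```python
from collections import defaultdict

def subgroup_sensitivity(labels, preds, groups):
    """Sensitivity (true-positive rate) computed separately per subgroup."""
    true_pos = defaultdict(int)
    positives = defaultdict(int)
    for y, p, g in zip(labels, preds, groups):
        if y == 1:
            positives[g] += 1
            true_pos[g] += int(p == 1)
    return {g: true_pos[g] / positives[g] for g in positives}

# Toy data: group B's positive cases are missed far more often than group A's.
labels = [1, 1, 0, 1, 1, 1, 1, 0]
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

rates = subgroup_sensitivity(labels, preds, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, "gap:", round(gap, 2))  # a large gap should trigger a bias review
```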
Regulatory Science Research: Commitment to developing new evaluation methods for AI systems, including approaches for assessing algorithm performance, robustness, and potential bias.
Unlike broad AI ethics frameworks or general-purpose AI standards, this action plan addresses the specific technical and safety challenges of AI in life-critical medical applications. And while the action plan itself is a policy roadmap rather than binding regulation, it comes from a regulator with premarket review and enforcement authority, so the expectations it sets carry far more weight than voluntary best practices.
The plan uniquely addresses the "continuous learning" problem—how to maintain safety and efficacy oversight for systems that change over time. Most other AI governance focuses on static models, but medical AI often needs to adapt to new patient populations, evolving clinical practices, and emerging medical knowledge.
The FDA's approach also emphasizes post-market surveillance and real-world evidence collection in ways that general AI frameworks don't, recognizing that medical AI performance in controlled studies may not reflect real clinical performance.
The action plan outlines specific deliverables with target timeframes, and the FDA has been steadily delivering on them: the GMLP guiding principles were published jointly with Health Canada and the UK's MHRA in October 2021, and multiple PCCP-enabled devices have been approved since 2022. For manufacturers, four implementation challenges come up repeatedly:
Data quality and representativeness: Many AI developers underestimate FDA expectations for diverse, well-characterized training data that reflects real-world patient populations.
Change control documentation: Creating PCCPs that are specific enough for FDA review but flexible enough to allow meaningful algorithm updates requires careful balance and extensive documentation.
Clinical validation requirements: The FDA expects clinical evidence for AI performance, not just technical validation—a higher bar than many developers anticipate.
Post-market monitoring capabilities: Companies need robust infrastructure to monitor real-world AI performance and detect potential issues or bias over time (a minimal sketch of such a monitor follows this list).
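As a sketch of what such monitoring infrastructure might look like at its simplest, the rolling-window check below flags when real-world accuracy drops under a preset floor. The window size, floor, and class names are assumptions for illustration, not regulatory requirements.

```python
from collections import deque

class PerformanceMonitor:
    """Rolling-window accuracy monitor for a deployed model; a minimal sketch
    of post-market surveillance plumbing, not a validated system."""

    def __init__(self, window: int = 500, alert_floor: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = prediction confirmed correct
        self.alert_floor = alert_floor

    def record(self, correct: bool) -> None:
        """Log one adjudicated real-world outcome."""
        self.outcomes.append(int(correct))

    def degraded(self) -> bool:
        """True once a full window's accuracy falls below the floor."""
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # wait for a full window before alerting
        return sum(self.outcomes) / len(self.outcomes) < self.alert_floor

monitor = PerformanceMonitor(window=5, alert_floor=0.90)
for correct in [True, True, False, True, False]:
    monitor.record(correct)
print(monitor.degraded())  # True: windowed accuracy 0.6 is below the 0.90 floor
```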
Published: 2021
Jurisdiction: United States
Category: Sector-specific governance
Access: Public access