
FDA AI/ML-Based Software as Medical Device Action Plan


Summary

The FDA's 2021 Action Plan represents a pivotal shift in how AI-enabled medical devices will be regulated in the United States. This comprehensive roadmap tackles the unique challenge of regulating software that learns and evolves after deployment—a fundamental departure from traditional medical device oversight. The plan introduces groundbreaking concepts like "predetermined change control plans" that allow AI systems to update within pre-approved parameters, and establishes "Good Machine Learning Practices" as the foundation for trustworthy AI development in healthcare.

The regulatory revolution behind this plan

Traditional medical device regulation assumes static products—once approved, a device remains unchanged. But AI/ML systems continuously learn and adapt, creating a regulatory paradox. How do you approve something that will inherently change after approval? This action plan emerged from years of FDA wrestling with this fundamental question, informed by real-world AI device submissions and extensive stakeholder engagement.

The plan builds on the FDA's 2019 discussion paper and incorporates lessons learned from early AI device approvals like IDx-DR for diabetic retinopathy screening and Viz.ai for stroke detection. It represents the FDA's most concrete steps toward creating a regulatory framework that can keep pace with rapidly evolving AI technology.

Core pillars of the FDA's approach

Predetermined Change Control Plans (PCCPs)

These allow manufacturers to specify in advance what types of changes they anticipate making to their AI system and how those changes will be implemented, controlled, and validated. Think of it as getting pre-approval for a defined range of future modifications rather than seeking clearance for each individual change.
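
To make the idea concrete, here is a minimal sketch in Python of how a manufacturer might encode a PCCP's pre-approved modification envelope and acceptance gates. The field names, change types, and thresholds are hypothetical illustrations, not an FDA-specified schema.

```python
# Hypothetical encoding of a PCCP: which modifications are pre-approved,
# which properties are locked, and what performance the updated model
# must hit on the locked test set before release.
PCCP = {
    "allowed_changes": [
        "retrain_on_new_site_data",   # same architecture, new training data
        "recalibrate_thresholds",     # tune the decision threshold only
    ],
    "locked": ["model_architecture", "intended_use", "input_modality"],
    "acceptance_criteria": {          # floors on the locked test set
        "sensitivity": 0.90,
        "specificity": 0.85,
    },
}


def update_is_within_pccp(change_type: str, metrics: dict) -> bool:
    """Gate a proposed model update against the predetermined plan."""
    if change_type not in PCCP["allowed_changes"]:
        return False  # outside the envelope -> a new FDA submission is needed
    return all(
        metrics.get(name, 0.0) >= floor
        for name, floor in PCCP["acceptance_criteria"].items()
    )


if __name__ == "__main__":
    proposed = {"sensitivity": 0.93, "specificity": 0.88}
    print(update_is_within_pccp("recalibrate_thresholds", proposed))  # True
    print(update_is_within_pccp("change_architecture", proposed))     # False
```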

Good Machine Learning Practices (GMLP)

A quality system framework specifically designed for AI/ML development, covering everything from data management and feature engineering to human factors considerations and risk management throughout the AI lifecycle.

Patient-Centered Approach

Emphasizes algorithm bias mitigation, real-world performance monitoring, and ensuring AI systems work equitably across diverse patient populations, not just the demographics represented in training data.
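
A minimal sketch of what that equity check can look like in practice: computing sensitivity per demographic subgroup and flagging any group that falls below a performance floor. The groups, records, and the 0.85 floor below are illustrative assumptions, not values from the Action Plan.

```python
# Illustrative subgroup performance check: sensitivity per group,
# flagging groups below an assumed floor.
from collections import defaultdict


def sensitivity_by_group(records, floor=0.85):
    """records: iterable of (group, y_true, y_pred) with binary labels."""
    tp = defaultdict(int)  # true positives per group
    fn = defaultdict(int)  # false negatives per group
    for group, y_true, y_pred in records:
        if y_true == 1:
            if y_pred == 1:
                tp[group] += 1
            else:
                fn[group] += 1
    report = {}
    for group in tp.keys() | fn.keys():
        positives = tp[group] + fn[group]
        sens = tp[group] / positives if positives else float("nan")
        report[group] = (sens, sens >= floor)
    return report


records = [
    ("A", 1, 1), ("A", 1, 1), ("A", 1, 0),   # group A: 2/3 detected
    ("B", 1, 1), ("B", 1, 1), ("B", 1, 1),   # group B: 3/3 detected
]
for group, (sens, ok) in sensitivity_by_group(records).items():
    print(f"group {group}: sensitivity={sens:.2f} {'OK' if ok else 'BELOW FLOOR'}")
```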

Regulatory Science Research

A commitment to developing new evaluation methods for AI systems, including approaches for assessing algorithm performance, robustness, and potential bias.

Who this resource is for

  • Medical device manufacturers developing AI/ML-enabled products who need to understand FDA's regulatory expectations and pathway requirements
  • Healthcare AI startups planning their regulatory strategy and product development approach from the ground up
  • Quality assurance and regulatory affairs professionals in medtech companies who need to implement GMLP and prepare PCCP submissions
  • Healthcare AI researchers transitioning from academic or research settings to commercial product development
  • Clinical teams evaluating AI tools who want to understand what FDA oversight means for the products they're considering
  • Healthcare IT leaders responsible for AI procurement and implementation in clinical settings
  • Legal and compliance professionals advising healthcare AI companies on regulatory requirements

What makes this different from other AI guidance

Unlike broad AI ethics frameworks or general-purpose AI standards, this action plan addresses the specific technical and safety challenges of AI in life-critical medical applications. And although the Action Plan itself is a regulatory roadmap rather than binding law, it comes directly from the agency that clears and approves these devices, giving it far more practical force than voluntary best practices.

The plan uniquely addresses the "continuous learning" problem—how to maintain safety and efficacy oversight for systems that change over time. Most other AI governance focuses on static models, but medical AI often needs to adapt to new patient populations, evolving clinical practices, and emerging medical knowledge.

The FDA's approach also emphasizes post-market surveillance and real-world evidence collection in ways that general AI frameworks don't, recognizing that medical AI performance in controlled studies may not reflect real clinical performance.

Implementation roadmap and timeline

The action plan outlines specific deliverables with target timeframes:

  • 2021-2022: Publish draft GMLP guidance and begin PCCP pilot programs
  • 2022-2023: Finalize GMLP framework and issue first PCCP approvals
  • Ongoing: Develop regulatory science tools for AI evaluation and bias assessment

The FDA has been steadily delivering on these commitments: GMLP guiding principles were published jointly with Health Canada and the UK's MHRA in October 2021, and multiple devices with authorized PCCPs have reached the market since 2022.

Common implementation challenges

Data quality and representativeness: Many AI developers underestimate FDA expectations for diverse, well-characterized training data that reflects real-world patient populations.
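
One simple illustration of a representativeness check: compare the demographic mix of a training cohort against the intended-use population and flag large gaps. The attributes, proportions, and the 0.05 tolerance below are invented for the example; a real submission would use characterized clinical data and a justified statistical method.

```python
# Hypothetical representativeness check: flag attributes where the
# training cohort's share diverges from the target population's share
# by more than an assumed tolerance.
target_population = {"age_65_plus": 0.30, "female": 0.51, "non_white": 0.40}
training_cohort   = {"age_65_plus": 0.12, "female": 0.48, "non_white": 0.22}


def representativeness_gaps(cohort, target, tolerance=0.05):
    """Return {attribute: gap} for every attribute outside tolerance."""
    return {
        attr: round(cohort.get(attr, 0.0) - share, 3)
        for attr, share in target.items()
        if abs(cohort.get(attr, 0.0) - share) > tolerance
    }


print(representativeness_gaps(training_cohort, target_population))
# {'age_65_plus': -0.18, 'non_white': -0.18}
```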

Change control documentation: Creating PCCPs that are specific enough for FDA review but flexible enough to allow meaningful algorithm updates requires careful balance and extensive documentation.

Clinical validation requirements: The FDA expects clinical evidence for AI performance, not just technical validation—a higher bar than many developers anticipate.

Post-market monitoring capabilities: Companies need robust infrastructure to monitor real-world AI performance and detect potential issues or bias over time.
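
As a rough sketch of what such infrastructure might include, the monitor below tracks sensitivity over a rolling window of real-world cases and signals when it drifts below a floor. The window size and threshold are assumptions; a production system would add logging, statistical tests, and subgroup stratification.

```python
# Sketch of a post-market performance monitor with an assumed
# rolling window and sensitivity floor.
from collections import deque


class PerformanceMonitor:
    def __init__(self, window=500, min_sensitivity=0.88):
        self.window = deque(maxlen=window)   # recent (y_true, y_pred) pairs
        self.min_sensitivity = min_sensitivity

    def record(self, y_true: int, y_pred: int) -> None:
        self.window.append((y_true, y_pred))

    def check(self) -> bool:
        """Return True if in-window sensitivity is still acceptable."""
        positives = [(t, p) for t, p in self.window if t == 1]
        if not positives:
            return True  # nothing to judge yet
        sens = sum(p for _, p in positives) / len(positives)
        return sens >= self.min_sensitivity


monitor = PerformanceMonitor(window=4, min_sensitivity=0.75)
for t, p in [(1, 1), (1, 1), (1, 0), (1, 0)]:
    monitor.record(t, p)
print(monitor.check())   # False: 2/4 = 0.50 is below the 0.75 floor
```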

Tags

FDA, medical devices, healthcare, SaMD

At a glance

Published: 2021
Jurisdiction: United States
Category: Sector-specific governance
Access: Public access
