
Considerations for the Use of Artificial Intelligence To Support Regulatory Decision-Making for Drug and Biological Products

FDA


FDA Guidance: AI for Drug and Biological Product Regulation

Summary

The FDA has issued its first comprehensive guidance on using artificial intelligence to support regulatory decision-making for drugs and biologics. This 2024 document establishes the agency's expectations for AI-generated data and information submitted in regulatory applications, covering everything from clinical trial design to manufacturing quality control. Unlike broader AI governance frameworks, this guidance gets granular about specific regulatory pathways, data integrity requirements, and the unique challenges of using AI in life-sciences contexts where patient safety is paramount.

Who this resource is for

Primary audiences:

  • Pharmaceutical and biotech companies developing AI-supported drug development processes
  • Clinical research organizations (CROs) implementing AI in trial design and analysis
  • Regulatory affairs professionals preparing FDA submissions with AI-generated evidence
  • AI vendors serving the pharmaceutical industry
  • FDA review staff evaluating AI-supported applications

Secondary audiences:

  • Healthcare AI researchers transitioning to regulatory contexts
  • Legal teams advising on pharmaceutical AI compliance
  • Quality assurance teams validating AI systems in GxP environments

The regulatory reality check

This guidance addresses a fundamental tension: AI can accelerate drug development and improve decision-making, but regulatory submissions require high levels of transparency and validation. The FDA doesn't prohibit AI use; instead, it demands that sponsors demonstrate their AI systems are fit for purpose within existing regulatory frameworks.

Key regulatory principles the guidance reinforces:

  • Burden of proof remains with sponsors: AI doesn't lower evidence standards
  • Transparency over innovation: Explainability often trumps cutting-edge performance
  • Risk-based approach: Higher-risk applications face stricter requirements
  • Existing frameworks apply: Good Manufacturing Practices (GMP), Good Clinical Practices (GCP), and 21 CFR Part 11 all remain relevant

AI applications the FDA is watching

The guidance categorizes AI use cases by regulatory impact, helping sponsors understand where they'll face the most scrutiny:

High-impact applications (extensive documentation required):

  • Primary endpoint analysis in pivotal trials
  • Safety signal detection and causality assessment
  • Manufacturing batch release decisions
  • Biomarker identification for patient stratification

Medium-impact applications (moderate documentation):

  • Protocol optimization and site selection
  • Adverse event coding and categorization
  • Supply chain optimization
  • Regulatory writing assistance

Lower-impact applications (basic documentation):

  • Literature reviews and data extraction
  • Project management and timeline optimization
  • Administrative process automation
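The tiering above lends itself to a simple lookup that keeps documentation scope consistent across a portfolio of AI use cases. The sketch below is illustrative only: the use-case names, tier assignments, and documentation checklists are assumptions for demonstration, not FDA rulings; your own regulatory assessment determines the real mapping.

```python
# Illustrative mapping of hypothetical AI use cases to the impact tiers
# described above. Tier assignments here are examples, not FDA determinations.
IMPACT_TIERS = {
    "primary_endpoint_analysis": "high",
    "safety_signal_detection": "high",
    "batch_release_decision": "high",
    "protocol_optimization": "medium",
    "adverse_event_coding": "medium",
    "literature_review": "low",
}

DOCUMENTATION = {
    "high": ["algorithm transparency package", "prospective validation",
             "bias assessment", "human oversight protocol"],
    "medium": ["validation summary", "human review procedure"],
    "low": ["basic description of tool and intended use"],
}

def required_docs(use_case: str) -> list[str]:
    """Return the documentation checklist for a given AI use case."""
    # Unknown use cases default to the strictest tier until assessed.
    tier = IMPACT_TIERS.get(use_case, "high")
    return DOCUMENTATION[tier]

print(required_docs("adverse_event_coding"))
```

Defaulting unmapped use cases to the highest tier mirrors the guidance's risk-based posture: scrutiny is relaxed only after an explicit assessment, never by omission.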

Documentation expectations that matter

The FDA outlines specific documentation requirements that go beyond typical AI governance checklists:

Algorithm transparency package:

  • Training data sources, including any proprietary datasets
  • Model architecture decisions and hyperparameter selection rationale
  • Validation methodology with prospective performance data
  • Bias assessment across relevant demographic subgroups
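One common way to operationalize the bias-assessment item above is to compute a performance metric per demographic subgroup and flag gaps between the best- and worst-served groups. This is a minimal sketch, not the FDA's prescribed method; the function name, data shape, and `max_gap` threshold are assumptions, and real acceptance criteria must be prespecified in the validation plan.

```python
from collections import defaultdict

def subgroup_accuracy(records, max_gap=0.05):
    """Compute accuracy per demographic subgroup and flag large gaps.

    records: iterable of (subgroup, prediction, label) tuples.
    max_gap: illustrative threshold only; real acceptance criteria
             must be prespecified in the validation plan.
    """
    hits, totals = defaultdict(int), defaultdict(int)
    for group, pred, label in records:
        totals[group] += 1
        hits[group] += int(pred == label)
    acc = {g: hits[g] / totals[g] for g in totals}
    gap = max(acc.values()) - min(acc.values())
    return acc, gap, gap <= max_gap

# Toy example with two subgroups, four records each.
data = [
    ("A", 1, 1), ("A", 0, 0), ("A", 1, 0), ("A", 1, 1),
    ("B", 1, 1), ("B", 0, 1), ("B", 0, 0), ("B", 1, 1),
]
acc, gap, within_limit = subgroup_accuracy(data)
print(acc, gap, within_limit)
```

In a submission context, the per-subgroup table (not just the pass/fail flag) is what reviewers expect to see documented.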

Regulatory context documentation:

  • Clear mapping between AI outputs and regulatory endpoints
  • Human oversight procedures and intervention protocols
  • Change control processes for model updates
  • Integration with existing quality management systems

Ongoing monitoring commitments:

  • Post-market surveillance plans for AI performance
  • Drift detection and model retraining procedures
  • Adverse event reporting for AI-related issues
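Drift detection typically compares the live input distribution against the training-time baseline. One widely used statistic is the Population Stability Index (PSI), sketched below under assumed bin edges; the conventional thresholds in the docstring are industry rules of thumb, not regulatory limits, and retraining triggers must be defined in your monitoring plan.

```python
import math

def psi(baseline_counts, live_counts):
    """Population Stability Index between two binned distributions.

    Rule-of-thumb interpretation (not a regulatory limit):
    PSI < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 significant drift.
    """
    b_total, l_total = sum(baseline_counts), sum(live_counts)
    total = 0.0
    for b, l in zip(baseline_counts, live_counts):
        # Small floor avoids log(0) when a bin is empty.
        p = max(b / b_total, 1e-6)
        q = max(l / l_total, 1e-6)
        total += (q - p) * math.log(q / p)
    return total

# Identical bin proportions -> PSI of 0 (no drift detected).
print(psi([100, 200, 300], [10, 20, 30]))
```

Logging the PSI per feature on a fixed schedule, with a documented threshold that triggers investigation, is one concrete way to evidence the "drift detection and model retraining procedures" commitment.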

What the guidance doesn't solve

While comprehensive, this guidance leaves several questions for future clarification:

  • International harmonization: No clear pathway for leveraging AI work across global regulatory agencies
  • Real-world evidence: Limited guidance on using AI with post-market data sources
  • Combination products: Unclear how requirements apply to drug-device combinations with embedded AI
  • Breakthrough therapy: No expedited pathways for AI-enabled drug development programs

These gaps suggest sponsors should engage early with FDA through pre-submission meetings, especially for novel AI applications.

Getting ready for FDA review

Before you submit:

  1. Map your AI use cases to FDA risk categories
  2. Establish change control procedures that satisfy 21 CFR Part 11
  3. Document your bias assessment methodology
  4. Prepare clear explanations for non-technical reviewers
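Step 2 above hinges on tamper-evident records of model changes. The sketch below illustrates one small piece of that idea, a hash-chained, append-only change log; it is an assumption-laden example, not a complete 21 CFR Part 11 solution, which also requires access controls, electronic signatures, and retention policies.

```python
import hashlib
import json
from datetime import datetime, timezone

class ModelChangeLog:
    """Append-only, hash-chained record of model changes.

    Illustrates the tamper-evident audit-trail concept behind
    21 CFR Part 11; not a full compliance implementation.
    """
    def __init__(self):
        self.entries = []

    def record(self, model_version: str, change: str, author: str) -> dict:
        # Each entry commits to the previous entry's hash, forming a chain.
        prev_hash = self.entries[-1]["hash"] if self.entries else "0" * 64
        body = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "model_version": model_version,
            "change": change,
            "author": author,
            "prev_hash": prev_hash,
        }
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        self.entries.append(body)
        return body

    def verify(self) -> bool:
        """Recompute the chain; any edit to a past entry breaks it."""
        prev = "0" * 64
        for e in self.entries:
            if e["prev_hash"] != prev:
                return False
            body = {k: v for k, v in e.items() if k != "hash"}
            recomputed = hashlib.sha256(
                json.dumps(body, sort_keys=True).encode()
            ).hexdigest()
            if recomputed != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = ModelChangeLog()
log.record("1.1.0", "retrained on Q3 data", "qa_lead")
print(log.verify())  # → True
```

The design choice to chain hashes means a reviewer can verify that no historical entry was silently edited, which is the property change-control audits look for.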

Red flags that delay review:

  • Black-box models for high-impact decisions without adequate justification
  • Training data that can't be made available to FDA upon request
  • Inadequate validation in populations relevant to your intended indication
  • Missing documentation of human oversight procedures

Tags

AI governance, healthcare regulation, pharmaceutical industry, regulatory compliance, drug safety, biological products

At a glance

Published: 2024
Jurisdiction: United States
Category: Sector-specific governance
Access: Public access

