Summary: The FDA has issued draft guidance to establish a framework for evaluating the credibility of artificial intelligence (AI) models used in drug and biological product development. This guidance aims to ensure AI applications meet safety, effectiveness, and quality standards, addressing the growing role of AI in regulatory decision-making. The recommendations include a risk-based approach for assessing AI models and emphasize early engagement between sponsors and the FDA to support regulatory submissions. Public comments on the draft guidance are invited within 90 days.
Key Takeaways:
- AI Framework for Drug Development: The FDA’s draft guidance provides recommendations on using AI models to support regulatory decisions for drug and biologic submissions, focusing on safety, effectiveness, and quality standards.
- Risk-Based Approach: The guidance emphasizes defining the context of use for AI models and establishing credibility through a risk-based framework to ensure reliable application in regulatory evaluations.
- Stakeholder Input Requested: The FDA is inviting public comments on the draft guidance within 90 days, seeking feedback on its alignment with industry practices and adequacy of engagement options for stakeholders.
The US Food and Drug Administration (FDA) has issued draft guidance to provide recommendations on the use of artificial intelligence (AI) intended to support a regulatory decision about a drug or biological product’s safety, effectiveness, or quality.
This is the first guidance the agency has issued on the use of AI for the development of drug and biological products.
“The FDA is committed to supporting innovative approaches for the development of medical products by providing an agile, risk-based framework that promotes innovation and ensures the agency’s robust scientific and regulatory standards are met,” says FDA commissioner Robert M. Califf, MD, in a release. “With the appropriate safeguards in place, artificial intelligence has transformative potential to advance clinical research and accelerate medical product development to improve patient care.”
Since 2016, the use of AI in drug development and in regulatory submissions has exponentially increased. AI can be used in various ways to produce data or information regarding the safety, effectiveness, or quality of a drug or biological product. For example, AI approaches can be used to predict patient outcomes, improve understanding of predictors of disease progression, and process and analyze large datasets (eg, real-world data sources or data from digital health technologies).
Establishing Credibility for AI Models
A key aspect of the appropriate application of AI modeling in drug development and regulatory evaluation is ensuring model credibility, meaning trust in the performance of an AI model for a particular context of use. Context of use is defined as how an AI model is used to address a specific question of interest.
Given the range of potential applications of AI modeling, defining the model’s context of use is critical. For this reason, the guidance provides a risk-based framework that sponsors can use to assess and establish the credibility of an AI model for a particular context of use and to determine the credibility assessment activities needed to demonstrate that the model’s output can be relied on for that use.
This approach is consistent with how FDA staff have been reviewing applications for drug and biological products with AI components. The FDA encourages sponsors to have early engagement with the agency about AI credibility assessment or the use of AI in human and animal drug development.
Collaborative Efforts to Shape the Guidance
The FDA’s human and animal medical product centers, along with the agency’s Office of Inspections and Investigations, Oncology Center of Excellence, and Office of Combination Products, worked collaboratively on this draft guidance to ensure consistency across the agency. The agency recognizes that AI management requires a risk-based regulatory framework built on robust principles, standards, and best practices.
In developing these recommendations, the FDA incorporated feedback from interested parties including sponsors, manufacturers, technology developers and suppliers, and academics. Specifically, this draft guidance was informed by feedback from an FDA-sponsored expert workshop convened by the Duke Margolis Institute for Health Policy in December 2022, more than 800 comments received from external parties on two discussion papers published in May 2023 on AI use in drug development and in manufacturing, and the FDA’s experience with more than 500 drug and biological product submissions with AI components since 2016.
The FDA is seeking public comment on the draft guidance, Considerations for the Use of Artificial Intelligence to Support Regulatory Decision-Making for Drug and Biological Products, within 90 days. In particular, the FDA is asking for feedback on how well this draft guidance aligns with industry experience and whether the options available for sponsors and other interested parties to engage with the FDA on the use of AI are sufficient. The agency will review and consider comments received before finalizing this guidance.
Notably, this announcement is specific to the use of AI to support the development of drug and biological products. The FDA also published draft guidance providing recommendations specific to AI-enabled medical devices.