
US: FDA draft guidance addresses product lifecycle, risk

2025/01/08

The US Food and Drug Administration (FDA) has released a new draft guidance on the use of artificial intelligence (AI) to produce data or information that supports regulatory decision-making in submissions for drugs and biological products.

In the draft guidance, FDA proposes a risk-based credibility assessment framework for AI use in product submissions based on the context of use (COU). The framework would apply when AI produces information or data that would inform a regulatory decision about a product’s safety, effectiveness, and/or quality. Activities to establish credibility, such as the degree of FDA oversight, the stringency of the credibility assessment, risk mitigation undertaken by the sponsor, performance acceptance criteria, and documentation, are specific to the COU and should be commensurate with the AI model risk.

“The Agency recognizes that the use of AI in drug development is broad and rapidly evolving. This draft guidance, when finalized, is expected to help ensure that AI models used to support regulatory decision-making are sufficiently credible for the COU,” FDA wrote in a related Federal Register notice. “The risk-based credibility assessment framework proposed in this draft guidance is intended to help sponsors and other interested parties plan, gather, organize, and document information to establish the credibility of AI model outputs.”

FDA noted that AI used in drug discovery, or for operational efficiencies unrelated to the safety, quality, or reliability of a product, is outside the scope of the guidance. The agency also emphasized that it is not endorsing any specific AI approach or technique.

The agency also acknowledged the unique challenges of AI, saying that because of the variability in the datasets used to train models, data should be “fit for use,” meaning relevant and reliable for the AI model. It also noted that methodological transparency is needed to examine how AI models are developed and reach conclusions, that a model may be inaccurate in ways that are difficult to detect, and that a model can change over time as it is exposed to new data.

Risk-based credibility assessment framework

FDA’s proposed risk-based credibility assessment framework for AI models comprises seven steps: defining the question of interest for the model, defining the COU, assessing the AI model risk, developing a plan to establish the model’s credibility, executing that plan, documenting the results of the plan, and determining the AI model’s adequacy within the context of the COU.

“The question of interest should describe the specific question, decision, or concern being addressed by the AI model,” FDA wrote. Developers of an AI model can use evidentiary sources such as in vitro and in vivo animal testing, clinical trials, and manufacturing process validation studies together with AI-generated evidence to answer the question of interest, the agency noted.

The COU description should define what is to be modeled, the model outputs, and what other relevant information will be used alongside the model outputs to answer the question of interest, FDA said.

Model risk is determined by both model influence (the likelihood that a model output leads to an incorrect decision and an adverse outcome) and decision consequence (the significance of that adverse outcome).
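This two-factor structure resembles a conventional risk matrix. As a minimal sketch only: the draft guidance describes model risk qualitatively and does not prescribe a scoring scheme, so the three levels and the lookup table below are illustrative assumptions, not FDA’s method.

```python
# Hypothetical sketch of the two-factor model risk concept.
# The draft guidance defines model risk as the combination of model
# influence and decision consequence; the levels and matrix below are
# assumptions for illustration, not part of the guidance.
from enum import Enum

class Level(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"

# Rows: model influence (likelihood an output leads to an incorrect
# decision). Columns: decision consequence (significance of the
# resulting adverse outcome).
RISK_MATRIX = {
    (Level.LOW, Level.LOW): Level.LOW,
    (Level.LOW, Level.MEDIUM): Level.LOW,
    (Level.LOW, Level.HIGH): Level.MEDIUM,
    (Level.MEDIUM, Level.LOW): Level.LOW,
    (Level.MEDIUM, Level.MEDIUM): Level.MEDIUM,
    (Level.MEDIUM, Level.HIGH): Level.HIGH,
    (Level.HIGH, Level.LOW): Level.MEDIUM,
    (Level.HIGH, Level.MEDIUM): Level.HIGH,
    (Level.HIGH, Level.HIGH): Level.HIGH,
}

def model_risk(influence: Level, consequence: Level) -> Level:
    """Look up the overall model risk tier for a given combination."""
    return RISK_MATRIX[(influence, consequence)]

# Example: a model whose output is the sole basis for a decision on a
# safety-critical attribute would land in the highest risk tier.
print(model_risk(Level.HIGH, Level.HIGH).value)  # -> "high"
```

Under the framework, a higher tier would call for more stringent credibility assessment activities, calibrated with FDA for the specific COU.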

“The risk-based credibility assessment framework envisions interactive feedback from FDA concerning the assessment of the AI model risk (step 3) as well as the adequacy of the credibility assessment plan (step 4) based on the model risk and the COU,” FDA wrote. “Accordingly, FDA strongly encourages sponsors and other interested parties to engage early with FDA to discuss the AI model risk, the appropriate credibility assessment activities for the proposed model based on model risk and the COU.”

Lifecycle maintenance of AI systems, early engagement

Developers of AI models need to consider the maintenance of those models across a product’s lifecycle, including planned activities to maintain model performance in the face of both planned and unplanned changes to the models.

“AI-based models may be highly sensitive to variations or changes in model inputs, for example, because they are data-driven and can be self-evolving (i.e., capable of autonomously adapting without any human intervention),” FDA said. “Model performance metrics should be monitored on an ongoing basis to ensure that the model remains fit for use and appropriate changes are made to the model, as needed.”
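To make the monitoring point concrete, here is a minimal sketch of a threshold-based performance check. The metric name, acceptance criterion, and alerting behavior are hypothetical; the draft guidance calls for ongoing monitoring but does not prescribe specific metrics or tooling.

```python
# Hypothetical sketch of ongoing model performance monitoring.
# The metric and threshold are illustrative assumptions; the draft
# guidance does not prescribe specific metrics or tooling.
from dataclasses import dataclass

@dataclass
class MonitoringRule:
    metric_name: str           # e.g., AUROC on held-out validation data
    minimum_acceptable: float  # acceptance criterion from the credibility plan

def check_fit_for_use(rule: MonitoringRule, observed: float) -> bool:
    """Flag when a monitored metric falls below its acceptance criterion,
    signaling the model may no longer be fit for its COU and that a
    change (e.g., retraining or recalibration) should be evaluated."""
    if observed < rule.minimum_acceptable:
        print(f"ALERT: {rule.metric_name} = {observed:.3f} is below the "
              f"criterion of {rule.minimum_acceptable:.3f}; re-evaluate "
              "the model for its COU.")
        return False
    return True

# Example: a periodic check against a hypothetical classification metric.
rule = MonitoringRule(metric_name="AUROC", minimum_acceptable=0.85)
check_fit_for_use(rule, observed=0.83)  # prints an alert, returns False
```

In practice, such checks would feed the risk-based oversight described below, with documented responses for each type of detected change.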

Model oversight should take a risk-based approach considering the model risk and the COU, the agency noted. “Due to the evolving nature of AI models, sponsors should anticipate inherent, model-directed changes and the need to identify and evaluate those changes, as well as any intentional changes to the model over the drug product life cycle,” the agency wrote.

FDA said it “strongly encourages” early engagement on the use of AI models in drug and biological products, both to set expectations for the credibility assessment activities and to identify challenges associated with those activities.

The agency provided a number of engagement options for discussion of AI in development programs, such as an Initial Targeted Engagement for Regulatory Advice (INTERACT) meeting for Center for Biologics Evaluation and Research (CBER) and Center for Drug Evaluation and Research (CDER) products, and a Pre-Investigational New Drug Application (Pre-IND) meeting. Depending on the intended use of the AI model, sponsors and other interested parties can request engagement options other than formal meetings, such as the Center for Clinical Trial Innovation (C3TI), Complex Innovative Trial Design Meeting Program (CID), Drug Development Tools (DDTs) and Innovative Science and Technology Approaches for New Drugs (ISTAND), Digital Health Technologies (DHTs) Program, Emerging Drug Safety Technology Program (EDSTP), CDER’s Emerging Technology Program (ETP) and CBER’s Advanced Technologies Team (CATT), Model-Informed Drug Development Paired Meeting Program (MIDD), and Real-World Evidence (RWE) Program.

FDA said it is requesting public feedback from sponsors and other stakeholders on whether the risk-based framework aligns with industry expectations and whether the opportunities to engage with FDA on AI are adequate.

To continue reading this article, please go to RAPS.