According to Dr Tala Fakhouri, the FDA's Associate Director for Policy Analysis within the Center for Drug Evaluation and Research (CDER), 175 recent drug-approval submissions included the use of AI and machine learning in drug development.
AI is used not only to discover new drugs but also in clinical research, in the manufacturing process, and in post-market safety surveillance. The most fascinating aspect is of course whether and how AI can contribute to drug development itself, since that stage usually deals with chemistry.
But the truth is that AI can help predict how specific proteins will behave in patients by analyzing the vast amounts of data already available on these proteins and their use, for instance when the same proteins are involved in the treatment of other diseases.
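To make the idea concrete, here is a minimal, purely illustrative sketch of this kind of data-driven prediction: a new protein is compared against previously studied proteins, and the outcome of the most similar one is reused as a prediction. All protein names, feature vectors, and labels below are invented for illustration; real submissions would involve far richer models and data.

```python
import math

# Hypothetical prior knowledge: each protein is described by a small
# feature vector (standing in for data gathered in earlier trials on
# other diseases) plus a known outcome: 1 = behaved as intended, 0 = not.
known_proteins = {
    "protein_A": ([0.9, 0.1, 0.4], 1),
    "protein_B": ([0.2, 0.8, 0.7], 0),
    "protein_C": ([0.85, 0.15, 0.5], 1),
}

def cosine_similarity(u, v):
    """Cosine similarity between two feature vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def predict_interaction(candidate):
    """Predict the outcome for a new protein by copying the label of
    the most similar previously studied protein (1-nearest-neighbour)."""
    best_name, best_sim = None, -1.0
    for name, (features, _label) in known_proteins.items():
        sim = cosine_similarity(candidate, features)
        if sim > best_sim:
            best_name, best_sim = name, sim
    label = known_proteins[best_name][1]
    return label, best_name, best_sim

# A candidate protein resembling the ones that behaved as intended.
label, neighbour, sim = predict_interaction([0.88, 0.12, 0.45])
print(label, neighbour, round(sim, 3))
```

The point of the sketch is the epistemological one raised below: the prediction is only as trustworthy as the prior data and the similarity assumption behind it.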
The FDA, in its role of controlling the safety and efficacy of drugs, is positive about this innovative use of AI. However, besides reminding stakeholders that many aspects of AI processing fall outside its scope (such as the intellectual-property rights attached to data), the FDA is investigating the reliability of data processing.
How much, and under what conditions, can existing data replace empirical studies with the same degree of reliability? In theory, you do not need to re-test something that a previous test has already validated. However, this holds only if the same level of epistemological assessment takes place and the evidence carries exactly the same value, so that it can genuinely act as a replacement for the empirical evidence.
A key question here is who knows, who assesses the evidence, and how the same experiment (or absence of experiment) can be reproduced for validation purposes by the FDA independently of the sponsor. In other words, the FDA is responsible for avoiding situations where the sponsor is the only witness to the evidence, or the owner of a secret algorithm that cannot be re-used in a different context.
Despite the obvious technological innovation that AI represents, the FDA's validation criteria remain the same: analyzing the risks and benefits of a new drug and detecting bias based on evidence. For this to happen, the FDA needs full collaboration with the sponsor on access to both data and algorithms, so it can assess both the absence of bias in the algorithm and the reliability of the data set. Maximum transparency on the process is required, and each stakeholder must ensure it has a good understanding of the whole process of outsourcing part of the drug testing to an AI-based protocol instead of a real-life laboratory experiment.
Turning to clinical testing, the challenge is even more complex, since the limited predictability of a population is one of the reasons that justify testing on humans in the first place. However, some subsets of clinical research can still be delegated to AI, such as dose optimization: AI models have been shown to help pre-define the optimal molecule concentration. The FDA may see this as progress leading to less risk for tested populations, although discussions on the topic are still at an early stage and have not produced any recommendations.
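As a toy illustration of what dose optimization means computationally, the sketch below searches for the dose that best balances efficacy against toxicity. The closed-form curves stand in for what would, in practice, be models learned from data; every parameter (ED50, toxicity scale, dose range) is invented for illustration, and this is in no way an FDA-endorsed method.

```python
def efficacy(dose, emax=1.0, ed50=50.0):
    """Emax-style model: response rises with dose and saturates at emax."""
    return emax * dose / (ed50 + dose)

def toxicity(dose, scale=200.0):
    """Toy toxicity model: risk grows quadratically with dose."""
    return (dose / scale) ** 2

def optimal_dose(doses, risk_weight=1.0):
    """Grid search for the dose maximizing efficacy minus weighted toxicity."""
    return max(doses, key=lambda d: efficacy(d) - risk_weight * toxicity(d))

candidate_doses = range(1, 201)  # hypothetical dose grid, in mg
best = optimal_dose(candidate_doses)
print(best, round(efficacy(best), 3), round(toxicity(best), 3))
```

The interest for regulators is exactly the trade-off the text describes: if such a model reliably narrows the dose range before human trials, fewer participants are exposed to ineffective or harmful doses, but the model's assumptions must themselves be open to independent validation.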
Gxp-Training is following these conversations between the FDA and stakeholders in the drug development and clinical research industry. We are currently evaluating whether to build a series of courses on the topic and would appreciate your opinion.