Uncertainty surrounds the Food and Drug Administration, where personnel cuts are hampering drug approvals and mixed messages abound. It’s unclear whether FDA Commissioner Marty Makary wants to speed up approvals, slow them down, or an unsettling combination of the two.
Artificial intelligence has the potential to significantly bolster agency efficiency, and the FDA is poised to roll out AI quickly and aggressively.
On May 8, the agency “announced an aggressive timeline to scale use of artificial intelligence internally across all FDA centers by June 30, 2025, following the completion of a new generative AI pilot for scientific reviewers.” On June 2, the agency officially rolled out Elsa, its new AI tool for reviewers and investigators.
However, AI is only as good as the models and suppositions it is trained on. Absent great care, a risk-averse agency will produce a risk-averse algorithm, one keen on rejecting life-saving treatments. Makary must ensure that agency AI is open to innovation and flexibility.
As former IBM CEO Ginni Rometty stated, “The key to success with AI is not just having the right data, but also asking the right questions.”
If the “right question” is how to green-light drugs that have few to no side effects and are produced without any manufacturing complications, the algorithm will leave out the majority of medications that have “only” moderately positive effects on disease or are hampered by periodic production issues. Unfortunately, the FDA has a tendency for unattainable perfectionism at the cost of tangible and incremental progress.
For example, in 2024, the FDA rejected a treatment for acromegaly, a rare hormonal disorder, despite ample evidence that the medication was well-tolerated and improved patients’ lives. Regulators dwelled not on safety or efficacy but rather on “manufacturing facility-related deficiencies.” This is the latest instance in a worrying trend of game-changing drugs being delayed for months or even years because of one-off production concerns with easy fixes.
According to Patrizia Cavazzoni, former director of the FDA’s Center for Drug Evaluation and Research, the agency’s approval standards are not getting stricter despite the uptick in manufacturing-related drug denials. Rather, the FDA is seeing “lesser quality in the facilities where these products are manufactured, and we really need to work on this.” While Cavazzoni has a point that tainted medications and contamination are far too common, timely and targeted recalls are a far better alternative to banning an entire medication.
For example, imagine if the FDA withdrew approval for the diabetes drug metformin every time a manufacturing issue was identified. While patients would be somewhat safer from probable carcinogens such as N-nitrosodimethylamine, they’d face a far tougher time keeping blood sugar levels in check. Life expectancy could even decline as a result of overly strict policies.
The FDA acknowledges this tradeoff and opts for a smarter and more tailored approach for drugs it has already approved. If only it could follow this approach for medications on the cusp of approval. Given that the FDA is ramping up scrutiny of foreign manufacturing facilities, this appears unlikely to happen anytime soon.
This resistance to change and risk-aversion could be detrimental to AI training. For FDA AI systems to be effective approval aids, they must be trained to see beyond temporary manufacturing difficulties and detect the long-term benefits of approving game-changing medications. A badly trained AI could transform a potentially helpful tool into a harmful barrier against innovation. Minute, inconsequential or context-specific imperfections that human reviewers normally deem acceptable or fixable could be flagged as irredeemable by an overly cautious algorithm. Thus, a tool intended to speed up approvals could lead to an increase in unreasonable denials.
There are other issues that regulators will have to remedy to ensure a smooth AI rollout. For example, the FDA handles an array of sensitive drug patent information, and the agency must ensure that any system handling this information has adequate guardrails to prevent unwanted dissemination. These measures will mean little, though, if the FDA cannot create an AI approvals system that prioritizes innovation and consumer choice. AI is only as good as the people behind the wheel. The stakes could not be higher for an agency responsible for the lives of millions.
