A few weeks ago, I came across something that made me pause: the FDA is testing out OpenAI’s tech to speed up drug approvals. It’s not science fiction…it’s already happening.
I’ve worked in environments where compliance, regulation, and tech all collide. And I know firsthand how slow those systems can move. So the idea of the FDA pushing for speed, using generative AI no less, caught my attention. And raised some questions.
What’s happening?
The FDA met with OpenAI to explore how a custom version of ChatGPT, something they’re calling “cderGPT,” might assist reviewers inside the Center for Drug Evaluation and Research (CDER), the division that evaluates new drugs.
The goal? Cutting down on manual processes: scanning hundreds of thousands of pages of clinical trial data, checking for missing information, spotting red flags.
According to internal reports, the FDA already tested AI in at least one scientific review earlier this year. It reduced what used to take three days… to minutes.
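To make that concrete: the FDA hasn’t published anything about how cderGPT actually works, but here’s a rough sketch of what a reviewer-assist tool in this vein could look like, built on OpenAI’s standard chat API. The model choice, the checklist, and the prompt are all my own assumptions for illustration, nothing more.

```python
# Hypothetical sketch only. Nothing here reflects cderGPT's actual design;
# the model, prompt, and checklist are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

REVIEW_PROMPT = """You are assisting a drug-application reviewer.
Given the document excerpt below, list:
1. Any required sections that appear to be missing
   (e.g., adverse events, dosage rationale, exclusion criteria).
2. Any statements that conflict with each other.
Answer in short bullet points. If unsure, say so."""

def flag_issues(excerpt: str) -> str:
    """Ask the model to surface gaps and red flags for human review."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": excerpt},
        ],
        temperature=0,  # favor consistency over creativity for review work
    )
    return response.choices[0].message.content

# A human reviewer would read these flags, not act on them blindly.
print(flag_issues("Phase III results: 412 patients enrolled..."))
```

Notice what a setup like this does and doesn’t do: it hands a human reviewer a list of things to double-check. It makes no decisions. That matches how the FDA has framed the tool so far.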
Why now?
Drug approvals are notoriously slow. Even after clinical trials wrap, FDA review can take a year or more. Add in the full development timeline, from lab to market, and it’s a 10+ year journey.
FDA Commissioner Dr. Marty Makary said it outright: “Why does it take over 10 years for a new drug to come to market?” He’s making AI part of the answer.
This is part of a broader push inside the federal government to make agencies faster, leaner, and more tech-aware. The FDA doesn’t just want to use AI; it wants AI embedded in how the agency works.
What could this change?
For pharmaceutical companies:
Faster approvals mean shorter timelines, less cash burn, and earlier revenue. That’s the upside. The downside? Their proprietary data might now be processed by AI. Understandably, they’re asking what that means for security and confidentiality.
For regulators:
The FDA could finally reduce review backlogs. But there’s a fine line between automation and abdication. Human reviewers will still sign off, but the pressure to trust machines is going up.
For patients:
Faster access to new treatments is the biggest win here. But speed can’t come at the cost of safety. If AI misses something, trust in the process could take a hit. Transparency is key.
My take
In theory, this is the right direction. We’ve been digitizing drug research and regulatory filing for decades. But the review process, the critical last mile, has barely moved.
I don’t think this is reckless. The FDA is treating AI as a support tool, not a decision-maker. The question will be whether reviewers lean too hard on it, or whether they use it to focus better on the parts that require real human judgment.
What comes next?
By the end of this summer, we’ll know a lot more. Will AI actually improve accuracy and efficiency at the FDA? Will pharma companies feel safe submitting sensitive data? Will patients trust a system that now involves a chatbot?
All good questions. And all worth watching closely.
Key takeaway:
AI might be the only way to bring speed to a system designed for caution. The FDA seems to understand that. The next few months will show whether it found the balance, or just added another layer of complexity.