FDA’s Use of AI to Fast-Track Drug Approvals Is a Bellyflop
Move fast and break things. That seems to be the mantra of AI everywhere.
AI is useful and here to stay, but at this point in its lifecycle you need to validate everything it produces.
Imagine, just for one sad moment, you’re in a life-or-death situation. You’re terribly ill and you’re running out of time: if you’re not given a certain revolutionary drug soon, you’re going to die.
Thankfully, the FDA is here to help. Armed with its new generative AI tool, Elsa, the agency has sped up its exhaustive drug approval process, and the new cure reaches you while you’re still hanging in there.
Except that Elsa, like most generative AI, is prone to hallucination: it tries to “approve” drugs based on non-existent studies.
While this is hypothetical right now, it could become hyper-real if the FDA doesn’t fix major problems with Elsa.
FDA insiders say the tool is unfit for clinical reviews because it misinterprets important data far too often.
But AI models can confidently spew nonsense. That is tolerable when you’re looking for a good place to eat; it is dangerous when you need to know whether a drug will cure you or kill you.
Brooke Hartley Moy, CEO of AI fact-checking platform Infactory, says:
“The LLMs are incredibly poorly suited to things that require a high degree of precision, accuracy, and trust. It’s that mental misunderstanding and mismatch that has misled not just the FDA but almost every organization.”
https://cybernews.com/news/brooke-hartley-interview-fda-generative-ai
As you integrate AI into your business processes, you need to assess both the business risk and the legal exposure that come with how you are using the AI.
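To make "validate everything" concrete, here is a minimal sketch of one such guardrail in Python: refusing to act on an LLM's answer until every study it cites is confirmed against a trusted registry. The registry, study IDs, and answer structure are hypothetical stand-ins for illustration, not a real FDA, Elsa, or clinical-trials interface.

```python
# A minimal sketch of one validation layer: never act on an LLM answer
# until every study it cites can be confirmed against a trusted source.
# TRUSTED_STUDY_IDS and the answer structure are hypothetical stand-ins.

TRUSTED_STUDY_IDS = {"STUDY-0001", "STUDY-0002"}  # e.g., loaded from an authoritative registry


def validate_llm_citations(llm_answer: dict) -> tuple[bool, list[str]]:
    """Return (is_trustworthy, unverified_citations) for a structured LLM answer."""
    cited = llm_answer.get("cited_studies", [])
    unverified = [study_id for study_id in cited if study_id not in TRUSTED_STUDY_IDS]
    # An answer with zero citations is as suspect as one with fabricated ones.
    return (bool(cited) and not unverified), unverified


llm_answer = {
    "recommendation": "approve",
    "cited_studies": ["STUDY-0001", "STUDY-9999"],  # second ID is hallucinated
}

ok, bad = validate_llm_citations(llm_answer)
if not ok:
    # Route to a human reviewer instead of acting on the model's output.
    print(f"Do not act on this answer; unverifiable citations: {bad}")
```

The specific check matters less than the pattern: treat the model's output as a claim to be verified against an authoritative source, not a result to be trusted on its own.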
If you need assistance with this, please contact us.
Credit: Cybernews