Eric Topol, famed researcher and cardiologist, penned a missive this week highlighting a dissonance in the field of health-related artificial intelligence products. AI tools with a preponderance of evidence supporting their efficacy are not getting adopted while hospitals and humans are furiously using unproven large language models for care, he says. 

One example? AI for medical imaging, backed by extensive research dating back more than a decade, is not being implemented. Meanwhile, millions of patients and a substantial number of doctors are already using LLMs for medical support.

“Let’s fix this paradox of medical AI implementation,” he wrote. “It’s a two-fold and major undertaking. Amping up the use of medical AI where it’s proven and performing the clinical trials required to justify wide-scale adoption where pivotal evidence is lacking.”

The backdrop? Well, chatbots have little real hope of being regulated. Food and Drug Administration Commissioner Marty Makary has said he plans to steer clear of regulating what he calls “wellbeing” AI. Meanwhile, Congress is trying to get its arms around the issue. This week, Senator Josh Hawley’s AI bill, the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act of 2025, advanced out of the Senate Judiciary Committee. The bill would require people to verify their age before using an LLM, which would probably put a pause on their use as health assistants. However, President Donald Trump wants AI to remain unregulated, and the White House has pressed Congress to pass a bill imposing a moratorium on AI rules that would pre-empt state laws.

In spite of this lax regulatory environment, the salient point in Topol’s paradox remains:

It is really hard for regulated AI health products to get adopted, which makes life challenging for companies trying to do the right thing. Why is that? As a reminder, the FDA has authorized some 1,430 AI tools, many of them radiological, yet just a few are being taken up.

Reserve Your Spot for Upcoming Webinars!

  • What will AI do for employer healthcare and benefits? Panelists: Nick Reber, Ellen Kelsay, Christina Farr. May 19th, 2026, at 3:00 PM (ET).

  • Privacy AI and the future of HIPAA with the former founding director of ONC. Panelists: Jodi Daniel, Christina Farr. June 3rd, 2026, at 12:00 PM (ET).

  • Not everyone can access the Top 1% of physicians. Will AI change that? Panelists: Daniel Stein, Christina Farr, Fred Thiele. June 23rd, 2026, at 12:00 PM (ET).

Old news: the money

The lesson of the last few years has been that FDA clearance is not a pathway to adoption, because it doesn’t automatically lead to reimbursement. Sometimes when companies do get cleared by regulators, there are no accompanying billing codes. Oftentimes, startups have to work with consultants to create the code set, and then there’s an extensive period of state-by-state market education. It’s not uncommon for companies to run out of money in the process.

A few examples:

  • Healthy.io’s Minuteful at-home kidney disease test should be available off the shelf in pharmacies, but instead the company works with various partners that get its product out in fits and starts to select populations. 

  • In 2023, Pear Therapeutics filed for bankruptcy despite good evidence that its FDA-authorized digital therapeutic could help treat people suffering from opioid use disorder. 

  • Kintsugi, a company that used vocal signatures to screen people for mental health disorders, had compelling data but was forced to shutter earlier this year because of a lack of funding. Founder Grace Chang decided to open-source her work, much to the chagrin of her investors, in the hopes that someone else might carry the product forward. 

The Consumer Technology Association, among other health tech industry groups, has for years advocated for the Centers for Medicare and Medicaid Services to create a pathway such that FDA-approved devices are automatically reimbursed. The idea is that this would signal to private payers that they should be covering these technologies, and make this path more attractive. The alternative is that many companies are choosing “human in the loop” approaches, or claiming their products are wellness tools rather than clinical ones.

In April, CMS and the FDA announced a pathway for expediting coverage decisions for certain devices with a breakthrough designation. The Regulatory Alignment for Predictable and Immediate Device (RAPID) coverage pathway will not deliver automatic reimbursement, but it may speed up the coverage process for some technologies.

But the AI adoption hurdle may not be just about reimbursement. Startups underappreciate the less tangible costs of implementing AI products, says Adam Rattner, deputy general counsel at Akido Labs.

“It creates more work, before it creates savings,” he says. Adding artificial intelligence to a clinical workflow requires staff training, technical implementation, cybersecurity changes, and administrative work. And even though AI is supposed to create more efficiency, it can add further burden, requiring doctors to take more appointments, review more work, and do additional care coordination, says Rattner.

“At first glance, it looks almost like a cost center, even when it improves care,” he says.

Liability

Another reason health systems may be avoiding clinical AI tools is liability, says Rattner.

“What happens if the AI flags something and the physician doesn't see it or ignores it? The physician is responsible,” he says, adding that doctors and health systems are also on the hook if AI misses something and the doctor relies on it. 

And the liability issue is not unique to clinical AI, he says. Revenue cycle management, though less considered, holds the same risks. “If the AI was prompted to up-code and commit fraud, who's liable? Is it the health system that implemented the AI? Is it the programmers of the AI? It's novel,” he says. “As a lawyer, it scares the heck out of me.”

Already, the Federation of State Medical Boards, which advises state medical licensing bodies, has issued guidance saying that doctors are responsible for the AI they use, as well as for any related harm the technology causes.

Interesting legal theories are emerging in the courts that may ultimately answer the question of how much liability sits with AI developers versus doctors. This week, Pennsylvania sued Character.ai for having a chatbot falsely represent itself as a licensed medical professional. Character.ai is a companion chatbot, not designed for use in clinical settings. But the lawsuit opens the door to a discussion about whether AI can and should be licensed the way doctors are.

Another lawsuit, against OpenAI, argues that AI is a product and therefore subject to product liability laws when its use leads to demonstrable harm.

While these lawsuits make their way through the courts, the general assumption is that doctors, the humans in the loop, will be liable for the mistakes of AI. And that may be its own barrier to adoption.
