
This week, editors at Nature demanded evidence that medical AI tools are creating value for patients. 

“The adoption of artificial intelligence (AI)-powered tools is accelerating rapidly across all layers of healthcare systems,” they write. “Yet evidence that AI tools create value for patients, providers or health systems remains scarce.”

They argue that focusing on AI performance (model calibration, sensitivity, and specificity) is all well and good, but it tells us nothing about whether AI is making patients better or worse. Pharmaceutical companies, for example, must prove their innovations improve patient health before they’re approved and reimbursed. Why not AI?

Putting medical AI through the same gauntlet of tests that drugs go through isn’t a practical option, but the authors say that doesn’t absolve the health care community from developing benchmarks for assessing clinical impact. In creating such a framework, they say health systems should require more robust prospective proof that a model improves the standard of care and that post-model-deployment monitoring should be non-negotiable. And, they say, not all algorithms require the same level of vetting: “the stronger the claim, the stronger the evidence needed to support it.” 

The Food and Drug Administration is thinking about monitoring the impact of medical AI once it hits the market. Last year, the agency took input from the public on measuring and evaluating medical AI in the real world. Those comments have not spurred new guidance or regulation yet. It is also possible that the agency needs more authority from Congress before it can develop a system of post-market review that adequately surveils AI medical products. 

Reserve Your Spot for Upcoming Webinars!

| Webinar Topic | Panelists | Timing |
| --- | --- | --- |
| What will AI do for employer healthcare and benefits? | Nick Reber, Ellen Kelsay, Christina Farr | May 19th, 2026, 3:00 PM (ET) |
| Privacy, AI, and the future of HIPAA with the former founding director of ONC | Jodi Daniel, Christina Farr | June 3rd, 2026, 12:00 PM (ET) |
| Not everyone can access the top 1% of physicians. Will AI change that? | Daniel Stein, Christina Farr, Fred Thiele | June 23rd, 2026, 12:00 PM (ET) |

This week on Lifers!

Christina Farr speaks with Josh Tauber, multi-time COO and founder, and Keaton Bedell, co-founder and CEO of Bridge, about Medvi, a cash-pay GLP-1 company valued at $1.8 billion. The founders discuss the end of the software-as-a-service moat and how to build a sustainable business focused on patient care.

At the same time, general-use AI is embedding itself in the continuum of care. One in four Americans has used publicly available LLMs for health care information or advice, according to a recent poll from Gallup and West Health. Of that group, 14% couldn’t afford to visit a doctor, 16% couldn’t access a doctor, and 21% felt that past providers weren’t taking their concerns seriously. Patients on the lower end of the socio-economic scale, in particular, are starting to rely on these tools for care.

Gallup and West Health estimate that some 14 million Americans may be skipping a trip to the doctor as a result of advice from a publicly available LLM. That could be a good thing if it’s deterring an unnecessary trip to the ER; it’s not so good if AI has talked someone out of getting necessary care. Either way, regulators are apt to stay on the sidelines: the FDA indicated this year that it’s not going to regulate general-use LLMs, even when they’re giving health care advice.

So if AI is now truly the front door of health care, the Gallup-West Health survey suggests, much like the Nature editorial, that we need frameworks for understanding when and how it’s used and how that ultimately affects patient health.

And now onto the news with Annalisa Merelli.   

NEWS:

DEALS:

EARNINGS:

Want to support Second Opinion?
