How would you feel if your doctor told you, “Hold on, let me confirm that with ChatGPT”?

It’s already happening among health professionals via tools including OpenEvidence, Doximity and UpToDate, as well as (informally) ChatGPT. And the reverse is happening too: patients are using ChatGPT to ask questions about their symptoms or disease before talking to a doctor, and to get analysis of their lab work or imaging.

The approach marks a shift in a world where physicians rolled their eyes when Dr. Google walked in the room, knowing the information could be wrong. At a recent event for clinicians in digital health, one panelist commented that it used to be considered lazy for a physician to look something up on UpToDate. Now, it’s considered lazy not to leverage AI.

Dr. David Rhew, Global Chief Medical Officer at Microsoft, said AI is forcing important questions about which doctors should be leveraging it, and for what purposes. Doctors can make mistakes, but it’s also early days for the technology, which remains prone to errors and fabrications. “We tend to forget that doctors are human. There is a huge variability in how doctors care for patients,” Rhew told Second Opinion.

Now, as pitches for agentic AI tools find their way into every corner of healthcare, from back-end support and administrative tasks to patient care and maintenance, experts are weighing how to approach patient-facing tools. In its current form, the industry lacks a few key foundations: trust, a widely accepted framework, and training for all stakeholders. Recently, Stanford and Harvard Universities teamed up to help overcome one barrier: validating AI tools in healthcare. But there’s still a ways to go.

“We’re out of early adolescence right now, with these tools, maybe even earlier than that,” said Harlan Krumholz, Director of the Yale New Haven Hospital Center for Outcomes Research and Evaluation. Krumholz noted there’s still an adoption curve around how to use the tools correctly. “They’re kind of being used in the way you would use a search, but they are better than a search engine.” It’s also not yet a given that humans are being “augmented” by AI, or that the combination of the two is the most accurate and effective. Studies have found that communication barriers, trust issues, ethical concerns and poor coordination can hinder that collaboration. That is starting to resolve: newer studies are finding that in some specialties, though not all, AI plus human is the best combination.

Another recent study found that 66% of adults have low trust in their health systems to use AI responsibly, and more than half said they don’t trust their health system to ensure that an AI tool would not harm them.

So how can we overcome this trust gap – as a society and within the industry?

Second Opinion spoke to several other experts about what the guiding principles of patient-facing agentic AI should look like. Five emerged. Here’s what the experts had to say.

Guiding principles

Build trust 

The AI trust gap in healthcare exists on both the patient side and the clinical side. And the industry has only one shot to get it right, because once trust in the technology is lost, it will be a hard fight to win it back, especially among an already-skeptical clinician population.

Amigo.ai CEO Ali Khokhar said companies have to do for healthcare what Waymo has done for autonomous driving. What made Waymo’s strategy so effective wasn’t just its transparency; it was the incredibly careful approach the company took, including countless simulated runs before its autonomous vehicles hit the streets. In healthcare, building that trust will take time, and a lot of research. Along these lines, it was a physician, Dr. Jonathan Slotkin, who went viral in recent weeks for digging into the data and concluding that Waymo’s improvement in safety is more than incremental: “It’s categorical.” In the same vein, physicians will adapt when presented with compelling enough data. The profession is grounded in the scientific method, and adaptation is also necessary in the face of massive physician shortages.

“I think AI doctors are more similar to self-driving cars than any other analogy I could give you,” said Khokhar, noting that he started his own company to bring Waymo-like thinking to healthcare. “I think of self-driving cars as the only place where...the cost of failure is somebody can die,” Khokhar said.

Specialty-based training

Each tool needs to address the variations in disease-specific and specialty-specific care. A one-size-fits-all approach relegates any tool to the equivalent of a search engine, rather than AI that genuinely augments doctors. Again, as with Waymo, that means endless hours of virtual simulation training and rigorous checks for hallucinations and biases. Some experts worried the tools could reverse real-world progress and amplify existing biases in medical care, biases the industry has only just begun to address after decades.

“Medicine is deeply contextual. What’s relevant in radiology looks very different from what’s needed in cardiology,” said Taha Kass-Hout, GE HealthCare’s Global Chief Science and Technology Officer. He believes the long-term vision for the industry is a multi-agentic AI platform where specialty-trained agents collaborate to ensure the most valuable and appropriate clinical care is given.

Include physicians

Not just in the build-out of the tools, but also in connecting patients and physicians to the information being sought and shared. This can close one of the largest gaps in digital health to date: tools that stay outside the exam room. Solutions should integrate with digital health records, giving doctors visibility into their patients’ needs. That also helps doctors stay actively involved in their patients’ care, rather than falling off the radar once the patient leaves the clinic. Khokhar from Amigo.ai stressed that close integration and collaboration with physicians and their existing workflows is key.

Manmeet Kaur, Executive in Residence at the Regenstrief Institute and a Healthcare Entrepreneur, said integration is everything. “A lot of the patient monitoring wave of digital technology limitations we’re facing is not being integrated with the physician’s care,” she explained. And without that – “It can become noise, it can become less relevant,” Kaur said.

Training for all stakeholders

Whoever is going to use these tools, whether a patient or a health professional, should be fully trained on their limitations and the best ways to use them. That can prevent harm, and it ensures that hospital executives and other leaders who are not in touch with patients also understand how the technology is being used. Some hospitals in other parts of the world are building AI at the center of their health system rather than tacking it on, and that requires system-wide training. Similarly, health executives and professionals need to think about how to integrate the product and get the most out of the underlying information.

Will Morris, a Physician and Executive, said it’s key that these tools are used to inform patients and help them make decisions, not for self-diagnosis. “Let’s not forget what we had before…the interweb. We had missing information. You had information asymmetry.”

When patients are empowered and doctors are supported, with additional information at their fingertips, it can strengthen the relationship and the treatment process, Morris said. That will only become reality if the tools are trained on authoritative sources and provide the best information to all sides, without hallucinations or biases, he explained.

Regulations, malpractice, liability

There are mixed views on how to create an industry-wide framework to regulate and monitor the build-out of these tools. Should there be a validating body of sorts? What do malpractice insurance and liability look like once medical professionals begin to rely on these tools? Those questions need answering. Some experts want a validating body; others believe it would create more red tape and barriers, slowing innovation.

Ultimately, the experts all agreed that there’s no going back. 

Stephen Klasko, Executive in Residence at General Catalyst and a former Health System Executive, told us that consumers have reached a “breaking point” with healthcare’s fragmentation. According to Klasko, AI can play a role in integrating the care they receive within the four walls of a health system, and outside of it:

“The pandemic accelerated digital health adoption,” he said. “But also exposed the chaos of all these Lego pieces, these digital health tools.” 

Are we missing any key guiding principles that should govern the use of patient-facing AI? Reach out to us and let us know. We’d love to hear from you!
