Doctors have always had mixed feelings about patients who show up informed. For the last two decades, as millions of people have turned to the Internet for answers, the worry has been “Dr. Google.” Now, it’s AI, particularly tools like ChatGPT.

I’ve seen a growing number of clinicians claim that patients who use AI tools before their appointments make visits longer and more difficult. One physician on LinkedIn went as far as to call it an “AI tax.” While some agreed with him, others noted that AI generally upskills patients, making conversations more focused rather than more difficult.

This concern is understandable, in a system where clinicians are overwhelmed and pressed for time. But the claim itself isn’t supported by evidence, and limiting patient access to AI, even implicitly, moves care in the wrong direction. 

The “AI tax” idea does not hold up when you look at the research, and withholding high-quality AI tools from patients only reinforces a more paternalistic model of medicine. As a longtime healthcare investor and public health advocate, I’d argue that a better approach is to use AI to help both clinicians and patients prepare, communicate, and make decisions together more effectively.

The “AI tax” has no evidence behind it

As of 2025, there are no peer-reviewed studies showing that AI-prepared patients extend visits or make them less productive. None.

What we do have are early evaluations of digital tools that help patients prepare before their visits: for example, electronic pre-visit questionnaires and intake systems that ask patients to list priorities or symptoms in advance. A systematic review of 49 studies found that 38 reported these tools as effective in improving patient-centered care and patient–provider communication. And a recent qualitative study found that patients who used a digital pre-visit tool felt better prepared for their appointments and believed it helped shape the conversation. While more rigorous research is needed, these findings challenge the idea that informed, prepared patients inherently take up extra clinician time.

If anything, patients who take the time to understand their condition are often easier to treat. They ask clearer questions. They have better recall of prior symptoms. They’re more likely to follow through on treatment. Clinicians who already embrace shared decision-making tend to welcome this preparation.

The idea of an “AI tax” is less about the technology and more about the long-standing pressure to fit meaningful care into too little time. A well-run primary care visit is expected to fit within a 15-minute window, including chart review, history, physical exam, documentation, counseling, ordering tests, updating medications, and answering questions. When that system is stretched to its limit, anything new can feel like a burden. But that is a problem with system design, not with informed patients.

AI could improve visit quality if patients had the right tools

On The Heart of Healthcare podcast, I recently interviewed Zach Ziegler, co-founder and CTO of OpenEvidence, which uses large language models to interpret medical studies. He explained why the company currently restricts access to clinicians: “We really don’t want to make physicians' lives harder,” he said. Many clinicians, he noted, get frustrated when they spend precious minutes unwinding misunderstandings from online research before they can start the visit.

I understand that concern. OpenEvidence is building its core user base and doesn’t want to alienate clinicians. But keeping high-quality, evidence-grounded AI tools out of patients' hands creates a different problem. If the best tools are locked behind clinician verification, patients will continue to use general chatbots that hallucinate, oversimplify, or have no guardrails. That deepens the information gap rather than shrinking it.

We should be aiming for the opposite. Patients deserve access to accurate, well-sourced information. And clinicians deserve visits where patients arrive prepared in a way that helps the conversation rather than derails it. AI can support both sides if we design it well.

Clinician concerns reflect system strain, not patient behavior

Even clinicians who are receptive to patient preparation recognize how the system shapes this tension. In another recent interview on my podcast, Dr. Holly Urban, VP of BD and Strategy at UpToDate, who helped launch its clinician-facing AI tool, described the discomfort some doctors feel when patients come in with their own research, calling it a very paternalistic approach. “AI has an opportunity to help democratize care,” she said. “If we can close the information gap between patient and physician and move closer to real shared decision-making, that’s a good thing.”

She added an important caveat: it depends on the quality of the information. Bad information makes visits harder. Good information makes them easier. The problem isn't patients who ask questions; it's the uneven quality of what they find.

This is why access matters. Clinician-grade AI tools are trained to minimize hallucinations and cite evidence. Consumer AI tools have different objectives and fewer safeguards. If AI is going to support shared decision-making, patients should have access to tools designed for accuracy, not engagement.

The real risk is a two-tier system of medical information

AI is becoming the default interface for information-seeking. If we continue down a path where clinicians use one set of tools and patients are left with another, we risk creating a two-tier system:

  • Patients get general-purpose AI that may be unreliable.

  • Clinicians get high-quality, evidence-based tools that patients can’t access.

This divide can widen mistrust. And it undermines the idea that patients should be active participants in their own care.

Patients will always search for information between appointments. The question is not whether they will use AI, but which AI they will use. If we restrict access to the safest, most accurate tools, we shouldn’t be surprised when patients show up with printouts or phone screenshots that pull clinicians into a time-consuming back-and-forth.

We should give patients access, with thoughtful guardrails

Patient-facing AI does not need to (and should not) look like the tools clinicians use. Physicians spend years learning clinical terminology, evidence grading, and the shorthand that comes with medical training. Their tools are built for that environment. Patients need something different: clear explanations, plain language, and guidance that helps them ask better questions, not replicate the work of a clinician.

A patient tool should draw from the same level of rigor seen in provider-facing systems like Doximity, OpenEvidence, or UpToDate’s tools, but the output must be written for someone who has not spent a decade in medical school and residency. That means accessible language, practical context, and prompts that support preparation rather than overwhelm. It should also:

  • explain uncertainty in a way that is honest but not alarming

  • spell out tradeoffs that matter to real people

  • avoid final or authoritative diagnostic statements

  • encourage patients to share the information with their clinicians

  • cite its sources so both sides can see where the information came from

These expectations are no harder to apply for patients than they are for clinicians. The technology is ready for this kind of product. We're already hearing incredible stories of patients using AI to navigate their cancer treatment path or to diagnose themselves with rare conditions. In one notable case, a mother used ChatGPT to identify her son's tethered cord syndrome after watching him throw massive tantrums that eased only with daily, repeated doses of Motrin. More than 17 doctors failed to recognize the problem, despite many visits to the ER. So the boy's mother turned to AI, and a neurosurgeon later confirmed the diagnosis.

What we have not yet embraced is the idea that patients deserve information that is both accurate and understandable. Until we do, the divide between what clinicians have access to and what patients can use will only grow wider.

Stop blaming informed patients

Online health searching is already nearly universal: 79% of U.S. adults say they are likely to look online for answers to health questions, and most U.S. adults view AI-generated health information as useful and somewhat reliable.

The fear of an “AI tax” says more about the constraints of the current system than about patients' capabilities. When clinicians are burned out and visits are rushed, any extra minute feels expensive. But informed patients are not the cause of that problem, and restricting their access to reliable information won’t solve it.

Instead, we should reduce the documentation burden, expand team-based care, update reimbursement for communication, and give clinicians better tools. At the same time, we should support patients who want to play an active role in their care.

AI should not widen the gap between clinicians and patients. It should help close it.

Halle Tecco, MBA, MPH, is a healthcare investor, professor at Columbia Business School, and author of the book Massively Better Healthcare.
