
Your guide to how policymakers are regulating AI in healthcare

We’ve compiled all the major U.S. laws and policies so you don't have to

This edition of the newsletter was made free for all readers by Transcarent, which is bringing generative AI to benefits navigation, clinical guidance and care delivery. Check out WayFinding, the next generation of navigation. Thanks team! 

If you had to summarize the job of state and federal regulators shaping AI policy for the healthcare sector in one sentence, it would be this: balance the promise of artificial intelligence while preventing potential harm. As the American Hospital Association (AHA) has pointed out, one of the big challenges with regulating AI is that it’s not monolithic, so regulation will be most effective when it’s specific to the unique risks posed by the technology in question. The AHA has urged Congress to regulate AI in a manner similar to how it regulates software. At the same time, there are already some laws and regulations that govern aspects of AI, where new regulation may not be necessary or appropriate. We forget, given all the buzz about generative AI, that AI has been used in health care for more than a decade now without much oversight.

All of this work is in motion as we speak across the government and its various agencies and offices. Each of these groups, ranging from Congress to the U.S. Food and Drug Administration (FDA) to state lawmakers and consumer protection agencies, is looking at the technology from a different angle, some focusing exclusively on health care and others more broadly. That’s also what makes it complicated – there are a lot of moving pieces to keep tabs on.

Randi, a co-author of this piece, has a running tracker with a handful of colleagues at Manatt that she updates on a quarterly basis, plus an infographic that shows the states that have introduced or passed bills specifically addressing healthcare and AI. More than half of states have done one of those two things, which demonstrates how important it is to keep tabs on the laws and regulations as they exist now and provides a glimpse into where they are heading. 

A quick note before we dive in: this policy landscape is still evolving in real time, but we plan to do our best to keep this piece updated over time. So let’s start with where we see the health-tech industry building in AI, which should give us a strong sense of where healthcare policymakers are likely to focus.

Where Startups are Building

Where we see the most activity is in a few areas, and each of these comes with a different level of risk:

  • Back office: Anything that touches on tasks that aren’t visible to consumers but make a provider office run smoother or enable service delivery, whether that’s processing claims, coding, billing, and so forth. Revenue cycle management is a particularly hot area for AI companies, and we’ve seen more than a dozen pop up in the past few years.

  • Front office: Scheduling, messaging, and anything that touches customer/patient service. There are definitely companies playing in this space, and a lot of need, given that providers of all types and sizes are looking for ways to trim costs. The most well-known companies in this space are supporting ambient documentation for clinicians.

  • Testing and cleaning: There are companies looking to use AI to “clean” data, replacing traditional methods that are slow, expensive, and narrowly focused. That includes standardizing and harmonizing data and finding errors, so that the data user knows what is in the data and can have higher trust in it for research as well as clinical applications. Other companies are using AI to test customers’ data sets and algorithms for bias and performance. Examples here include Dandelion AI and Cornerstone AI.

  • Aiding diagnosis: There’s the kind that supports clinicians in making diagnosis or treatment decisions. And then there are patients taking these tools into their own hands. We’re already seeing some prominent examples of patients and their caregivers using large language models to come up with a differential diagnosis.

  • Pharma/life sci: There’s been a lot of talk about how the pharmaceutical value chain will be disrupted by AI, and about its tremendous value if it can shorten the time it takes for drugs to reach the market. The key areas where we’re seeing activity include research and development, commercialization, operations and clinical trials. As Paul Hudson, the CEO of Sanofi, put it in an opinion column for Stat News, the real promise lies in “better decision intelligence,” which in theory could translate into better medicines at the right time for patients. The true applications for AI in pharma are still in development, but the industry seems to be moving forward - with some degree of hesitation - to find out what’s possible within the confines of a highly regulated industry. We are inclined to agree with Stat reporter Casey Ross that it can speed up the development of new drugs, but probably won’t cure cancer anytime soon.

A lot of the discussion about AI seems to suggest that it’s a “catch-all solution” for every problem in health care. This is hyperbolic, in our opinion. Instead, we find that there are specific things that AI can do well, and areas where it’s less than reliable (at least right now - and don’t worry, we’ll get into that).

Whether it’s generative AI - the most advanced form of AI - or its foundations such as deep learning and machine learning (which are related but not one and the same), what the technology does well is identify patterns within data, including imaging data. It’s also good at summarizing data into shorter, more accessible outputs, and at predicting future events based on historical data. Based on that data, it can provide recommendations or suggestions, as well as translate one data type into another for clinicians and patients alike.

Clear-cut examples of applications where we’ve already seen AI companies building include:

  • Reviewing an X-ray and identifying a nodule for further analysis by a radiologist;

  • Using historical heart failure readmission rates and clinical data to predict the risk of future hospital readmission (see the sketch after this list);

  • Converting audio dictation into a structured summary, as well as a plain language interpretation for a patient. 
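
To make the readmission example above a bit more concrete, here is a minimal, hypothetical sketch of the kind of model a vendor might build: a classifier trained on historical clinical features that outputs a probability of readmission. The features, data, and model choice below are synthetic assumptions for illustration only, not any particular company’s approach; a real system would be trained on actual heart failure admissions and rigorously validated before any clinical use.

```python
# Minimal sketch: predicting 30-day readmission risk from historical data.
# All features and labels here are synthetic and purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical features, e.g. age, ejection fraction, prior admissions, creatinine
X = rng.normal(size=(1000, 4))
# Synthetic label: 1 = readmitted within 30 days, 0 = not readmitted
y = (X @ np.array([0.4, -0.8, 0.9, 0.5]) + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = LogisticRegression().fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted probability of readmission

print(f"Held-out AUROC: {roc_auc_score(y_test, risk):.2f}")
```

Note that the output is a risk score for a clinician to interpret, not an automated decision - which fits the “tool to supplement clinical practice” framing we return to below.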

As we previously mentioned, venture-backed companies are currently at work (and raising gobs of money in the process) building the next set of applications they can sell to large hospital systems to perform many of these tasks, even as the regulatory future remains uncertain.  

Second Opinion recently discussed how Abridge and Nuance, which turn doctor/patient conversations into structured clinical notes, are both selling well within academic medical centers, along with a handful of their competitors. One CMIO told Chrissy on a call recently that he suspected the technology would cut clinicians’ documentation time by as much as 50 percent. That’s an ideal pain point to be solving right now, given the massive clinician burnout problem we’ve been seeing for decades. And that explains the traction, given how hard it is to sell into health systems.

Beyond that, we’re seeing companies like Hippocratic AI raise tens of millions to build a staffing marketplace where companies can hire generative AI agents to respond to non-clinical questions that come in from patients. And there’s an increasing number of companies building tools to screen for disease using super subtle signals that are often missed by clinicians. 

What the research tells us

There’s a preponderance of evidence that AI can do a decent job summarizing clinical information and spitting out a potential diagnosis that a human clinician may not have considered. There’s real value in that, so we do see a future where an AI chatbot could be a tool that clinicians use for complex cases. One notable case involved a young boy who saw 17 doctors over a three-year period, with no answers. As his symptoms accelerated, his frustrated mother dumped all the information she had at her disposal into ChatGPT, and it made a suggestion that turned out to be the correct one: a rare condition known as tethered cord syndrome.

That case made headlines, and we believe we’ll see a lot more like it. Missed and delayed diagnoses remain a major problem, harmful for patients and a major cause of medical malpractice lawsuits. Patient safety studies have indicated that diagnostic errors are implicated in 1 out of 10 patient deaths - and it’s possible the number is even higher. Doctors can be the very best in their specialty or craft and still find value in a tool like AI that can detect potential clues in information they may have glossed over.

But we don’t see a path ahead where AI replaces a human physician. Recent studies have also found cases where the diagnosis generated by AI, or its response to a patient inquiry, was wrong or incomplete and could have caused serious patient harm. A Mount Sinai study found that LLMs did a subpar job of mapping patient illnesses to diagnostic codes. Likewise, a Mass General Brigham study from April of this year found safety errors in an LLM that was tasked with responding to patient questions. And AI does not always make physicians more efficient: a University of California, San Diego study found that the use of an LLM did not save clinicians time.

Again, that’s why we see AI as a tool to supplement clinical practice, and not a wholesale replacement of clinicians. And even as it assists clinicians, we still have work to do to ensure it doesn’t add yet more work to their collective plates. 

We also do not think an AI-solved empathy gap is quite as good as it sounds, despite some recent research suggesting otherwise. One widely discussed study indicated that patients preferred responses generated by AI chatbots over those written by human clinicians 4 to 1. But as the physician Jennifer Lycette pointed out, much of that comes down to the fact that the health care system doesn’t give human clinicians much time to respond to the deluge of messages they get.

So the responses that patients receive now may be scrawled out in the few minutes clinicians have to eat their lunch while sitting in front of a computer. Humans are capable of far more empathy, but that means the system needs to make time for them (and find a way for them to get paid) to respond to patients who reach out. It also means clinicians’ “free” time should not be filled with merely seeing more patients per hour while the AI writes out responses for them.

What regulators are doing so far - and plan to do in the future

Let’s first touch on activity at the federal level. The full list is exhaustive, so we’ll highlight the items most relevant to health-tech insiders, which are worth keeping tabs on as this work progresses. Consider this your handy worksheet and summary:

The White House

The Biden Administration issued an Executive Order about nine months ago asking agencies to complete a series of actions to promote AI safety and protect Americans. In April, the White House issued an update outlining some of the work completed thus far, ranging from protecting national security by identifying potential security vulnerabilities to clarifying nondiscrimination requirements in health programs and activities. Much of the work related to health care involves strategies to maximize safety and effectiveness while protecting patients from potential harms. There would need to be congressional action for us to see more comprehensive law around AI; there is currently a roadmap for AI policy in the Senate led by the Bipartisan Senate AI Working Group, but it is not specific to healthcare. We’re also seeing the White House encourage agencies to provide funding for AI-related research - and we have heard from academic medical centers that funding is critical, as AI-driven work is expensive and resource intensive. How Trump, if elected, would encourage AI development and govern AI use is still an open question, but the candidate has signaled an intention to keep regulation to a minimum.

CMS and DOJ

There’s recently been a lot of concern about the potential for using AI to make coverage decisions in Medicare Advantage (MA), the program through which private insurers offer Medicare benefits to seniors. It’s worth reading the Stat News investigation into how AI algorithms can be used to deny medically necessary care to patients in need. Last year, CMS issued a final rule in which it commented on how MA plans may use AI to make coverage decisions, and it underscored this recently in an FAQ. There’s also pending litigation from the DOJ over the alleged use of AI to deny MA claims.

OCR

Where OCR is most focused (at least right now) is on ensuring that uses of AI in healthcare do not discriminate. That includes taking steps to prohibit “covered entities,” including doctors’ offices, health systems and digital health companies that accept Medicare and Medicaid, from using AI in a manner that discriminates on the basis of race, age and sex, among other factors. The final rule defines “patient care decision support tools” very broadly, in a way that includes advanced AI technologies; covered entities using those tools must identify risks of discrimination and mitigate them.

FDA

The FDA has been the agency most vocal about regulating AI, including by issuing several non-binding guidance documents and informal statements. How much authority the FDA may exercise over AI depends on whether the AI technology is itself considered a “device,” is incorporated into a device’s operation, or is being used to develop or evaluate the efficacy of drugs and biological products. Most recently, the FDA stated its commitment to coordinate among its various internal divisions (see Artificial Intelligence & Medical Products: How CBER, CDER, CDRH, and OCP are Working Together) and to work with other federal agencies to develop and implement a harmonized approach to AI regulation. To date, the FDA has taken several notable actions to articulate its approach. For instance, it issued a discussion paper on its “Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD),” which it followed with its 2021 “AI/ML SaMD Action Plan.” Together, these documents demonstrate the Agency’s longstanding and continuing commitment to support innovative regulatory frameworks governing medical device software and other digital health technologies. The FDA has also published five additional documents addressing machine learning and AI, and it plans to release AI drug development guidance later in 2024. That guidance will likely expound on the FDA’s plan to develop and implement a “flexible risk-based regulatory framework” for drug development, as it has indicated in earlier guidance.

The FDA has also signaled that it is prioritizing transparency for AI-enabled medical devices, to ensure the public understands how the AI is developed and used, how it performs, and the underlying logic of the decisions and actions it informs; it has proposed guiding principles to enhance that transparency. Relatedly, the FDA is actively exploring ways to measure and mitigate bias and discrimination in the development and use of AI. Absent additional federal legislation expressly authorizing the FDA’s role in regulating AI, the FDA Commissioner has proposed (with mixed industry, political, and public reaction) looking outside normal regulatory channels to establish “assurance labs” through which well-established third parties (e.g., academic institutions and health systems) can vet and monitor AI tools in partnership with developers, deployers, and regulators.

ONC

The ONC’s final rule includes first-of-their-kind federal requirements for AI technology used in patient decision support tools in health care. To be clear – this final rule only applies to AI technology that is part of certified health information technology (think of an off-the-shelf electronic medical record system sold by the major players, like Epic and athenahealth) and to any technology company that wants to become ONC-certified in the future. The rule primarily requires transparency and risk management from developers of AI technology, including documenting source attribution information. The rule doesn’t apply to homegrown provider AI tools, but in theory it will set a certain level of expectation across the industry, motivating others in turn to share that information.

What are states doing?

There’s major variation in how the states are thinking about AI - a lot of states are simply creating task forces or other groups to study AI and decide how to address it (as of March 31, 2024 there were about 46 bills on this topic, and as of today, 10 have passed). Randi notes that two states passed AI laws this year, and they are quite different. Utah’s law - a consumer protection law that imposes restrictions on, among other companies and individuals, “regulated occupations,” a category that includes 30 different types of health care providers - is the first state law directed specifically at AI. It means regulated occupations must disclose the use of generative AI in communications with the end user (a.k.a. the patient). There’s also an important Colorado law, passed later this year, that imposes requirements on developers of “high risk” AI (which covers many current healthcare AI use cases) and will go into effect in 2026, if not further modified. In an unusual move, the Colorado governor seems to have begrudgingly signed this bill, noting strong objections to its current form because it could stifle innovation in the state. That law requires companies that develop AI, and those that use it, to inform people when an AI system is being used, to post information on their websites, and to report to the Attorney General if they know there was discrimination. We’re also keeping tabs on several proposed California laws, including AB 3030, which would require physicians and other provider types to disclose the use of AI.

There is also activity at the state agency level - for instance, a number of state insurance departments have adopted the NAIC’s AI Model Bulletin.

What are the downsides to AI?

There are plenty of reasons to be concerned, which policymakers are aware of. Here’s our shortlist of just a few:

  • Bias: There have already been studies showing the problem of bias in AI-based models, and the impact it’s already having. This bias exists across health care, whether or not we’re using AI-based tools. And obviously that bias is going to exist in the data generated by health care services that is used to train the AI. There’s also often a lack of diversity in the data being used to build the models. This has been a problem since long before the recent explosion of generative AI. But the difference here is that generative AI can continue to reinforce the bias by “learning the bias” inherent in the dataset. That’s the major reason why it could get worse without thoughtful mitigation strategies - which may be as simple as disclosing the bias to the end user so they can proceed accordingly, or, in other cases, rebuilding the model (or not using it at all) to eliminate or reduce the bias (see the sketch after this list).

  • Manipulation: It’s looking increasingly likely that AI agents will become prevalent across our lives. There’s growing concern amongst legal experts and technologists that these agents will absorb intimate details about our lives, including our health care, for the purposes of targeted manipulation. We’re about to enter a new era of tracking.

  • Fabricated results: We know that AI will occasionally make stuff up that can appear extremely real, even to those who consider themselves knowledgeable. This is particularly concerning in health care, where workers may not be aware that the AI technology is prone to these hallucinations and may rely on the output without questioning it - in part because we keep being told that AI is smarter than humans. There have even been cases where the technology produced thorough-looking papers filled with citations that seemed legitimate, but a careful fact-check revealed the titles were entirely fabricated. And if we have to fact-check or question the AI, doesn’t that actually slow us down and make us less efficient in some cases?

  • Overestimation of how powerful the technology is: One of Chrissy’s favorite quotes from a health system CMIO who’s been closely watching the space is this: “AI is being overhyped when it comes to its five-year trajectory, but underhyped if you consider the impact over the next decade or longer.” This is spot on, and also makes it challenging to regulate this area because it’s still evolving and we have yet to see AI’s true potential at work. 

  • Loss of jobs: There are more than 20 million people currently working health care jobs in the U.S. alone, making health care one of the largest sources of employment in the country. Certain jobs will almost certainly not be replaced by AI, because they require specialized knowledge or a set of hands on the patient. We’re also skeptical that AI will replace physicians. But we are also seeing estimates from banks like Goldman Sachs that AI could affect the equivalent of some 300 million full-time jobs globally, and undoubtedly some of those will be in health care - most likely reducing the headcount of back office or front office staff.
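
To make the mitigation point in the bias bullet above a bit more concrete, here is a minimal, hypothetical sketch of one of the simplest checks: comparing a model’s sensitivity across demographic subgroups before deciding whether to disclose the limitation, rebuild the model, or not use it at all. The group labels, predictions, and data are invented for illustration; real-world bias audits are far more involved.

```python
# Minimal sketch: a simple subgroup performance check for an AI model.
# Group labels, predictions, and data are synthetic and purely illustrative.
import numpy as np
from sklearn.metrics import recall_score

def subgroup_sensitivity(y_true, y_pred, groups):
    """Return sensitivity (true positive rate) for each subgroup."""
    return {
        g: recall_score(y_true[groups == g], y_pred[groups == g])
        for g in np.unique(groups)
    }

rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, size=500)             # ground-truth outcomes
y_pred = rng.integers(0, 2, size=500)             # model predictions
groups = rng.choice(["group_a", "group_b"], 500)  # a demographic attribute

rates = subgroup_sensitivity(y_true, y_pred, groups)
gap = max(rates.values()) - min(rates.values())
print(rates, f"sensitivity gap: {gap:.2f}")
# A large gap would be a flag to disclose, retrain, or retire the model.
```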

Bottom line: 

Health care is a highly regulated industry, which makes it even more challenging for AI companies to find their footing. There aren’t yet many concrete laws in place specific to health AI, in part because regulators are still in learning mode (see the number of task forces and study bills discussed above). States should consider authorizing the creation of guidance documents, which can be more flexible in responding to the evolution of AI.

One of the problems we keep hearing about again and again - and which is covered in this article - is that existing regulation isn’t well suited for all types of AI. For instance, it requires medical device companies to go through a reauthorization process if a device changes or develops in some novel way, but AI is designed to learn on its own and evolve based on new data. It reminds us of some of the early issues with regulating software, only it’s even more complex with AI.

The regulation simply wasn’t developed for such a fast-moving technology, which is why it’s such a work in progress and challenging to track.

As a reminder, we plan to update this piece once new policies and laws are introduced. So please consider this your first installment, as this topic will continue to be a high priority for Second Opinion in the future. And thanks again to Transcarent, our sponsor! If you’re interested in sponsoring a piece to benefit the health-tech community, reach out to [email protected] for more information.