News Roundup: What you shouldn’t miss from the last week
With Annalisa Merelli and Meredith Nolan
Rest in peace, Alex Pretti. What so many of us witnessed was a heartbreaking tragedy. Nurses represent the best of us. Pretti took care of critically ill veterans in the intensive care unit. His last words were “Are you okay?” Halle Tecco, an entrepreneur and investor in our space, said it best this morning when she reminded us to honor Alex by refusing to let our compassion be extinguished.
Stay safe out there. Take a moment. And be kind to each other.
With that, here’s a rundown of what made the headlines in healthcare this week.
OpenEvidence wants to develop medical superintelligence
The news: The growth of OpenEvidence, an AI platform helping doctors find clinical references, has been exponential. The company is now valued at $12 billion (see more in Deals).
Some numbers: In December, the company supported 18 million queries from doctors. A year before, it was 3 million.
Isn’t Big AI doing health now? "I respect the hustle. Our view is that healthcare can't be a side hustle. OpenAI has the unseat Google division, the unseat Apple division, the unseat Nvidia division, and the unseat OpenEvidence division. We have one division. We wake up every morning thinking about healthcare. We go to sleep thinking about healthcare. A bet on OpenEvidence is a bet that focus wins," Daniel Nadler, founder and CEO of OpenEvidence, told Fierce Healthcare.
POV: This is the power play, and it’s one that several of the ambient scribing and documentation companies will also be building towards. This will be a race to the finish line, but there will be multiple winners. Doximity is the other big player in this space, and is also singularly focused on healthcare.
One Medical launched an AI assistant
The news: Amazon’s primary care venture is now granting its patients access to an AI-enabled assistant with access to test results, medical history, and more. It can help book appointments or provide medical information — for instance, explain lab results.
The idea: While others take a "trust me, I’m AI!" approach, One Medical says it takes a "trust us, we’re clinicians" approach.
The deets: Complex matters are escalated to a human team, while simpler issues are handled by AI. The product is part of One Medical, which costs $99/year for Amazon Prime members and $199 for other members.
POV: This is the middle-ground position that many companies will adopt in the years to come. The human remains in the loop, but the AI takes on more and more tasks under supervision. Full autonomy for AI in care delivery will take some time, outside of the odd pilot project.
A reminder on upcoming webinars:

| Webinar Topic | Timing | Registration |
|---|---|---|
| Unpacking the Data on the Telehealth Visits Patients Flocked to This Year | Jan 28, 2026 12 PM ET / 3 PM PT | Anyone can sign up here |
| Breaking Point: How Soaring Healthcare Costs are Reshaping Employer Strategies | Feb 9, 2026 11 AM ET / 2 PM PT | Subscribers can sign up here |
| Second Opinion x TytoCare: Unpacking CMS' $50B Investment into Rural Healthcare | Feb 5, 2026 12 PM ET / 3 PM PT | Anyone can sign up here |
Hospitals are demanding better data safety
The news: Following the lawsuit initiated by Epic against some clients who allegedly exploited access to health data, hospitals are asking officials running national health exchanges to step up their protections. In January, Epic filed a lawsuit against Health Gorilla, a health-tech startup, alleging its network was used to fraudulently access and sell patient records to law firms.
What Epic said: The letter was a “collaboration of the Epic community and was coordinated through the Epic Health Policy Workgroup, which is an informal group of organizations using Epic that meets to develop solutions to policy challenges,” a company spokesperson told STAT.
Who’s involved: At least 63 health systems have signed the letter, all of which are either Epic customers or affiliated with Epic customers.
POV: There are some intriguing legal and policy questions at stake here. Providers are allowed to access health information via the exchanges, but the definition of the term “provider” is getting more and more murky. In an official statement, Health Gorilla said it has been working constructively with Epic, and that the lawsuit represents “monopolistic practices in health information exchange.”
A helpful tool
Peterson Health Technology Institute launched a guide to performance-based contracts for purchasers and digital health companies. PHTI views performance-based contracting as the future, particularly with CMS’s new ACCESS model. But the devil is in the details. What outcomes matter most? And should we trust vendors to measure them? This report focuses on the employer market, where more and more vendors are jumping on the bandwagon.
Funding, Deals, and Launches
$110 million for Zerminali Pediatrics: The multispecialty pediatric group closed a Series A led by Healthier Capital.
$13 million for BrightInsight: The digital health company announced a funding round with participation from Eclipse, General Catalyst, Insight Partners, and others. The funds will advance the company’s AI-enabled medication persistence and adherence solutions.
$29.5 million for Vitsa AI: The maker of automated MRI scanning software closed a Series B round with new health system investors, including Cedars-Sinai Health System, Intermountain Health, and University of Utah Hospital System.
$250 million for OpenEvidence: The maker of the chatbot that supports medical research for clinical evidence closed a Series D led by Thrive Capital and DST Global, which doubled the valuation of the company, now standing at $12 billion.
$220 million to start Healthier Capital: The investment firm, founded in 2023, surpassed its funding expectations as it closed its first fund.
AnswersNow raises $40 million: The maker of the autism care platform closed a Series B to expand its AI-enabled product.
Four Questions with Graham Walker, MD

Graham Walker, MD is an Emergency Medicine Physician and Serial Entrepreneur
1) I recently resurfaced the question of whether AI would replace doctors on LinkedIn (originally posed by Vinod Khosla). As you might expect, I got dozens of responses. You responded that you view AI as built for sycophancy, meaning it tells patients too often what they want to hear versus what they need to hear. Do you view that as something that continues to be a key flaw with AI? And why does that matter so much in medicine?
This is a Goldilocks problem — I think the level of sycophancy is a key flaw for everyone in healthcare, both patients AND doctors.
Fundamental to the practice of medicine is questioning oneself — is my diagnosis right? What could I be missing? Is this the right treatment? The right dosage? Are there any contraindications to my plan?
The problem I’ve seen is that the foundational models right now are way too likely to cheer you on, tell you you’re right, and try to move you along immediately to helpfully suggest “the next step,” not pushing back or questioning you.
The dance of medicine is productive disagreement.
If patients think it’s bad that a doctor interrupts them within 15 seconds — because we’re immediately trying to narrow our differential diagnosis and rule things in or out — GenAI doesn’t even try to interrupt. It doesn’t question or disagree or redirect (unless you specifically prompt it to behave like that).
And similarly with physicians — how much should GenAI agree with the doctor, versus push back and ask them, “Have you thought about an alternative diagnosis?” This is particularly central to my field of emergency medicine: we are almost always thinking about 5 steps ahead in the timeline: okay if the platelets come back as this, I’m going to do that, and if the CT scan is normal, then the next step will be to do an MRI. Or send the patient home. Or do a lumbar puncture, etc.
So yes, the sycophancy is a real issue, and it’s not just in some superficial way like “patients lie” or “doctors aren’t always right.” Agreeableness and disagreeableness are both important facets of medicine, and they’re often important at different times and for different reasons.
2) Pushing on that further, imagine that regulatory and privacy is set aside because we have the right framework for it. What do you think AI can do effectively in care delivery fully autonomously?
I’m talking with Byron Crowe from Doctronic tomorrow for my podcast, so I’ll be specifically asking him this exact same question.
With the right prompting and framework — and fixing the sycophancy piece above — I could see AI helping markedly with what we might call “medical or nursing advice” or “home care advice/coaching” that is unfortunately very lacking for many Americans.
This might be a mix of basic first aid plus high school Health class health literacy, plus basic medical care and care navigation — all of these things seemingly drive a bunch of ER visits. A few examples:
How to manage the flu or a sore throat at home, and reasons I’d want you to come to the ER
When, how, and why to take over-the-counter pain relievers for you, your children, etc and when to make an appointment with your doctor
What types of things the ER may be able to help with, what we might not, and what we definitely will not provide (real examples of requests I’ve gotten include “a PET scan,” “a dose of chemotherapy because I’m visiting out of town,” “couples therapy on a Friday night because my wife and I are getting divorced,” and “a work note putting me on disability for 6 months”)
What’s the normal recovery for an ankle sprain
How to manage a child with a stomach virus
I don’t have a great description for these. Some aren’t necessarily “nursing advice,” and others aren’t “medical advice.” But they’re all in a category of knowledge that is completely intuitive and obvious to doctors and nurses, and is often very lacking in the general population.
Could it potentially do more autonomously? Probably eventually, but I’m not ready to hedge on that topic because in medicine, there’s an exception to and an example that proves every rule. But those are all context-dependent, and what these LLM tools are really bad at right now is requiring sufficient context in order to then take the right action.
3) Everyone in our industry says AI will cut costs, yet the industry also resists change. How do we ensure AI doesn’t just drive costs up, for instance through upcoding?
I actually think the rate of change is the reason that costs will not go down. I am hopeful that we humans will be doing better work — higher quality, higher value — thanks to AI augmentation, but I’m not naive enough to think that it’s a guarantee.
Geoffrey Hinton (whom I deeply respect) famously said 10 years ago that we should stop training radiologists because of AI. What he did not understand or appreciate was the rate of change of our population: older patients often don’t just need twice as much imaging as younger patients; they often need 3 or 4 times more. Our entire population is also undergoing a cultural and expectations shift: turn over every rock, do every test, get every scan.
So while I think AI can certainly assist with and perhaps even automate and take over some chunk of work from human medical professionals, I think that other things are actually changing even faster, and so costs aren’t dropping any time soon.
4) The day that I shared these questions, you were up at 5 am thinking about the ICU team at the Minneapolis VA. Can you tell us more about that from the perspective of an emergency medicine physician?
Healthcare is a team sport. This is nowhere more true than in the ER, but I think the ICU is a close second. Many of us feel like our healthcare teams are a second family because of how tight our bonds grow over time — and how much shared grief, suffering, and strife we all face together as part of our work. I work with nurses who know more about my personal life than some of my friends. And I know more about their own lives and bodies and health in an intimate, deeply personal way because it just comes up over time, and trust, and life.
The physical space was also what I was thinking about a lot. The ICU doctors and nurses in Minneapolis have to go back to that place where they probably spent years together with Alex. I had that same experience when my colleague Brian Lin died suddenly last year. I have memories with Brian in almost every single room of my ER, and it was hard to go to work in that same space and know that we’ll never share it again together.
I definitely can’t do my own job without nurses and pharmacists and social workers and techs and consultants. I literally can’t do much without other people all working together. When one of us is hurt, we are all hurt.
Graham Walker, MD is an emergency physician who practices in San Francisco. He is the co-director of Advanced Development at The Permanente Medical Group, which delivers care for Kaiser Permanente’s 4 million members in Northern California. As a clinical informaticist, he also leads emergency and urgent care strategy for KP’s electronic medical record. He completed his residency training in emergency medicine at St. Luke’s-Roosevelt Hospital Center in Manhattan, and attended medical school at the Stanford University School of Medicine.
Graham is also a software developer and entrepreneur. He created MDCalc and Offcall, two online resources dedicated to helping physicians across the world. In his free time, he created The Physicians’ Charter for Responsible AI, a practical guide to implementing safe, accurate, and fair AI in healthcare settings, and enjoys writing about the intersection of healthcare, technology, AI, and policy on LinkedIn.
Want to support Second Opinion?
🌟 Leave a review for the Second Opinion Podcast
📧 Share this email with other friends in the healthcare space!
💵 Become a paid subscriber!
📢 Become a sponsor