Happy weekend! I write to you as I’m returning from the West Coast, where I witnessed the renaissance of San Francisco - in large part due to a boom in AI. A few friends in tech shared with me that the new trend isn’t just to have an AI therapist, but also an AI boyfriend or girlfriend. There’s something very alluring to people about someone who agrees with everything they say, and a relationship without drama or pushback or conflict. Meanwhile, I saw the headlines that Australia has taken a very different step and is restricting social media usage for teenagers. Part of me wonders if we are racing towards a very dystopian future, or the opposite. Are we searching for new ways to reclaim our humanity and connection to each other?
Another topic I’ve been mulling, both as I wear my investor hat and writer hat: How do we make it cool again to practice medicine? My physician friends have noted the rise of side hustles and concierge practices and their friends leaving the profession altogether for fields like startups and consulting. We rightfully blame the system. But what education and tools do we need to provide to the next generation of clinicians to thrive within the system? Or better yet, advocate to change it? On that note, our primer piece on revenue cycle management (how the money flows in healthcare) is out this week.
Thank you to all those who volunteered to help me with the market map. Stay tuned!
Four questions with Aartik Sarma, Assistant Professor of Medicine at the University of California, San Francisco

Aartik Sarma, MD, MAS is an Assistant Professor of Medicine at the University of California, San Francisco, where he takes care of patients in the intensive care unit at UCSF Parnassus. He is an NIH-funded translational scientist who uses AI/ML and multiomic tools for computational precision medicine in the ICU.
1. Everyone has gone gaga for LLMs. You've used lots of different technologies in your research that may be cheaper and faster for research teams to use, including machine learning. What else is out there that the healthcare industry should know about?
Language models are great at integrating vast amounts of information, tying together concepts, and recognizing patterns that you might not have seen. I’m very excited about the potential for AI to accelerate medical research and improve how we deliver health care. But they aren’t always the right tool for the job. LLMs are data- and compute-intensive approaches to a problem, and they work better with some kinds of data than others. There’s a lot of interesting research on LLM interpretability, but they still function mostly as black boxes and generate non-deterministic results, which can be challenging in research and clinical settings. Ironically, they often struggle with purely quantitative questions, and LLMs now often call other programs to do things like arithmetic.
A lot of my machine learning research asks which patients might respond to a specific treatment. The art of computational medical research is understanding which tool is right for a specific question. More focused tools can often get you to an answer faster and in a more explainable fashion. Smaller models may also be able to run on your phone instead of relying on a remote server.
There’s a wide range of statistical learning tools. Sometimes we use a supervised approach, where we provide the computer with examples of data from patients with and without disease. For example, you can provide a computer with thousands of images of skin lesions that are labeled by experts as cancerous or non-cancerous based on biopsy data. You can then train a convolutional neural network to learn which image features predict cancer. Alternatively, you might want to figure out if there are subtypes of a disease that we haven’t yet discovered - in that case, you could use unsupervised learning tools like k-means clustering that can learn patterns within your data. Each approach has its strengths and weaknesses. Sometimes a simple linear regression model is all you need.
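To make the supervised/unsupervised distinction concrete, here is a toy sketch in Python using scikit-learn. The data is synthetic (two clouds of points standing in for patient features), and a logistic regression stands in for the convolutional neural network in the imaging example - the point is only the contrast between learning from expert labels and discovering structure without them.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "patients": two feature columns, two underlying groups.
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 2))
group_b = rng.normal(loc=4.0, scale=1.0, size=(100, 2))
X = np.vstack([group_a, group_b])

# Supervised: we also supply expert labels (think biopsy-confirmed
# cancer yes/no), and the model learns to predict them from features.
y = np.array([0] * 100 + [1] * 100)
clf = LogisticRegression().fit(X, y)
print("training accuracy:", clf.score(X, y))

# Unsupervised: no labels at all. K-means partitions the same data
# into k groups on its own - candidate "subtypes" to investigate.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
```

The supervised model can only find what the labels encode; the clustering can surface groupings nobody thought to label, which then need clinical validation.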
2. You're a pulmonologist and critical care medicine doctor. How did you learn all of this? I can't imagine it was in medical school.
Traditional medical education would not get you the necessary background, you're right. I studied engineering before medical school, and then completed a master's in clinical research and an NIH-funded postdoctoral fellowship after I finished my clinical training.
That said, I don’t think you necessarily need to have a quantitative research background to help shape medical AI. There’s a lot of value in deep domain expertise, and a typical ML engineer isn’t going to be able to realize when a model output doesn’t make sense. There are many companies trying to recruit physicians to help label training data. I think that’s a piece of the puzzle for improving model performance, but I also think there’s a lot of work to be done to develop better model evaluations that fully capture the complexity of clinical medicine.
3. Do we need to be investing our VC dollars outside of LLMs? You're involved with Scrub Capital and seeing a lot of deals, but the lion's share of the interest and venture dollars these days seems to be flowing to generative AI.
Yes. LLMs are exciting, but I think there are also some other really interesting applications of AI/ML for drug development and precision medicine - some of these use tools that fall under the umbrella of “generative AI,” but I think they’re a little different from what people have in mind with LLMs.
Only 10% of drugs selected for clinical testing manage to get FDA approval. This means drugs take longer to get to patients, and, since trials cost billions of dollars to complete, it also contributes to the cost of medications. There are many companies working on tools to improve the drug discovery pipeline, from pre-clinical chemistry to identifying biomarkers that predict a good response to treatment. For example, Google DeepMind’s AlphaFold can predict protein structures, which makes it possible to predict how a drug will interact with a protein. There are also companies like Xaira and Tempus that are building platforms for precision medicine and drug discovery. Oncology has generally set the pace for developing precision treatments. I’m biased, but I think there are many opportunities to help patients with lung diseases, which are common and have a huge impact on quality of life, and I hope to see more investment and innovation there.
There are some big challenges as you move away from digital health to bio/pharma. You need investors around the table who see the value of biotech innovation. The feedback loops are much slower, so everyone needs to be patient. You also need data, and the ecosystem is very fragmented. There are hospitals with clinical information on their patients, pharma companies with biobanks, and small academic labs with their own data. Those silos can make it challenging to apply the same tools that have worked so well in other domains for medical research.
It’s not an impossible problem to solve. I work with a team of critical care data scientists across the country that’s facilitating multi-center collaboration by creating a shared data standard, and it’s letting us tackle some big and challenging problems in the ICU. There are also some companies like Truveta and Picnic that are working on aggregating and harmonizing data.
4. How would you see this reach patients in practice?
I’m optimistic that we’ll start seeing more tools implemented in the next few years. We’re already seeing AI roll out across the spectrum of health care. Google just announced a personal health coach powered by Gemini. I think of patients mostly in the context of the hospital, but so much of health care happens outside clinical settings. The same technology that powers these consumer health applications may also enable better chronic disease management at home.
I suspect we’ll also start seeing more AI embedded into the EMR. One potentially interesting application is co-pilots that are a backstop for clinicians. One of my colleagues from residency, Rob Korom, worked with the team at OpenAI to build a system to provide recommendations for clinicians in a health network in Nairobi, and they found that it significantly decreased the rate of diagnostic and treatment errors. While that’s an encouraging result, I think clinicians have also been burned by numerous false positive alerts from poorly calibrated algorithms, so it’s important to rigorously test and evaluate these tools as they are deployed.
In the longer term, I think AI/ML tools will help identify patients who are likely to respond to targeted treatments. Hospital systems are already working on embedding disease subtyping algorithms into their medical record systems. At the end of the day, these are diagnostic tools, and we’re going to need to run clinical trials to determine if they really improve outcomes. That’s going to take time and resources. It’s also going to require a nimble regulatory framework from the government that can balance risks and benefits. My hope is that, by the end of my career, we will be able to deliver much more personalized and effective care to every patient.
News
by Annalisa Merelli
The FDA removed the “black box” warning on hormone replacement therapy
The news: The warning for cardiovascular disease, stroke, and breast cancer that has accompanied estrogen for hormone replacement therapy in menopause since 2003 is coming off.
The history: The warning came after the hugely influential Women’s Health Initiative study, whose long shadow persisted even after subsequent studies found the actual risk of disease from estrogen treatment to be much lower than initially believed. Because of the study’s influence, many doctors continue to advise against hormone therapy during menopause, and women don’t know they have safe options to treat their symptoms.
The moment: Menopause continues to be in the zeitgeist, and this FDA move may unlock more treatment offerings.
Why it matters: Half the battle is the science. The other half is the communication of the science. Women may still continue to be swayed against safe therapies by social media influencers and platforms with an agenda. We need healthcare professionals and other experts to flood the zone to explain in simple terms what the science shows, and what it does not.
The battle for Metsera is over
The news: Novo Nordisk withdrew from the bidding battle to acquire weight-loss drug maker Metsera. Pfizer will go ahead with the acquisition for $10 billion, more than double its initial offer.
What the market said: Novo’s stock rose 3.8%; Pfizer stock rose 1%; Metsera… tumbled, losing 16%.
What Novo said: “We will return to work and focus on our own promising pipeline,” said Mike Doustdar, Novo’s CEO.
Why it matters: Industry experts don’t seem surprised by the outcome given that Pfizer may have had the upper hand from the start with Novo facing antitrust concerns.
United Healthcare will no longer cover remote patient monitoring
The news: Starting in January, United will stop paying physicians who review health data collected remotely on patients with chronic conditions, STAT reported.
What United Healthcare said: RPM (remote patient monitoring) “is not reasonable and necessary due to insufficient evidence of efficacy” for a wide swath of conditions including high blood pressure, chronic obstructive pulmonary disease, depression, diabetes, and more, according to the policy.
The exceptions: Heart failure monitoring and hypertension in pregnancy.
Why it matters: It’s a tough moment for remote patient monitoring — a space that attracted ample venture investment in light of the theoretical cost savings opportunities.
OpenAI is eyeing consumer health
The rumor: OpenAI is considering building its own health tools, such as a health assistant or a health data aggregator.
A key hire: Nate Gross, co-founder of Doximity, was brought on to lead its AI health plans.
The potential: At HLTH, Gross said a large portion of OpenAI’s 800 million weekly users come with questions about health.
Why it matters: The company has been hiring consumer talent as well as health care industry experts, so this move makes a lot of sense. The big question is monetization. Will consumers pay for a service like this? Or will there be a health system play?
Good Q
“How much damage did the federal shutdown do to telehealth?” asks Mario Aguilar. The answer is quite a bit: Medicare visits conducted remotely dropped by 24%, and Medicare Advantage ones by 13%.
Deals
$23 million for Digitail: The AI-driven veterinary practice management service closed a Series B venture funding round in a deal led by Five Elms Capital, bringing its total funding to $37 million. The funds will be used to accelerate product innovation, including automating routine tasks, helping practices see more patients, and improving patient experiences.
$22 million for Evidium: The computational knowledge company closed a Series A venture funding in a deal led by Health2047 (the AMA venture studio) and WGG Partners.
Amae Health raised $25 million: The startup, focused on severe mental illness, closed a Series B venture funding in a deal led by Altos Ventures. The funds will be used to open Amae clinics nationwide, advance its proprietary AI-driven care platform, and support research into schizophrenia, bipolar disorder, and treatment-resistant depression.
Sovato raised $26 million: The first and only remote robotic surgery and procedure platform closed a Series B round in a deal led by Beringea. The company has raised $41 million so far.
House Rx raised $55 million: The health tech company, which focuses on making specialty medications more accessible, closed a Series B round of funding. Its total funding stands at $100 million.
$26 million for Affect Therapeutics: The gamified mental health startup closed a Series B round. The company has raised a total of $49 million to date.
Want to support Second Opinion?
🌟 Leave a review for the Second Opinion Podcast
📧 Share this email with other friends in the healthcare space!
💵 Become a paid subscriber!
📢 Become a sponsor
