Real-World Foundation Models and Life-saving AI in the Operating Room (Podcast summary)

In the current discourse surrounding AI, the public is inundated with polarized narratives and hype that often obscure reality. In fact, even in the scientific community, it is not always clear what “AI success” really means, given the growing literature on "AI-for-science". There have certainly been some successes, but there is also a long tail of papers where the actual value is much harder to pin down. As a result, the scientific discourse is also not quite where it should be.

There are, however, concrete, high-value situations where computing & AI are effectively driving decisions that provide a great deal of value in real life. As an exemplar, here are some highlights from an inspiring conversation with Dr. Todd Hollon, a neurosurgeon and scientist at the University of Michigan. This is part of a podcast-of-sorts I'll be doing on Computing, AI & Science*.

This is a prime example of what happens when domain expertise meets a "full vertical stack" of computational innovation. The answer: innovation that saves lives! But behind this simple answer, there are many caveats: for AI to have real-world impact, it requires a deep pipeline, institutional expertise, and resources.

Here is a video of some excerpts* from the chat. I highly recommend listening to it. I've also jotted down some highlights below.

The Problem: The Invisibility of Infiltration
The primary challenge in brain tumor surgery, specifically for gliomas, is that tumors do not grow in neat, solid balls. Instead, they put out "fingers" that infiltrate normal brain tissue, making the margins incredibly subtle to identify even under a high-powered microscope.

For over a century, surgeons have struggled with this, as even under ambient light, differentiating normal tissue from tumor is notoriously difficult. The 21st-century status quo relies on multi-million dollar intraoperative MRI suites or fluorescent dyes, yet these tools often lack the sensitivity to detect low-density, clinically significant residual tumors. We are left with a staggering statistic: in roughly 25% of cases, safely resectable tumor is left behind because the surgeon simply cannot see it!

The Vertical Stack: From Optics to Inference
Dr. Hollon and his collaborators built a technology called "FastGlioma," which represents a breakthrough because it integrates a full pipeline.

  1. Advanced Optics: Using Stimulated Raman Histology (SRH), the team can generate high-resolution microscopic images of tissue in the operating room in real time, without time-consuming dyes or chemical staining. (Another nice connection: the technology was operationalized by the University of Michigan!)
  2. The Foundation Model and data pipeline: They developed a foundation model trained on over 4 million images collected over a decade at Michigan and collaborating institutions. Dr. Hollon also makes a clear case, from a data and modeling standpoint, for why you need a foundation model rather than a task-specific classifier**.
  3. Edge Computing: Using an edge device, the model can perform inference in about 10 seconds in the OR, providing the surgeon with a spatial tumor-infiltration score (0 to 1) during the procedure.
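To make the third step concrete, here is a minimal, purely hypothetical sketch of what an intraoperative scoring loop could look like. This is not the FastGlioma code (which I don't have access to); `score_patch` is a stand-in stub for the real neural network, and the tiling, [0, 1] scores, and 0.5 flagging threshold are all illustrative assumptions.

```python
import math
import random

def score_patch(patch):
    """Stub model: squash mean patch intensity into a [0, 1] score.
    A real system would run a trained network on an edge device."""
    mean = sum(sum(row) for row in patch) / (len(patch) * len(patch[0]))
    return 1.0 / (1.0 + math.exp(-(mean - 0.5) * 10))

def infiltration_report(image, patch_size=8):
    """Tile an SRH-style image, score each tile, summarize for the surgeon."""
    h, w = len(image), len(image[0])
    scores = []
    for i in range(0, h - patch_size + 1, patch_size):
        for j in range(0, w - patch_size + 1, patch_size):
            patch = [row[j:j + patch_size] for row in image[i:i + patch_size]]
            scores.append(score_patch(patch))
    return {
        "max_score": max(scores),               # most tumor-like region
        "mean_score": sum(scores) / len(scores),
        "flagged": max(scores) > 0.5,           # crude residual-tumor flag
    }

random.seed(0)
image = [[random.random() for _ in range(64)] for _ in range(64)]
report = infiltration_report(image)
print(report["flagged"])
```

The point of the sketch is the shape of the loop, not the numbers: image in, per-region scores out, and a summary the surgeon can act on within seconds.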

The result is a drop in the risk of leaving residual tumor from 25% down to just 4%. Talk about impact! And Dr. Hollon wants to get that to 0%.


Click on the image to enlarge it (or better, just read the Nature paper).
FastGlioma page   ||    Dr. Todd Hollon's research group

The Human Element: Philosophy and the University of Michigan Environment
Dr. Hollon also spoke (here is the 2nd excerpt from the podcast) about his own background and journey. It is this fertile environment, rich in complementary expertise, that allows a neurosurgery resident to "dive headfirst" into deep learning.

Dr. Hollon’s background is non-traditional; he was a U-M Philosophy major who focused on logic, epistemology, and the philosophy of language before ever touching a line of code. He says that this framework provided the scaffolding for his research.

He credits Michigan’s unique culture of collaboration, which allowed him to "cold email" superstar AI researchers like Honglak Lee and receive a welcoming, career-altering response. It is this proximity, where a neurosurgeon can hold a joint appointment in Computer Science, that can bridge the gap between a "cool demo" and an FDA-cleared medical device***. And of course, he is happy to pay it forward to another student who might cold-email him. Look to Michigan for the difference?

What this means
Dr. Hollon says that the same technique can be brought to many other types of cancers. While the push towards Artificial General Intelligence requires massive, centralized compute, scientific AI can still be pioneered within the halls of a university with modest resources (64 GPUs). You don't have to be at a frontier AI company to make a meaningful impact, but you may need a number of elements of the stack (e.g., optics, AI, computing, expertise) and institutional resources.


* I find myself having some incredible discussions at MICDE every week. We do write about some of these in the magazine, but I thought I'd just save some of the chats in video form for everyone to enjoy. The production could be smoother, but I want it natural and no-frills.

** "AI-for-Science-ers", please note: we don't need a foundation model in most cases. But in some, they matter.

*** Only part of the stack has been FDA-approved as of now, btw.
