
What happens when AI looks strong in a paper, but the workflow still isn’t ready?

In DigiPath Digest #40, I reviewed five recent papers across kidney pathology, oral and maxillofacial pathology, glioma biomarker prediction, digital twins in neuro-oncology, and a major European colorectal cancer cohort. One theme kept coming back: good performance is not the same thing as real-world readiness.

We started with kidney biopsies and the challenge of assessing interstitial fibrosis and tubular atrophy, where AI shows promise but still does not fully agree with human readers. That led into a bigger point I keep seeing in digital pathology: our "ground truth" is often based on human interpretation, and human interpretation carries variability of its own.

From there, I looked at AI in oral and maxillofacial pathology, where the field is still early and one major bottleneck is the lack of strong public datasets. Then I discussed a systematic review on adult-type gliomas showing that multimodal models performed better than unimodal ones, which makes sense when you think about how pathologists actually work: we do not diagnose from one input alone.

I also covered a systematic review on digital twins in neuro-oncology. The idea is exciting, but the paper makes it clear that reproducibility, public code, multimodal integration, and external validation are still limiting factors.

And finally, I talked about a paper I really liked: a large European colorectal cancer cohort built across 26 biobanks in 12 countries. That kind of harmonized, quality-checked dataset matters. A lot. Because better AI starts with better data.

In this episode, I discuss:

Resources mentioned:

Papers discussed:   
