
At a radiology AI conference, a clinician described a simple interface choice: an agree/disagree button for giving feedback on AI diagnoses. It looked clean and efficient. But it rested on an assumption that is fundamentally untrue: that clinicians experience a binary internal state when reviewing AI outputs. In reality, what they experience is uncertainty, context, fatigue and gut instinct. And when you force all of that into a binary choice, you don't just create a poor experience. You distort the data training the model.

In this episode, consultant psychiatrist Dr Lia Ali argues that behaviour isn't a surface layer in health technology — it's infrastructure. Drawing on experimental psychology and her work at NHS England, she makes the case that most health AI is built for a version of human cognition that doesn't exist. She calls this the fictional binary human, and she thinks it's one of the biggest unspoken risks in health technology today.

Problems Worth Solving is brought to you by Healthia, the collaborative service design consultancy for transformation in health, care and public services.

Find out more about our work at healthia.services.