Health data and AI: UK public opinion on health data sharing and public trust in AI
New NDORMS focus groups reveal conditional public support for using NHS data in artificial intelligence when benefits, safeguards, and ethics are made clear
Learn how public trust, patient data privacy, and ethical AI in medicine shape the future of sharing medical data for research in the UK
What You'll Learn:
• Why UK public opinion on health data sharing for AI is strongly tied to visible NHS-led public benefit and not just abstract promises
• How clear explanations of risks, safeguards, and outcomes can dramatically increase support for sharing medical data for research and AI development
• Under what conditions participants were more willing to back NHS-led, non-commercial AI projects versus for-profit tech firm involvement
• What focus groups said about NHS control, profit-sharing, and faster care as prerequisites for using patient data in commercial AI tools
• How resources like UK Biobank have enabled thousands of studies while reporting zero re-identification incidents—and why that matters for public trust
• Practical principles for designing ethical AI in healthcare that align with patient expectations on privacy, consent, and transparency
• How findings from a BMJ Digital Health and AI study can guide policymakers, hospital leaders, and researchers in communicating about data use
About the Guest:
This episode features researchers involved in a new NDORMS study published in BMJ Digital Health & AI, who work at the intersection of health data science, medical ethics, and public engagement. Drawing on in-depth UK focus groups, they translate public concerns and expectations into practical guidance for health systems, regulators, and AI developers. Their work centers public voices in debates about NHS data and artificial intelligence to inform safer, more trusted innovation.
Episode Content:
00:00 - Introduction: Why health data and AI need public trust
04:15 - Background: NHS data, AI in healthcare ethics, and recent policy debates
10:30 - Inside the NDORMS study: How the UK focus groups were designed and who took part
17:45 - Conditional support: When people back NHS-led, non-commercial AI projects
24:20 - The commercial question: Public views on for-profit tech firms using NHS data
31:40 - Consent, control, and safeguards: What “trustworthy” data sharing looks like
39:10 - UK Biobank as a case study: Scale, impact, and zero reported re-identification incidents
46:25 - Turning insights into action: Practical guidance for policymakers, clinicians, and AI developers
54:00 - Future directions: Public engagement, regulation, and the next wave of ethical AI in medicine
Health data and AI are reshaping the NHS—but only if the public is on board. This episode dives into new research on UK public opinion on health data sharing, drawing on in-depth NDORMS focus groups that explore what makes people willing—or unwilling—to share their medical records for AI research.
You’ll hear how participants responded when potential benefits were made concrete, such as better diagnostics, faster care, and more efficient NHS services. The conversation unpacks why many people express strong support for non-commercial, NHS-led AI projects aimed at clear public good, while showing far more skepticism toward sharing data with for-profit tech companies.
We break down the conditions that increased support: robust privacy safeguards, meaningful consent, transparent governance, ongoing public oversight, and guarantees that the NHS retains control over how data and resulting AI tools are used. The discussion also examines how expectations shift when financial profit is involved, and why some participants want patients or the NHS to share in that value if their data underpins commercial tools.
The episode highlights UK Biobank as a large-scale proof of concept for secure health data use—enabling thousands of research papers with zero reported re-identification incidents—and what that example does and does not resolve in public debates about trust. Throughout, we connect these findings to broader questions in AI in healthcare ethics, patient data privacy, and the responsibilities of health systems and regulators.
Whether you’re working in digital health, designing AI tools for medicine, setting NHS data policy, or simply a patient curious about how your records might be used, this episode offers a clear, research-based guide to what the UK public actually says it needs in order to trust sharing medical data for research and AI.