Description

Artificial intelligence (AI) is entering everyday care, and with it come new questions about malpractice liability. In this episode, we sat down with three national experts shaping how AI liability will evolve: Sara Gerke, associate professor of law at the University of Illinois Urbana-Champaign; David A. Simon, J.D., LL.M., Ph.D., associate professor of law at Northeastern University; and Deepika Srivastava, chief operating officer at The Doctors Company.



They explain how AI could redefine the standard of care, what happens when an algorithm contributes to patient harm, and practical steps physicians can take now to protect themselves — including documentation, communication and clear internal policies.



Check out our October cover story for a deeper look at how AI is reshaping medical malpractice: "The new malpractice frontier: Who's liable when AI gets it wrong?" It is available online at www.medicaleconomics.com/view/the-new-malpractice-frontier-who-s-liable-when-ai-gets-it-wrong-



Music Credits:

Groovy 90s Hip-Hop Acid Jazz by Musinova - stock.adobe.com

Relaxing Lounge by Classy Call me Man - stock.adobe.com

A Textbook Example by Skip Peck - stock.adobe.com



Editor's note: Episode timestamps and transcript produced using AI tools.



0:00 – Opening: The AI malpractice paradox

Why both underusing and overusing AI can create legal risk.



0:13 – Episode setup

Austin introduces the guests and frames the core question: How does AI shift malpractice liability?



1:14 – AI and the standard of care

Gerke explains how jurors already view AI-guided decisions as potentially “reasonable.”



2:07 – Adoption drives legal expectations

Simon outlines how widespread use — not hype — determines when AI becomes mandatory practice.



4:03 – When AI harms a patient

Gerke on physician and hospital exposure today — and surgeons’ skepticism of manufacturer liability.



5:51 – Regulated devices enter the chat

Why manufacturers get pulled in when AI tools behave like medical devices.



6:04 – Device pathways and lawsuits

Simon details the FDA's 510(k), De Novo and premarket approval (PMA) pathways, and how each influences manufacturer accountability.



8:27 – Policy levers

How FDA and state decisions could shift responsibility upstream.



9:25 – Insurance reality check

Srivastava: Physicians still bear primary legal risk since they make the clinical call and sign the chart.



10:29 – Transition: From risk to action

What physicians should actually do before pulling the plug on AI tools.



10:43 – Practical protections

Srivastava’s immediate steps: informed consent, chart review, governance, and patient disclosure.



12:01 – Transparency as defense

How clear communication about AI use strengthens trust and reduces exposure.



13:20 – Training + governance gaps

Keeping workflows tight matters just as much as clinical judgment.



14:02 – Vetting tools and contracts

Simon: If an AI vendor claims a level of accuracy, ask for validation data and contractual liability protections.



15:30 – Labeling matters

Gerke calls for transparency labels for AI devices, modeled on food labels.



16:27 – The “learned intermediary” burden

Even with better labels, liability flows back to the physician.



17:01 – P2 Management Minute

Quick interlude from Keith Reynolds.



17:54 – Rapid compliance playbook

Five habits that will hold up in court — whether you follow or override AI suggestions.



18:40 – Closing

Why AI is here to stay — and why documentation discipline must evolve with it.



19:10 – Outro

Credits, subscription reminder and link to October cover story.