Description

Dr. Kelly Cohen is a Professor of Aerospace Engineering at the University of Cincinnati and a leading authority on explainable, certifiable AI systems. With more than 31 years of experience in artificial intelligence, his research focuses on fuzzy logic, safety-critical systems, and responsible AI deployment in aerospace and autonomous environments. His lab’s work has earned international recognition, with students winning top global research awards and building real-world AI products used in industry.

In episode 190 of the Disruption Now Podcast, πŸ‘¨β€πŸ”¬ Dr. Cohen explains:

What explainable AI really means for clinicians
How transparent models improve patient safety
Strategies to reduce algorithmic bias in healthcare systems
Real examples of XAI in diagnostics & treatment

This episode is essential viewing for tech leaders, AI researchers, data scientists, clinicians, and anyone interested in ethical, trustworthy AI in medicine.

πŸ“… CHAPTERS / TIMESTAMPS

00:00 Introduction β€” Why XAI in Healthcare
02:15 Kelly Cohen Bio & Expertise
05:40 What Explainable AI Actually Is
11:20 Challenges in Medical AI Adoption
16:50 Case Study: XAI in Diagnostics
22:10 Reducing Bias in ML Models
28:35 Regulatory & Ethical Standards
33:50 Future of Explainability in Medicine
39:25 Audience Q&A Highlights
44:55 Final Thoughts & Next Steps

πŸ’‘ Q&A SNIPPET

Q: What is explainable AI?
A: Explainable AI refers to systems where decisions can be understood, traced, and validated β€” critical for safety-critical applications like aerospace, healthcare, and autonomous vehicles.

Q: Why is black-box AI dangerous?
A: Without transparency, errors cannot be audited, responsibility is unclear, and humans become unknowing test subjects.

Q: What is insurable AI?
A: Insurable AI is AI that has been tested, quantified for risk, and certified to the point where insurers are willing to underwrite it β€” creating real accountability.

πŸ”— RESOURCES & HANDLES

Dr. Kelly Cohen LinkedIn:
πŸ”— https://www.linkedin.com/in/kelly-cohen-phd

Mentioned Concepts:
βœ” Explainable AI (XAI)
βœ” Model interpretability
βœ” Algorithmic bias & fairness

🎧 About This Channel

Disruption Now makes technology accessible and human-centric. Our mission is to demystify complex systems and open conversations across domains that are often hard to grasp β€” from politics to emerging tech, ethics, civic systems, and more β€” so every viewer can engage thoughtfully and confidently. We disrupt the status quo.

πŸ”— Follow & Connect
πŸ‘€ Rob Richardson
Founder, Strategist, Curator

X (Twitter): https://x.com/RobforOhio
Instagram: https://instagram.com/RobforOhio
Facebook: https://www.facebook.com/robforohio/
LinkedIn: https://www.linkedin.com/in/robrichardsonjr

🌐 Disruption Now

Human-centric tech, culture & conversation
YouTube (Core Channel): https://www.youtube.com/channel/UCWDYBJSzBoqgCd1ADPVttSw
TikTok: https://www.tiktok.com/@disruptionnow
Instagram: https://www.instagram.com/disrupt.art/
X (Twitter): https://twitter.com/DisruptionNow
Clubhouse: https://www.clubhouse.com/@disruptionnow

πŸ“… MidwestCon Week

The Midwest’s human-centered tech & innovation week β€” Sept 8–11, 2026 in Cincinnati, Ohio πŸŒ†
MidwestCon Week brings together builders, policymakers, creators, and learners for multi-sector conversations on technology, policy, and inclusive innovation.

Official Website: https://midwestcon.live/
Instagram: https://www.instagram.com/midwestcon_/
TikTok: https://www.tiktok.com/@midwestcon

πŸ”” Subscribe for

Deep, human-centric tech exploration
Conversations breaking down barriers to understanding
Event updates including MidwestCon Week
Culture, policy, and innovation insights

#DisruptionNow #TechForAll #MidwestConWeek

Music Credit: Lofi Music HipHop Chill 2 - DELOSound