When your machine learning model makes a decision that affects someone's medical treatment, financial security, or legal rights, "the algorithm said so" isn't good enough. Stakeholders need to understand why a model makes the decisions it does, and in high-stakes environments, interpretability can be the difference between AI adoption and AI rejection.
In this episode, Serg Masis joins Dr. Genevieve Hayes to share practical strategies for building interpretable machine learning models that earn stakeholder trust and accelerate AI adoption within your organisation.
You'll learn:
Guest Bio
Serg Masis is the Principal AI Scientist at Syngenta, a leading agricultural company with a mission to improve global food security. He is also the author of Interpretable Machine Learning with Python and co-author of the upcoming DIY AI and Building Responsible AI with Python.
Links