Showing episodes and shows of Susan Peich

Episodes

The AI Fundamentalists
Principles, agents, and the chain of accountability in AI systems
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.
Show highlights
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by the...
2025-05-08 · 46 min

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2
Part 2 of this series could have easily been renamed “AI for science: The expert’s guide to practical machine learning.” We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply supervised machine learning techniques from the previous episode in their research.
Introduction to supervised ML for science (0:00)
Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box”
The model as the expert? (1:00)
Evaluation metrics have profound downstream effects on all model...
2025-03-27 · 41 min

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That’s why we are excited to have Christoph Molnar return to the podcast with Timo Freiesleben. They are co-authors of “Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box.” We will talk about the perceived problems with automation in certain sciences and find out how scientists can use machine learning without losing scientific accuracy.
• Different scientific disciplines have varying goals beyond prediction, including control, explanation, and reaso...
2025-03-25 · 27 min

The AI Fundamentalists
Complex systems: What data science can learn from astrophysics with Rachel Losacco
Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems.
Prologue: Why it's important to bring stats back [00:00:03]
Announcement from the American Statistical Association (ASA): Data Science Statement Updated to Include “and AI”
Today's guest: Rachel Losacco [00:02:10]
Rachel is an astrophysicist who’s worked with major galaxy formation simulations for many yea...
2024-09-04 · 41 min

The AI Fundamentalists
Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.
Show notes
Governance, model explainability, and high-risk applications 00:00:03
Intro to Patrick
His latest book: Machine L...
2024-07-30 · 41 min

The AI Fundamentalists
Data lineage and AI: Ensuring quality and compliance with Matt Barlin
Ready to uncover the secrets of modern systems engineering and the future of AI? Join us for an enlightening conversation with Matt Barlin, the Chief Science Officer of Valence. Matt's extensive background in systems engineering and data lineage sets the stage for a fascinating discussion. He sheds light on the historical evolution of the field, the critical role of documentation, and the early detection of defects in complex systems. This episode promises to expand your understanding of model-based systems and data issues, offering valuable insights that only an expert of Matt's caliber can provide.
In the heart...
2024-07-03 · 28 min

The AI Fundamentalists
Responsible AI: Does it help or hurt innovation? With Anthony Habayeb
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes
Prologue: Why responsible AI? Why now? (00:00:00)
Deviating from our normal topics about modeling best practices
Context about where regulation plays a...
2024-05-07 · 45 min

The AI Fundamentalists
Managing bias in the actuarial sciences with Joshua Pyle, FCAS
Joshua Pyle joins us in a discussion about managing bias in the actuarial sciences. Together with Andrew's and Sid's perspectives from both the economic and data science fields, they deliver an interdisciplinary conversation about bias that you'll only find here.
OpenAI news plus new developments in language models. 0:03
The hosts get to discuss the aftermath of OpenAI and Sam Altman's return as CEO
Tension between OpenAI's board and researchers on the push for slow, responsible AI development vs fast, breakthrough model-making.
Microsoft researchers find that smaller, high-quality data sets can be more effective for training language models t...
2023-12-07 · 43 min

B2B Content Show: A Podcast About the How, What, and Why of B2B Content Marketing
Mitigating the risks of generative AI w/ Susan Peich
ChatGPT has taken the world of marketing by storm, for better and for worse. And by worse, I'm referring to all sorts of questions about privacy and the accuracy of information, intellectual property rights, deep fake technology, and on and on. In short, AI presents us with incredible and tantalizing possibilities, but also with a lot of risks, many of which we're only just beginning to understand. And if you're a B2B marketer and you're using AI or you're going to use it...
2023-10-24 · 18 min

The AI Fundamentalists
Modeling with Christoph Molnar
Episode 4. The AI Fundamentalists welcome Christoph Molnar to discuss the characteristics of a modeling mindset in a rapidly innovating world. He is the author of multiple data science books including Modeling Mindsets, Interpretable Machine Learning, and his latest book, Introduction to Conformal Prediction with Python. We hope you enjoy this enlightening discussion from a model builder's point of view.
To keep in touch with Christoph's work, subscribe to his newsletter Mindful Modeler - "Better machine learning by thinking like a statistician. About model interpretation, paying attention to data, and always staying critical."
Summary
I...
2023-07-25 · 27 min

SaaS Backwards - Reverse Engineering SaaS Success
Ep. 65 - No longer screaming into the wind about AI Governance, with Susan Peich, Head of Marketing at Monitaur AI
When Susan Peich joined Monitaur AI as their head of marketing earlier this year, the power (and dangers) of AI was still mostly talk amongst data scientists. How quickly things change. With the rise of ChatGPT in recent months, AI's power (and danger) is now at the forefront, causing companies to scramble to figure out how to handle its governance. That’s where Monitaur.ai comes in. They help companies in highly regulated environments such as insurance, healthcare, and financial services to not only leverage the potential of AI and ML but also to mi...
2023-05-05 · 28 min