Showing episodes and shows of Sid Mangalik

Shows

The AI Fundamentalists
AI governance: Building smarter AI agents from the fundamentals, part 4
Sid Mangalik and Andrew Clark explore the unique governance challenges of agentic AI systems, highlighting the compounding error rates, security risks, and hidden costs that organizations must address when implementing multi-step AI processes.
Show notes:
• Agentic AI systems require governance at every step: perception, reasoning, action, and learning
• Error rates compound dramatically in multi-step processes: a 90% accurate model per step becomes only 65% accurate over four steps
• Two-way information flow creates new security and confidentiality vulnerabilities. For example, targeted prompting to improve awareness comes at the cost of performance. (arXiv, May 24, 2025)
• T...
2025-07-22 · 37 min
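
The compounding-error bullet above is easy to verify with a one-line calculation. The sketch below assumes the simplest possible failure model, where each step succeeds independently with the same probability; the 90% per-step accuracy and the four steps are just the figures quoted in the episode summary.

```python
# Minimal sketch: independent per-step accuracy compounds multiplicatively.
def end_to_end_accuracy(per_step_accuracy: float, n_steps: int) -> float:
    """Probability that every step of a multi-step agent pipeline succeeds."""
    return per_step_accuracy ** n_steps

print(end_to_end_accuracy(0.90, 4))  # 0.6561, roughly the 65% quoted above
```

Extending the same arithmetic, ten steps at 90% per-step accuracy land near 35% end to end.
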
The AI Fundamentalists
Linear programming: Building smarter AI agents from the fundamentals, part 3
We continue with our series about building agentic AI systems from the ground up and for desired accuracy. In this episode, we explore linear programming and optimization methods that enable reliable decision-making within constraints.
Show notes:
• Linear programming allows us to solve problems with multiple constraints, like finding optimal flights that meet budget requirements
• The Lagrange multiplier method helps find optimal solutions within constraints by reformulating utility functions
• Combinatorial optimization handles discrete choices like selecting specific flights rather than continuous variables
• Dynamic programming techniques break complex problems into manageable subproblems to find solutions efficiently
• Mixed integer programming combines co...
2025-07-08 · 29 min
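
As a concrete companion to the first bullet, here is a minimal sketch of a budget-style linear program solved with scipy.optimize.linprog. The two fare options, their costs, and the "comfort points" constraint are invented for illustration and are not taken from the episode.

```python
from scipy.optimize import linprog

# Toy trip-planning LP: choose non-negative amounts of two fare options (x0, x1)
# that collect at least 10 "comfort points" while minimizing total cost.
# linprog minimizes c @ x subject to A_ub @ x <= b_ub, so the >= constraint is negated.
c = [300, 500]                   # cost per unit of each fare option
A_ub = [[-2, -5]]                # -(2*x0 + 5*x1) <= -10  is  2*x0 + 5*x1 >= 10
b_ub = [-10]
bounds = [(0, None), (0, None)]  # both decision variables are non-negative

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(result.x, result.fun)      # optimal amounts and the minimum total cost
```

Restricting x0 and x1 to whole units is what turns this kind of problem into the mixed-integer formulation mentioned in the truncated last bullet.
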
The AI Fundamentalists
Utility functions: Building smarter AI agents from the fundamentals, part 2
The hosts look at utility functions as the mathematical basis for building AI systems. They use the example of a travel agent that doesn't get tired and can be scaled indefinitely to meet increasing customer demand. They also contrast this structured, economics-based approach with the problems of using large language models for multi-step tasks.
This episode is part 2 of our series about building smarter AI agents from the fundamentals. Listen to Part 1 about mechanism design HERE.
Show notes:
• Discussing the current AI landscape where companies are discovering implementation is hard...
2025-06-12 · 41 min

The AI Fundamentalists
Mechanism design: Building smarter AI agents from the fundamentals, Part 1
What if we've been approaching AI agents all wrong? While the tech world obsesses over large language models (LLMs) and prompt engineering, there's a foundational approach that could revolutionize how we build trustworthy AI systems: mechanism design.
This episode kicks off an exciting series where we're building AI agents "the hard way": using principles from game theory and microeconomics to create systems with predictable, governable behavior. Rather than hoping an LLM can magically handle complex multi-step processes like booking travel, Sid and Andrew explore how to design the rules of the game so that even self-interested agents pr...
2025-05-20 · 37 min

The AI Fundamentalists
Principles, agents, and the chain of accountability in AI systems
Dr. Michael Zargham provides a systems engineering perspective on AI agents, emphasizing accountability structures and the relationship between principals who deploy agents and the agents themselves. In this episode, he brings clarity to the often misunderstood concept of agents in AI by grounding them in established engineering principles rather than treating them as mysterious or elusive entities.
Show highlights:
• Agents should be understood through the lens of the principal-agent relationship, with clear lines of accountability
• True validation of AI systems means ensuring outcomes match intentions, not just optimizing loss functions
• LLMs by the...
2025-05-08 · 46 min

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 2
Part 2 of this series could have easily been renamed "AI for science: The expert's guide to practical machine learning." We continue our discussion with Christoph Molnar and Timo Freiesleben to look at how scientists can apply the supervised machine learning techniques from the previous episode in their research.
• Introduction to supervised ML for science (0:00): Welcome back to Christoph Molnar and Timo Freiesleben, co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box"
• The model as the expert? (1:00): Evaluation metrics have profound downstream effects on all model...
2025-03-27 · 41 min

The AI Fundamentalists
Supervised machine learning for science with Christoph Molnar and Timo Freiesleben, Part 1
Machine learning is transforming scientific research across disciplines, but many scientists remain skeptical about using approaches that focus on prediction over causal understanding. That's why we are excited to have Christoph Molnar return to the podcast with Timo Freiesleben. They are co-authors of "Supervised Machine Learning for Science: How to Stop Worrying and Love Your Black Box." We will talk about the perceived problems with automation in certain sciences and find out how scientists can use machine learning without losing scientific accuracy.
• Different scientific disciplines have varying goals beyond prediction, including control, explanation, and reaso...
2025-03-25 · 27 min

The AI Fundamentalists
The future of AI: Exploring modeling paradigms
Unlock the secrets to AI's modeling paradigms. We emphasize the importance of modeling practices, how they interact, and how they should be considered in relation to each other before you act. Using the right tool for the right job is key. We hope you enjoy these examples of where the greatest AI and machine learning techniques exist in your routine today.
• More AI agent disruptors (0:56): Proxy from London start-up Convergence AI. Another hit to OpenAI, this product is available for free, unlike OpenAI's Operator.
• AI Paris Summit - What's next for...
2025-02-25 · 33 min

The AI Fundamentalists
Agentic AI: Here we go again
Agentic AI is the latest foray into big-bet promises for businesses and society at large. While promising autonomy and efficiency, AI agents raise fundamental questions about their accuracy, governance, and the potential pitfalls of over-reliance on automation. Does this story sound vaguely familiar? Hold that thought. This discussion about the over-under of certain promises is for you.
Show notes:
• The economics of LLMs and DeepSeek R1 (00:00:03): Reviewing recent developments in AI technologies and their implications. Discussing the impact of DeepSeek's R1 model on the...
2025-02-01 · 30 min

The AI Fundamentalists
Contextual integrity and differential privacy: Theory vs. application with Sebastian Benthall
What if privacy could be as dynamic and socially aware as the communities it aims to protect? Sebastian Benthall, a senior research fellow at NYU's Information Law Institute, shows us how privacy is complex. He uses Helen Nissenbaum's work on contextual integrity and concepts from differential privacy to explain the complexity of privacy. Our talk explains how privacy is not just about protecting data but also about following social rules in different situations, from healthcare to education. These rules can change privacy regulations in big ways.
Show notes:
• Intro: Sebastian Benthall (0:03): Res...
2025-01-07 · 32 min

The AI Fundamentalists
Model documentation: Beyond model cards and system cards in AI governance
What if the secret to successful AI governance lies in understanding the evolution of model documentation? In this episode, our hosts challenge the common belief that model cards marked the start of documentation in AI. We explore model documentation practices, from their crucial beginnings in fields like finance to their adaptation in Silicon Valley. Our discussion also highlights the important role of early modelers and statisticians in advocating for a complete approach that includes the entire model development lifecycle.
Show notes:
• Model documentation origins and best practices (1:03): Documenting a model...
2024-11-09 · 27 min

The AI Fundamentalists
New paths in AI: Rethinking LLMs and model risk strategies
Are businesses ready for large language models as a path to AI? In this episode, the hosts reflect on the past year of what has changed and what hasn't changed in the world of LLMs. Join us as we debunk the latest myths and emphasize the importance of robust risk management in AI integration. The good news is that many decisions about adoption have forced businesses to discuss their future and impact in the face of emerging technology. You won't want to miss this discussion.
• Intro and news: The veto of California's AI Safety Bill (00:00:03): Ca...
2024-10-08 · 39 min

The AI Fundamentalists
Complex systems: What data science can learn from astrophysics with Rachel Losacco
Our special guest, astrophysicist Rachel Losacco, explains the intricacies of galaxies, modeling, and the computational methods that unveil their mysteries. She shares stories about how advanced computational resources enable scientists to decode galaxy interactions over millions of years with true-to-life accuracy. Sid and Andrew discuss transferable practices for building resilient modeling systems.
• Prologue: Why it's important to bring stats back [00:00:03]: Announcement from the American Statistical Association (ASA): Data Science Statement Updated to Include "and AI"
• Today's guest: Rachel Losacco [00:02:10]: Rachel is an astrophysicist who's worked with major galaxy formation simulations for many yea...
2024-09-04 · 41 min

The AI Fundamentalists
Preparing AI for the unexpected: Lessons from recent IT incidents
Can your AI models survive a big disaster? While a recent major IT incident with CrowdStrike wasn't AI related, the magnitude and reaction reminded us that no system, no matter how proven, is immune to failure. AI modeling systems are no different. Neglecting the best practices of building models can lead to unrecoverable failures. Discover how the three-tiered framework of robustness, resiliency, and anti-fragility can guide your approach to creating AI infrastructures that not only perform reliably under stress but also fail gracefully when the unexpected happens.
Show notes:
• Technology, incidents...
2024-08-20 · 34 min

The AI Fundamentalists
Exploring the NIST AI Risk Management Framework (RMF) with Patrick Hall
Join us as we chat with Patrick Hall, Principal Scientist at Hallresearch.ai and Assistant Professor at George Washington University. He shares his insights on the current state of AI, its limitations, and the potential risks associated with it. The conversation also touches on the importance of responsible AI, the role of the National Institute of Standards and Technology (NIST) AI Risk Management Framework (RMF) in adoption, and the implications of using generative AI in decision-making.
Show notes:
• Governance, model explainability, and high-risk applications (00:00:03): Intro to Patrick. His latest book: Machine L...
2024-07-30 · 41 min

The AI Fundamentalists
Data lineage and AI: Ensuring quality and compliance with Matt Barlin
Ready to uncover the secrets of modern systems engineering and the future of AI? Join us for an enlightening conversation with Matt Barlin, the Chief Science Officer of Valence. Matt's extensive background in systems engineering and data lineage sets the stage for a fascinating discussion. He sheds light on the historical evolution of the field, the critical role of documentation, and the early detection of defects in complex systems. This episode promises to expand your understanding of model-based systems and data issues, offering valuable insights that only an expert of Matt's caliber can provide.
In the heart...
2024-07-03 · 28 min

The AI Fundamentalists
Differential privacy: Balancing data privacy and utility in AI
Explore the basics of differential privacy and its critical role in protecting individual anonymity. The hosts explain the latest guidelines and best practices for applying differential privacy to data for models such as AI. Learn how this method also makes sure that personal data remains confidential, even when datasets are analyzed or hacked.
Show notes:
• Intro and AI news (00:00): Google AI search tells users to glue pizza and eat rocks. Gary Marcus on break? (Maybe an X-only break)
• What is differential privacy? (06:34): Differential privacy is a process for sensitive data anonymization that offers each individual in...
2024-06-04 · 28 min
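
To ground the "What is differential privacy?" segment above, here is a minimal sketch of the classic Laplace mechanism for a counting query. This is a textbook illustration rather than a method taken from the episode, and the epsilon value, count, and random seed are arbitrary choices for the example.

```python
import numpy as np

def dp_count(true_count: int, epsilon: float, rng: np.random.Generator) -> float:
    """Release a count with Laplace noise calibrated to the query's sensitivity."""
    # Adding or removing one person changes a count by at most 1 (L1 sensitivity = 1),
    # so noise drawn from Laplace(scale = 1 / epsilon) yields epsilon-differential privacy.
    return true_count + rng.laplace(loc=0.0, scale=1.0 / epsilon)

rng = np.random.default_rng(7)
print(dp_count(128, epsilon=0.5, rng=rng))  # noisy release of the true count of 128
```

The privacy-utility trade-off in the episode title shows up directly in the scale parameter: a stricter (smaller) epsilon widens the noise and degrades the released statistic.
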
The AI Fundamentalists
Responsible AI: Does it help or hurt innovation? With Anthony Habayeb
Artificial Intelligence (AI) stands at a unique intersection of technology, ethics, and regulation. The complexities of responsible AI are brought into sharp focus in this episode featuring Anthony Habayeb, CEO and co-founder of Monitaur. As responsible AI is scrutinized for its role in profitability and innovation, Anthony and our hosts discuss the imperatives of safe and unbiased modeling systems, the role of regulations, and the importance of ethics in shaping AI.
Show notes:
• Prologue: Why responsible AI? Why now? (00:00:00): Deviating from our normal topics about modeling best practices. Context about where regulation plays a...
2024-05-07 · 45 min

The AI Fundamentalists
Baseline modeling and its critical role in AI and business performance
Baseline modeling is a necessary part of model validation. In our expert opinion, it should be required before model deployment. There are many baseline modeling types, and in this episode we're discussing their use cases, strengths, and weaknesses. We're sure you'll appreciate a fresh take on how to improve your modeling practices.
Show notes:
• Introductions and news: why reporting and visibility is a good thing for AI (0:03): Spoiler alert: Providing visibility into AI bias audits does NOT mean exposing trade secrets. Some reports claim otherwise. Discussion about AI regulation in the context of current...
2024-04-17 · 36 min
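
In the spirit of the baseline-modeling discussion above, here is a minimal sketch of one of the simplest possible baselines, a majority-class predictor. The labels and class balance are fabricated for the example; the point is only that a candidate model should have to beat this number before deployment is considered.

```python
from collections import Counter

# Hypothetical, imbalanced labels standing in for training and validation splits.
y_train = ["no_claim"] * 95 + ["claim"] * 5
y_valid = ["no_claim"] * 93 + ["claim"] * 7

# Majority-class baseline: always predict the most common training label.
majority_label, _ = Counter(y_train).most_common(1)[0]
predictions = [majority_label] * len(y_valid)

accuracy = sum(p == y for p, y in zip(predictions, y_valid)) / len(y_valid)
print(f"Majority-class baseline accuracy: {accuracy:.2f}")  # 0.93 without learning anything
```

If a trained model cannot clearly beat 0.93 here, the class imbalance, not the architecture, is the first thing worth revisiting.
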
The AI Fundamentalists
Information theory and the complexities of AI model monitoring
In this episode, we explore information theory and the not-so-obvious shortcomings of its popular metrics for model monitoring, and where non-parametric statistical methods can serve as the better option.
• Introduction and latest news (0:03): Gary Marcus has written an article questioning the hype around generative AI, suggesting it may not be as transformative as previously thought. This is in contrast to announcements out of the NVIDIA conference during the same week.
• Information theory and its applications in AI (3:45): The importance of information theory in computer science, citing its applications in cryptography and communication. The basics of i...
2024-03-26 · 21 min

The AI Fundamentalists
The importance of anomaly detection in AI
In this episode, the hosts focus on the basics of anomaly detection in machine learning and AI systems, including its importance and how it is implemented. They also touch on the topic of large language models, the (in)accuracy of data scraping, and the importance of high-quality data when employing various detection methods. You'll even gain some techniques you can use right away to improve your training data and your models.
• Intro and discussion (0:03): Questions about Information Theory from our non-parametric statistics episode. Google CEO calls out chatbots (WSJ). A statement about anomaly detection as it...
2024-03-06 · 35 min
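
As a minimal illustration of the idea behind the episode above (not necessarily one of the techniques the hosts cover), here is a z-score sketch that flags points sitting far from the mean of a univariate stream. The readings and the 3-sigma threshold are invented for the example.

```python
import statistics

def zscore_anomalies(values: list[float], threshold: float = 3.0) -> list[float]:
    """Return points lying more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    stdev = statistics.stdev(values)
    return [x for x in values if abs(x - mean) / stdev > threshold]

sensor_readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.0, 9.7, 10.4, 10.1, 9.9, 10.0, 42.0]
print(zscore_anomalies(sensor_readings))  # [42.0]
```

On real model inputs this kind of check is usually applied per feature and paired with distribution-level tests, since a single global threshold hides a lot.
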
The AI Fundamentalists
What is consciousness, and does AI have it?
We're taking a slight detour from modeling best practices to explore questions about AI and consciousness. With special guest Michael Herman, co-founder of Monitaur and TestDriven.io, the team discusses different philosophical perspectives on consciousness and how these apply to AI. They also discuss the potential dangers of AI in its current state and why starting fresh instead of iterating can make all the difference in achieving characteristics of AI that might resemble consciousness.
Show notes:
• Why consciousness for this episode? Enough listeners have randomly asked the hosts if...
2024-02-13 · 32 min

The AI Fundamentalists
Upskilling for AI: Roles, organizations, and new mindsets
Data scientists, researchers, engineers, marketers, and risk leaders find themselves at a crossroads: expand their skills or risk obsolescence. The hosts discuss how a growth mindset and "the fundamentals" of AI can help. Our episode shines a light on this vital shift, equipping listeners with strategies to elevate their skills and integrate multidisciplinary knowledge. We share stories from the trenches on how each role affects robust AI solutions that adhere to ethical standards, and how embracing a T-shaped model of expertise can empower data scientists to lead the charge in industry-specific innovations.
• Zooming out...
2024-01-25 · 41 min

The Shifting Privacy Left Podcast
S3E2: 'My Top 20 Privacy Engineering Resources for 2024' with Debra Farber (Shifting Privacy Left)
In honor of Data Privacy Week 2024, we're publishing a special episode. Instead of interviewing a guest, Debra shares her 'Top 20 Privacy Engineering Resources' and why. Check out her favorite free privacy engineering courses, books, podcasts, creative learning platforms, privacy threat modeling frameworks, conferences, government resources, and more.
DEBRA'S TOP 20 PRIVACY ENGINEERING RESOURCES (in no particular order):
• Privado's Free Course: 'Technical Privacy Masterclass'
• OpenMined's Free Course: 'Our Privacy Opportunity'
• Data Protocol's Privacy Engineering Certification Program
• The Privacy Quest Platform & Games; Bonus: The Hitchhiker's Guide to Privacy Engineering
• 'Data Privacy: a runbook for engineers' by Nishant Bhajaria
• 'Privacy E...
2024-01-23 · 54 min

The AI Fundamentalists
Non-parametric statistics
Get ready for 2024 and a brand new episode! We discuss non-parametric statistics in data analysis and AI modeling. Learn more about applications in user research methods, as well as the importance of key assumptions in statistics and data modeling that must not be overlooked. After you listen to the episode, be sure to check out the supplemental material in Exploring non-parametric statistics.
• Welcome to 2024 (0:03): AI, privacy, and marketing in the tech industry. OpenAI's GPT store launch (The Verge). Google's changes to third-party cookies (Gizmodo).
• Non-parametric statistics and its applications (6:49): A solution for modeling i...
2024-01-10 · 32 min
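
Tying back to the user-research applications mentioned above, here is a minimal sketch of a non-parametric comparison using the Mann-Whitney U test from SciPy. The two samples of task-completion times are fabricated for the example and are not data from the show.

```python
from scipy.stats import mannwhitneyu

# Hypothetical task-completion times (seconds) from two design variants in a user study.
variant_a = [12.1, 9.8, 14.3, 11.0, 10.4, 13.7, 9.9, 12.8]
variant_b = [15.2, 16.1, 14.9, 17.3, 15.8, 16.7, 14.4, 18.0]

# Mann-Whitney U compares the samples without assuming either is normally distributed.
statistic, p_value = mannwhitneyu(variant_a, variant_b, alternative="two-sided")
print(f"U = {statistic}, p = {p_value:.4f}")
```

Because the test ranks the observations rather than averaging them, a few extreme task times do not dominate the comparison, which is what makes rank-based methods a natural fit for small user-research samples.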