Part 3 of our conversation with Kelly Cochran, Research Director at FinRegLab

As AI systems in financial services grow increasingly sophisticated—from traditional machine learning to generative AI and agentic systems—the question of guardrails becomes critical. In this episode, Kelly Cochran breaks down what responsible AI deployment really means in credit underwriting and financial services.

In This Episode:

Kelly explains two essential categories of AI guardrails: protecting data throughout its lifecycle and ensuring analytics models perform as expected. While machine learning models for credit underwriting have established transparency tools, generative AI models present entirely new challenges.

We explore the fundamental differences between ML models trained on curated financial data and large language models trained on vast swaths of the internet. Kelly discusses the transparency challenges of non-deterministic models—systems that might give slightly different outputs for the same inputs—and the serious implications of AI "hallucinations" in financial contexts where consistency is paramount.

The conversation turns to practical safeguards, particularly human-in-the-loop approaches. Kelly shares insights on the delicate balance required: keeping human reviewers engaged without becoming overly reliant on or skeptical of AI recommendations. We discuss how these safeguards must evolve as systems mature, balancing thoroughness with efficiency.

Perhaps most exciting is Kelly's vision for personal financial agents—AI assistants that could democratize access to quality financial planning. Currently, only 40% of U.S. adults work with a financial planner, dropping to just 20% among low-to-moderate income households. AI agents could provide personalized, accurate financial guidance to millions who can't afford traditional advisory services. From day-to-day expense management to long-term goal planning—even filing taxes—these tools could transform financial inclusion.

But Kelly emphasizes the critical importance of getting this right. When serving financially vulnerable populations, malfunctioning AI agents could make situations worse, not better. This requires rigorous testing, thoughtful design, and appropriate guardrails to ensure reliability.

Kelly also offers practical advice for consumers looking to improve their credit access, highlighting the growing use of cash flow data by lenders as an alternative or supplement to traditional credit scores.

Guest Bio: Kelly Cochran is Research Director at FinRegLab, a nonprofit research organization fostering data-driven dialogue about financial innovation. Her work focuses on ensuring technological advances in financial services benefit all consumers equitably and safely.

Resources Mentioned:

Host: Deepti Kalghatgi, Chief Curator at IDAA Hub and VP of Sales & Partnerships at Cogniquest AI

The IDAA Hub Podcast explores the intersection of AI, finance, and healthcare, featuring conversations with industry leaders driving intelligent automation in enterprise operations.

Episode Timestamps:
00:00:24 - What guardrails do complex AI systems need?
00:00:40 - Two types of guardrails: data and analytics
00:01:18 - Difference between ML models and generative AI
00:02:22 - Non-deterministic models explained
00:02:59 - The hallucination problem in financial services
00:03:42 - Transparency challenges with GenAI
00:04:02 - Human-in-the-loop strategies and limitations
00:15:14 - Personal financial agents: game-changing opportunity
00:15:50 - Access to financial planning statistics
00:17:25 - Financial stability and economic participation
00:18:03 - Closing thoughts

Subscribe: Never miss an epis