Episode Summary:
In this candid episode, Bert and Julianna dive into the often uncomfortable topic of bias in AI. They unpack how algorithms can inherit human prejudices, why even “neutral” data isn’t neutral, and who gets to decide what’s fair in automated systems. Bert brings the tech angle: how models are trained, what’s possible (and what’s hype). Julianna grounds it in people’s real experiences, especially in HR, hiring, and everyday workplace systems.
Whether you’re building AI tools or just wondering how they might impact your job or community, this episode gives you a practical and human-centered look at the biases baked into the systems we trust.
In This Episode:
Why AI isn’t as objective as it sounds
The “garbage in, garbage out” problem in training data
Real examples of biased AI in hiring and finance
Why compliance doesn’t mean ethical
How companies can approach AI responsibly (without BS buzzwords)
What to ask before trusting an algorithm at work
Favorite Quote:
“Bias doesn’t go away when you automate it. It just gets faster.”
Timestamps (for YouTube or chapters):
00:00 – Cold open: “Can AI be racist?”
02:15 – The myth of neutral data
08:40 – HR and hiring tools: built-in bias?
15:10 – Bert’s breakdown of how AI learns
22:00 – Julianna’s “people-first” take on tech trust
28:30 – How to audit and question AI systems
34:00 – What we’d do differently as leaders
Links:
Stephen Wolfram thought piece referenced by Bert: https://writings.stephenwolfram.com/2023/02/what-is-chatgpt-doing-and-why-does-it-work/