In this episode we talk about unlearning, AI at work, applied behavior analysis, psychological safety, and why “we’ve always done it this way” is wrecking progress in HR. Duane, Crystal, and Jenny unpack how people cling to broken processes, why unlearning is different from abandoning, and how AI can actually augment work if people know where they need help and feel safe enough to ask for it.
Key Takeaways
Unlearning isn’t about tossing everything out and starting over. It’s about noticing where “the way we’ve always done it” is actually blocking better options, especially with AI. Most orgs default to abandonment – dump the person, the tech, the process – instead of asking what needs to be unlearned and rebuilt.
ABA gives a useful blueprint for unlearning at work. You identify the behavior, ask whether it’s functional or malfunctioning, break down what isn’t serving you, and then rebuild a better pattern. That same logic applies to HR and TA habits, from how you source to how you run performance conversations.
AI only works as augmentation if you can admit where you need help and say it out loud. You have to be specific, contextual, and a little vulnerable in your prompts, which is hard for people who are conditioned to “already know” their jobs. The irony is that the better you are at asking for help from the machine, the more value you unlock from it.
There’s a big trust gap between workers and the people behind AI systems. It’s not that job seekers don’t trust the model, they don’t trust what humans will do with the data the model collects. When 81 percent of people are bringing their own (mostly free) AI to work, HR has to care about moats, privacy, and the basic rule that if it’s free, you’re the product.
Unlearning in HR and leadership starts with psychological safety and process discipline. You have to make it normal to ask “why do we do it this way?” and be okay when the answer is basically Zig Ziglar’s ham story – “because the pan was too small three generations ago.” Performance improvement, AI adoption, and culture all get better when you stop blindly repeating inherited processes and start intentionally redesigning them.
This series was recorded live at HR Tech 2025 inside the HR Executive studios on the expo floor in partnership with the WRKdefined Podcast Network. Make sure you’re subscribed to the full series and visit HRExecutive.com for the news, analysis, and insights shaping the future of work.
Chapters
00:00 AI, unlearning, and why “just replace it” is the wrong instinct
01:08 Live from HR Tech – show intro, married co-hosts, and setting up Unfuck It
03:18 What’s fucked – design bias, getting stuck in old solutions, and why unlearning matters
04:33 How humans actually learn – NLP, ABA, learning styles, and the controversy around changing behavior
07:29 Where AI fits – prompt engineering as applied behavior, being specific about help, and the weirdness of being vulnerable with a machine
11:01 Job seekers, trust, and free AI – who sees your data, BYO AI at work, and why “if it’s free, you’re the product” hits HR harder
16:42 How to unfuck unlearning – awareness of stuck patterns, using PIPs to break habits instead of just firing, and the Zig Ziglar ham story
21:23 Wrap up – psychological safety as the prerequisite for unlearning, plus where to find Jenny and where to see her next
Guest Info:
Jenny Cotie Kangas (“JCK”), VP of Innovation, GBS Worldwide
Hosts:
Duane Lay, Co-host of Un-F*ck It and Chief Innovation Officer, GBS Worldwide
Crystal Lay, Co-host of Un-F*ck It and Chief People Officer, GBS Worldwide