Description

AI tools are accelerating the pace at which organizations filter and rank candidates. However, matching someone to a job description and actually predicting whether they'll perform well in the role are two very different things. Most hiring processes have never been validated against real performance outcomes, and organizations often lack a clear, measurable definition of what success looks like in a role. Without that foundation, even the most sophisticated AI is just automating something that was never evidence-based in the first place.

So what would it actually take to build hiring processes that genuinely predict performance?

My guest this week is Jennifer Yugo, Managing Director and owner of Corvirtus, and an organizational psychologist specializing in evidence-based hiring. In our conversation, she explains the science behind predicting job performance and why most hiring processes are far from where they need to be.

In the interview, we discuss:

Matching candidates vs. predicting performance

Why most hiring lacks evidence

Defining what success really looks like and identifying performance indicators

Do some AI hiring tools stand up to scrutiny?

The risks of automating bad decisions

Questions TA leaders should ask vendors

Are we going to see a reckoning for hiring technology?

What might the future look like?

Follow this podcast on Apple Podcasts.

Follow this podcast on Spotify.