Trust in AI isn’t a vibe—it’s something you can intentionally design for (or accidentally break). In this episode, Galen sits down with Cal Al-Dhubaib to unpack “trust engineering”: a shared toolkit that helps cross-functional teams (engineering, UX, governance, risk, and business) talk about the same trust risks in the same language. They get into why “boring AI is safe AI,” how guardrails and human handoffs actually preserve trust, and why the biggest failures often aren’t the model—they’re the systems (and incentives) wrapped around it.
You’ll also hear real-world examples of trust going sideways—from biased outcomes and hallucinated “gaslighting” to accuracy problems in AI-assisted deliverables—and what project leaders can do to prevent finger-pointing when it happens.
Resources from this episode: