
Why do AI models make things up? 

In this episode, I explain why Large Language Models “hallucinate” and confidently give wrong answers. Using OpenAI’s latest research, I break down what causes these errors, why rare facts are tricky, and how we can make AI more reliable.

If you want to understand AI’s mistakes and how to use it safely, this episode is for you.

Want to go deeper on AI?

📖 Buy AI Playbook

📩 Get my weekly LinkedIn newsletter, Human in the Loop.

🎓 Level up with the CPD Accredited AI Playbook Diploma

📞 Let's talk about AI training for your team: digitaltraining.ie, or publicsectormarketingpros.com if you are in the government or public sector.