Episode 4 – To Err is AI

This episode delves into the challenges users face in determining the trustworthiness of AI systems, especially when performance feedback is limited. The researchers describe a debugging intervention designed to cultivate a critical mindset in users, enabling them to evaluate AI advice and avoid both over-reliance and under-reliance. We also discuss the counter-intuitive ways humans react to AI.


Paper:

To Err Is AI! Debugging as an Intervention to Facilitate Appropriate Reliance on AI Systems, arXiv:2409.14377 [cs.AI]


Guests:

Both at the Web Information Systems group of the Faculty of Electrical Engineering, Mathematics and Computer Science (EEMCS/EWI), Delft University of Technology


Chapters:

00:00   Introduction

00:40   Aye Aye Fact of the Day

01:46   Understanding over-reliance and under-reliance on AI

02:26   The socio-technical dynamics of AI adoption

04:59   The role of familiarity and domain knowledge in AI use

07:18   The evolution of technology and its impact on trust

10:00   Challenges in AI transparency and trustworthiness

11:33   Background of the paper

12:56   The experiment: Over- and under-reliance

14:16   Human perception and AI accuracy

18:16   The Dunning-Kruger effect in AI interaction

20:53   Explaining AI: The double-edged sword

23:43   Building warranted trust in AI systems

31:59   Breaking down the Dunning-Kruger effect

39:18   Future research

41:49   Advice to AI product owners

45:45   Lightning Round – Can Transformers get us to AGI?

48:58   Lightning Round – Should we keep training LLMs?

52:01   Lightning Round – Who should we follow?

54:38   Lightning Round – Likelihood of an AI apocalypse?

58:10   Lightning Round – Recommendations for tools or techniques

1:00:48   Close out


Music: "Fire" by crimson.