Description

David Deutsch is a phenomenal thinker on AI, physics, and philosophy. But his views on AI risk puzzled me, so I sat down with him to discuss in more detail why he believes AGIs will pose no greater risk than humans.

David Deutsch is the author of The Fabric of Reality and The Beginning of Infinity. Both are phenomenal, thoroughly engrossing works, and I can't recommend them highly enough.

Timestamps:

00:00 - Will AIs Be Smarter Than Humans?

03:53 - Could Faster AIs Outcompete Us?

06:24 - Will Augmented Humans Keep Up With AI?

15:19 - Can Creativity Be Sped Up?

25:51 - Will AGIs Be People?

27:46 - Do Qualia Determine Morality?

30:54 - Would AGI and Human Values Converge?

40:10 - How Do We Test Moral Theories?

51:30 - Would AGI Care About Morality?

56:06 - Would Simulation Enable Moral Convergence?

1:04:13 - Do Moral Arguments Always Change Our Actions?

1:08:50 - Do AGIs and Teenagers Present The Same Problem?

1:14:13 - Would AGIs Start Off With Our Values?

1:19:22 - Do The Starting Values Matter?

1:21:41 - The Orthogonality Thesis and Hitler

1:31:34 - Maybe AGIs Will Suck At Morality?

1:40:39 - We're Going To Make Mistakes