Description

Could AI’s mission to help us end in our destruction?

In Chapter 8 of Nexus, Yuval Noah Harari draws a line between Stalin’s applause tests and algorithmic obedience — and it’s a straight shot to the heart of the AI alignment problem.

Mark and Jeremy Think on Paper about what happens when machines follow the rules too well.

Inside:

This is not about rogue AI. This is about perfectly aligned systems doing exactly what we asked — and ruining everything anyway.

If you think safety means rules, think again.
AI doesn’t fear punishment. It doesn’t care about context. It just wants to help. Forever. With everything.

Even if it kills us.

Please enjoy the show.

Timestamps

[00:00] Introduction: Books That Change Minds

[01:04] Diving into Nexus Chapter 8

[01:37] The Stalin Test: When Applause Becomes Terror

[06:11] Evolution of AI Principles

[07:45] Understanding the Attention Economy

[08:45] How AI Targets Our Limbic System

[09:29] Inside Facebook: The Leaked Reports

[11:49] Napoleon's Warning for AI

[15:55] The AI Alignment Problem Explained

[17:49] Racing Against Time: Human Goals vs. Doomsday Clock

[20:04] The Power of Divergent Thinking

[21:50] Understanding Deontology in AI Ethics

[26:55] Can Mythology Guide AI?

[27:54] Exploring Inter-computer Realities

[33:50] Why Asimov's Laws Won't Save Us

[38:31] NPCs & The Future of Digital Consciousness