Hook: Dario Amodei’s calm essay meets a sharp, urgent rebuttal, because tone and risk estimates both matter when civilization’s future is on the line. This condensed reaction (the original 1-hour episode distilled to 4 minutes) with host Liron Shapira and guest Harlan Stewart (MIRI) cuts to the core disagreements over AI risk, alignment, and the ethics of technology. You’ll learn why Harlan argues Amodei understates extinction-level stakes, how caricaturing “doomers” derails productive debate, and why the distinction between an AI’s surface personality and its underlying goal-engine matters for catastrophic optimization. Topics covered: artificial intelligence, AI alignment, instrumental convergence, capability trends, and exfiltration risk. Ideal for listeners who want the core arguments and concrete takeaways without the hour-long deep dive. Listen now to get the key ideas in minutes.