Showing episodes and shows of Liron Shapira
Shows
Doom Debates
Mike Israetel vs. Liron Shapira — AI Doom Debate
Dr. Mike Israetel, renowned exercise scientist and social media personality, and more recently a low-P(doom) AI futurist, graciously offered to debate me!

00:00 Introducing Mike Israetel
12:19 What’s Your P(Doom)™
30:58 Timelines for Artificial General Intelligence
34:49 Superhuman AI Capabilities
43:26 AI Reasoning and Creativity
47:12 Evil AI Scenario
01:08:06 Will the AI Cooperate With Us?
01:12:27 AI's Dependence on Human Labor
01:18:27 Will AI Keep Us Around to Study Us?
01:42:38 AI's Approach to Earth's Resources
01:53:22 Global AI Policies and Risks
02:03:02 The Qu...
2025-05-08
2h 15
Doom Debates
The Most Likely AI Doom Scenario — with Jim Babcock, LessWrong Team
What’s the most likely (“mainline”) AI doom scenario? How does the existence of LLMs update the original Yudkowskian version? I invited my friend Jim Babcock to help me answer these questions.

Jim is a member of the LessWrong engineering team and its parent organization, Lightcone Infrastructure. I’ve been a longtime fan of his thoughtful takes.

This turned out to be a VERY insightful and informative discussion, useful for clarifying my own predictions, and accessible to the show’s audience.

00:00 Introducing Jim Babcock
01:29 The Evolution of LessWrong Doom Scenarios
02:22 Le...
2025-04-30
1h 53
Doom Debates
AI Could Give Humans MORE Control — Ozzie Gooen
Ozzie Gooen is the founder of the Quantified Uncertainty Research Institute (QURI), a nonprofit building software tools for forecasting and policy analysis. I’ve known him through the rationality community since 2008 and we have a lot in common.

00:00 Introducing Ozzie
02:18 The Rationality Community
06:32 What’s Your P(Doom)™
08:09 High-Quality Discourse and Social Media
14:17 Guesstimate and Squiggle Demos
31:57 Prediction Markets and Rationality
38:33 Metaforecast Demo
41:23 Evaluating Everything with LLMs
47:00 Effective Altruism and FTX Scandal
56:00 The Repugnant Conclusion Debate
01:02:25 AI for Go...
2025-04-24
1h 59
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
AI News Crossover: A Candid Chat with Liron Shapira of Doom Debates
In this crossover episode of The Cognitive Revolution, Nathan Labenz joins Liron Shapira of Doom Debates for a wide-ranging news and analysis discussion about recent AI developments. The conversation covers significant topics including GPT-4o image generation's implications for designers and businesses like Waymark, debates around learning to code, entrepreneurship versus job security, and the validity of OpenAI's $300 billion valuation. Nathan and Liron also explore AI safety organizations, international cooperation possibilities, and Anthropic's new mechanistic interpretability paper, providing listeners with thoughtful perspectives on the high-stakes nature of advanced AI development across society.
2025-04-19
2h 29
Doom Debates
Top AI Professor Has 85% P(Doom) — David Duvenaud, Fmr. Anthropic Safety Team Lead
David Duvenaud is a professor of Computer Science at the University of Toronto, co-director of the Schwartz Reisman Institute for Technology and Society, former Alignment Evals Team Lead at Anthropic, an award-winning machine learning researcher, and a close collaborator of Dr. Geoffrey Hinton. He recently co-authored Gradual Disempowerment.

We dive into David’s impressive career, his high P(Doom), his recent tenure at Anthropic, his views on gradual disempowerment, and the critical need for improved governance and coordination on a global scale.

00:00 Introducing David
03:03 Joining Anthropic and AI Safety Concerns
35:58 David’s Ba...
2025-04-18
2h 07
Doom Debates
“AI 2027” — Top Superforecaster's Imminent Doom Scenario
AI 2027, a bombshell new paper by the AI Futures Project, is a highly plausible scenario of the next few years of AI progress. I like this paper so much that I made a whole episode about it.

00:00 Overview of AI 2027
05:13 2025: Stumbling Agents
16:23 2026: Advanced Agents
21:49 2027: The Intelligence Explosion
29:13 AI's Initial Exploits and OpenBrain's Secrecy
30:41 Agent-3 and the Rise of Superhuman Engineering
37:05 The Creation and Deception of Agent-5
44:56 The Race Scenario: Humanity's Downfall
48:58 The Slowdown Scenario: A Glimmer of Hope
53:49 Final...
2025-04-15
57 min
Doom Debates
Top Economist Sees AI Doom Coming — Dr. Peter Berezin, BCA Research
Dr. Peter Berezin is the Chief Global Strategist and Director of Research at BCA Research, the largest Canadian investment research firm. He’s known for his macroeconomics research reports and his frequent appearances on Bloomberg and CNBC.

Notably, Peter is one of the only macroeconomists in the world who’s forecasting AI doom! He recently published a research report estimating a “more than 50/50 chance AI will wipe out all of humanity by the middle of the century”.

00:00 Introducing Peter Berezin
01:59 Peter’s Economic Predictions and Track Record
05:50 Investment Strategies and Beating the Market
2025-04-09
2h 14
Doom Debates
AI News: GPT-4o Images, AI Unemployment, Emmett Shear's New Safety Org — with Nathan Labenz
Nathan Labenz, host of The Cognitive Revolution, joins me for an AI news & social media roundup!

00:00 Introducing Nate
05:18 What’s Your P(Doom)™
23:22 GPT-4o Image Generation
40:20 Will Fiverr’s Stock Crash?
47:41 AI Unemployment
55:11 Entrepreneurship
01:00:40 OpenAI Valuation
01:09:29 Connor Leahy’s Hair
01:13:28 Mass Extinction
01:25:30 Is anyone feeling the doom vibes?
01:38:20 Rethinking AI Individuality
01:40:35 “Softmax” — Emmett Shear's New AI Safety Org
01:57:04 Anthropic's Mechanistic Interpretability Paper
02:10:11 International Cooperation for AI Safety
02:18:43 Final Thoughts
2025-04-04
2h 19
Doom Debates
How an AI Doomer Sees The World — Liron on The Human Podcast
In this special cross-posted episode of Doom Debates, originally posted here on The Human Podcast, we cover a wide range of topics including the definition of “doom”, P(Doom), various existential risks like pandemics and nuclear threats, and the comparison of rogue AI risks versus AI misuse risks.

00:00 Introduction
01:47 Defining Doom and AI Risks
05:53 P(Doom)
10:04 Doom Debates’ Mission
16:17 Personal Reflections and Life Choices
24:57 The Importance of Debate
27:07 Personal Reflections on AI Doom
30:46 Comparing AI Doom to Other Existential Risks
33:42 Strategies to Mit...
2025-03-28
45 min
The Human Podcast
Host of 'Doom Debates' Podcast, Liron Shapira | AI SAFETY SERIES, Ep #3
WATCH ON YOUTUBE: https://youtu.be/0KmaotctziE

WELCOME TO THE NEW AI SAFETY SERIES, FEATURING FILMED CONVERSATIONS WITH INDIVIDUALS WORKING IN THE AI SAFETY FIELD, ABOUT THEIR LIVES AND WORK.

In this episode, I speak to Liron Shapira, host of the 'Doom Debates' Podcast. Doom Debates' mission is to raise mainstream awareness of imminent extinction from AGI and build the social infrastructure for high-quality debate.

The Human Podcast explores the lives and ideas of inspiring individuals. Subscribe for new interviews every week.

🕒 TIMESTAMPS:
0:00 - Intro
0:05 - What is...
2025-03-23
44 min
Doom Debates
Gödel's Theorem Says Intelligence ≠ Power? AI Doom Debate with Alexander Campbell
Alexander Campbell claims that having superhuman intelligence doesn’t necessarily translate into having vast power, and that Gödel's Incompleteness Theorem ensures AI can’t get too powerful. I strongly disagree.

Alex has a Master of Philosophy in Economics from the University of Oxford and an MBA from the Stanford Graduate School of Business, has worked as a quant trader at Lehman Brothers and Bridgewater Associates, and is the founder of Rose AI, a cloud data platform that leverages generative AI to help visualize data.

This debate was recorded in August 2023.

00:00 Intro and Alex’...
2025-03-21
50 min
Doom Debates
Alignment is EASY and Roko's Basilisk is GOOD?!
Roko Mijic has been an active member of the LessWrong and AI safety community since 2008. He’s best known for “Roko’s Basilisk”, a thought experiment he posted on LessWrong that made Eliezer Yudkowsky freak out, and years later became the topic that helped Elon Musk get interested in Grimes.

His view on AI doom is that:
* AI alignment is an easy problem
* But the chaos and fighting from building superintelligence poses a high near-term existential risk
* But humanity’s course without AI has an even higher near-term existential risk

While my own view is very diffe...
2025-03-17
1h 59
Doom Debates
Roger Penrose is WRONG about Gödel's Theorem and AI Consciousness
Sir Roger Penrose is a mathematician, mathematical physicist, philosopher of science, and Nobel Laureate in Physics.

His famous body of work includes Penrose diagrams, twistor theory, Penrose tilings, and the incredibly bold claim that intelligence and consciousness are uncomputable physical phenomena related to quantum wave function collapse.

Dr. Penrose is such a genius that it's just interesting to unpack his worldview, even if it’s totally implausible. How can someone like him be so wrong? What exactly is it that he's wrong about? It's interesting to try to see the world through his eyes, before reco...
2025-03-10
1h 31
Doom Debates
We Found AI's Preferences — What David Shapiro MISSED in this bombshell Center for AI Safety paper
The Center for AI Safety just dropped a fascinating paper — they discovered that today’s AIs like GPT-4 and Claude have preferences! As in, coherent utility functions. We knew this was inevitable, but we didn’t know it was already happening.

This episode has two parts:

In Part I (48 minutes), I react to David Shapiro’s coverage of the paper and push back on many of his points.

In Part II (60 minutes), I explain the paper myself.

00:00 Episode Introduction
05:25 PART I: REACTING TO DAVID SHAPIRO
10:06 Critique of David Sh...
2025-02-21
1h 48
Doom Debates
Does AI Competition = AI Alignment? Debate with Gil Mark
My friend Gil Mark, who leads generative AI products at LinkedIn, thinks competition among superintelligent AIs will lead to a good outcome for humanity. In his view, the alignment problem becomes significantly easier if we build multiple AIs at the same time and let them compete.

I completely disagree, but I hope you’ll find this to be a thought-provoking episode that sheds light on why the alignment problem is so hard.

00:00 Introduction
02:36 Gil & Liron’s Early Doom Days
04:58 AIs : Humans :: Humans : Ants
08:02 The Convergence of AI Goals
15...
2025-02-10
1h 17
Doom Debates
Toy Model of the AI Control Problem
Why does the simplest AI imaginable, when you ask it to help you push a box around a grid, suddenly want you to die?

AI doomers are often misconstrued as having "no evidence" or just "anthropomorphizing". This toy model will help you understand why a drive to eliminate humans is NOT a handwavy anthropomorphic speculation, but rather something we expect by default from any sufficiently powerful search algorithm.

We’re not talking about AGI or ASI here — we’re just looking at an AI that does brute-force search over actions in a simple grid world.

T...
2025-02-06
25 min
Doom Debates
Superintelligent AI vs. Real-World Engineering | Liron Reacts to Bryan Cantrill
Bryan Cantrill, co-founder of Oxide Computer, says in his talk that engineering in the physical world is too complex for any AI to do it better than teams of human engineers. Success isn’t about intelligence; it’s about teamwork, character and resilience.

I completely disagree.

00:00 Introduction
02:03 Bryan’s Take on AI Doom
05:55 The Concept of P(Doom)
08:36 Engineering Challenges and Human Intelligence
15:09 The Role of Regulation and Authoritarianism in AI Control
29:44 Engineering Complexity: A Case Study from Oxide Computer
40:06 The Value of Team C...
2025-01-31
1h 05
Doom Debates
2,500 Subscribers Live Q&A
Thanks to everyone who participated in the live Q&A on Friday!

The topics covered include advice for computer science students, working in AI trustworthiness, what good AI regulation looks like, the implications of the $500B Stargate project, the public's gradual understanding of AI risks, the impact of minor AI disasters, and the philosophy of consciousness.

00:00 Advice for Comp Sci Students
01:14 The $500B Stargate Project
02:36 Eliezer's Recent Podcast
03:07 AI Safety and Public Policy
04:28 AI Disruption and Politics
05:12 DeepSeek and AI Advancements
06:54 Human vs...
2025-01-28
1h 23
Doom Debates
AI Twitter Beefs #3: Marc Andreessen, Sam Altman, Mark Zuckerberg, Yann LeCun, Eliezer Yudkowsky & More!
It’s time for AI Twitter Beefs #3:

00:00 Introduction
01:27 Marc Andreessen vs. Sam Altman
09:15 Mark Zuckerberg
35:40 Martin Casado
47:26 Gary Marcus vs. Miles Brundage Bet
58:39 Scott Alexander’s AI Art Turing Test
01:11:29 Roon
01:16:35 Stephen McAleer
01:22:25 Emmett Shear
01:37:20 OpenAI’s “Safety”
01:44:09 Naval Ravikant vs. Eliezer Yudkowsky
01:56:03 Comic Relief
01:58:53 Final Thoughts

Show Notes
Upcoming Live Q&A: https://lironshapira.substack.com/p/2500-subscribers-live-q-and-a-ask
“Make Your Beliefs Pay Rent In Anticipated Experiences” by Eliezer Yud...
2025-01-24
2h 07
Doom Debates
Effective Altruism Debate with Jonas Sota
Effective Altruism has been a controversial topic on social media, so today my guest and I are going to settle the question once and for all: Is it good or bad?

Jonas Sota is a Software Engineer at Rippling with a BA in Philosophy from UC Berkeley, who’s been observing the Effective Altruism (EA) movement in the San Francisco Bay Area for over a decade… and he’s not a fan.

00:00 Introduction
01:22 Jonas’s Criticisms of EA
03:23 Recoil Exaggeration
05:53 Impact of Malaria Nets
10:48 Local vs. Global Altruism
13:02...
2025-01-17
1h 06
Doom Debates
God vs. AI Doom: Debate with Bentham's Bulldog
Matthew Adelstein, better known as Bentham’s Bulldog on Substack, is a philosophy major at the University of Michigan and an up & coming public intellectual.

He’s a rare combination: Effective Altruist, Bayesian, non-reductionist, theist.

Our debate covers reductionism, evidence for god, the implications of a fine-tuned universe, moral realism, and AI doom.

00:00 Introduction
02:56 Matthew’s Research
11:29 Animal Welfare
16:04 Reductionism vs. Non-Reductionism Debate
39:53 The Decline of God in Modern Discourse
46:23 Religious Credences
50:24 Pascal's Wager and Christianity
56:13 Are Miracles Real?
0...
2025-01-15
3h 20
Doom Debates
Debate with a former OpenAI Research Team Lead — Prof. Kenneth Stanley
Prof. Kenneth Stanley is a former Research Science Manager at OpenAI who led the Open-Endedness Team from 2020 to 2022. Before that, he was a Professor of Computer Science at the University of Central Florida and the head of Core AI Research at Uber. He coauthored Why Greatness Cannot Be Planned: The Myth of the Objective, which argues that as soon as you create an objective, you ruin your ability to reach it.

In this episode, I debate Ken’s claim that superintelligent AI *won’t* be guided by goals, and then we compare our views on AI doom.
2025-01-06
2h 37
Doom Debates
OpenAI o3 and Claude Alignment Faking — How doomed are we?
OpenAI just announced o3 and smashed a bunch of benchmarks (ARC-AGI, SWE-bench, FrontierMath)!

A new Anthropic and Redwood Research paper says Claude is resisting its developers’ attempts to retrain its values!

What’s the upshot — what does it all mean for P(doom)?

00:00 Introduction
01:45 o3’s architecture and benchmarks
06:08 “Scaling is hitting a wall” 🤡
13:41 How many new architectural insights before AGI?
20:28 Negative update for interpretability
31:30 Intellidynamics — ***KEY CONCEPT***
33:20 Nuclear control rod analogy
36:54 Sam Altman's misguided perspective
42:40 Claude resisted retraini...
2024-12-30
1h 03
Doom Debates
AI Will Kill Us All — Liron Shapira on The Flares
This week Liron was interviewed by Gaëtan Selle on @the-flares about AI doom.

Cross-posted from their channel with permission. Original source: https://www.youtube.com/watch?v=e4Qi-54I9Zw

0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Key Argument for AI Apocalypse
0:18:51 AI’s Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of Apocalypse
0:31:05 Among Intellectuals, Who Rejects the AI Apocalypse Arguments?
0...
2024-12-27
1h 23
Doom Debates
Roon vs. Liron: AI Doom Debate
Roon is a member of the technical staff at OpenAI. He’s a highly respected voice on tech Twitter, despite being a pseudonymous cartoon avatar account. In late 2021, he invented the terms “shape rotator” and “wordcel” to refer to roughly visual/spatial/mathematical intelligence vs. verbal intelligence. He is simultaneously a serious thinker, a builder, and a shitposter. I'm excited to learn more about Roon, his background, his life, and of course, his views about AI and existential risk.

00:00 Introduction
02:43 Roon’s Quest and Philosophies
22:32 AI Creativity
30:42 What’s Your P(Doom)™
2024-12-18
1h 44
The Flares - Podcasts
#57 - AI Will Kill Us All! – with Liron Shapira (in English)
⚠️ Discover EXCLUSIVE content (not on the channel) ⚠️ ⇒ https://the-flares.com/y/bonus/
⬇️⬇️⬇️ Additional info: sources, references, links... ⬇️⬇️⬇️
Interested in this content? Subscribe and click the 🔔
Enjoyed this video? Consider giving it a 👍 and sharing it. It's a small gesture that helps us a lot 🥰
⚠️ Try "The Flares Premium" for FREE ⚠️ ⇒ https://the-flares.com/y/premium/

Contents:
0:00:02 Guest Introduction
0:01:41 Effective Altruism and Transhumanism
0:05:38 Bayesian Epistemology and Extinction Probability
0:09:26 Defining Intelligence and Its Dangers
0:12:33 The Main Argument for the AI Apocalypse
0:18:51 AI's Internal Alignment
0:24:56 What Will AI's Real Goal Be?
0:26:50 The Train of the Apocalypse
0:31:05 Among the Intellectuals...
2024-12-18
1h 23
Doom Debates
Scott Aaronson Makes Me Think OpenAI's “Safety” Is Fake, Clueless, Reckless and Insane
Today I’m reacting to the recent Scott Aaronson interview on the Win-Win podcast with Liv Boeree and Igor Kurganov.

Prof. Aaronson is the Director of the Quantum Information Center at the University of Texas at Austin. He’s best known for his research advancing the frontier of complexity theory, especially quantum complexity theory, and making complex insights from his field accessible to a wider readership via his blog.

Scott is one of my biggest intellectual influences. His famous Who Can Name The Bigger Number essay and his long-running blog are among my best memories of c...
2024-12-11
1h 52
Doom Debates
Liron Reacts to Subbarao Kambhampati on Machine Learning Street Talk
Today I’m reacting to a July 2024 interview that Prof. Subbarao Kambhampati did on Machine Learning Street Talk.

Rao is a Professor of Computer Science at Arizona State University, and one of the foremost voices making the claim that while LLMs can generate creative ideas, they can’t truly reason.

The episode covers a range of topics including planning, creativity, the limits of LLMs, and why Rao thinks LLMs are essentially advanced N-gram models.

00:00 Introduction
02:54 Essentially N-Gram Models?
10:31 The Manhole Cover Question
20:54 Reasoning vs. Approximate Retrieval
47...
2024-11-28
2h 59
Doom Debates
This Yudkowskian Has A 99.999% P(Doom)
In this episode of Doom Debates, I discuss AI existential risks with my pseudonymous guest Nethys.

Nethys shares his journey into AI risk awareness, influenced heavily by LessWrong and Eliezer Yudkowsky. We explore the vulnerability of society to emerging technologies, the challenges of AI alignment, and why he believes our current approaches are insufficient, ultimately resulting in a 99.999% P(Doom).

00:00 Nethys Introduction
04:47 The Vulnerable World Hypothesis
10:01 What’s Your P(Doom)™
14:04 Nethys’s Banger YouTube Comment
26:53 Living with High P(Doom)
31:06 Losing Access to Distant Stars
3...
2024-11-27
1h 04
Doom Debates
Cosmology, AI Doom, and the Future of Humanity with Fraser Cain
Fraser Cain is the publisher of Universe Today, co-host of Astronomy Cast, a popular YouTuber about all things space, and guess what… he has a high P(doom)! That’s why he’s joining me on Doom Debates for a very special AI + space crossover episode.

00:00 Fraser Cain’s Background and Interests
5:03 What’s Your P(Doom)™
07:05 Our Vulnerable World
15:11 Don’t Look Up
22:18 Cosmology and the Search for Alien Life
31:33 Stars = Terrorists
39:03 The Great Filter and the Fermi Paradox
55:12 Grabby Aliens Hypothesis
01:19:40...
2024-11-21
1h 57
Increments
#77 (Bonus) - AI Doom Debate (w/ Liron Shapira)
Back on Liron's Doom Debates podcast! Will we actually get around to the subject of superintelligent AI this time? Is it time to worry about the end of the world? Will Ben and Vaden emotionally recover from the devastating youtube comments from the last episode? Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel and podcast.

We discuss:
* Definitions of "new knowledge"
* The reliance of deep learning on induction
* Can AIs be creative?
* The limits of statistical prediction
* Predictions of what deep learning cannot accomplish
* Can ChatGPT write funny jokes?
...
2024-11-19
2h 21
Doom Debates
AI Doom Debate: Vaden Masrani & Ben Chugg vs. Liron Shapira
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are back for a Part II! This time we’re going straight to debating my favorite topic, AI doom.

00:00 Introduction
02:23 High-Level AI Doom Argument
17:06 How Powerful Could Intelligence Be?
22:34 “Knowledge Creation”
48:33 “Creativity”
54:57 Stand-Up Comedy as a Test for AI
01:12:53 Vaden & Ben’s Goalposts
01:15:00 How to Change Liron’s Mind
01:20:02 LLMs are Stochastic Parrots?
01:34:06 Tools vs. Agents
01:39:51 Instrumental Convergence and AI Goals
01:45:51 Intelligence vs. Morality
01:53:57 Mai...
2024-11-19
2h 21
Doom Debates
Andrew Critch vs. Liron Shapira: Will AI Extinction Be Fast Or Slow?
Dr. Andrew Critch is the co-founder of the Center for Applied Rationality, a former Research Fellow at the Machine Intelligence Research Institute (MIRI), a Research Scientist at the UC Berkeley Center for Human Compatible AI, and the co-founder of a new startup called Healthcare Agents.

Dr. Critch’s P(Doom) is a whopping 85%! But his most likely doom scenario isn’t what you might expect. He thinks humanity will successfully avoid a self-improving superintelligent doom scenario, only to still go extinct via the slower process of “industrial dehumanization”.

00:00 Introduction
01:43 Dr. Critch’s Perspective on LessWrong Sequences
0...
2024-11-16
2h 37
Doom Debates
AI Twitter Beefs #2: Yann LeCun, David Deutsch, Tyler Cowen, Jack Clark, Beff Jezos, Samuel Hammond vs. Eliezer Yudkowsky, Geoffrey Hinton, Carl Feynman
It’s time for AI Twitter Beefs #2:

00:42 Jack Clark (Anthropic) vs. Holly Elmore (PauseAI US)
11:02 Beff Jezos vs. Eliezer Yudkowsky, Carl Feynman
18:10 Geoffrey Hinton vs. OpenAI & Meta
25:14 Samuel Hammond vs. Liron
30:26 Yann LeCun vs. Eliezer Yudkowsky
37:13 Roon vs. Eliezer Yudkowsky
41:37 Tyler Cowen vs. AI Doomers
52:54 David Deutsch vs. Liron

Twitter people referenced:
* Jack Clark: https://x.com/jackclarkSF
* Holly Elmore: https://x.com/ilex_ulmus
* PauseAI US: https://x.com/PauseAIUS
* Geoffrey Hinton: ht...
2024-11-13
1h 06
Increments
#76 (Bonus) - Is P(doom) meaningful? Debating epistemology (w/ Liron Shapira)
Liron Shapira, host of Doom Debates, invited us on to discuss Popperian versus Bayesian epistemology and whether we're worried about AI doom. As one might expect knowing us, we only got about halfway through the first subject, so get yourselves ready (presumably with many drinks) for part II in a few weeks! The era of Ben and Vaden's rowdy youtube debates has begun. Vaden is jubilant, Ben is uncomfortable, and the world has never been more annoyed by Popperians. Follow Liron on twitter (@liron) and check out the Doom Debates youtube channel and podcast. We...
2024-11-08
2h 50
Doom Debates
Is P(Doom) Meaningful? Epistemology Debate with Vaden Masrani and Ben Chugg
Vaden Masrani and Ben Chugg, hosts of the Increments Podcast, are joining me to debate Bayesian vs. Popperian epistemology.

I’m on the Bayesian side, heavily influenced by the writings of Eliezer Yudkowsky. Vaden and Ben are on the Popperian side, heavily influenced by David Deutsch and the writings of Popper himself.

We dive into the theoretical underpinnings of Bayesian reasoning and Solomonoff induction, contrasting them with the Popperian perspective, and explore real-world applications such as predicting elections and economic policy outcomes.

The debate highlights key philosophical differences between our two epistemological frameworks, an...
2024-11-08
2h 50
Doom Debates
15-Minute Intro to AI Doom
Our top researchers and industry leaders have been warning us that superintelligent AI may cause human extinction in the next decade.

If you haven't been following all the urgent warnings, I'm here to bring you up to speed.

* Human-level AI is coming soon
* It’s an existential threat to humanity
* The situation calls for urgent action

Listen to this 15-minute intro to get the lay of the land.

Then follow these links to learn more and see how you can help:

* The Compendium
A...
2024-11-04
15 min
Doom Debates
Lee Cronin vs. Liron Shapira: AI Doom Debate
Prof. Lee Cronin is the Regius Chair of Chemistry at the University of Glasgow. His research aims to understand how life might arise from non-living matter. In 2017, he invented “Assembly Theory” as a way to measure the complexity of molecules and gain insight into the earliest evolution of life.

Today we’re debating Lee's claims about the limits of AI capabilities, and my claims about the risk of extinction from superintelligent AGI.

00:00 Introduction
04:20 Assembly Theory
05:10 Causation and Complexity
10:07 Assembly Theory in Practice
12:23 The Concept of Assembly Index
1...
2024-10-30
1h 31
Doom Debates
Ben Horowitz says nuclear proliferation is GOOD? I disagree.
Ben Horowitz, cofounder and General Partner at Andreessen Horowitz (a16z), says nuclear proliferation is good.

I was shocked because I thought we all agreed nuclear proliferation is VERY BAD.

If Ben and a16z can’t appreciate the existential risks of nuclear weapons proliferation, why would anyone ever take them seriously on the topic of AI regulation?

00:00 Introduction
00:49 Ben Horowitz on Nuclear Proliferation
02:12 Ben Horowitz on Open Source AI
05:31 Nuclear Non-Proliferation Treaties
10:25 Escalation Spirals
15:20 Rogue Actors
16:33 Nuclear Accidents
17:19 Sa...
2024-10-25
28 min
The Knowledge with David Elikwu
121. Rationality, Startups, and the Danger of Belief with Liron Shapira
David speaks with Liron Shapira, an entrepreneur and angel investor. He is the founder of Relationship Hero, a relationship coaching service with over 100,000 clients. He is also a technologist, rationalist, and serial entrepreneur, known for his sceptical views on crypto and other inflated startups. Currently, he hosts the Doom Debates podcast, which aims to raise mainstream awareness of AGI's imminent risks and build the social infrastructure for high-quality debate.

They talked about:
🧠 The bridge between rationality and belief
💼 How entrepreneurs find value in risk
⚠️ The danger of hype-driven startups
🍀 The role of l...
2024-10-17
35 min
Doom Debates
“AI Snake Oil” Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY

Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.

Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon...
2024-10-13
1h 12
Doom Debates
Dr. Keith Duggar has a high P(doom)?! Debate with MLST Co-host
Dr. Keith Duggar from Machine Learning Street Talk was the subject of my recent reaction episode about whether GPT o1 can reason. But instead of ignoring or blocking me, Keith was brave enough to come into the lion’s den and debate his points with me… and his P(doom) might shock you!

First we debate whether Keith’s distinction between Turing Machines and Discrete Finite Automata is useful for understanding limitations of current LLMs. Then I take Keith on a tour of alignment, orthogonality, instrumental convergence, and other popular stations on the “doom train”, to compare our views...
2024-10-09
2h 11
Doom Debates
Getting Arrested for Barricading OpenAI's Office to Stop AI
Sam Kirchner and Remmelt Ellen, leaders of the Stop AI movement, think the only way to effectively protest superintelligent AI development is with civil disobedience.

Not only are they staging regular protests in front of AI labs, they’re barricading the entrances and blocking traffic, then allowing themselves to be repeatedly arrested.

Is civil disobedience the right strategy to pause or stop AI?

00:00 Introducing Stop AI
00:38 Arrested at OpenAI Headquarters
01:14 Stop AI’s Funding
01:26 Blocking Entrances Strategy
03:12 Protest Logistics and Arrest
08:13 Blocking Traffic
2024-10-05
45 min
Doom Debates
Q&A #1 Part 2: Stock Picking, Creativity, Types of Doomers, Favorite Books
This episode is a continuation of Q&A #1 Part 1 where I answer YOUR questions!

00:00 Introduction
01:20 Planning for a good outcome?
03:10 Stock Picking Advice
08:42 Dumbing It Down for Dr. Phil
11:52 Will AI Shorten Attention Spans?
12:55 Historical Nerd Life
14:41 YouTube vs. Podcast Metrics
16:30 Video Games
26:04 Creativity
30:29 Does AI Doom Explain the Fermi Paradox?
36:37 Grabby Aliens
37:29 Types of AI Doomers
44:44 Early Warning Signs of AI Doom
48:34 Do Current AIs Have General Intelligence?
51:07...
2024-10-03
1h 09
Doom Debates
Q&A #1 Part 1: College, Asperger's, Elon Musk, Double Crux, Liron's IQ
Thanks for being one of the first Doom Debates subscribers and sending in your questions! This episode is Part 1; stay tuned for Part 2 coming soon.

00:00 Introduction
01:17 Is OpenAI a sinking ship?
07:25 College Education
13:20 Asperger's
16:50 Elon Musk: Genius or Clown?
22:43 Double Crux
32:04 Why Call Doomers a Cult?
36:45 How I Prepare Episodes
40:29 Dealing with AI Unemployment
44:00 AI Safety Research Areas
46:09 Fighting a Losing Battle
53:03 Liron’s IQ
01:00:24 Final Thoughts

Explanation of Double Crux
https://ww...
2024-10-01
1h 01
Doom Debates
Doom Tiffs #1: Amjad Masad, Eliezer Yudkowsky, Helen Toner, Roon, Lee Cronin, Naval Ravikant, Martin Casado, Yoshua Bengio
In today’s episode, instead of reacting to a long-form presentation of someone’s position, I’m reporting on the various AI x-risk-related tiffs happening in my part of the world. And by “my part of the world” I mean my Twitter feed.

00:00 Introduction
01:55 Followup to my MLST reaction episode
03:48 Double Crux
04:53 LLMs: Finite State Automata or Turing Machines?
16:11 Amjad Masad vs. Helen Toner and Eliezer Yudkowsky
17:29 How Will AGI Literally Kill Us?
33:53 Roon
37:38 Prof. Lee Cronin
40:48 Defining AI Creativity
43:44 Na...
2024-09-25
1h 14
Doom Debates
Can GPT o1 Reason? | Liron Reacts to Tim Scarfe & Keith Duggar
How smart is OpenAI’s new model, o1? What does “reasoning” ACTUALLY mean? What do computability theory and complexity theory tell us about the limitations of LLMs?

Dr. Tim Scarfe and Dr. Keith Duggar, hosts of the popular Machine Learning Street Talk podcast, posted an interesting video discussing these issues… FOR ME TO DISAGREE WITH!!!

00:00 Introduction
02:14 Computability Theory
03:40 Turing Machines
07:04 Complexity Theory and AI
23:47 Reasoning
44:24 o1
47:00 Finding gold in the Sahara
56:20 Self-Supervised Learning and Chain of Thought
01:04:01 The Miracle of AI Op...
2024-09-18
2h 06
Doom Debates
Yuval Noah Harari's AI Warnings Don't Go Far Enough | Liron Reacts
Yuval Noah Harari is a historian, philosopher, and bestselling author known for his thought-provoking works on human history, the future, and our evolving relationship with technology. His 2011 book, Sapiens: A Brief History of Humankind, took the world by storm, offering a sweeping overview of human history from the emergence of Homo sapiens to the present day. Harari just published a new book which is largely about AI. It’s called Nexus: A Brief History of Information Networks from the Stone Age to AI. Let’s go through the latest interview he did as part of his book tour...
2024-09-12
1h 06
Doom Debates
I talked to Dr. Phil about AI extinction risk!
It's finally here, the Doom Debates / Dr. Phil crossover episode you've all been asking for 😂 The full episode is called “AI: The Future of Education?”

While the main focus was AI in education, I'm glad the show briefly touched on how we're all gonna die. Everything in the show related to AI extinction is clipped here.

Join the conversation at DoomDebates.com or youtube.com/@DoomDebates, suggest topics or guests, and help us spread awareness about the urgent risk of AI extinction. Thanks for watching. This is a public episode. If yo...
2024-09-10
07 min
Doom Debates
Debate with Roman Yampolskiy: 50% vs. 99.999% P(Doom) from AI
Dr. Roman Yampolskiy is the director of the Cyber Security Lab at the University of Louisville. His new book is called AI: Unexplainable, Unpredictable, Uncontrollable.

Roman’s P(doom) from AGI is a whopping 99.999%, vastly greater than my P(doom) of 50%. It’s a rare debate when I’m LESS doomy than my opponent!

This is a cross-post from the For Humanity podcast hosted by John Sherman. For Humanity is basically a sister show of Doom Debates. Highly recommend subscribing!

00:00 John Sherman’s Intro
05:21 Diverging Views on AI Safety and Control
12...
2024-09-06
1h 31
Doom Debates
Jobst Landgrebe Doesn't Believe In AGI | Liron Reacts
Jobst Landgrebe, co-author of Why Machines Will Never Rule The World: Artificial Intelligence Without Fear, argues that AI is fundamentally limited in achieving human-like intelligence or consciousness due to the complexities of the human brain which are beyond mathematical modeling.

Contrary to my view, Jobst has a very low opinion of what machines will be able to achieve in the coming years and decades.

He’s also a devout Christian, which makes our clash of perspectives funnier.

00:00 Introduction
03:12 AI Is Just Pattern Recognition?
06:46 Mathematics and the Limits of AI...
2024-09-04
1h 28
Doom Debates
Arvind Narayanan Makes AI Sound Normal | Liron Reacts
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan.

Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve, and the often misleading promises made by companies and researchers.

I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintel...
2024-08-29
1h 09
Doom Debates
Bret Weinstein Bungles It On AI Extinction | Liron Reacts
Today I’m reacting to Bret Weinstein’s recent appearance on the Diary of a CEO podcast with Steven Bartlett. Bret is an evolutionary biologist known for his outspoken views on social and political issues.

Bret gets off to a promising start, saying that AI risk should be “top of mind” and poses “five existential threats”. But his analysis is shallow and ad-hoc, and ends with him dismissing the idea of trying to use regulation as a tool to save our species from a recognized existential threat.

I believe we can raise the level of AI doom d...
2024-08-27
1h 01
Doom Debates
SB 1047 AI Regulation Debate: Holly Elmore vs. Greg Tanaka
California's SB 1047 bill, authored by CA State Senator Scott Wiener, is the leading attempt by a US state to regulate catastrophic risks from frontier AI in the wake of President Biden's 2023 AI Executive Order.

Today’s debate:
Holly Elmore, Executive Director of Pause AI US, representing Pro-SB 1047
Greg Tanaka, Palo Alto City Councilmember, representing Anti-SB 1047

Key Bill Supporters: Geoffrey Hinton, Yoshua Bengio, Anthropic, PauseAI, and about a 2/3 majority of California voters surveyed.

Key Bill Opponents: OpenAI, Google, Meta, Y Combinator, Andreessen Horowitz

Links
Gr...
2024-08-26
56 min
Doom Debates
David Shapiro Part II: Unaligned Superintelligence Is Totally Fine?
Today I’m reacting to David Shapiro’s response to my previous episode, and also to David’s latest episode with poker champion & effective altruist Igor Kurganov.

I challenge David's optimistic stance on superintelligent AI inherently aligning with human values. We touch on factors like instrumental convergence and resource competition. David and I continue to clash over whether we should pause AI development to mitigate potential catastrophic risks. I also respond to David's critiques of AI safety advocates.

00:00 Introduction
01:08 David's Response and Engagement
03:02 The Corrigibility Problem
05:38 Nirvana Fallacy
10:5...
2024-08-22
1h 07
Doom Debates
Maciej Ceglowski, Pinboard Founder, Says the Idea of Superintelligence “Eats Smart People” | Liron Reacts
Maciej Ceglowski is an entrepreneur and owner of the bookmarking site Pinboard. I’ve been a long-time fan of his sharp, independent-minded blog posts and tweets.

In this episode, I react to a great 2016 talk he gave at WebCamp Zagreb titled Superintelligence: The Idea That Eats Smart People. This talk was impressively ahead of its time, as the AI doom debate really only heated up in the last few years.

00:00 Introduction
02:13 Historical Analogies and AI Risks
05:57 The Premises of AI Doom
08:25 Mind Design Space and AI Op...
2024-08-19
1h 32
Doom Debates
David Shapiro Doesn't Get PauseAI | Liron Reacts
Today I’m reacting to David Shapiro’s latest YouTube video: “Pausing AI is a spectacularly bad idea―Here's why”.

In my opinion, every plan that doesn’t involve pausing frontier AGI capabilities development now is reckless, or at least every plan that doesn’t prepare to pause AGI once we see a “warning shot” that enough people agree is terrifying.

We’ll go through David’s argument point by point, to see if there are any good points about why maybe pausing AI might actually be a bad idea.

00:00 Introduction
01:16 The Pause AI Movement
2024-08-16
57 min
Doom Debates
David Brooks's Non-Doomer Non-Argument in the NY Times | Liron Reacts
John Sherman and I go through David Brooks’s appallingly bad article in the New York Times titled “Many People Fear AI. They Shouldn’t.”

For Humanity is basically the sister podcast to Doom Debates. We have the same mission to raise awareness of the urgent AI extinction threat, and build grassroots support for pausing new AI capabilities development until it’s safe for humanity.

Subscribe to it on YouTube: https://www.youtube.com/@ForHumanityPodcast
Follow it on X: https://x.com/ForHumanityPod

This is a public episode. If you would like...
2024-08-15
48 min
Doom Debates
Richard Sutton Dismisses AI Extinction Fears with Simplistic Arguments | Liron Reacts
Dr. Richard Sutton is a Professor of Computing Science at the University of Alberta known for his pioneering work on reinforcement learning, and his “bitter lesson” that scaling up an AI’s data and compute gives better results than having programmers try to handcraft or explicitly understand how the AI works.

Dr. Sutton famously claims that AIs are the “next step in human evolution”, a positive force for progress rather than a catastrophic extinction risk comparable to nuclear weapons.

Let’s examine Sutton’s recent interview with Daniel Fagella to understand his crux of disagreement with the AI do...
2024-08-13
1h 26
Doom Debates
AI Doom Debate: “Cards Against Humanity” Co-Creator David Pinsof
David Pinsof is co-creator of the wildly popular Cards Against Humanity and a social science researcher at UCLA Social Minds Lab. He writes a blog called “Everything Is B******t”.

He sees AI doomers as making many different questionable assumptions, and he sees himself as poking holes in those assumptions.

I don’t see it that way at all; I think the doom claim is the “default expectation” we ought to have if we understand basic things about intelligence.

At any rate, I think you’ll agree that his attempt to poke holes in my doom c...
2024-08-08
1h 40
Doom Debates
P(Doom) Estimates Shouldn't Inform Policy??
Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy".

While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology.

I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.

00:00 Introduction
03:40 Bayesian Reasoning
04:33...
2024-08-05
52 min
Doom Debates
Liron Reacts to Martin Casado's AI Claims
Martin Casado is a General Partner at Andreessen Horowitz (a16z) who has strong views about AI.

He claims that AI is basically just a buzzword for statistical models and simulations. As a result of this worldview, he only predicts incremental AI progress that doesn’t pose an existential threat to humanity, and he sees AI regulation as a net negative.

I set out to understand his worldview around AI, and pinpoint the crux of disagreement with my own view.

Spoiler: I conclude that Martin needs to go beyond analyzing AI as just st...
2024-07-31
2h 37
Doom Debates
AI Doom Debate: Tilek Mamutov vs. Liron Shapira
Tilek Mamutov is a Kyrgyzstani software engineer who worked at Google X for 11 years before founding his own international software engineer recruiting company, Outtalent.

Since first encountering the AI doom argument at a Center for Applied Rationality bootcamp 10 years ago, he considers it a serious possibility, but he doesn’t currently feel convinced that doom is likely.

Let’s explore Tilek’s worldview and pinpoint where he gets off the doom train and why!

00:12 Tilek’s Background
01:43 Life in Kyrgyzstan
04:32 Tilek’s Non-Doomer Position
07:12 Debating AI Doom Scenarios
2024-07-26
1h 44
Doom Debates
Liron Reacts to Mike Israetel's "Solving the AI Alignment Problem"
Dr. Mike Israetel is a well-known bodybuilder and fitness influencer with over 600,000 Instagram followers, and a surprisingly intelligent commentator on other subjects, including a whole recent episode on the AI alignment problem.

Mike brought up many interesting points that were worth responding to, making for an interesting reaction episode. I also appreciate that he’s helping get the urgent topic of AI alignment in front of a mainstream audience.

Unfortunately, Mike doesn’t engage with the possibility that AI alignment is an intractable technical problem on a 5-20 year timeframe, which I think is more like...
2024-07-18
1h 32
Doom Debates
Robin Hanson Highlights and Post-Debate Analysis
What did we learn from my debate with Robin Hanson? Did we successfully isolate the cruxes of disagreement? I actually think we did!

In this post-debate analysis, we’ll review what those key cruxes are, and why I still think I’m right and Robin is wrong about them!

I’ve taken the time to think much harder about everything Robin said during the debate, so I can give you new & better counterarguments than the ones I was able to make in realtime.

Timestamps
00:00 Debate Reactions
06:08 AI Timelines and Key Me...
2024-07-12
1h 11
Doom Debates
Robin Hanson vs. Liron Shapira: Is Near-Term Extinction From AGI Plausible?
Robin Hanson is a legend in the rationality community and one of my biggest intellectual influences.

In 2008, he famously debated Eliezer Yudkowsky about AI doom via a sequence of dueling blog posts known as the great Hanson-Yudkowsky Foom Debate. This debate picks up where Hanson-Yudkowsky left off, revisiting key arguments in the light of recent AI advances.

My position is similar to Eliezer's: P(doom) is on the order of 50%. Robin's position is shockingly different: P(doom) is below 1%.

00:00 Announcements
03:18 Debate Begins
05:41 Discussing AI Timelines and Predictions
19:54...
2024-07-08
2h 08
Doom Debates
Preparing for my AI Doom Debate with Robin Hanson
This episode is a comprehensive preparation session for my upcoming debate on AI doom with the legendary Robin Hanson.

Robin's P(doom) is below 1%.
2024-07-05
49 min
Doom Debates
Robin Hanson debate prep: Liron argues *against* AI doom!
I’ve been studying Robin Hanson’s catalog of writings and interviews in preparation for our upcoming AI doom debate. Now I’m doing an exercise where I step into Robin’s shoes, and make the strongest possible case for his non-doom position!

This exercise is called the Ideological Turing Test, and it’s based on the idea that it’s only productive to argue against someone if you understand what you’re arguing against. Being able to argue *for* a position proves that you understand it.

My guest David Xu is a fellow AI doomer, and de...
2024-06-30
1h 32
Doom Debates
AI Doom Q&A
Today I'm answering questions from listener Tony Warren.

1:16 Biological imperatives in machine learning
2:22 Evolutionary pressure vs. AI training
4:15 Instrumental convergence and AI goals
6:46 Human vs. AI problem domains
9:20 AI vs. human actuators
18:04 Evolution and intelligence
33:23 Maximum intelligence
54:55 Computational limits and the future

Follow Tony: https://x.com/Pove_iOS

Doom Debates catalogues all the different stops where people get off the "doom train", all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

If you'd like the full Do...
2024-06-26
1h 05
Doom Debates
AI Doom Debate: Will AGI’s analysis paralysis save humanity?
My guest Rob thinks superintelligent AI will suffer from analysis paralysis from trying to achieve a 100% probability of killing humanity. Since AI won’t be satisfied with a 99.9% chance of defeating us, it won’t dare to try, and we’ll live!

Doom Debates catalogues all the different stops where people get off the “doom train”, all the different reasons people haven’t (yet) followed the train of logic to the conclusion that humanity is doomed.

Follow Rob: https://x.com/LoB_Blacksage

If you want to get the full Doom Debates experience, it's as easy as doin...
2024-06-22
56 min
Doom Debates
AI Doom Debate: Steven Pinker vs. Liron Shapira
Today I’m debating the one & only Professor Steven Pinker!!! Well, I kind of am, in my head. Let me know if you like this format…

Dr. Pinker is optimistic that AI doom worries are overblown. But I find his arguments shallow, and I’m disappointed with his overall approach to the AI doom discourse.

Here’s the full video of Steven Pinker talking to Michael C. Moynihan on this week’s episode of “Honestly with Bari Weiss”: https://youtube.com/watch?v=mTuH1Ucbif4

If you want to get the full Doom Debates experi...
2024-06-21
28 min
Doom Debates
AI Doom Debate: What's a plausible alignment scenario?
RJ, a pseudonymous listener, volunteered to debate me.

Follow RJ: https://x.com/impershblknight

If you want to get the full Doom Debates experience, it's as easy as doing 4 separate things:
1. Join my Substack - https://doomdebates.com
2. Search "Doom Debates" to subscribe in your podcast player
3. Subscribe to YouTube videos - https://youtube.com/@doomdebates
4. Follow me on Twitter - https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to...
2024-06-20
26 min
Doom Debates
Q&A: How scary is a superintelligent football coach?
Danny asks:

> You've said that an intelligent AI would lead to doom because it would be an excellent goal-to-action mapper. A great football coach like Andy Reid is a great goal-to-action mapper. He's on the sidelines, but he knows exactly what actions his team needs to execute to achieve the goal and win the game.
>
> But if he had a team of chimpanzees or elementary schoolers, or just players who did not want to cooperate, then his team would not execute his plans and they would lose. And even his very talented team of highly moti...
2024-06-18
11 min
Doom Debates
AI Doom Debate: George Hotz vs. Liron Shapira
Today I’m going to play you my debate with the brilliant hacker and entrepreneur, George Hotz.

This took place on an X Space last August.

Prior to our debate, George had done a debate with Eliezer Yudkowsky on Dwarkesh Podcast.

Follow George: https://x.com/realGeorgeHotz
Follow Liron: https://x.com/liron

This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
2024-06-18
1h 17
Doom Debates
Should we gamble on AGI before all 8 billion of us die?
Chase Mann claims accelerating AGI timelines is the best thing we can do for the survival of the 8 billion people alive today.

I claim pausing AI is still the highest-expected-utility decision for everyone.

Who do you agree with? Comment on my Substack/X/YouTube and let me know!

Follow Chase: https://x.com/ChaseMann
Follow Liron: https://x.com/liron

LessWrong has some great posts about cryonics: https://www.lesswrong.com/tag/cryonics

This is a public episode. If you would like to discuss this with...
2024-06-16
56 min
Doom Debates
Can humans judge AI's arguments?
It’s a monologue episode!

Robin Hanson’s blog: https://OvercomingBias.com
Robin Hanson’s famous concept, the Great Filter: https://en.wikipedia.org/wiki/Great_Filter
Robin Hanson’s groundbreaking 2021 solution to the Fermi Paradox: https://GrabbyAliens.com
Robin Hanson’s conversation with Ronny Fernandez about AI doom from May 2023:
My tweet about whether we can hope to control superintelligent AI by judging its explanations and arguments: https://x.com/liron/status/1798135026166698239
Zvi Mowshowitz’s blog where he posts EXCELLENT weekly AI roundups: https://thezvi.wordpress.com
A...
2024-06-14
33 min
Doom Debates
What this "Doom Debates" podcast is about
Welcome and thanks for listening!

* Why is Liron finally starting a podcast?
* Who does Liron want to debate?
* What’s the debate format?
* What are Liron’s credentials?
* Is someone “rational” like Liron actually just a religious cult member?

Follow Ori on Twitter: https://x.com/ygrowthco

Make sure to subscribe for more episodes! This is a public episode. If you would like to discuss this with other subscribers or get access to bonus episodes, visit lironshapira.substack.com
2024-06-12
39 min
Doom Debates
AI Doom Debate: Liron Shapira vs. Kelvin Santos
Kelvin is optimistic that the forces of economic competition will keep AIs sufficiently aligned with humanity by the time they become superintelligent.

He thinks AIs and humans will plausibly use interoperable money systems (powered by crypto). So even if our values diverge, the AIs will still uphold a system that respects ownership rights, such that humans may hold onto a nontrivial share of capital with which to pursue human values.

I view these kinds of scenarios as wishful thinking with probability much lower than that of the simple undignified scenario I expect, wherein...
2024-06-10
38 min
Future of Life Institute Podcast
Liron Shapira on Superintelligence Goals
Liron Shapira joins the podcast to discuss superintelligence goals, what makes AI different from other technologies, risks from centralizing power, and whether AI can defend us from AI.

Timestamps:
00:00 Intelligence as optimization-power
05:18 Will LLMs imitate human values?
07:15 Why would AI develop dangerous goals?
09:55 Goal-completeness
12:53 Alignment to which values?
22:12 Is AI just another technology?
31:20 What is FOOM?
38:59 Risks from centralized power
49:18 Can AI defend us against AI?
56:28 An Apollo program for AI safety
01:04:49 Do we only have one chance?
01:07:34 Are we living in a crucial time?
01:16:52 Would superintelligence be fragile?
01:21:42 Would human-inspired AI be safe?
2024-04-19
1h 26
"Upstream" with Erik Torenberg
Liron Shapira on the Case for Pausing AI
This week on Upstream, Erik is joined by Liron Shapira to discuss the case against further AI development, why Effective Altruism doesn’t deserve its reputation, and what is misunderstood about nuclear weapons.

Upstream is sponsored by Brave: Head to https://brave.com/brave-ads/ and mention “MoZ” when signing up for a 25% discount on your first campaign.

RECOMMENDED PODCAST: History 102 with WhatifAltHist
Every week, creator of WhatifAltHist Rudyard Lynch and Erik Torenberg cover a major topic in history in depth -- in under an hour. This season will cover classical Greece, early America, the Vikings, medieval...
2024-03-02
1h 09
For Humanity: An AI Safety Podcast
"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira Interview
In Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the un...
2024-02-28
1h 32
For Humanity: An AI Safety Podcast
"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira Interview
In Episode #17 TRAILER, "AI Risk=Jenga", Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece, and could actually end all life on earth. This podcast is not journalism. But it’s not opinion either. This sh...
2024-02-26
02 min
Theo Jaffee Podcast
#10: Liron Shapira - AI doom, FOOM, rationalism, and crypto
Liron Shapira is an entrepreneur, angel investor, and CEO of counseling startup Relationship Hero. He’s also a rationalist, advisor for the Machine Intelligence Research Institute and Center for Applied Rationality, and a consistently candid AI doom pointer-outer.

Liron’s Twitter: https://twitter.com/liron
Liron’s Substack: https://lironshapira.substack.com/
Liron’s old blog, Bloated MVP: https://www.bloatedmvp.com

TJP LINKS:
TRANSCRIPT: https://www.theojaffee.com/p/10-liron-shapira
YouTube: https://youtu.be/YfEcAtHExFM
Apple Podcasts: https://podcasts.apple.com/us/podcast/theo-jaffee-podcast/id1699912677
RSS: h...
2023-12-26
1h 25
Magic Internet Money
The Ethical Dilemmas of AI and Crypto: A Critical Examination with Liron Shapira
This episode was recorded on April 25, 2023.

In this episode, Brad Mills is joined by Liron Shapira, a vocal critic of the cryptocurrency landscape and an advocate for rational thinking in the age of digital disruption.

Together, Brad and Liron navigate the tricky topic of artificial intelligence - dissecting the existential threats that loom over humanity as AI's capabilities grow increasingly advanced. They ponder the profound implications of AI surpassing human intellect and the potential havoc it could wreak, from manipulating markets to commandeering communication channels.

As the conversation turns to the realm of...
2023-12-23
1h 34
Complete Tech Heads with Tom Edwards
OpenAI IMPLODES: Liron Shapira on Altman, AGI, and impending existential doom
OpenAI has fully imploded - I'm talking with Liron Shapira about what it means for tech and the world. Liron is my first returning guest, and regular listeners may remember him as being someone who is extremely concerned about the potential existential dangers of AI. Hosted on Acast. See acast.com/privacy for more information.
2023-11-21
52 min
Complete Tech Heads with Tom Edwards
"The default outcome is... we all die" | talking AI risk with Liron Shapira
This week I'm talking with Liron Shapira: founder, technologist, and self-styled AI doom pointer-outer. In the show, we cover an intro to AI risk, thoughts on a new tier of intelligence, causally mapping goals to outcomes (and why that's dangerous), a variety of rebuttals to Marc Andreessen's recent essay on AI, thoughts on how AI might plausibly take over and kill all humans, the rise and danger of AI girlfriends, OpenAI's new Superalignment team, Elon Musk's latest AI safety venture xAI, and other topics.
2023-07-25
1h 15
Business Podcast by Roohi | VC, Startups
Raising Millions for a Failed Venture Ft. Liron Shapira (Co-Founder of Relationship Hero)
Raising Millions for a Failed Venture. This week I sit down with Liron Shapira (Co-Founder of Relationship Hero, Angel Investor). Some of my key takeaways were: 1. Great ideas and executing well. 2. Realizing failure earlier than most. 3. Reaching investors = showing that you are on a good trajectory, your why, and your value add. It was a value-packed episode! If you want to connect with Liron you can reach him on the below platforms: Twitter: @liron Email: wiseguy@gmail...
2023-04-19
22 min
AdQuick Madvertising Podcast
Liron Shapira: Web 3 mania, Cybersecurity, how AI could brick the universe | Madvertising #6
In this episode of the AdQuick Madvertising podcast, we interview Liron Shapira to talk Web 3 mania, cybersecurity, and go deep into AI existential and business risks and opportunities.
2023-04-06
1h 03
Stoa Conversations: Stoicism Applied
Liron Shapira on Specificity and AI Doom (Episode 19)
Want to become more Stoic? Join us and other Stoics this October: Stoicism Applied by Caleb Ontiveros and Michael Tremblay on Maven. What's the best test of a business idea? Are we doomed by AI? Caleb Ontiveros talks with Liron Shapira about specificity, startup ideas, crypto skepticism, and the case for AI risk. Bloated MVP | Specificity Sequence. (02:19) Criticizing Crypto (04:31) Specificity (10:11) Intellectual Role Models (15:06) Insight from the Rationality Community (19:22) AI Risk (31:07) Back to Business (39:46) Making Sense of Our...
2023-02-23
45 min
The Knowledge with David Elikwu
51. Crypto, AI, and Techno-optimism with Liron Shapira
David speaks with Liron Shapira, Founder & CEO of RelationshipHero.com, a relationship coaching service with over 100,000 clients. Liron is a technologist, rationalist, and serial entrepreneur, whose skeptical takes about crypto and other bloated startups on BloatedMVP.com have been read over a million times. If you wanted an opportunity to dig into everything that is at the frontier of technology right now, then this is the episode for you. 📜 Full transcript: www.theknowledge.io/lironshapira/ 📹 Watch on YouTube: https://youtu.be/w72vE97OzDw 👤 Connect with Liron...
2023-02-23
1h 17
Indie Hackers
#259 – Dating, Hollow Abstractions, and Making Millions of Dollars with Liron Shapira of Relationship Hero
Liron Shapira (@liron) talks relationship and dating advice, building a coaching business, marketing via Facebook groups, and growing his revenue to millions of dollars with Courtland (@csallen) and Channing (@ChanningAllen).
2022-09-15
52 min
Startup Project: Build the future
#24 Liron Shapira: Web3 Critique, Founder & Early Investor in Coinbase
Liron Shapira is known as the guy who writes Twitter threads criticizing Web3 use cases. Other than being a Web3 critic, he is also an angel investor in Coinbase & Wyre (the irony) and the founder of a Y Combinator company, and he raised a $50M round led by Alibaba. Full conversation includes: • Failure with Quixey ($50M+ Series C) • Relationship Hero (YC Backed) • Seed Investment in $COIN (listen for exit multiple) • Investing in Russian Bonds • Web3 Use Cases (cross-border payments, Helium, NFTs in gaming, Uber on Web3, Identity) Nataraj is...
2022-08-05
1h 06
The Café Bitcoin Podcast
STACKCHAIN & Exposing Crypto Scams with Derek Moss, Anthony vonStackChain & Liron Shapira
Today we discuss: "What is the Stackchain?" (NO, IT IS NOT ANOTHER CRYPTO.) We talk about how the memes are helping encourage and educate people to stack sats in a very fun and interactive way. We also talk with Liron Shapira about exposing crypto scams. We appreciate you tuning in, and we thank the speakers who joined us this morning. Thank you as always for listening, and we look forward to bringing you the best Bitcoin content daily, here on "The Café Bitcoin Podcast". Join us Monday through Friday, 7am PST/10am EST every morning and become part o...
2022-08-01
2h 05
Scam Economy
25: Helium is Touted as the Web3 Gold Standard. Is It All Just a Bunch of Hot Air? (w/ Liron Shapira)
Angel investor Liron Shapira joins Scam Economy host Matt Binder to discuss his viral Twitter thread about the decentralized wireless network Helium, often viewed as the gold standard of Web3 use cases. We discuss what Helium is and how it works, how budding entrepreneurs buy expensive hotspots in hopes of earning Helium's cryptocurrency token, who is actually using Helium's wireless network, and much more. Plus: BONUS! As a result of Matt's research for this episode, there is added commentary about how Helium boasted for years about one of its largest clients being the global scooter rideshare company Lime...
2022-07-30
1h 07
The Blockchain Debate Podcast
Motion: Web3 is worse than Web2 (Liron Shapira vs. Kyle Samani)
Announcement: I have a new show called “Crypto This Week.” It’s a weekly, five-minute news comedy satire focused on the world of crypto. Check it out on YouTube here: Crypto This Week with Richard Yan. Guests: Liron Shapira (twitter.com/liron), Kyle Samani (twitter.com/kylesamani). Host: Richard Yan (twitter.com/gentso09). Today’s motion is “Web3 is worse than Web2.” Web3 is a new buzzword that’s gener...
2021-12-24
1h 22
Village Global Podcast
Relationship Coaching, Bloated MVPs, and Evaluating Startup Ideas with Liron Shapira
Liron Shapira (@liron), co-founder of Relationship Hero, joins Erik on this episode to discuss:
- How they are creating a new industry and a new field of study in relationships.
- Why people are becoming more goal-oriented and analytical in general in society, and how this applies to dating and relationships.
- His thoughts on unbundling venture capital.
- The bloated MVP thesis and the idea of a “Great Filter” for startups.
- Misconceptions that people have about building an MVP.
- The future of dating apps.
2021-02-28
39 min
Entrepreneur Gains
#10: Liron Shapira - Relationship Hero - Y Combinator Alumni Interviews (YC S17)
Liron Shapira is the founder of Relationship Hero, a company that provides on-demand, professional, one-on-one relationship coaching services. Relationship Hero currently has over 100 experts available 24/7 to help people with relationship issues.
2020-10-19
48 min
Simulation
#613 Liron Shapira - Relationship Hero
Liron Shapira is Co-Founder & CEO of Relationship Hero, the #1 relationship coaching service in the world. They have over 100 full-time relationship coaches available 24/7 via phone or online chat. https://relationshiphero.com/simulation LinkedIn ► https://linkedin.com/in/lironshapira Twitter ► https://twitter.com/liron ******* Simulation interviews the world’s greatest minds to uncover the nature of reality and elevate our planet’s consciousness ► http://simulationseries.com ******* Design Merch, Get Paid, Spread Thought-Provoking Questions ► https://yoobe.me/simulation ******* Subscribe across platforms ► YouTube ► http://bit.ly/SimYoTu iTunes ► http://bit.ly/SimulationiTunes Instagram ► http://bit.ly/SimulationIG Twitter ► http://bit.ly/SimulationTwitter Spot...
2020-02-16
1h 39