Showing episodes and shows of John Sherman

Shows

For Humanity: An AI Safety Podcast

Is AI Alive? | Episode #66 | For Humanity: An AI Risk Podcast
🎙️ Guest: Cameron Berg, AI research scientist probing consciousness in frontier AI systems
📍 Host: John Sherman, journalist & AI-risk communicator
What does it mean to be alive? How close do current frontier AI models get to consciousness? See for yourself like never before. Are advanced language models beginning to exhibit signs of subjective experience? In this episode, John sits down with Cameron Berg to explore the line between next-character prediction and the conscious mind. What happens when you ask an AI model to essentially meditate, to look inward in a loop, to focus on its focus and repeat? Does it feel a sense...
2025-06-05 | 1h 57

Seventh Grader vs AI Risk | Episode #64 | For Humanity: An AI Risk Podcast
In Episode #64, host John Sherman interviews seventh grader Dylan Pothier, his mom Bridget, and his teacher Renee DiPietro. Dylan is an award-winning student author who is concerned about AI risk. (FULL INTERVIEW STARTS AT 00:33:34)
Sam Altman/Chris Anderson @ TED: https://www.youtube.com/watch?v=5MWT_doo68k
Check out our partner channel: Lethal Intelligence AI - Home https://lethalintelligence.ai
FOR HUMANITY MONTHLY DONATION SUBSCRIPTION LINKS:
$1 MONTH https://buy.stripe.com/7sI3cje3x2Zk9SodQT
$10 MONTH https://buy.stripe.com/5kAbIP9Nh0Rc4y46oo
$25 MONTH https://buy.stripe.com/3cs9AHf7B9nIggM4...
2025-04-22 | 1h 42

Justice For Suchir | Episode #63 | For Humanity: An AI Risk Podcast
In an emotional interview, host John Sherman interviews Poornima Rao and Balaji Ramamurthy, the parents of Suchir Balaji. (FULL INTERVIEW STARTS AT 00:18:38)
Suchir Balaji was a 26-year-old artificial intelligence researcher who worked at OpenAI. He was involved in developing models like GPT-4 and WebGPT. In October 2024, he publicly accused OpenAI of violating U.S. copyright laws by using proprietary data to train AI models, arguing that such practices harmed original content creators. His essay, "When does generative AI qualify for fair use?", gained attention and was cited in ongoing lawsuits against OpenAI. Suchir left OpenAI in August 2024, expressing concerns...
2025-04-11 | 1h 19

Keep The Future Human | Episode #62 | For Humanity: An AI Risk Podcast
Host John Sherman conducts an important interview with Anthony Aguirre, Executive Director of the Future of Life Institute. The Future of Life Institute reached out to For Humanity to see if Anthony could come on to promote his very impressive new campaign called Keep The Future Human. The campaign includes a book, an essay, a website, and a video; it's all incredible work. Please check it out: https://keepthefuturehuman.ai/
John and Anthony have a broad-ranging AI risk conversation, covering in some detail Anthony's four essential measures for a human futu...
2025-03-26 | 1h 47

Dark Patterns In AI | Episode #61 | For Humanity: An AI Risk Podcast
Host John Sherman interviews Esben Kran, CEO of Apart Research, about a broad range of AI risk topics. Most importantly, the discussion covers a growing for-profit AI risk business landscape, and Apart's recent report on Dark Patterns in LLMs. We hear about the benchmarking of new models all the time, but this project has successfully identified some key dark patterns in these models.
MORE FROM OUR SPONSOR: https://www.resist-ai.agency/
BUY LOUIS BERMAN'S NEW BOOK ON AMAZON!!! https://a.co/d/8WSNNuo
2025-03-12 | 1h 31

AI Risk Rising | Episode #60 | For Humanity: An AI Risk Podcast
Host John Sherman interviews Pause AI Global Founder Joep Meindertsma following the AI summits in Paris. The discussion begins with the dire moment we are in, the stakes, and the failure of our institutions to respond, before turning into a far-ranging discussion of AI risk reduction communications strategies. (FULL INTERVIEW STARTS AT)
2025-02-28 | 1h 43

Smarter-Than-Human Robots? | Episode #59 | For Humanity: An AI Risk Podcast
Host John Sherman interviews Jad Tarifi, CEO of Integral AI, about his company's work to try to create a world of trillions of AGI-enabled robots by 2035. Jad was a leader on Google's first generative AI team, and his views on his former colleague Geoffrey Hinton's position on existential risk from advanced AI come up more than once.
2025-02-11 | 1h 42

Protecting Our Kids From AI Risk | Episode #58
Host John Sherman interviews Tara Steele, Director, The Safe AI For Children Alliance, about her work to protect children from AI risks such as deepfakes, her concern about AI causing human extinction, and what we can do about all of it.
2025-01-27 | 1h 42

2025 AI Risk Preview | For Humanity: An AI Risk Podcast | Episode #57
What will 2025 bring? Sam Altman says AGI is coming in 2025. Agents will arrive for sure. Military use will expand greatly. Will we get a warning shot? Will we survive the year? In Episode #57, host John Sherman interviews AI Safety Research Engineer Max Winga about the latest in AI advances and risks and the year to come.
2025-01-13 | 1h 40

AGI Goes To Washington | For Humanity: An AI Risk Podcast | Episode #56
In Episode #56, host John Sherman travels to Washington DC to lobby House and Senate staffers for AI regulation along with Felix De Simone and Louis Berman of Pause AI. We unpack what we saw and heard as we presented AI risk to the people who have the power to make real change. SUPPORT PAUSE AI: https://pauseai...
2024-12-19 | 1h 14

AI Risk Special | "Near Midnight in Suicide City" | Episode #55
In a special episode of For Humanity: An AI Risk Podcast, host John Sherman travels to San Francisco. Episode #55, "Near Midnight in Suicide City," is a set of short pieces from our trip out west, where we met with Pause AI, Stop AI, and Liron Shapira, and stopped by OpenAI, among other events. Big, huge, massive thanks to Beau Kershaw, Director of Photography, and my biz partner and best friend who made this journey with me through the work side and the emotional side of this. The work is beautiful and the days were wet and long and heavy...
2024-12-05 | 1h 31

Connor Leahy Interview | Helping People Understand AI Risk | Episode #54
In Episode #54, John Sherman interviews Connor Leahy, CEO of Conjecture. (FULL INTERVIEW STARTS AT 00:06:46)
EMAIL JOHN: forhumanitypodcast@gmail.com
Check out Lethal Intelligence AI: Lethal Intelligence AI - Home https://lethalintelligence.ai | @lethal-intelligence-clips
2024-11-25 | 2h 24

AI Risk Funding | Big Tech vs. Small Safety | Episode #51
In Episode #51, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now. We are currently woefully underprepared.
Learn More About Founders Pledge: https://www.founderspledge.com/
No celebration of life this week!! YouTube finally got me with a copyright flag, had to edit the song out.
THURSDAY NIGHTS--LIVE FOR HUMANITY COMMUNITY MEETINGS--8:30PM EST
Join Zoom Meeting: https://storyfarm.zoom.us/j/816517210... Passcode: 829191
Please Donate Here To Help Promote For Humanity: https://www.paypal.com/paypalme/forhu...
2024-10-23 | 1h 06

AI Risk Funding | Big Tech vs. Small Safety | Episode #51 TRAILER
In Episode #51 Trailer, host John Sherman talks with Tom Barnes, an Applied Researcher with Founders Pledge, about the reality of AI risk funding, and about the need for emergency planning for AI to be much more robust and detailed than it is now.
2024-10-21 | 06 min

Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50
In Episode #50, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
LEARN MORE: www.metaculus.com
This podcast is not journalism. But it's not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable prob...
2024-10-21 | 1h 18

Accurately Predicting Doom | What Insight Can Metaculus Reveal About AI Risk? | Episode #50 TRAILER
In Episode #50 TRAILER, host John Sherman talks with Deger Turan, CEO of Metaculus, about what his prediction market reveals about the AI future we are all heading towards.
2024-10-14 | 05 min

Episode #49: "Go To Jail To Stop AI" For Humanity: An AI Risk Podcast
In Episode #49, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI. Stop AI is a new AI risk protest organization, coming at it with different tactics and goals than Pause AI.
LEARN MORE AND JOIN STOP AI: www.stopai.info
2024-10-14 | 1h 17

Go To Jail To Stop AI | Stopping AI | Episode #49 TRAILER
In Episode #49 TRAILER, host John Sherman talks with Sam Kirchner and Remmelt Ellen, co-founders of Stop AI.
2024-10-08 | 04 min

What Is The Origin Of AI Safety? | AI Safety Movement | Episode #48
In Episode #48, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement. Polls show 60-80% of the public are opposed to building artificial superintelligence. So why is the movement to stop it still so small? The roots of the AI safety movement have a lot to do with it. Holly and John explore the present-day issues created by the movement's origins.
Let's build community! Live For Humanity Community Meeting via Zoom, Thursdays at 8:30pm EST...explanation during the full show! USE THIS LINK: https://storyfarm.zoom.us...
2024-10-08 | 1h 09

AI Safety's Limiting Origins: For Humanity, An AI Risk Podcast, Episode #48 Trailer
In Episode #48 Trailer, host John Sherman talks with Pause AI US Founder Holly Elmore about the limiting origins of the AI safety movement.
2024-09-30 | 07 min

Episode #47: "Can AI Be Controlled?" For Humanity: An AI Risk Podcast
In Episode #47, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit company working on technical AI risk challenges. The discussion includes Buck's thoughts on the new OpenAI o1-preview model, but centers on two questions: is there a way to control AI models before alignment is achieved, if it can be, and how would the system that's supposed to save the world actually work if an AI lab found a model scheming?
Check out these links to Buck's writing on these topics: https://redwoodresearch.substack.com/p/the-case-for-ensuring-that-powerful http...
2024-09-25 | 1h 19

Episode #47 Trailer: "Can AI Be Controlled?" For Humanity: An AI Risk Podcast
In Episode #47 Trailer, host John Sherman talks with Buck Shlegeris, CEO of Redwood Research, a non-profit company working on technical AI risk challenges.
2024-09-25 | 04 min

Episode #46: "Is AI Humanity's Worthy Successor?" For Humanity: An AI Risk Podcast
In Episode #46, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research. Dan has been speaking out about AI risk for a long time but comes at it from a different perspective than many. Dan thinks we need to talk about how we can make AGI and whatever comes after become humanity's worthy successor.
More About Daniel Faggella: https://danfaggella.com/
LEARN HOW TO HELP RAISE AI RISK AWARENESS IN YOUR COMMUNITY HERE: https://pauseai.info/local-organizing
2024-09-18 | 1h 17

Episode #46 Trailer: "Is AI Humanity's Worthy Successor?" For Humanity: An AI Risk Podcast
In Episode #46 Trailer, host John Sherman talks with Daniel Faggella, Founder and Head of Research at Emerj Artificial Intelligence Research.
2024-09-16 | 05 min

Episode #45: "AI Risk And Child Psychology" For Humanity: An AI Risk Podcast
In Episode #45, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology. The conversation is broad-ranging, touching on parenting, happiness and screens, the need for human unity, and the psychology of humans facing an ever more unknown future. (FULL INTERVIEW STARTS AT 00:05:28)
Mike's book: Tech Generation: Raising Balanced Kids in a Hyper-Connected World
An article from Mike in Psychology Today: The Happiness Illusion: Facing the Dark Side of Progress
Find Dr. Brooks on social media: LinkedIn | X/Twitter | YouTube | TikTok | Instagram | Facebook
https://www.linkedin.com/in/dr-mike-brooks-b1164120 https://x.com/drmikebrooks https://ww...
2024-09-11 | 1h 24

Episode #45 TRAILER: "AI Risk And Child Psychology" For Humanity: An AI Risk Podcast
In Episode #45 TRAILER, host John Sherman talks with Dr. Mike Brooks, a psychologist focusing on kids and technology.
2024-09-09 | 06 min

Episode #44: "AI P-Doom Debate: 50% vs 99.999%" For Humanity: An AI Risk Podcast
In Episode #44, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Roman is an influential AI Safety researcher, thought leader, and Associate Professor at the University of Louisville. Liron is a tech CEO and host of the excellent Doom Debates podcast. Roman famously holds a 99.999% p-doom; Liron has a nuanced 50%. John starts out at 75%, unrelated to their numbers. Where are you? Did Roman or Liron move you in their direction at all? Let us know in the comments!
2024-09-04 | 1h 31

Episode #43: "So what exactly is the good case for AI?" For Humanity: An AI Risk Podcast
In Episode #43, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for the good case AI future.
2024-09-02 | 1h 16

Episode #44 Trailer: "AI P-Doom Debate: 50% vs 99.999%" For Humanity: An AI Risk Podcast
In Episode #44 Trailer, host John Sherman brings back friends of For Humanity Dr. Roman Yampolskiy and Liron Shapira. Watch the full episode and let us know in the comments.
2024-09-02 | 07 min

Episode #43 TRAILER: "So what exactly is the good case for AI?" For Humanity: An AI Risk Podcast
In Episode #43 TRAILER, host John Sherman talks with DevOps Engineer Aubrey Blackburn about the vague, elusive case the big AI companies and accelerationists make for the good case AI future.
2024-08-26 | 07 min

Episode #42: "Actors vs. AI" For Humanity: An AI Risk Podcast
In Episode #42, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood, the fight to protect people's digital identities, and the vibes in LA about existential risk.
2024-08-21 | 1h 23

Episode #42 TRAILER: "Actors vs. AI" For Humanity: An AI Risk Podcast
In Episode #42 Trailer, host John Sherman talks with actor Erik Passoja about AI's impact on Hollywood.
2024-08-19 | 03 min

Episode #41 "David Brooks: Dead Wrong on AI" For Humanity: An AI Risk Podcast
In Episode #41, host John Sherman begins with a personal message to David Brooks of the New York Times. Brooks wrote an article titled "Many People Fear AI: They Shouldn't," and in full candor, it pissed John off quite a bit. During this episode, John and Doom Debates host Liron Shapira go line by line through David Brooks's 7/31/24 piece in the New York Times.
2024-08-14 | 48 min

Episode #41 TRAILER "David Brooks: Dead Wrong on AI" For Humanity: An AI Risk Podcast
In Episode #41 TRAILER, host John Sherman previews the full show with a personal message to David Brooks of the New York Times.
2024-08-12 | 09 min

Episode #40 "Surviving Doom" For Humanity: An AI Risk Podcast
In Episode #40, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent. James has been concerned about AI x-risk for 26 years. He now lives in Bali and has become an expert in prepping for a very different world post-warning shot or other major AI-related disaster, and he's helping others do the same. James shares his powerful insight, long-time awareness, and expertise helping others find a way to survive and rebuild from a post-AGI disaster warning shot.
2024-08-07 | 1h 30

Episode #40 TRAILER "Surviving Doom" For Humanity: An AI Risk Podcast
In Episode #40 TRAILER, host John Sherman talks with James Norris, CEO of Upgradable and longtime AI safety proponent.
2024-08-05 | 06 min

Episode #39 "Did AI-Risk Just Get Partisan?" For Humanity: An AI Risk Podcast
In Episode #39, host John Sherman talks with Matthew Taber, founder, advocate, and expert in AI-risk legislation. The conversation starts out with the various state AI laws that are coming up and moves into the shifting political landscape around AI-risk legislation in America in July 2024.
2024-07-31 | 1h 23

Episode #39 Trailer "Did AI-Risk Just Get Partisan?" For Humanity: An AI Risk Podcast
In Episode #39 Trailer, host John Sherman talks with Matthew Taber about the shifting political landscape around AI-risk legislation in America in July 2024.
2024-07-29 | 04 min

Episode #38 "France vs. AGI" For Humanity: An AI Risk Podcast
In Episode #38, host John Sherman talks with Maxime Fournes, Founder, Pause AI France. With the third AI "Safety" Summit coming up in Paris in February 2025, we examine France's role in AI safety, revealing France to be among the very worst when it comes to taking AI risk seriously. How deep is madman Yann LeCun's influence in French society and government? And would France even join an international treaty? The conversation covers the potential for international treaties on AI safety, the psychological factors influencing public perception, and the power dynamics shaping AI's future.
2024-07-24 | 1h 20

Episode #38 TRAILER "France vs. AGI" For Humanity: An AI Risk Podcast
In Episode #38 TRAILER, host John Sherman talks with Maxime Fournes, Founder, Pause AI France.
2024-07-22 | 06 min

Episode #37 "Christianity vs. AGI" For Humanity: An AI Risk Podcast
In Episode #37, host John Sherman talks with writer Peter Biles. Peter is a Christian who often writes from that perspective. He is a prolific fiction writer and has written stories and essays for a variety of publications. He was born and raised in Ada, Oklahoma, and is a contributing writer and editor for Mind Matters. The conversation centers on the intersection between Christianity and AGI, with questions like: what is the role of faith in a world where no one works? And could religions unite to oppose AGI?
Some of Peter Biles's related writing: https://mindmatters...
2024-07-17 | 1h 21

Episode #37 Trailer "Christianity vs. AGI" For Humanity: An AI Risk Podcast
In Episode #37 Trailer, host John Sherman talks with writer Peter Biles about the intersection between Christianity and AGI.
2024-07-15 | 09 min

Episode #36 "The AI Risk Investigators: Inside Gladstone AI, Part 2" For Humanity: An AI Risk Podcast
In Episode #36, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. Gladstone AI is the private company working most closely with the US government on assessing AI risk. The Gladstone Report, published in February, was the first public acknowledgment of AI risk reality by the US government in any way. These are two very important people doing incredibly important work. The full interview lasts more than 2 hours and has been broken into two shows; this is the second of the two.
Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
2024-07-10 | 1h 25

Episode #36 Trailer "The AI Risk Investigators: Inside Gladstone AI, Part 2" For Humanity: An AI Risk Podcast
In Episode #36 Trailer, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI.
Gladstone AI Action Plan: https://www.gladstone.ai
2024-07-08 | 05 min

Episode #35 "The AI Risk Investigators: Inside Gladstone AI, Part 1" For Humanity: An AI Risk Podcast
In Episode #35, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI. The full interview lasts more than 2 hours and has been broken into two shows.
Gladstone AI Action Plan: https://www.gladstone.ai/action-plan
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
2024-07-03 | 1h 01

Episode #35 TRAILER "The AI Risk Investigators: Inside Gladstone AI, Part 1" For Humanity: An AI Risk Podcast
In Episode #35 TRAILER, host John Sherman talks with Jeremie and Eduard Harris, CEO and CTO of Gladstone AI.
TIME MAGAZINE ON THE GLADSTONE REPORT: https://time.com/6898967/ai-extinction-national-security-risks-report/
2024-07-01 | 04 min

Episode #34 - "The Threat of AI Autonomous Replication" For Humanity: An AI Risk Podcast
In Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director, Centre pour la sécurité de l'IA.
Among the very important topics covered: autonomous AI self replication, the potential for warning shots to go unnoticed due to a public and journalist class that are uneducated on AI risk, and the potential for a disastrous Yan Lecunnification of the upcoming February 2025 Paris AI Safety Summit.   Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast This podcast is not journalism. But it’s not opinion either. This i...2024-06-261h 17For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #34 TRAILER - “The Threat of AI Autonomous Replication” For Humanity: An AI Risk PodcastIn Episode #34, host John Sherman talks with Charbel-Raphaël Segerie, Executive Director, Centre pour la sécurité de l'IA. Among the very important topics covered: autonomous AI self replication, the potential for warning shots to go unnoticed due to a public and journalist class that are uneducated on AI risk, and the potential for a disastrous Yan Lecunnification of the upcoming February 2025 Paris AI Safety Summit.   Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast This podcast is not journalism. But it’s not opinion either. This i...2024-06-2404 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #33 - “Dad vs. AGI” For Humanity: An AI Risk PodcastIn episode 33, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself about being a father while also understanding the realities of AI risk and the precarious moment we are living in. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of...2024-06-191h 23For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #33 TRAILER - “Dad vs. AGI” For Humanity: An AI Risk PodcastIn episode 33 Trailer, host John Sherman talks with Dustin Burham, who is a dad, an anesthetist, an AI risk realist, and a podcast host himself about being a father while also understanding the realities of AI risk and the precarious moment we are living in. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the en...2024-06-1704 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #32 - “Humans+AIs=Harmony?” For Humanity: An AI Risk PodcastCould humans and AGIs live in a state of mutual symbiosis, like the ecostsystem of a coral reef? (FULL INTERVIEW STARTS AT 00:23:21) Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it’s possible humans and AGIs can co-exist in mutual symbiosis. This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. 
This...2024-06-121h 37For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #32 TRAILER - “Humans+AIs=Harmony?” For Humanity: An AI Risk PodcastCould humans and AGIs live in a state of mutual symbiosis, like the ecostsystem of a coral reef? Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast In episode 32, host John Sherman interviews BioComm AI CEO Peter Jensen. Peter is working on a number of AI-risk related projects. He believes it’s possible humans and AGIs can co-exist in mutual symbiosis. This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the exis...2024-06-1002 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #31 - “Trucker vs. AGI” For Humanity: An AI Risk PodcastIn Episode #31 John Sherman interviews a 29-year-old American truck driver about his concerns over human extinction and artificial intelligence. They discuss the urgency of raising awareness about AI risks, the potential job displacement in industries like trucking, and the geopolitical implications of AI advancements. Leighton shares his plans to start a podcast and possibly use filmmaking to engage the public in AI safety discussions. Despite skepticism from others, they stress the importance of community and dialogue in understanding and mitigating AI threats, with Leighton highlighting the risk of a "singleton event" and ethical concerns in AI development. ...2024-06-051h 15For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #31 TRAILER - “Trucker vs. AGI” For Humanity: An AI Risk PodcastEpisode #31 TRAILER  - “Trucker vs. AGI” For Humanity: An AI Risk Podcast In Episode #31 TRAILER, John Sherman interviews a 29-year-old American truck driver about his concerns over human extinction and artificial intelligence. Please Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.  For Humanity: An AI Safet...2024-06-0305 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #30 - “Dangerous Days At Open AI” For Humanity: An AI Risk PodcastPlease Donate Here To Help Promote For Humanity https://www.paypal.com/paypalme/forhumanitypodcast In episode 30, John Sherman interviews Professor Olle Häggström on a wide range of AI risk topics. At the top of the list is the super-instability and the super-exodus from OpenAI’s super alignment team following the resignations of Jan Lieke and Ilya Sutskyver. This podcast is not journalism. But it’s not opinion either. This is a long form public service announcement. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of...2024-05-2802 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #27 - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety PodcastEpisode #27  - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety Podcast Please Donate Here To Help Promote This Show https://www.paypal.com/paypalme/forhumanitypodcast In episode #27, host John Sherman interviews Jon Dodd and Rev. 
Trevor Bingham of the World Pause Coalition about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists. This podcast is not journalism. But it’s not opinion...2024-05-081h 20For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #27 Trailer - “1800 Mile AGI Protest Road Trip” For Humanity: An AI Safety PodcastPlease Donate Here To Help Promote This Show https://www.paypal.com/paypalme/forhumanitypodcast In episode #27 Trailer, host John Sherman interviews Jon Dodd and Rev. Trevor Bingham of the World Pause Coaltion about their recent road trip to San Francisco to protest outside the gates of OpenAI headquarters. A group of six people drove 1800 miles to be there. We hear firsthand what happens when OpenAI employees meet AI risk realists. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable pr...2024-05-0602 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #26 - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety PodcastPlease Donate Here To Help Promote This Show https://www.paypal.com/paypalme/forhumanitypodcast In episode #26, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all l...2024-05-011h 51For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #26 TRAILER - “Pause AI Or We All Die” Holly Elmore Interview, For Humanity: An AI Safety PodcastPlease Donate Here To Help Promote This Show https://www.paypal.com/paypalme/forhumanitypodcast In episode #26 TRAILER, host John Sherman and Pause AI US Founder Holly Elmore talk about AI risk. They discuss how AI surprised everyone by advancing so fast, what it’s like for employees at OpenAI working on safety, and why it’s so hard for people to imagine what they can’t imagine. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of a...2024-04-2904 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #25 - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety PodcastEpisode #25  - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast FULL INTERVIEW STARTS AT (00:08:20) DONATE HERE TO HELP PROMOTE THIS SHOW https://www.paypal.com/paypalme/forhumanitypodcast In episode #25, host John Sherman and Dr. Emile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out his view that the AI safety movement has it all wrong on existential threat.  Concerns are voiced about the potential risks of advanced AI, que...2024-04-241h 50For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #25 TRAILER  - “Does The AI Safety Movement Have It All Wrong?” Dr. 
Émile Torres Interview, For Humanity: An AI Safety PodcastDONATE HERE TO HELP PROMOTE THIS SHOW Episode #25 TRAILER  - “Does The AI Safety Movement Have It All Wrong?” Dr. Émile Torres Interview, For Humanity: An AI Safety Podcast In episode #25 TRAILER, host John Sherman and Dr. Emile Torres explore the concept of humanity's future and the rise of artificial general intelligence (AGI) and machine superintelligence. Dr. Torres lays out his view that the AI safety movement has it all wrong on existential threat.  Concerns are voiced about the potential risks of advanced AI, questioning the effectiveness of AI safety...2024-04-2202 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #24 - “YOU can help save the world from AI Doom” For Humanity: An AI Safety PodcastIn episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discusses the critical need for prioritizing AI safety in the face of developing superintelligent AI. In this conversation, Kat and John discuss the topic of AI safety and the potential risks associated with artificial superintelligence. Kat shares her personal transformation from being a skeptic to becoming an advocate for AI safety. They explore the idea that AI could pose a near-term threat rather than just a long-term concern. They also discuss the importance of prioritizing AI safety over other philanthropic endeavors and the need for talented individuals...2024-04-171h 21For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #24 TRAILER - “YOU can help save the world from AI Doom” For Humanity: An AI Safety PodcastIn episode #24, host John Sherman and Nonlinear Co-founder Kat Woods discusses the critical need for prioritizing AI safety in the face of developing superintelligent AI. She compares the challenge to the Titanic's course towards an iceberg, stressing the difficulty in convincing people of the urgency. Woods argues that AI safety is a matter of both altruism and self-preservation. She uses human-animal relations to illustrate the potential consequences of a disparity in intelligence between humans and AI. She notes a positive shift in the perception of AI risks, from fringe to mainstream concern, and shares a personal anecdote from her...2024-04-1505 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #23 - “AI Acceleration Debate” For Humanity: An AI Safety PodcastFULL INTERVIEW STARTS AT (00:22:26) Episode #23 - “AI Acceleration Debate” For Humanity: An AI Safety Podcast  e/acc: Suicide or Salvation? In episode #23, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. They talk about whether AI should align with human values and the potential consequences of alignment. Paul has some wild views, including that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the p...2024-04-102h 01For Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #23 TRAILER - “AI Acceleration Debate” For Humanity: An AI Safety PodcastSuicide or Salvation? In episode #23 TRAILER, AI Risk-Realist John Sherman and Accelerationist Paul Leszczynski debate AI accelerationism, the existential risks and benefits of AI, questioning the AI safety movement and discussing the concept of AI as humanity's child. 
They ponder whether AI should align with human values and the potential consequences of such alignment. Paul suggests that AI safety efforts could inadvertently lead to the very dangers they aim to prevent. The conversation touches on the philosophy of accelerationism and the influence of human conditioning on our understanding of AI. This podcast is not journalism. But it’s...2024-04-0805 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety PodcastEpisode #22 - “Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety PodcastIn Episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of...2024-04-0338 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“Sam Altman: Unelected, Unvetted, Unaccountable” For Humanity: An AI Safety Podcast Episode #22 TRAILERIn episode #22, host John Sherman critically examines Sam Altman's role as CEO of OpenAI, focusing on the ethical and safety challenges of AI development. The discussion critiques Altman's lack of public accountability and the risks his decisions pose to humanity. Concerns are raised about the governance of AI, the potential for AI to cause harm, and the need for safety measures and regulations. The episode also explores the societal impact of AI, the possibility of AI affecting the physical world, and the importance of public awareness and engagement in AI risk discussions. Overall, the episode emphasizes the urgency of...2024-04-0102 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21“Why AI Killing You Isn’t On The News” For Humanity: An AI Safety Podcast Episode #21 Interview starts at 20:10 Some highlights of John’s news career start at 9:14 In In Episode #21 “Why AI Killing You Isn’t On The News” Casey Clark Interview,, host John Sherman and WJZY-TV News Director Casey Clark explore the significant underreporting of AI's existential risks in the media. They recount a disturbing incident where AI bots infiltrated a city council meeting, spewing hateful messages. The conversation delves into the challenges of conveying the complexities of artificial general intelligence to the public...2024-03-271h 13For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“Why AI Killing You Isn’t On The News” TRAILER For Humanity: An AI Safety Podcast Episode #21In Episode #21 TRAILER “Why AI Killing You Isn’t On The News” Casey Clark Interview, John Sherman interviews WJZY-TV News Director Casey Clark about TV news coverage of AI existential risk. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.  For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. 
Peabody...2024-03-2503 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast0:01 / 3:52 “AI Risk Realist vs. Coding Cowboy” For Humanity: An AI Safety Podcast Episode #20In Episode #20 “AI Safety Debate: Risk Realist vs Coding Cowboy” John Sherman debates AI risk with lifelong coder and current Chief AI Officer Mark Tellez. The full show conversation covers issues like can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, are the AI CEOs just big liars. Mark believes we are on a safe course, and when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas. This podcast is not journalism. But it’s...2024-03-201h 49For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“AI Risk Realist vs. Coding Cowboy” TRAILER For Humanity: An AI Safety Podcast Episode #20In Episode #20 “AI Safety Debate: Risk Realist vs Coding Cowboy” TRAILER, John Sherman debates AI risk with a lifelong coder and current Chief AI Officer. The full show conversation covers issues like can AI systems be contained to the digital world, should we build data centers with explosives lining the walls just in case, are the AI CEOs just big liars. Mark believes we are on a safe course, and when that changes we will have time to react. John disagrees. What follows is a candid and respectful exchange of ideas. This podcast is not journalism. But it’s...2024-03-1803 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19In Episode #19, “David Shapiro Interview” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels. His main channel (link below: go follow him!), with more than 140k subscribers, is a constant source of new AI and AGI and post-labor economy-related video content. Dave does a great job breaking things down. But a lot Dave’s content is about a post AGI future. And this podcast’s main concern is that we won’t get there, cuz AGI will kill us all first. So this show is a two part conversation, first about if we can...2024-03-131h 40For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“David Shapiro AI-Risk Interview” For Humanity: An AI Safety Podcast Episode #19 TRAILERIn Episode #19 TRAILER, “David Shapiro Interview” John talks with AI/Tech YouTube star David Shapiro. David has several successful YouTube channels, his main channel (link below go follow him!), with more than 140k subscribers, is a constant source of new AI and AGI and post-labor economy-related video content. Dave does a great job breaking things down. But a lot Dave’s content is about a post-AGI future. And this podcast’s main concern is that we won’t get there, cuz AGI will kill us all first. So this show is a two-part conversation, first about if we can live past...2024-03-1107 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“Worse Than Extinction, CTO vs. S-Risk” For Humanity, An AI Safety Podcast Episode #18In Episode #18 TRAILER, “Worse Than Extinction, CTO vs. S-Risk” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate, S-risk, or suffering risk. This episode has a lot in it that is very hard to hear. 
And say.The tech CEOs are spinning visions of abundance and utopia for the public. Someone needs to fill in the full picture of the realm of possibilities, no matter how hard it is to h...2024-03-061h 33For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast“Worse Than Extinction, CTO vs. S-Risk” TRAILER For Humanity, An AI Safety Podcast Episode #18In Episode #18 TRAILER, “Worse Than Extinction, CTO vs. S-Risk” Louis Berman Interview, John talks with tech CTO Louis Berman about a broad range of AI risk topics centered around existential risk. The conversation goes to the darkest corner of the AI risk debate, S-risk, or suffering risk.This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast, is the accessible AI Safety Podcast for all humans, no tech background required. Our show f...2024-03-0403 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast"AI Risk=Jenga" For Humanity, An AI Safety Podcast Episode #17, Liron Shapira InterviewIn Episode #17, AI Risk + Jenga, Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He says something like Sora, seemingly just a video innovation, could actually end all life on earth. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the un...2024-02-281h 32For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast"AI Risk=Jenga" For Humanity, An AI Safety Podcast #17 TRAILER, Liron Shapira InterviewIn Episode #17 TRAILER, "AI Risk=Jenga", Liron Shapira Interview, John talks with tech CEO and AI Risk Activist Liron Shapira about a broad range of AI risk topics centered around existential risk. Liron likens AI Risk to a game of Jenga, where there are a finite number of pieces, and each one you pull out leaves you one closer to collapse. He explains how something like Sora, seemingly just a video tool, is actually a significant, real Jenga piece, and could actually end all life on earth. This podcast is not journalism. But it’s not opinion either. This sh...2024-02-2602 minFor Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15In Episode #15, AI Risk Superbowl I: Conner vs. Beff, Highlights and Post-Game Analysis, John takes a look at the recent debate on the Machine Learning Street Talk Podcast between AI safety hero Connor Leahy and Acceleration cult leader Beff Jezos, aka Guillaume Vendun. The epic three hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthin...2024-02-191h 00For Humanity: An AI Safety PodcastFor Humanity: An AI Safety Podcast"AI Risk Super Bowl I: Conner vs. Beff" For Humanity, An AI Safety Podcast Episode #15 TRAILERIn Episode #15 TRAILER, AI Risk Super Bowl I: Conner vs. 
Beff, Highlights and Post-Game Analysis," John takes a look at the recent debate on the Machine Learning Street Talk Podcast between AI safety hero Connor Leahy and accelerationist cult leader Beff Jezos, aka Guillaume Verdon. The epic three-hour debate took place on 2/2/24. With a mix of highlights and analysis, John, with Beff’s help, reveals the truth about the e/acc movement: it’s anti-human at its core. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and unders...
2024-02-12 02 min

For Humanity: An AI Safety Podcast
"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14, Joep Meindertsma Interview
In Episode #14, John interviews Joep Meindertsma, founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI Safety protests on the planet. John and Joep talk about what's being done, how it all feels, how it all might end, and even broach the darkest corner of all of this: suffering risk. This conversation embodies a spirit this movement needs: we can be upbeat and positive as we talk about the darkest subjects possible. It's not "optimism" to race to build suicide machines, but it is optimism to assume the best about ourselves...
2024-02-07 1h 16

For Humanity: An AI Safety Podcast
"Pause AI or Die" For Humanity: An AI Safety Podcast Episode #14 TRAILER, Joep Meindertsma Interview
In Episode #14 TRAILER, John interviews Joep Meindertsma, founder of Pause AI, a global AI safety policy and protest organization. Pause AI was behind the first-ever AI Safety protests on the planet. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist Jo...
2024-02-05 03 min

The Farm to School Podcast
Bringing Learning to Life in the School Garden: Special Guest John Fisher
Join us as we interview John Fisher from Life Lab in Santa Cruz County, California. John has dedicated his life to bringing school gardens to the forefront in the USA, and has been a main architect of the school garden movement. We would love to hear from you! Send us a message.
2024-02-01 30 min

For Humanity: An AI Safety Podcast
"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13, Darren McKee Interview
In Episode #13, “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. Darren starts off on an optimistic note by saying AI Safety is winning. You don’t often hear it, but Darren says the world has moved on AI Safety with greater speed and focus and real promise than most in the AI community had thought was possible. Apologies for the laggy cam on Darren! Darren’s book is an excellent resource; like this podcast, it is intended for the general public.
This pod...
2024-01-30 1h 40

For Humanity: An AI Safety Podcast
"Uncontrollable AI" For Humanity: An AI Safety Podcast, Episode #13, Author Darren McKee Interview
In Episode #13 TRAILER, “Uncontrollable AI,” John Sherman interviews Darren McKee, author of Uncontrollable: The Threat of Artificial Superintelligence and the Race to Save the World. In this trailer, Darren starts off on an optimistic note by saying AI Safety is winning. You don’t often hear it, but Darren says the world has moved on AI Safety with greater speed and focus and real promise than most in the AI community had thought was possible. Darren’s book is an excellent resource; like this podcast, it is intended for the general public. This podcast is not journalism. But it’s not opin...
2024-01-29 01 min

For Humanity: An AI Safety Podcast
"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview
In Episode #12, we have our first For Humanity debate! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinct...
2024-01-25 1h 40

For Humanity: An AI Safety Podcast
"AI Risk Debate" For Humanity: An AI Safety Podcast Episode #12 Theo Jaffee Interview TRAILER
In Episode #12 TRAILER, we have our first For Humanity debate! John talks with Theo Jaffee, a fast-rising AI podcaster who is a self-described “techno-optimist.” The debate covers a wide range of topics in AI risk. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabod...
2024-01-22 05 min

For Humanity: An AI Safety Podcast
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview
In Episode #11, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all. John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future, while keeping our sanity at the same time. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and un...
2024-01-17 1h 19

For Humanity: An AI Safety Podcast
"Artist vs. AI Risk" For Humanity: An AI Safety Podcast Episode #11 Stephen Hanson Interview TRAILER
In Episode #11 TRAILER, we meet Stephen Hanson, a painter and digital artist from Northern England. Stephen first became aware of AI risk in December 2022, and has spent 12+ months carrying the weight of it all.
John and Steve talk about what it's like to have a family and how to talk to them about AI risk, what the future holds, and what we, the AI Risk Realists, can do to change the future, while keeping our sanity at the same time. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts an...
2024-01-16 01 min

For Humanity: An AI Safety Podcast
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10
In Episode #10, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. Be warned, this is a heavy episode. But there is some hope and a laugh at the end. Most important among his predictions, he believes:
- Humanity no longer has 30-50 years to solve the alignment and interpretability problems; our broken processes just won't allow it
- Human augmentation is the only viable path for humans to compete with AGIs
- We have ONE YEAR, THIS YEAR, 2024, to mount a global WW2-style...
2024-01-10 28 min

For Humanity: An AI Safety Podcast
"Eliezer Yudkowsky's 2024 Doom Update" For Humanity: An AI Safety Podcast, Episode #10 Trailer
In Episode #10 TRAILER, AI Safety Research icon Eliezer Yudkowsky updates his AI doom predictions for 2024. After For Humanity host John Sherman tweeted at Eliezer, he revealed new timelines and predictions for 2024. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required. Our show focuses solely on the threat of human extinction from AI. Peabody Award-winning former journalist John Sh...
2024-01-09 01 min

For Humanity: An AI Safety Podcast
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it. In Episode #8, host John Sherman points fingers and lays blame. How is it possible we're actually really discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth. For Humanity: An AI Safety Podcast is the accessible AI Sa...
2023-12-22 38 min

For Humanity: An AI Safety Podcast
"AI's Top 3 Doomers" For Humanity, An AI Safety Podcast: Episode #8 TRAILER
Who are the most dangerous "doomers" in AI? It's the people bringing the doom threat to the world, not the people calling them out for it. In Episode #8 TRAILER, host John Sherman points fingers and lays blame. How is it possible we're actually really discussing a zero-humans-on-earth future? Meet the people making it happen, the real doomers. This podcast is not journalism. But it’s not opinion either. This show simply strings together the existing facts and underscores the unthinkable probable outcome, the end of all life on earth.
For Humanity: An AI Safety Podcast is the accessible AI Sa...
2023-12-21 02 min

For Humanity: An AI Safety Podcast
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7
You've heard all the tech experts. But what do regular moms think about AI and human extinction? In Episode #7, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization. So what do regular moms think of all this? Watch and find out. This podcast is not journalism. But it’s not opinion either. This show simply strings together th...
2023-12-14 52 min

For Humanity: An AI Safety Podcast
"Moms Talk AI Extinction Risk" For Humanity, An AI Safety Podcast: Episode #7 TRAILER
You've heard all the tech experts. But what do regular moms think about AI and human extinction? In our Episode #7 TRAILER, "Moms Talk AI Extinction Risk," host John Sherman moves the AI Safety debate from the tech world to the real world. 30-something tech dudes believe they somehow have our authorization to toy with killing our children. And our children's yet unborn children too. They do not have this authorization. So what do regular moms think of all this? Watch and find out. This podcast is not journalism. But it’s not opinion either. This show simply strings together th...
2023-12-13 02 min

For Humanity: An AI Safety Podcast
"Team Save Us vs Team Kill Us" For Humanity, An AI Safety Podcast Episode #6: The Munk Debate
In Episode #6, "Team Save Us vs. Team Kill Us," host John Sherman weaves together highlights and analysis of the Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in Toronto in June 2023, and it remains entirely current and relevant today, standing alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain unsolved. All of the threats debated only grow in urgency. In this Munk Debate, you’ll meet two teams: Max Tegmark and Yoshua Bengio on Te...
2023-12-06 43 min

For Humanity: An AI Safety Podcast
Team Save Us vs Team Kill Us: For Humanity, An AI Safety Podcast Episode #6: The Munk Debate TRAILER
Want to see the most important issue in human history, extinction from AI, robustly debated, live and in person? It doesn’t happen nearly often enough. In our Episode #6 TRAILER, "Team Save Us vs. Team Kill Us," John Sherman weaves together highlights and analysis of the Munk Debate on AI Safety to show the case for and against AI as a human extinction risk. The debate took place in June 2023, and it remains entirely current and relevant today, standing alone as one of the most well-produced, well-argued debates on AI Safety anywhere. All of the issues debated remain un...
2023-12-03 01 min

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5
In Episode #5, Part 2, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher.
Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background required...
2023-11-27 41 min

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 2: For Humanity, An AI Safety Podcast Episode #5 TRAILER
In Episode #5, Part 2 TRAILER, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher. Among the many topics discussed in this episode:
- what is at the core of AI safety risk skepticism
- why AI safety research leaders themselves are so all over the map
- why journalism is failing so miserably to cover AI safety appropriately
- the drastic step the federal government could take to really slow Big AI down
For Humanity: An AI Safety Podcast is the accessible AI Safety Podcast for all humans, no tech background...
2023-11-26 02 min

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4
In Episode #4, Part 1, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher. Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity...
2023-11-22 35 min

For Humanity: An AI Safety Podcast
Dr. Roman Yampolskiy Interview, Part 1: For Humanity, An AI Safety Podcast Episode #4 TRAILER
In Episode #4, Part 1 TRAILER, John Sherman interviews Dr. Roman Yampolskiy, Director of the Cyber Security Lab at the University of Louisville and a renowned AI safety researcher. Among the many topics discussed in this episode:
- why more average people aren't more involved and upset about AI safety
- how frontier AI capabilities workers go to work every day knowing their work risks human extinction, and go back to work the next day
- how we can talk to our kids about these dark, existential issues
- what if AI safety researchers concerned about human extinction from AI are just somehow wrong?
For Humanity...
2023-11-20 01 min