AI therapy is here and OpenAI is terrified of it.
This week on They Might Be Self-Aware, Hunter and Daniel break down the life-or-death stakes of letting large language models play the role of therapist. Should ChatGPT ever be allowed to talk people through self-harm? Or is the legal and ethical liability too great for OpenAI to risk?
We explore the explosive debate around AI therapy vs human therapists, OpenAI’s controversial age-gating model for self-harm conversations, and why lawsuits are forcing companies to walk a tightrope between saving lives and avoiding blame. Along the way, we tackle Anthropic’s ban on domestic surveillance, the growing fears of an AGI job apocalypse, and the rise of AI in dating and religion.
Whether you see AI as savior or doom, this episode delivers a no-holds-barred look at the frontier of mental health, surveillance, and humanity’s biggest gamble.
⏱️ CHAPTERS
00:00 Intro – Egos, Algorithms & Madness
01:40 OpenAI age-gating self-harm chats
05:10 AI lawsuits & the risk of AI therapy
09:45 Should ChatGPT replace human therapists?
11:20 Anthropic bans AI from domestic surveillance
15:00 The race toward AGI – danger or hype?
16:55 AI jobs crisis: mass layoffs & automation
22:00 Pareto principle & Twitter layoffs
23:50 Hunger strike against AGI
25:40 AI in personal life: dating, religion & fear
🎧 Listen & Subscribe
📱 Listen on Spotify: https://open.spotify.com/show/3EcvzkWDRFwnmIXoh7S4Mb?si=3d0f8920382649cc
🍎 Subscribe on Apple Podcasts: https://podcasts.apple.com/us/podcast/they-might-be-self-aware/id1730993297
▶️ Subscribe on YouTube: https://www.youtube.com/channel/UCy9DopLlG7IbOqV-WD25jcw?sub_confirmation=1
📢 Engage
Share your thoughts in the comments and we might feature them in a future episode.
#AItherapy #ArtificialIntelligence #Podcast