Showing episodes and shows of Chase Precopia
Shows
muckrAIkers
Tech Bros Love AI Waifus
OpenAI is pivoting to porn while public sentiment turns decisively against AI. Pew Research shows Americans are now more concerned than excited by a 2:1 margin. We trace how we got here: broken promises of cancer cures replaced by addiction mechanics and expensive APIs. Meanwhile, data centers are hiding a near-recession, straining power grids, and literally breaking your household appliances. Drawing parallels to the 1970s AI winter, we argue the bubble is shaking and needs to pop now, before it becomes another 2008. The good news? Grassroots resistance works. Protests have already blocked $64 billion in data center projects. NOTE: The...
2025-12-15
45 min
Into AI Safety
Against 'The Singularity' w/ Dr. David Thorstad
Philosopher Dr. David Thorstad tears into one of AI safety's most influential arguments: the singularity hypothesis. We discuss why the idea of recursive self-improvement leading to superintelligence doesn't hold up under scrutiny, how these arguments have redirected hundreds of millions in funding away from proven interventions, and why people keep backpedaling to weaker versions when challenged. David walks through the actual structure of singularity arguments, explains why similar patterns show up in other longtermist claims, and makes the case for why we should focus on concrete problems happening right now like poverty, disease, the rise of authoritarianism...
2025-11-24
1h 09
Into AI Safety
Getting Agentic w/ Alistair Lowe-Norris
Alistair Lowe-Norris, Chief Responsible AI Officer at Iridius and co-host of The Agentic Insider podcast, joins to discuss AI compliance standards, the importance of narrowly scoping systems, and how procurement requirements could encourage responsible AI adoption across industries. We explore the gap between the empty promises companies provide and actual safety practices, as well as the importance of vigilance and continuous oversight. Listen to Alistair on his podcast, The Agentic Insider! As part of my effort to make this whole podcasting thing more sustainable, I have created a Kairos.fm Patreon which includes an extended...
2025-10-20
1h 11
muckrAIkers
AI Safety for Who?
Jacob and Igor argue that AI safety is hurting users, not helping them. The techniques used to make chatbots "safe" and "aligned," such as instruction tuning and RLHF, anthropomorphize AI systems such that they take advantage of our instincts as social beings. At the same time, Big Tech companies push these systems for "wellness" while dodging healthcare liability, causing real harms today. We discuss what actual safety would look like, drawing on self-driving car regulations. Chapters: (00:00) - Introduction & AI Investment Insanity (01:43) - The Problem with AI Safety (08:16) - Anthropomorphizing AI & Its Dangers (26:55) - Mental Health, Wellness...
2025-10-13
49 min
Into AI Safety
Growing BlueDot's Impact w/ Li-Lian Ang
I'm joined by my good friend, Li-Lian Ang, first hire and product manager at BlueDot Impact. We discuss how BlueDot has evolved from their original course offerings to a new "defense-in-depth" approach, which focuses on three core threat models: reduced oversight in high-risk scenarios (e.g. accelerated warfare), catastrophic terrorism (e.g. rogue actors with bioweapons), and the concentration of wealth and power (e.g. supercharged surveillance states). On top of that, we cover how BlueDot's strategies account for and reduce the negative impacts of common issues in AI safety, including exclusionary tendencies, elitism, and echo chambers.
2025-09-15
1h 07
muckrAIkers
The Co-opting of Safety
We dig into how the concept of AI "safety" has been co-opted and weaponized by tech companies. Starting with examples like Mecha-Hitler Grok, we explore how real safety engineering differs from AI "alignment," the myth of the alignment tax, and why this semantic confusion matters for actual safety. (00:00) - Intro (00:21) - Mecha-Hitler Grok (10:07) - "Safety" (19:40) - Under-specification (53:56) - This time isn't different (01:01:46) - Alignment Tax myth (01:17:37) - Actually making AI safer. Links: JMLR article - Underspecification Presents Challenges for Credibility in Modern Machine Learning; Trail of Bits paper - Towards Comprehensive Risk Assessments and Assurance of AI-Based...
2025-08-21
1h 24
Into AI Safety
Layoffs to Leadership w/ Andres Sepulveda Morales
Andres Sepulveda Morales joins me to discuss his journey from three tech layoffs to founding Red Mage Creative and leading the Fort Collins chapter of the Rocky Mountain AI Interest Group (RMAIIG). We explore the current tech job market, AI anxiety in nonprofits, dark patterns in AI systems, and building inclusive tech communities that welcome diverse perspectives. Reach out to Andres on his LinkedIn, or check out the Red Mage Creative website! For any listeners in Colorado, consider attending an RMAIIG event: Boulder; Fort Collins. (00:00) - Intro (01:04) - Andres' Journey (05:15...
2025-08-04
1h 39
Into AI Safety
Getting Into PauseAI w/ Will Petillo
Will Petillo, onboarding team lead at PauseAI, joins me to discuss the grassroots movement advocating for a pause on frontier AI model development. We explore PauseAI's strategy, talk about common misconceptions Will hears, and dig into how diverse perspectives still converge on the need to slow down AI development. Will's links: personal blog on AI; his mindmap of the AI x-risk debate; game demos; AI-focused YouTube channel. (00:00) - Intro (03:36) - What is PauseAI (10:10) - Will Petillo's journey into AI safety advocacy (21:13) - Understanding PauseAI (31:35) - Pursuing a pause (40:06) - Balancing advocacy in a complex world (45:54...
2025-06-23
1h 48
Into AI Safety
Making Your Voice Heard w/ Tristan Williams & Felix de Simone
I am joined by Tristan Williams and Felix de Simone to discuss their work on the potential of constituent communication, specifically in the context of AI legislation. These two worked as part of an AI Safety Camp team to understand whether or not it would be useful for more people to be sharing their experiences, concerns, and opinions with their government representative (hint, it is). Check out the blogpost on their findings, "Talking to Congress: Can constituents contacting their legislator influence policy?", and the tool they created! (01:53) - Introductions (04:04) - Starting the project (13:30...
2025-05-19
1h 33
muckrAIkers
DeepSeek: 2 Months Out
DeepSeek has been out for over 2 months now, and things have begun to settle down. We take this opportunity to contextualize the developments that have occurred in its wake, both within the AI industry and the world economy. As systems get more "agentic" and users are willing to spend increasing amounts of time waiting for their outputs, the value of supposed "reasoning" models continues to be peddled by AI system developers, but does the data really back these claims? Check out our DeepSeek minisode for a snappier overview! EPISODE RECORDED 2025.03.30 (00:40...
2025-04-09
1h 31
Into AI Safety
INTERVIEW: Scaling Democracy w/ (Dr.) Igor Krawczuk
The almost-Dr. Igor Krawczuk joins me for what is the equivalent of 4 of my previous episodes. We get into all the classics: eugenics, capitalism, philosophical toads... Need I say more? If you're interested in connecting with Igor, head on over to his website, or check out placeholder for thesis (it isn't published yet). Because the full show notes have a whopping 115 additional links, I'll highlight some that I think are particularly worthwhile here: The best article you'll ever read on Open Source AI; the best article you'll ever read on emergence in ML; Kate Crawford's...
2024-06-03
2h 58
Into AI Safety
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (3)
As always, the best things come in 3s: dimensions, musketeers, pyramids, and... 3 installments of my interview with Dr. Peter Park, an AI Existential Safety Post-doctoral Fellow working with Dr. Max Tegmark at MIT. As you may have ascertained from the previous two segments of the interview, Dr. Park cofounded StakeOut.AI along with Harry Luk and one other cofounder whose name has been removed due to requirements of her current position. The non-profit had a simple but important mission: make the adoption of AI technology go well, for humanity, but unfortunately, StakeOut.AI had to dissolve in...
2024-03-25
1h 42
Into AI Safety
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (2)
Join me for round 2 with Dr. Peter Park, an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. Dr. Park was a cofounder of StakeOut.AI, a non-profit focused on making AI go well for humans, along with Harry Luk and one other individual, whose name has been removed due to requirements of her current position. In addition to the normal links, I wanted to include the links to the petitions that Dr. Park mentions during the podcast. Note that the nonprofit which began these petitions, StakeOut.AI, has been dissolved. Right AI...
2024-03-18
1h 06
Into AI Safety
MINISODE: Restructure Vol. 2
UPDATE: Contrary to what I say in this episode, I won't be removing any episodes that are already published from the podcast RSS feed. After getting some advice and reflecting more on my own personal goals, I have decided to shift the direction of the podcast towards accessible content regarding "AI" instead of the show's original focus. I will still be releasing what I am calling research ride-along content to my Patreon, but the show's feed will consist only of content that I aim to make as accessible as possible. 00:35 - TL;DL; 01:12...
2024-03-11
13 min
Into AI Safety
INTERVIEW: StakeOut.AI w/ Dr. Peter Park (1)
Dr. Peter Park is an AI Existential Safety Postdoctoral Fellow working with Dr. Max Tegmark at MIT. In conjunction with Harry Luk and one other cofounder, he founded StakeOut.AI, a non-profit focused on making AI go well for humans. 00:54 - Intro; 03:15 - Dr. Park, x-risk, and AGI; 08:55 - StakeOut.AI; 12:05 - Governance scorecard; 19:34 - Hollywood webinar; 22:02 - Regulations.gov comments; 23:48 - Open letters; 26:15 - EU AI Act; 35:07 - Effective accelerationism; 40:50 - Divide and conquer dynamics; 45:40 - AI "art"; 53:09 - Outro. Links to all art...
2024-03-04
54 min
Into AI Safety
MINISODE: "LLMs, a Survey"
Take a trip with me through the paper Large Language Models, A Survey, published on February 9th of 2024. All figures and tables mentioned throughout the episode can be found on the Into AI Safety podcast website. 00:36 - Intro and authors; 01:50 - My takes and paper structure; 04:40 - Getting to LLMs; 07:27 - Defining LLMs & emergence; 12:12 - Overview of PLMs; 15:00 - How LLMs are built; 18:52 - Limitations of LLMs; 23:06 - Uses of LLMs; 25:16 - Evaluations and Benchmarks; 28:11 - Challenges and future directions; 29:21 - Recap & outro. Links to...
2024-02-26
30 min
Into AI Safety
FEEDBACK: Applying for Funding w/ Esben Kran
Esben reviews an application that I would soon submit for Open Philanthropy's Career Transition Funding opportunity. Although I didn't end up receiving the funding, I do think that this episode can be a valuable resource for both others and myself when applying for funding in the future. Head over to Apart Research's website to check out their work, or the Alignment Jam website for information on upcoming hackathons. A doc-capsule of the application at the time of this recording can be found at this link. 01:38 - Interview starts; 05:41 - Proposal; 11:00...
2024-02-19
45 min
Into AI Safety
MINISODE: Reading a Research Paper
Before I begin with the paper-distillation based minisodes, I figured we would go over best practices for reading research papers. I go through the anatomy of typical papers, and some generally applicable advice. 00:56 - Anatomy of a paper; 02:38 - Most common advice; 05:24 - Reading sparsity and path; 07:30 - Notes and motivation. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: Ten simple rules for reading a scientific paper; Best sources I found; Let's get critical: Reading academic articles; #GradHacks: A guide to...
2024-02-12
09 min
Into AI Safety
HACKATHON: Evals November 2023 (2)
Join our hackathon group for the second episode in the Evals November 2023 Hackathon subseries. In this episode, we solidify our goals for the hackathon after some preliminary experimentation and ideation. Check out Stellaric's website, or follow them on Twitter. 01:53 - Meeting starts; 05:05 - Pitch: extension of locked models; 23:23 - Pitch: retroactive holdout datasets; 34:04 - Preliminary results; 37:44 - Next steps; 42:55 - Recap. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: Evalugator library; Password Locked Model blogpost; TruthfulQA: Measuring...
2024-02-05
48 min
Into AI Safety
MINISODE: Portfolios
I provide my thoughts and recommendations regarding personal professional portfolios. 00:35 - Intro to portfolios; 01:42 - Modern portfolios; 02:27 - What to include; 04:38 - Importance of visual; 05:50 - The "About" page; 06:25 - Tools; 08:12 - Future of "Minisodes". Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: From Portafoglio to Eportfolio: The Evolution of Portfolio in Higher Education; GIMP; AlternativeTo; Jekyll; GitHub Pages; Minimal Mistakes; My portfolio
2024-01-29
09 min
Into AI Safety
INTERVIEW: Polysemanticity w/ Dr. Darryl Wright
Darryl and I discuss his background, how he became interested in machine learning, and a project we are currently working on investigating the penalization of polysemanticity during the training of neural networks. Check out a diagram of the decoder task used for our research! 01:46 - Interview begins; 02:14 - Supernovae classification; 08:58 - Penalizing polysemanticity; 20:58 - Our "toy model"; 30:06 - Task description; 32:47 - Addressing hurdles; 39:20 - Lessons learned. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance....
2024-01-22
45 min
Into AI Safety
MINISODE: Starting a Podcast
A summary and reflections on the path I have taken to get this podcast started, including some resource recommendations for others who want to do something similar. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: LessWrong; Spotify for Podcasters; Into AI Safety podcast website; Effective Altruism Global; Open Broadcaster Software (OBS); Craig; Riverside
2024-01-15
10 min
Into AI Safety
HACKATHON: Evals November 2023 (1)
This episode kicks off our first subseries, which will consist of recordings taken during my team's meetings for the AlignmentJams Evals Hackathon in November of 2023. Our team won first place, so you'll be listening to the process which, at the end of the day, turned out to be pretty good. Check out Apart Research, the group that runs the AlignmentJams Hackathons. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: Generalization Analogies: A Testbed for Generalizing AI...
2024-01-08
1h 08
Into AI Safety
MINISODE: Staying Up-to-Date in AI
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it is likely that I will make an update episode in the future. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance. Tools: Feedly; arXiv Sanity Lite; Zotero; AlternativeTo; My "Distilled AI" Folder; AI Explained YouTube channel; AI Safety newsletter; Data Machina newsletter; Import AI; Midwit Alignment. Honourable Mentions: AI Alignment Forum; LessWrong; Bounded Regret (Jacob Steinhardt's blog); Cold Takes (Holden Karnofsky's...
2024-01-01
13 min
Into AI Safety
INTERVIEW: Applications w/ Alice Rigg
Alice Rigg, a mechanistic interpretability researcher from Ottawa, Canada, joins me to discuss their path and the application process for research/mentorship programs. Join the Mech Interp Discord server and attend reading groups at 11:00am on Wednesdays (Mountain Time)! Check out Alice's website. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: EleutherAI (join the public EleutherAI discord server); Distill; Effective Altruism (EA); MATS Retrospective Summer 2023 post; Ambitious Mechanistic Interpretability AISC research plan by Alice Rigg; SPAR; Stability AI. During their most recent fund...
2023-12-18
1h 10
Into AI Safety
MINISODE: Program Applications (Winter 2024)
We're back after a month-long hiatus with a podcast refactor and advice on the application process for research/mentorship programs. Check out the About page on the Into AI Safety website for a summary of the logistics updates. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance: MATS; ASTRA Fellowship; ARENA; AI Safety Camp; BlueDot Impact; Tech with Tim; Fast.AI's Practical Deep Learning for Coders; Kaggle; AlignmentJams; LessWrong; AI Alignment Forum
2023-12-11
18 min