Showing episodes and shows of Holden Karnofsky


80,000 Hours Podcast – Beyond human minds: The bewildering frontier of consciousness in insects, AI, and more (2025-05-23, 3h 34m)
What if there’s something it’s like to be a shrimp — or a chatbot? For centuries, humans have debated the nature of consciousness, often placing ourselves at the very top. But what about the minds of others — both the animals we share this planet with and the artificial intelligences we’re creating? We’ve pulled together clips from past conversations with researchers and philosophers who’ve spent years trying to make sense of animal consciousness, artificial sentience, and moral consideration under deep uncertainty. Links to learn more and full transcript: https://80k.info/nhs

80,000 Hours Podcast – Serendipity, weird bets, & cold emails that actually work: Career advice from 16 former guests (2025-04-24, 2h 18m)
How do you navigate a career path when the future of work is uncertain? How important is mentorship versus immediate impact? Is it better to focus on your strengths or on the world’s most pressing problems? Should you specialise deeply or develop a unique combination of skills? From embracing failure to finding unlikely allies, we bring you 16 diverse perspectives from past guests who’ve found unconventional paths to impact and helped others do the same. Links to learn more and full transcript. Chapters: Cold open (00:00:00); Luisa's intro (00:01:04); Holden Karnofsky on just kick...

London Futurists – Human extinction: thinking the unthinkable, with Sean ÓhÉigeartaigh (2025-04-23, 43 min)
Our subject in this episode may seem grim – it’s the potential extinction of the human species, either from a natural disaster, like a supervolcano or an asteroid, or from our own human activities, such as nuclear weapons, greenhouse gas emissions, engineered biopathogens, misaligned artificial intelligence, or high energy physics experiments causing a cataclysmic rupture in space and time. These scenarios aren’t pleasant to contemplate, but there’s a school of thought that urges us to take them seriously – to think about the unthinkable, in the phrase coined in 1962 by pioneering futurist Herman Kahn. Over the last coupl...

EA Forum Podcast (Curated & popular) – “EA Adjacency as FTX Trauma” by Mjreard (2025-04-10, 19 min)
This is a Forum Team crosspost from Substack. Matt would like to add: "Epistemic status = incomplete speculation; posted here at the Forum team's request" When you ask prominent Effective Altruists about Effective Altruism, you often get responses like these: For context, Will MacAskill and Holden Karnofsky are arguably, literally the number one and two most prominent Effective Altruists on the planet. Other evidence of their ~spouses’ personal involvement abounds, especially Amanda's. Now, perhaps they’ve had changes of heart in recent months or years – and they’re certainly entitled to have those – but being evasive an...

EA Forum Podcast (Curated & popular) – “Anthropic is not being consistently candid about their connection to EA” by burner2 (2025-03-31, 6 min)
In a recent Wired article about Anthropic, there's a section where Anthropic's president, Daniela Amodei, and early employee Amanda Askell seem to suggest there's little connection between Anthropic and the EA movement: Ask Daniela about it and she says, "I'm not the expert on effective altruism. I don't identify with that terminology. My impression is that it's a bit of an outdated term". Yet her husband, Holden Karnofsky, cofounded one of EA's most conspicuous philanthropy wings, is outspoken about AI safety, and, in January 2025, joined Anthropic. Many others also remain engaged with EA. As early employee Amanda...

80,000 Hours Podcast – 15 expert takes on infosec in the age of AI (2025-03-28, 2h 35m)
"There’s almost no story of the future going well that doesn’t have a part that’s like '…and no evil person steals the AI weights and goes and does evil stuff.' So it has highlighted the importance of information security: 'You’re training a powerful AI system; you should make it hard for someone to steal' has popped out to me as a thing that just keeps coming up in these stories, keeps being present. It’s hard to tell a story where it’s not a factor. It’s easy to tell a story where it is a fa...

80,000 Hours Podcast – AGI disagreements and misconceptions: Rob, Luisa, & past guests hash it out (2025-02-10, 3h 12m)
Will LLMs soon be made into autonomous agents? Will they lead to job losses? Is AI misinformation overblown? Will it prove easy or hard to create AGI? And how likely is it that it will feel like something to be a superhuman AGI? With AGI back in the headlines, we bring you 15 opinionated highlights from the show addressing those and other questions, intermixed with opinions from hosts Luisa Rodriguez and Rob Wiblin recorded back in 2023. Check out the full transcript on the 80,000 Hours website. You can decide whether the views we expressed (and...

LessWrong (Curated & Popular) – “What’s the short timeline plan?” by Marius Hobbhahn (2025-01-03, 44 min)
This is a low-effort post. I mostly want to get other people's takes and express concern about the lack of detailed and publicly available plans so far. This post reflects my personal opinion and not necessarily that of other members of Apollo Research. I’d like to thank Ryan Greenblatt, Bronson Schoen, Josh Clymer, Buck Shlegeris, Dan Braun, Mikita Balesni, Jérémy Scheurer, and Cody Rushing for comments and discussion. I think short timelines, e.g. AIs that can replace a top researcher at an AGI lab without losses in capabilities by 2027, are plausible. Some people have post...
AI Safety Fundamentals – If-Then Commitments for AI Risk Reduction (2025-01-02, 40 min)
This article from Holden Karnofsky, now a visiting scholar at the Carnegie Endowment for International Peace, discusses "If-Then" commitments as a structured approach to managing AI risks without hindering innovation. These commitments offer a framework in which specific responses are triggered when particular risks arise, allowing for a proactive and organized approach to AI safety. The article emphasizes that as AI technology rapidly advances, such predefined voluntary commitments or regulatory requirements can help guide timely interventions, ensuring that AI developments remain safe and beneficial while minimizing unnecessary delays. Original text: https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction...

LessWrong (Curated & Popular) – “Catastrophic sabotage as a major threat model for human-level AI systems” by evhub (2024-11-15, 27 min)
Thanks to Holden Karnofsky, David Duvenaud, and Kate Woolverton for useful discussions and feedback. Following up on our recent “Sabotage Evaluations for Frontier Models” paper, I wanted to share more of my personal thoughts on why I think catastrophic sabotage is important and why I care about it as a threat model. Note that this isn’t in any way intended to be a reflection of Anthropic's views or for that matter anyone's views but my own—it's just a collection of some of my personal thoughts. First, some high-level thoughts on what I want to talk abo...

80,000 Hours Podcast – Parenting insights from Rob and 8 past guests (2024-11-08, 1h 35m)
With kids very much on the team's mind we thought it would be fun to review some comments about parenting featured on the show over the years, then have hosts Luisa Rodriguez and Rob Wiblin react to them. Links to learn more and full transcript. After hearing 8 former guests’ insights, Luisa and Rob chat about: Which of these resonate the most with Rob, now that he’s been a dad for six months (plus an update at nine months). What have been the biggest surprises for Rob in becoming a parent. How Rob's dealt with...

EA Forum Podcast (Curated & popular) – “Joining the Carnegie Endowment for International Peace” by Holden Karnofsky (2024-04-29, 3 min)
Effective today, I’ve left Open Philanthropy and joined the Carnegie Endowment for International Peace[1] as a Visiting Scholar. At Carnegie, I will analyze and write about topics relevant to AI risk reduction. In the short term, I will focus on (a) what AI capabilities might increase the risk of a global catastrophe; (b) how we can catch early warning signs of these capabilities; and (c) what protective measures (for example, strong information security) are important for safely handling such capabilities. This is a continuation of the work I’ve been doing over the last ~year. I want...

AI Safety Fundamentals – AI Could Defeat All Of Us Combined (2024-04-22, 23 min)
This blog post from Holden Karnofsky, Open Philanthropy’s Director of AI Strategy, explains how advanced AI might overpower humanity. It summarizes superintelligent takeover arguments and provides a scenario where human-level AI disempowers humans without achieving superintelligence. As Holden summarizes: “if there's something with human-like skills, seeking to disempower humanity, with a population in the same ballpark as (or larger than) that of all humans, we've got a civilization-level problem." Original text: https://www.cold-takes.com/ai-could-defeat-all-of-us-combined/#the-standard-argument-superintelligence-and-advanced-technology Authors: Holden Karnofsky. A podcast by BlueDot Impact.

LessWrong (Curated & Popular) – The Best Tacit Knowledge Videos on Every Subject (2024-04-01, 14 min)
TL;DR: Tacit knowledge is extremely valuable. Unfortunately, developing tacit knowledge is usually bottlenecked by apprentice-master relationships. Tacit Knowledge Videos could widen this bottleneck. This post is a Schelling point for aggregating these videos—aiming to be The Best Textbooks on Every Subject for Tacit Knowledge Videos. Scroll down to the list if that's what you're here for. Post videos that highlight tacit knowledge in the comments and I’ll add them to the post. Experts in the videos include Stephen Wolfram, Holden Karnofsky, Andy Matuschak, Jonathan Blow, George Hotz, and others. What are Tacit Knowledge Vide...

The Time Capsule Podcast – Bonus Day (2024-02-29, 10 min)
Dear Friends, Every four years we are gifted an extra day, the 29th of February, allowing our unique little planet an additional 24 hours to complete its orbit around the sun. Isn't it comforting to know that we (and our planet) get an extra day to catch up? Anyhow, just how unique is our little planet and its 365.25-day orbit around the sun? One of my favorite pieces of online writing is Tim Urban’s explanation of the Fermi Paradox from 2014. At nearly 5,000 words, it takes about 20 minutes to read — a worthwhile way to spend 1.4% of our...
Into AI Safety – MINISODE: Staying Up-to-Date in AI (2024-01-01, 13 min)
In this minisode I give some tips for staying up-to-date in the ever-changing landscape of AI. I would like to point out that I am constantly iterating on these strategies, tools, and sources, so it is likely that I will make an update episode in the future. Links to all articles/papers which are mentioned throughout the episode can be found below, in order of their appearance. Tools: Feedly; arXiv Sanity Lite; Zotero; AlternativeTo; My "Distilled AI" Folder; AI Explained YouTube channel; AI Safety newsletter; Data Machina newsletter; Import AI; Midwit Alignment. Honourable Mentions: AI Alignment Forum; LessWrong; Bounded Regret (Jacob Steinhardt's blog); Cold Takes (Holden Karnofsky's...

硅谷101 – E131 | The full story of the OpenAI upheaval: two Silicon Valley factions in contention, and Ilya the "swordholder" (E131|OpenAI动荡始末:两股硅谷势力的角逐与"执剑人"Ilya) (2023-11-23, 1h 01m)
During the five days of the "coup" launched by OpenAI's board, power at OpenAI changed hands in a series of reversals, each of which could serve as a textbook case study in public relations and strategic games. After five days of back-and-forth, it ended with former CEO Sam Altman returning and the board being replaced. Tracing the conflict to its roots, dissenting voices had existed inside the company ever since the release of ChatGPT, and OpenAI has gone through three major splits since its founding. OpenAI's organizational structure was itself an innovation in business history: the nonprofit form was meant to ensure that AI develops safely, while the corporate arm was meant to provide the funding AI development requires. How those two ideas can coexist, and where the line for safe AI development lies, has long divided Silicon Valley into two camps, and both camps were present on OpenAI's board. In this episode we look at OpenAI's long-accumulated tensions from the perspective of another core figure, Ilya Sutskever, widely seen as the person who played the most decisive role in the turmoil. Thanks to the VSP investor training camp for supporting this episode; if you are interested, more information can be found here and the sign-up link is here. For any questions, email vsp@sav.vc. The program opens a new cohort once per quarter, and the sign-up link remains valid long-term.
Hosts: 泓君, host of the 硅谷101 podcast; 陈茜, host of the 硅谷101 video channel. Guests: 陶芳波, founder of the AI company MindOS; 周恒星 (Kevin), founder of the English-language tech outlet Pandaily.
What you'll hear: 00:10 the VSP training camp; 02:34 main episode; 04:28 a timeline of OpenAI's three reversals as of the evening of November 20; 09:56 the split already visible at ChatGPT's release: pursue commercialization, or stay research-centered?; 12:31 three ruptures in OpenAI's eight years since founding; 14:43 interviewing Sam Altman three days before the incident, originally planned to celebrate ChatGPT's first anniversary; 16:04 OpenAI's iron triangle: Ilya on research, Greg on engineering, Mira on product; 19:08 the two factions inside OpenAI had been fighting fiercely all along; 21:00 "the board told employees that, for the sake of the mission, it would not hesitate even to destroy the company"; 23:24 how safety became the focal point: the contest between effective altruism and effective accelerationism; 25:35 Ilya's superalignment: a love for the foundations of human civilization; 29:35 Ilya Sutskever vs. Sam Altman: who is OpenAI's soul?; 32:41 what makes OpenAI's organization unique; 34:50 Sam Altman: sincere and meticulous in execution, but not technical; 39:52 the board members' relationship with effective altruism; 43:01 the tangled connections behind the board: Zuckerberg, Open Philanthropy, and Anthropic; 47:46 three board members abruptly left this year, unbalancing the board; 49:09 the historical origins of effective accelerationism and effective altruism; 54:09 the "swordholder" Ilya: equally worried about AI risk, but not aiming to abandon development; 56:48 the industry impact of the OpenAI affair.
Board members. Employee directors: Sam Altman (former OpenAI CEO), Ilya Sutskever (OpenAI chief scientist), Greg Brockman (former OpenAI chairman and president). Independent directors: Adam D'Angelo (co-founder and CEO of Quora), Tasha McCauley (CEO of the robotics company Fellow Robots), Helen Toner (director of strategy at Georgetown University's Center for Security and Emerging Technology). Directors who left in the months before the incident: Reid Hoffman (OpenAI investor; left in March this year over conflict-of-interest concerns after investing in another AI company), Will Hurd (former Republican member of Congress), Shivon Zilis (director at the brain-computer interface company Neuralink, who left the board shortly afterwards because of her ties to Musk). Initial new board members: Bret Taylor (former co-CEO of Salesforce), Larry Summers (former US Secretary of the Treasury), Adam D'Angelo (co-founder and CEO of Quora).
Glossary: Anthropic is an American AI startup and public-benefit corporation founded in 2021 by former OpenAI members Dario Amodei and Daniela Amodei; Dario Amodei is its CEO. Open Philanthropy is a research and grant-making foundation that makes grants according to the principles of effective altruism; it grew out of a collaboration between GiveWell and Good Ventures beginning in 2011, and Alexander Berger became CEO after Holden Karnofsky stepped down in 2021. Effective altruism (EA) is a philosophy and social movement that advocates using evidence and reason to figure out how to benefit others as much as possible; in this episode it stands for the view that AI needs to be developed carefully and slowly. Effective accelerationism (e/acc) advocates accelerating particular social or economic processes to reach desired outcomes faster or more effectively; in this episode it stands for the view that AI needs to be developed boldly and quickly.
Related viewing: 硅谷101 videos "AI保守派大战激进派,我们细扒了OpenAI董事会每一位成员" (AI conservatives vs. accelerationists: a close look at every member of the OpenAI board) and "Sam Altman回归!聊聊'叛变者'的恐惧与信念:OpenAI技术灵魂人物Ilya Sutskever" (Sam Altman returns! On the fears and convictions of the "mutineers": Ilya Sutskever, OpenAI's technical soul). Post-production: AMEI. BGM: Anticipating a New Day – Stationary Sign; Interruption – Craft Case; For One Brief Day – Francis Wells. Shownotes: Jessica. Find us on the 硅谷101 WeChat public account, Apple Podcasts, 小宇宙, 喜马拉雅, 蜻蜓FM, 网易云音乐, QQ音乐, 荔枝播客, Spotify, TuneIn, Google Podcast, and Amazon Music. Contact: podcast@sv101.net
LessWrong (Curated & Popular) – "We're Not Ready: thoughts on 'pausing' and responsible scaling policies" by Holden Karnofsky (2023-10-30, 11 min)
Views are my own, not Open Philanthropy’s. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse. Over the last few months, I’ve spent a lot of my time trying to help out with efforts to get responsible scaling policies adopted. In that context, a number of people have said it would be helpful for me to be publicly explicit about whether I’m in favor of an AI pause. This post will give some thoughts on these topics. Source: https...

EA Forum Podcast (Curated & popular) – “We’re Not Ready: thoughts on ‘pausing’ and responsible scaling policies” by Holden Karnofsky (2023-10-27, 11 min)
Views are my own, not Open Philanthropy's. I am married to the President of Anthropic and have a financial interest in both Anthropic and OpenAI via my spouse. Over the last few months, I’ve spent a lot of my time trying to help out with efforts to get responsible scaling policies adopted. In that context, a number of people have said it would be helpful for me to be publicly explicit about whether I’m in favor of an AI pause. This post will give some thoughts on these topics. I think transformative AI coul...

80k After Hours – Highlights: #158 – Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk (2023-10-06, 23 min)
This is a selection of highlights from episode #158 of The 80,000 Hours Podcast. These aren't necessarily the most important, or even most entertaining parts of the interview — and if you enjoy this, we strongly recommend checking out the full episode: Holden Karnofsky on how AIs might take over even if they’re no smarter than humans, and his four-part playbook for AI risk. And if you're finding these highlights episodes valuable, please let us know by emailing podcast@80000hours.org. Highlights put together by Simon Monsour and Milo McGuire.

80,000 Hours Podcast – #158 – Holden Karnofsky on how AIs might take over even if they're no smarter than humans, and his 4-part playbook for AI risk (2023-08-01, 3h 13m)
Back in 2007, Holden Karnofsky cofounded GiveWell, where he sought out the charities that most cost-effectively helped save lives. He then cofounded Open Philanthropy, where he oversaw a team making billions of dollars’ worth of grants across a range of areas: pandemic control, criminal justice reform, farmed animal welfare, and making AI safe, among others. This year, having learned about AI for years and observed recent events, he's narrowing his focus once again, this time on making the transition to advanced AI go well. In today's conversation, Holden returns to the show to share his overall understanding of th...

Effektiver Altruismus: Artikel – The Most Important Century: This Can't Go On ("Das wichtigste Jahrhundert: So kann's nicht weitergehen") – Holden Karnofsky (2023-07-17, 21 min)
How far can current trends, such as today's rate of economic growth, be extrapolated? On a historical timescale: not very far, argues Holden Karnofsky. So where do we go from here? The full text is available at: https://effektiveraltruismus.audio/episode/das-wichtigste-jahrhundert-so-kanns-nicht-weitergehen-holden-karnofsky

Effektiver Altruismus: Artikel – The Most Important Century: All Possible Views About Humanity's Future Are Wild ("Das wichtigste Jahrhundert: Sämtliche möglichen Ansichten über die Zukunft der Menschheit sind verrückt") – Holden Karnofsky (2023-07-05, 22 min)
Are we living at a particularly important moment in human history? Holden Karnofsky, the author of this text, argues that a good deal speaks for it. The full text, with all diagrams, is available at: https://effektiveraltruismus.audio/episode/das-wichtigste-jahrhundert-samtliche-moglichen-ansichten-uber-die-zukunft-der-menschheit-sind-verruckt-holden-karnofsky

Effektiver Altruismus: Artikel – The Most Important Century: The Series Summarized ("Das wichtigste Jahrhundert: Die Beitragsreihe kurz zusammengefasst") – Holden Karnofsky (2023-05-19, 23 min)
Could this century be a pivotal turning point in history? Holden Karnofsky argues for this hypothesis in his series The Most Important Century. Full text at: https://effektiveraltruismus.audio/episode/das-wichtigste-jahrhundert-die-beitragsreihe-kurz-zusammengefasst-holden-karnofsky Send us your questions and feedback at: https://effektiveraltruismus.audio/contact

AI Safety Fundamentals: Governance – [Week 3] “AI Safety Seems Hard to Measure” by Holden Karnofsky (2023-05-13)
In previous pieces, I argued that there’s a real and large risk of AI systems’ developing dangerous goals of their own and defeating all of humanity - at least in the absence of specific efforts to prevent this from happening. A young, growing field of AI safety research tries to reduce this risk, by finding ways to ensure that AI systems behave as intended (rather than forming ambitious aims of their own and deceiving and manipulating humans as needed to accomplish them). Maybe we’ll succeed in reducing the risk, and maybe we won't...

AI Safety Fundamentals: Governance – [Week 5] “Racing Through a Minefield: the AI Deployment Problem” by Holden Karnofsky (2023-05-13)
Push AI forward too fast, and catastrophe could occur. Too slow, and someone else less cautious could do it. Is there a safe course? Source: https://www.cold-takes.com/racing-through-a-minefield-the-ai-deployment-problem/ Crossposted from the Cold Takes Audio podcast.

AI Safety Fundamentals: Governance – [Week 7] “What AI companies can do today to help with the most important century” by Holden Karnofsky (2023-05-13)
I’ve been writing about tangible things we can do today to help the most important century go well. Previously, I wrote about helpful messages to spread and how to help via full-time work. This piece is about what major AI companies can do (and not do) to be helpful. By “major AI companies,” I mean the sorts of AI companies that are advancing the state of the art, and/or could play a major role in how very powerful AI systems end up getting used.[1] This piece could be useful to people who work at tho...

AI Safety Fundamentals: Governance – [Week 8] “My current impressions on career choice for longtermists” by Holden Karnofsky (2023-05-13)
This post summarizes the way I currently think about career choice for longtermists. I have put much less time into thinking about this than 80,000 Hours, but I think it’s valuable for there to be multiple perspectives on this topic out there. Edited to add: see below for why I chose to focus on longtermism in this post. While the jobs I list overlap heavily with the jobs 80,000 Hours lists, I organize them and conceptualize them differently. 80,000 Hours tends to emphasize “paths” to particular roles working on particular causes; by con...
The Unite Americans Show – Episode 8: Woke’s Worst Enemy • Owners of AI • Citizen Journalism • Pakistani Laundering (2023-05-09, 56 min)
SHOW NOTES: “The Unite Americans Show”, Episode 8: Woke Culture’s Worst Enemy, Owners of AI, Becoming a Citizen Journalist, Pakistan Money Laundering. I’m Mark Pukita, and we’ll explore these topics and more in today’s 8th episode of “The Unite Americans Show”. Welcome! One of Woke’s Worst Enemies: James Lindsay (00:00) https://www.youtube.com/watch?v=xbby7yFrIxM Owners of OpenAI. Previous Segments: • No One Can...

LessWrong (Curated & Popular) – "Discussion with Nate Soares on a key alignment difficulty" by Holden Karnofsky (2023-04-05, 39 min)
https://www.lesswrong.com/posts/iy2o4nQj9DnQD7Yhj/discussion-with-nate-soares-on-a-key-alignment-difficulty Crossposted from the AI Alignment Forum. May contain more technical jargon than usual. In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment. I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that w...

TYPE III AUDIO (All episodes) – "Discussion with Nate Soares on a key alignment difficulty" by Holden Karnofsky (2023-04-05, 39 min)
client: lesswrong; project_id: curated; feed_id: ai_safety; narrator: pw; qa: mds; qa_time: 1h00m. In late 2022, Nate Soares gave some feedback on my Cold Takes series on AI risk (shared as drafts at that point), stating that I hadn't discussed what he sees as one of the key difficulties of AI alignment. I wanted to understand the difficulty he was pointing to, so the two of us had an extended Slack exchange, and I then wrote up a summary of the exchange that w...

Astral Codex Ten Podcast – Why I Am Not (As Much Of) A Doomer (As Some People) (2023-03-25, 23 min)
Machine Alignment Monday 3/13/23. https://astralcodexten.substack.com/p/why-i-am-not-as-much-of-a-doomer (see also Katja Grace and Will Eden’s related cases) The average online debate about AI pits someone who thinks the risk is zero, versus someone who thinks it’s any other number. I agree these are the most important debates to have for now. But within the community of concerned people, numbers vary all over the place: Scott Aaronson says 2%, Will MacAskill says 3%, the median machine learning researcher on Katja Grace’s survey says 5 - 10%

Future Matters Reader – Holden Karnofsky — Success without dignity: a nearcasting story of avoiding catastrophe by luck (2023-03-20, 19 min)
Success without dignity: a nearcasting story of avoiding catastrophe by luck, by Holden Karnofsky. https://forum.effectivealtruism.org/posts/75CtdFj79sZrGpGiX/success-without-dignity-a-nearcasting-story-of-avoiding Note: Footnotes in the original article have been omitted.

Om och men – 35. Artificial intelligence: "The computers are taking over" ("Artificiell intelligens: 'Datorerna tar över'") (2023-02-26, 57 min)
A new episode of the podcast Om och men! We talk about artificial intelligence. We have read four articles by Holden Karnofsky, so now we know everything about how the computers are going to take over the world. Bea goes to an apartment viewing and hunts for a bargain. Vincent is furious with Karl Ove Knausgård. Hosted on Acast. See acast.com/privacy for more information.

Hear This Idea – Bonus: Damon Binder on Economic History and the Future of Physics (2023-01-30, 4h 00m)
Damon Binder is a research analyst at Open Philanthropy. His research focuses on potential risks from pandemics and from biotechnology. He previously worked as a research scholar at the University of Oxford’s Future of Humanity Institute, where he studied existential risks. Prior to that he completed his PhD in theoretical physics at Princeton University. We discuss: How did early states manage large populations? What explains the hockeystick shape of world economic growth? Did urbanisation enable more productive farming, or vice-versa? What does transformative AI mean for growth? Would 'degrowth' benefit the world? What do theoretical ph...

The Valmy – Can effective altruism be redeemed? (2023-01-26, 1h 03m)
Podcast: The Gray Area with Sean Illing. Episode: Can effective altruism be redeemed? Release date: 2023-01-23. Guest host Sigal Samuel talks with Holden Karnofsky about effective altruism, a movement flung into public scrutiny with the collapse of Sam Bankman-Fried and his crypto exchange, FTX. They discuss EA’s approach to charitable giving, the relationship between effective altruism and the moral philosophy of utilitarianism, and what reforms might be needed for the future of the movement. Note: In Augu...

The Gray Area with Sean Illing – Can effective altruism be redeemed? (2023-01-23, 1h 03m)
Guest host Sigal Samuel talks with Holden Karnofsky about effective altruism, a movement flung into public scrutiny with the collapse of Sam Bankman-Fried and his crypto exchange, FTX. They discuss EA’s approach to charitable giving, the relationship between effective altruism and the moral philosophy of utilitarianism, and what reforms might be needed for the future of the movement. Note: In August 2022, Bankman-Fried’s philanthropic family foundation, Building a Stronger Future, awarded Vox’s Future Perfect a grant for a 2023 reporting project. That project is now on pause. Host: Sigal Samuel (@SigalSamuel), Senior...
Dwarkesh Podcast – Holden Karnofsky - Transformative AI & Most Important Century (2023-01-03, 1h 56m)
Holden Karnofsky is the co-CEO of Open Philanthropy and co-founder of GiveWell. He is also the author of one of the most interesting blogs on the internet, Cold Takes. We discuss: * Are we living in the most important century? * Does he regret OpenPhil’s 30 million dollar grant to OpenAI in 2016? * How does he think about AI, progress, digital people, & ethics? Highly recommend! Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript here. Timestamps: (0:00:00) - Intro; (0:00:58) - The Most Important Century; (0:06:44) - The Weirdness of Our Time; (0:21:20) - The Industrial Revolution; (0:35:40) - AI Suc...

TYPE III AUDIO (All episodes) – "High-level hopes for AI alignment" by Holden Karnofsky (2022-12-22, 23 min)
client: ea_forum; project_id: curated; feed_id: ai, ai_safety, ai_safety__governance, ai_safety__technical; narrator: not_t3a; qa: not_t3a. In previous pieces, I argued that there's a real and large risk of AI systems' aiming to defeat all of humanity combined - and succeeding. I first argued that this sort of catastrophe would be likely without specific countermeasures to prevent it. I then argued that countermeasures could be challenging, due to some key difficulties of AI safety research.

EA Talks – Rowing and Steering the Effective Altruism Movement | Joshua Monrad | EAGxOxford 22 (2022-12-09, 28 min)
How much should the effective altruism community focus on becoming more influential, and how much should it focus on how to use its influence? In this talk, Joshua Monrad (Future of Humanity Institute) will discuss this question. Borrowing Holden Karnofsky’s analogy of “rowing and steering,” the talk will present arguments in favour of ‘steering’ more than we currently are. Finally, Joshua will discuss red-teaming and critical analyses as an example of something that could improve the direction of the EA movement. View the original talk and video here. Effective Altruism is a social movement dedicated...

Wild with Sarah Wilson – HOLDEN KARNOFSKY: The most important century is now. Blimey (2022-11-22, 52 min)
This episode continues the fascinating-slash-frightening journey I’ve been on with you, to understand what we should prioritise as we face potential existential end times. Today’s guest, Harvard researcher and philanthropist Holden Karnofsky, brings the AI, effective altruism, longtermism and anti-growth debates together with the clarion call: “This is our moment, this century is make-or-break, pay attention people!” It’s not an idle or hysterical call, it’s one that Holden has researched extensively and is backed by global leaders in the space. As some background: Holden founded Givewell, the charity evaluator that has raised more than $US...

High Impact Medicine Podcast – Dr Nadia Montazeri - Community Building & Effective Altruism (2022-11-03, 28 min)
Today’s conversation is with Nadia Montazeri, who completed her medical school in Bonn, after which she worked as a psychiatry resident in Bern, Switzerland. Subsequently, she held the role of program director at Effective Altruism Switzerland. Effective Altruism (EA) is an organisation that uses evidence and reason to figure out how to most effectively make the world a better place. As part of her role, she coordinated the strategy for EA movement building in Switzerland. We discuss: the transferable skills she got from her medical training; the intricacies of community bu...

TYPE III AUDIO (All episodes) – LessWrong: "How might we align transformative AI if it’s developed very soon?" by Holden Karnofsky (2022-11-03, 1h 39m)
narrator_time: 4h30m; narrator: pw; qa: km; feed_id: ai, ai_safety, ai_safety__technical, ai_safety__governance; client: lesswrong. https://www.lesswrong.com/posts/rCJQAkPTEypGjSJ8X/how-might-we-align-transformative-ai-if-it-s-developed-very This post is part of my AI strategy nearcasting series: trying to answer key strategic questions about transformative AI, under the assumption that key events will happen very soon, and/or in a world that is otherwise very similar to today's. This post gives my understanding of what the set of available strategies for aligning transformative AI...

EA Talks – Big needs in longtermism and how to help with them | Holden Karnofsky | EA Global: SF 22 (2022-09-03, 1h 15m)
Holden talks about what he most wishes more people were doing today (career-wise), from a longtermist perspective, and then takes questions on that topic or any others people want to ask about. View the original talk and video here. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer projects. EA Global conferences are gatherings for E...

The Ethical Life – How worried should we be about artificial intelligence? (2022-07-06, 45 min)
Episode 45: A few weeks ago, a Google engineer was placed on administrative leave after he said a chatbot created by his employer had achieved sentience. The company said its systems imitated conversational exchanges but did not have consciousness. It also said the employee violated its confidentiality policy. The question about whether computers have a soul is still open to debate, but what’s clear is that they’re getting better each year and processing large amounts of data. Rick Kyte and Scott Rada discuss how worried we should be about the future of artificial intelligence. Links...

EA Talks – Careers strategies and building aptitudes | Habiba Islam | EA Global - London 2021 (2022-06-03, 59 min)
What career strategy should you follow if you’re aiming for impact? Holden Karnofsky recently wrote up his perspective on career choice for longtermists on the EA forum - focusing on building aptitudes. Habiba will run through that framework and highlight some things she personally thinks are useful and some reservations she has. This talk was taken from EA Global: London 2021. Click here to watch the talk with the PowerPoint presentation. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer pr...

EA Talks – 80,000 Hours careers workshop | Habiba Islam | EA Global: London 2021 (2022-05-28, 32 min)
What career strategy should you follow if you’re aiming for impact? Holden Karnofsky recently wrote up his perspective on career choice for longtermists on the EA forum - focusing on building aptitudes. Habiba will run through that framework and highlight some things she personally thinks are useful and some reservations she has. This talk was taken from EA Global: London 2021. Click here to watch the talk with the PowerPoint presentation. Effective Altruism is a social movement dedicated to finding ways to do the most good possible, whether through charitable donations, career choices, or volunteer pro...