Showing episodes and shows of Oliver Habryka
Shows
Complex Systems with Patrick McKenzie (patio11)
Bits and bricks: Oliver Habryka on LessWrong, LightHaven, and community infrastructure
Patrick McKenzie (patio11) is joined by Oliver Habryka, who runs Lightcone Infrastructure—the organization behind both the LessWrong forum and the Lighthaven conference venue in Berkeley. They explore how LessWrong became one of the most intellectually consequential forums on the internet, the surprising challenges of running a hotel with fractal geometry, and why Berkeley's building regulations include an explicit permission to plug in a lamp. The conversation ranges from fire codes that inadvertently shape traffic deaths, to nonprofit fundraising strategies borrowed from church capital campaigns, to why coordination is scarcer than money in philanthropy. – Full transcript avai...
2025-10-09
1h 16
The Bayesian Conspiracy
228 – The Deep Lore of LightHaven, with Oliver Habryka
Oliver tells us how Less Wrong instantiated itself into physical reality, along with a bit of deep lore of foundational Rationalist/EA orgs. Donate to LightCone (caretakers of both LessWrong and LightHaven) here! LINKS: LessWrong; LightHaven; Oliver’s very in-depth post on the funding situation (again); Donate; Eneasz’s nominated 2023 post/story on LessWrong (you can also read his review of it on his blog); Matt Freeman’s post "The Parable of the King and the Random Process". Chapters: 00:00:05 – The Birth of LightHaven; 00:23:20 – The FTX Collapse; 00:38:08 – The deal with Lev...
2024-12-24
2h 06
TYPE III AUDIO (All episodes)
"Enemies vs Malefactors" by Nate Soares
Status: some mix of common wisdom (that bears repeating in our particular context), and another deeper point that I mostly failed to communicate. Short version: Harmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm. (Credit to my explicit articulation of this idea goes in la...
2023-03-14
09 min
LessWrong (Curated & Popular)
"Enemies vs Malefactors" by Nate Soares
https://www.lesswrong.com/posts/zidQmfFhMgwFzcHhs/enemies-vs-malefactors Status: some mix of common wisdom (that bears repeating in our particular context), and another deeper point that I mostly failed to communicate. Short version: Harmful people often lack explicit malicious intent. It’s worth deploying your social or community defenses against them anyway. I recommend focusing less on intent and more on patterns of harm. (Credit to my explicit articulation of this idea goes in large part to Aella, and also in part to Oliver Habryka.)
2023-03-14
09 min
The Filan Cabinet
6 - Oliver Habryka on LessWrong and other projects
In this episode I speak with Oliver Habryka, head of Lightcone Infrastructure, the organization that runs the internet forum LessWrong, about his projects in the rationality and existential risk spaces. Topics we talk about include: How did LessWrong get revived? How good is LessWrong? Is there anything that beats essays for making intellectual contributions on the internet? Why did the team behind LessWrong pivot to property development? What does the FTX situation tell us about the wider LessWrong and Effective Altruism communities? What projects could help improve the world's rationality? Oli on LessWrong Oli on...
2023-02-05
1h 58
The Nonlinear Library: LessWrong Top Posts
The Case for Extreme Vaccine Effectiveness by Ruby
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: The Case for Extreme Vaccine Effectiveness, published by Ruby on LessWrong. I owe tremendous acknowledgments to Kelsey Piper, Oliver Habryka, Greg Lewis, and Ben Shaya. This post is built on their arguments and feedback (though I may have misunderstood them). Update, May 13: I first wrote this post before investigating the impact of covid variants on vaccine effectiveness, listing the topic as a major caveat to my conclusions. I have now spent enough time (not...
2021-12-11
42 min
The Nonlinear Library: Alignment Forum Top Posts
My current framework for thinking about AGI timelines by Alex Zhu
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: My current framework for thinking about AGI timelines, published by Alex Zhu on the AI Alignment Forum. At the beginning of 2017, someone I deeply trusted said they thought AGI would come in 10 years, with 50% probability. I didn't take their opinion at face value, especially since so many experts seemed confident that AGI was decades away. But the possibility of imminent apocalypse seemed plausible enough and important enough that I decided to prioritize...
2021-12-05
05 min
The Nonlinear Library: Alignment Forum Top Posts
Welcome & FAQ! by Ruben Bloom, Oliver Habryka
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Welcome & FAQ!, published by Ruben Bloom, Oliver Habryka on the AI Alignment Forum. The AI Alignment Forum was launched in 2018. Since then, several hundred researchers have contributed approximately two thousand posts and nine thousand comments. Nearing the third birthday of the Forum, we are publishing this updated and clarified FAQ. Minimalist, watercolor sketch of humanity spreading across the stars by VQGAN. I have a practical question concerning a site feature.
2021-12-05
11 min
The Nonlinear Library: Alignment Forum Top Posts
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] by Oliver Habryka, Buck Shlegeris
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22], published by Oliver Habryka, Buck Shlegeris on the AI Alignment Forum. We (Redwood Research and Lightcone Infrastructure) are organizing a bootcamp to bring people interested in AI Alignment up-to-speed with the state of modern ML engineering. We expect to invite about 20 technically talented effective altruists for three weeks of intense learning to Berkeley, taught by engineers working at AI Alignment...
2021-12-05
03 min
The Nonlinear Library: Alignment Forum Top Posts
Writeup: Progress on AI Safety via Debate by Beth Barnes, Paul Christiano
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writeup: Progress on AI Safety via Debate, published by Beth Barnes, Paul Christiano on the AI Alignment Forum. This is a writeup of the research done by the "Reflection-Humans" team at OpenAI in Q3 and Q4 of 2019. During that period we investigated mechanisms that would allow evaluators to get correct and helpful answers from experts, without the evaluators themselves being expert in the domain of the questions. This follows from the original work...
2021-12-05
52 min
The Nonlinear Library: Alignment Forum Top Posts
Introducing the AI Alignment Forum (FAQ) by Oliver Habryka, Ben Pace, Raymond Arnold, Jim Babcock
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Introducing the AI Alignment Forum (FAQ), published by Oliver Habryka, Ben Pace, Raymond Arnold, Jim Babcock on the AI Alignment Forum. After a few months of open beta, the AI Alignment Forum is ready to launch. It is a new website built by the team behind LessWrong 2.0, to help create a new hub for technical AI Alignment research and discussion. This is an in-progress FAQ about the new Forum. What are the...
2021-12-05
10 min
The Nonlinear Library: Alignment Section
Writeup: Progress on AI Safety via Debate by Beth Barnes, Paul Christiano
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writeup: Progress on AI Safety via Debate, published by Beth Barnes, Paul Christiano on the AI Alignment Forum. This is a writeup of the research done by the "Reflection-Humans" team at OpenAI in Q3 and Q4 of 2019. During that period we investigated mechanisms that would allow evaluators to get correct and helpful answers from experts, without the evaluators themselves being expert in the domain of the questions. This follows from the original work on AI...
2021-11-19
52 min
The Nonlinear Library: Alignment Section
Writeup: Progress on AI Safety via Debate by Beth Barnes, Paul Christiano
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Writeup: Progress on AI Safety via Debate, published by Beth Barnes, Paul Christiano on the AI Alignment Forum. This is a writeup of the research done by the "Reflection-Humans" team at OpenAI in Q3 and Q4 of 2019. During that period we investigated mechanisms that would allow evaluators to get correct and helpful answers from experts, without the evaluators themselves being expert in the domain of the questions. This follows from the original work on AI...
2021-11-17
53 min
The Nonlinear Library: Alignment Section
Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22] by Oliver Habryka, Buck Shlegeris
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: Apply to the ML for Alignment Bootcamp (MLAB) in Berkeley [Jan 3 - Jan 22], published by Oliver Habryka, Buck Shlegeris on the AI Alignment Forum. We (Redwood Research and Lightcone Infrastructure) are organizing a bootcamp to bring people interested in AI Alignment up-to-speed with the state of modern ML engineering. We expect to invite about 20 technically talented effective altruists for three weeks of intense learning to Berkeley, taught by engineers working at AI Alignment organizations. The...
2021-11-16
03 min
EA Talks
EAG 2018 SF: Trusting experts
If you have one opinion, and the prevailing experts have a different opinion, should you assume that you’re incorrect? And if so, how can you determine who’s an expert, and whether or not you count as one yourself? In this whiteboard discussion from Effective Altruism Global 2018: San Francisco, Gregory Lewis and Oliver Habryka offer their contrasting perspectives. To learn more about effective altruism, visit http://www.effectivealtruism.org. To read a transcript of this talk, visit http://www.effectivealtruism.org/arti… This talk was filmed at EA Global. Find out ho...
2019-02-28
58 min