Showing episodes and shows of
Michael Trazzi
Shows
EA Forum Podcast (All audio)
“Things I Learned Making The SB-1047 Documentary” by Michaël Trazzi
Last week I published a 30-minute documentary about SB-1047 that I had been working on since September. Here's what I learned. Story of the bill: The bill had to go through many committees before passing the California Senate and Assembly, eventually reaching the Governor's desk. Most people started paying attention in May-September 2024, but the story of the bill starts in September 2023, when Senator Scott Wiener published an intent bill to gather support (co-sponsors), before publishing SB-1047 in February 2024. He actually had conversations about AI Safety as early as March 2023. AI Labs & Covered Models...
2025-05-13
05 min
"The Cognitive Revolution" | AI Builders, Researchers, and Live Player Analysis
Leading Indicators of AI Danger: Owain Evans on Situational Awareness & Out-of-Context Reasoning, from The Inside View
In this special crossover episode of The Cognitive Revolution, Nathan introduces a conversation from The Inside View featuring Owain Evans, AI alignment researcher at UC Berkeley's Center for Human Compatible AI. Evans and host Michael Trazzi delve into critical AI safety topics, including situational awareness and out-of-context reasoning. Discover Evans' groundbreaking work on the reversal curse and connecting the dots, exploring how large language models process and infer information. This timely discussion highlights the importance of situational awareness in AI systems, particularly in light of recent advancements in AI capabilities. Don't miss this insightful exploration of the evolving relationship...
2024-10-16
2h 26
The Inside View
Owain Evans - AI Situational Awareness, Out-of-Context Reasoning
Owain Evans is an AI Alignment researcher, a research associate at the Center for Human Compatible AI at UC Berkeley, and is now leading a new AI safety research group. In this episode we discuss two of his recent papers, “Me, Myself, and AI: The Situational Awareness Dataset (SAD) for LLMs” and “Connecting the Dots: LLMs can Infer and Verbalize Latent Structure from Disparate Training Data”, alongside some Twitter questions. LINKS Patreon: https://www.patreon.com/theinsideview Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts Ask questions: https://twitter.com/MichaelTraz...
2024-08-23
2h 15
The Inside View
[Crosspost] Adam Gleave on Vulnerabilities in GPT-4 APIs (+ extra Nathan Labenz interview)
This is a special crosspost episode where Adam Gleave is interviewed by Nathan Labenz from the Cognitive Revolution. At the end I also have a discussion with Nathan Labenz about his takes on AI. Adam Gleave is the founder of Far AI, and he and Nathan discuss finding vulnerabilities in GPT-4's fine-tuning and Assistants APIs, Far AI's work exposing exploitable flaws in "superhuman" Go AIs through innovative adversarial strategies, accidental jailbreaking by naive developers during fine-tuning, and more. OUTLINE (00:00) Intro (02:57) NATHAN...
2024-05-17
2h 16
The Inside View
Ethan Perez on Selecting Alignment Research Projects (ft. Mikita Balesni & Henry Sleight)
Ethan Perez is a Research Scientist at Anthropic, where he leads a team working on developing model organisms of misalignment. Youtube: https://youtu.be/XDtDljh44DM Ethan is interviewed by Mikita Balesni (Apollo Research) and Henry Sleight (Astra Fellowship) about his approach to selecting projects for AI Alignment research. A transcript & write-up will be available soon on the Alignment Forum.
2024-04-09
36 min
The Inside View
Emil Wallner on Sora, Generative AI Startups and AI optimism
Emil is the co-founder of palette.fm (colorizing B&W pictures with generative AI) and was previously working in deep learning for Google Arts & Culture. We were talking about Sora on a daily basis, so I decided to record our conversation, and then proceeded to confront him about AI risk. Patreon: https://www.patreon.com/theinsideview Sora: https://openai.com/sora Palette: https://palette.fm/ Emil: https://twitter.com/EmilWallner OUTLINE (00:00) this is not a podcast (01:50...
2024-02-20
1h 42
The Inside View
Evan Hubinger on Sleeper Agents, Deception and Responsible Scaling Policies
Evan Hubinger leads the Alignment Stress-Testing team at Anthropic and recently published "Sleeper Agents: Training Deceptive LLMs That Persist Through Safety Training". In this interview we mostly discuss the Sleeper Agents paper, but also how this line of work relates to his work on Alignment Stress-Testing, Model Organisms of Misalignment, Deceptive Instrumental Alignment or Responsible Scaling Policies. Paper: https://arxiv.org/abs/2401.05566 Transcript: https://theinsideview.ai/evan2 Manifund: https://manifund.org/projects/making-52-ai-alignment-video-explainers-and-podcasts Donate: https://theinsideview.ai/donate Patreon: https://www.patreon.com/theinsideview OUTLINE (00:00) Intro (00:20) What are Sl...
2024-02-12
52 min
The Inside View
[Jan 2023] Jeffrey Ladish on AI Augmented Cyberwarfare and compute monitoring
Jeffrey Ladish is the Executive Director of Palisade Research, which aims to "study the offensive capabilities of AI systems today to better understand the risk of losing control to AI systems forever". He previously helped build out the information security program at Anthropic. The audio is an edit & re-master of the Twitter Space on "AI Governance and cyberwarfare" that happened a year ago. I'm posting it now because I have only recently discovered how to get the audio & video from Twitter Spaces, and (most of) the arguments are still relevant today. Jeffrey would probably have a lot...
2024-01-27
33 min
The Inside View
Holly Elmore on pausing AI
Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open sourcing of LLMs and before the UK AI Summit), and is currently running the US front of the Pause AI Movement. Prior to that, Holly worked at a think tank and has a PhD in evolutionary biology from Harvard. [Deleted & re-uploaded because there were issues with the audio] Youtube: https://youtu.be/5RyttfXTKfs Transcript: https://theinsideview.ai/holly Outline
2024-01-22
1h 40
The Inside View
Holly Elmore on Pausing AI Development
Holly Elmore is an AI Pause Advocate who has organized two protests in the past few months (against Meta's open sourcing of LLMs and before the UK AI Summit), and is currently running the US front of the Pause AI Movement. Prior to that, Holly worked at the think tank Rethink Priorities and has a PhD in evolutionary biology from Harvard. Youtube: https://youtu.be/iO9jceWSkdk Transcript: https://theinsideview.ai/holly Outline (00:00) Holly, Pause, Protests (05:08) Without...
2024-01-21
1h 42
The Inside View
Podcast Retrospective and Next Steps
https://youtu.be/Fk2MrpuWinc
2024-01-09
1h 03
The Inside View
Kellin Pelrine on beating the strongest Go AI
Youtube: https://youtu.be/_ANvfMblakQ Part 1 (about the paper): https://youtu.be/Tip1Ztjd-so Paper: https://arxiv.org/pdf/2211.00241 Patreon: https://www.patreon.com/theinsideview
2023-10-04
18 min
The Inside View
Paul Christiano's views on "doom" (ft. Robert Miles)
Youtube: https://youtu.be/JXYcLQItZsk Paul Christiano's post: https://www.lesswrong.com/posts/xWMqsvHapP3nwdSW8/my-views-on-doom
2023-09-30
04 min
The Inside View
Neel Nanda on mechanistic interpretability, superposition and grokking
Neel Nanda is a researcher at Google DeepMind working on mechanistic interpretability. He is also known for his YouTube channel where he explains what is going on inside of neural networks to a large audience. In this conversation, we discuss what mechanistic interpretability is, how Neel got into it, his research methodology, his advice for people who want to get started, but also papers around superposition, toy models of universality and grokking, among other things. Youtube: https://youtu.be/cVBGjhN4-1g Transcript: https://theinsideview.ai/neel
2023-09-21
2h 04
The Inside View
Joscha Bach on how to stop worrying and love AI
Joscha Bach (who defines himself as an AI researcher/cognitive scientist) has recently been debating existential risk from AI with Connor Leahy (a previous guest of the podcast), and since their conversation was quite short I wanted to continue the debate in more depth. The resulting conversation ended up being quite long (over 3h of recording), with a lot of tangents, but I think this gives a somewhat better overview of Joscha’s views on AI risk than other similar interviews. We also discussed a lot of other topics that you can find in the ou...
2023-09-08
2h 54
The Inside View
Erik Jones on Automatically Auditing Large Language Models
Erik is a PhD student at Berkeley working with Jacob Steinhardt, interested in making generative machine learning systems more robust, reliable, and aligned, with a focus on large language models. In this interview we talk about his paper "Automatically Auditing Large Language Models via Discrete Optimization" that he presented at ICML. Youtube: https://youtu.be/bhE5Zs3Y1n8 Paper: https://arxiv.org/abs/2303.04381 Erik: https://twitter.com/ErikJones313 Host: https://twitter.com/MichaelTrazzi Patreon: https://www.patreon.com/theinsideview Outline
2023-08-11
22 min
The Inside View
Dylan Patel on the GPU Shortage, Nvidia and the Deep Learning Supply Chain
Dylan Patel is Chief Analyst at SemiAnalysis, a boutique semiconductor research and consulting firm specializing in the semiconductor supply chain, from chemical inputs to fabs to design IP and strategy. The SemiAnalysis substack has ~50,000 subscribers and is the second biggest tech substack in the world. In this interview we discuss the current GPU shortage, why hardware is a multi-month process, the deep learning hardware supply chain and Nvidia's strategy. Youtube: https://youtu.be/VItz2oEq5pA Transcript: https://theinsideview.ai/dylan
2023-08-09
12 min
The Inside View
Tony Wang on Beating Superhuman Go AIs with Adversarial Policies
Tony is a PhD student at MIT, and author of "Adversarial Policies Beat Superhuman Go AIs", accepted as an Oral at the International Conference on Machine Learning (ICML). Paper: https://arxiv.org/abs/2211.00241 Youtube: https://youtu.be/Tip1Ztjd-so
2023-08-04
03 min
The Inside View
David Bau on Editing Facts in GPT, AI Safety and Interpretability
David Bau is an Assistant Professor studying the structure and interpretation of deep networks, and a co-author of "Locating and Editing Factual Associations in GPT", which introduced Rank-One Model Editing (ROME), a method that allows users to alter the weights of a GPT model, for instance by forcing it to output that the Eiffel Tower is in Rome. David is a leading researcher in interpretability, with an interest in how this could help AI Safety. The main thesis of David's lab is that understanding the rich internal structure of deep networks is a grand and fundamental research question with...
2023-08-01
24 min
The Inside View
Alexander Pan on the MACHIAVELLI benchmark
I've talked to Alexander Pan, a first-year student at Berkeley working with Jacob Steinhardt, about his paper "Measuring Trade-Offs Between Rewards and Ethical Behavior in the MACHIAVELLI Benchmark", accepted as an oral at ICML. Youtube: https://youtu.be/MjkSETpoFlY Paper: https://arxiv.org/abs/2304.03279
2023-07-26
20 min
The Inside View
Vincent Weisser on Funding AI Alignment Research
Vincent is currently spending his time supporting AI alignment efforts, as well as investing across AI, semi, energy, crypto, bio and deeptech. His mission is to improve science, augment human capabilities, have a positive impact, help reduce existential risks and extend healthy human lifespan. Youtube: https://youtu.be/weRoJ8KN2f0 Outline (00:00) Why Is Vincent Excited About the ICML Conference (01:30) Vincent's Background In AI Safety (02:23) Funding AI Alignment Through Crypto, Bankless (03:35) Taxes When Donating Crypto (04:09) Alignment Efforts Vincent Is...
2023-07-24
18 min
The Inside View
[JUNE 2022] Aran Komatsuzaki on Scaling, GPT-J and Alignment
Aran Komatsuzaki is an ML PhD student at GaTech and lead researcher at EleutherAI, where he was one of the authors of GPT-J. In June 2022 we recorded an episode on scaling following up on the first Ethan Caballero episode (where we mentioned Aran as an influence on how Ethan started thinking about scaling). Note: For some reason I procrastinated on editing the podcast, then had a lot of in-person podcasts, so I left this one as something to edit later, until the date was so distant from June 2022 that I thought publishing did not...
2023-07-19
1h 17
The Inside View
Nina Rimsky on AI Deception and Mesa-optimisation
Nina is a software engineer at Stripe currently working with Evan Hubinger (Anthropic) on AI Deception and Mesa Optimization. I met her at a party two days ago and I found her explanation of AI Deception really clear so I thought I should have her explain it on camera. Youtube: https://youtu.be/6ngasL054wM Twitter: https://twitter.com/MichaelTrazzi Patreon: https://www.patreon.com/theinsideview
2023-07-18
55 min
The Inside View
Curtis Huebner on Doom, AI Timelines and Alignment at EleutherAI
Curtis, also known on the internet as AI_WAIFU, is the head of Alignment at EleutherAI. In this episode we discuss the massive orders of H100s from different actors, why he thinks AGI is 4-5 years away, why he thinks we're 90% "toast", his comment on Eliezer Yudkowsky's Death with Dignity, and what kind of Alignment projects are currently going on at EleutherAI, especially a project with Markov chains and the Alignment test project that he is currently leading. Youtube: https://www.youtube.com/watch?v=9s3XctQOgew Transcript: https://theinsideview.ai...
2023-07-16
1h 29
The Inside View
Eric Michaud on scaling, grokking and quantum interpretability
Eric is a PhD student in the Department of Physics at MIT working with Max Tegmark on improving our scientific/theoretical understanding of deep learning -- understanding what deep neural networks do internally and why they work so well. This is part of a broader interest in the nature of intelligent systems, which previously led him to work with SETI astronomers, with Stuart Russell's AI alignment group (CHAI), and with Erik Hoel on a project related to integrated information theory. Transcript: https://theinsideview.ai/eric Youtube: https://youtu.be/BtHMIQs_5Nw
2023-07-12
48 min
The Inside View
Jesse Hoogland on Developmental Interpretability and Singular Learning Theory
Jesse Hoogland is a research assistant at David Krueger's lab in Cambridge studying AI Safety. More recently, Jesse has been thinking about Singular Learning Theory and Developmental Interpretability, which we discuss in this episode. Before he came to grips with existential risk from AI, he co-founded a health-tech startup automating bariatric surgery patient journeys. (00:00) Intro (03:57) Jesse’s Story And Probability Of Doom (06:21) How Jesse Got Into Singular Learning Theory (08:50) Intuition behind SLT: the loss landscape (12:23) Does SLT actually predict anything? Phase Transitions (14:37) Why care ab...
2023-07-06
43 min
The Inside View
Clarifying and predicting AGI by Richard Ngo
Explainer podcast for Richard Ngo's "Clarifying and predicting AGI" post on LessWrong, which introduces the t-AGI framework to evaluate AI progress. A system is considered t-AGI if it can outperform most human experts, given time t, on most cognitive tasks. This is a new format, quite different from the interviews and podcasts I have been recording in the past. If you enjoyed this, let me know in the YouTube comments, or on Twitter, @MichaelTrazzi. Youtube: https://youtu.be/JXYcLQItZsk Clarifying and predicting AGI: https://www.alignmentforum.org/posts/BoA3agdkAzL6HQtQP/clarifying-and-predicting-agi
2023-05-10
04 min
The Inside View
Alan Chan and Max Kaufmann on Model Evaluations, Coordination and AI Safety
Max Kaufmann and Alan Chan discuss the evaluation of large language models, AI Governance and, more generally, the impact of the deployment of foundation models. Max is currently a Research Assistant to Owain Evans, mainly thinking about (and fixing) issues that might arise as we scale up our current ML systems, but he is also interested in issues arising from multi-agent failures and situational awareness. Alan is a PhD student at Mila advised by Nicolas Le Roux, with a strong interest in AI Safety, AI Governance and coordination. He has also recently been working with David Krueger and helped me with so...
2023-05-06
1h 13
The Inside View
Breandan Considine on Neuro Symbolic AI, Coding AIs and AI Timelines
Breandan Considine is a PhD student at the School of Computer Science at McGill University, under the supervision of Jin Guo and Xujie Si. There, he is building tools to help developers locate and reason about software artifacts, by learning to read and write code. I met Breandan while doing my "scale is all you need" series of interviews at Mila, where he surprised me by sitting down for two hours to discuss AGI timelines, augmenting developers with AI and neuro symbolic AI. A fun fact that many noticed while watching the "Scale Is All You Need change my...
2023-05-04
1h 45
The Inside View
Christoph Schuhmann on Open Source AI, Misuse and Existential risk
Christoph Schuhmann is the co-founder and organizational lead at LAION, the non-profit that released LAION-5B, a dataset of 5.85 billion CLIP-filtered image-text pairs, 14x bigger than LAION-400M, previously the biggest openly accessible image-text dataset in the world. Christoph is interviewed by Alan Chan, PhD student in Machine Learning at Mila and friend of the podcast, in the context of the NeurIPS "existential risk from AI greater than 10% change my mind". youtube: https://youtu.be/-Mzfru1r_5s transcript: https://theinsideview.ai/christoph OUTLINE (00:00...
2023-05-01
32 min
The Inside View
Simeon Campos on Short Timelines, AI Governance and AI Alignment Field Building
Siméon Campos is the founder of EffiSciences and SaferAI, mostly focusing on alignment field building and AI Governance. More recently, he started the newsletter Navigating AI Risk on AI Governance, with a first post on slowing down AI. Note: this episode was recorded in October 2022, so a lot of the content being discussed references what was known at the time, in particular when discussing GPT-3 (instead of GPT-4) or ACT-1 (instead of more recent things like AutoGPT). Transcript: https://theinsideview.ai/simeon Host: https://twitter.com/MichaelTrazzi
2023-04-29
2h 03
The Inside View
Collin Burns On Discovering Latent Knowledge In Language Models Without Supervision
Collin Burns is a second-year ML PhD at Berkeley, working with Jacob Steinhardt on making language models honest, interpretable, and aligned. In 2015 he broke the Rubik’s Cube world record, and he's now back with "Discovering latent knowledge in language models without supervision", a paper on how you can recover diverse knowledge represented in large language models without supervision. Transcript: https://theinsideview.ai/collin Paper: https://arxiv.org/abs/2212.03827 Lesswrong post: https://bit.ly/3kbyZML Host: https://twitter.com/MichaelTrazzi Collin: https://twitter.com...
2023-01-17
2h 34
The Inside View
Victoria Krakovna–AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment
Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. In this interview we discuss three of her recent LW posts, namely DeepMind Alignment Team Opinions On AGI Ruin Arguments, Refining The Sharp Left Turn Threat Model and Paradigms of AI Alignment. Transcript: theinsideview.ai/victoria Youtube: https://youtu.be/ZpwSNiLV-nw OUTLINE (00:00) Intro (00:48) DeepMind A...
2023-01-12
1h 52
The Inside View
David Krueger–Coordination, Alignment, Academia
David Krueger is an assistant professor at the University of Cambridge and got his PhD from Mila. His research group focuses on aligning deep learning systems, but he is also interested in governance and global coordination. He is famous in Cambridge for not having an AI alignment research agenda per se, and instead he tries to enable his seven PhD students to drive their own research. In this episode we discuss AI Takeoff scenarios, research going on at David's lab, Coordination, Governance, Causality, the public perception of AI Alignment research and how to change it.
2023-01-07
2h 45
The Inside View
Ethan Caballero–Broken Neural Scaling Laws
Ethan Caballero is a PhD student at Mila interested in how to best scale Deep Learning models according to all downstream evaluations that matter. He is known as the fearless leader of the "Scale Is All You Need" movement and the edgiest person at MILA. His first interview is the second most popular interview on the channel and today he's back to talk about Broken Neural Scaling Laws and how to use them to superforecast AGI. Youtube: https://youtu.be/SV87S38M1J4 Transcript: https://theinsideview.ai/ethan2 OUTLINE
2022-11-03
23 min
The Inside View
Irina Rish–AGI, Scaling and Alignment
Irina Rish is a professor at the Université de Montréal, a core member of Mila (Quebec AI Institute), and the organizer of the neural scaling laws workshop towards maximally beneficial AGI. In this episode we discuss Irina's definition of Artificial General Intelligence, her takes on AI Alignment, AI Progress, current research in scaling laws, the neural scaling laws workshop she has been organizing, phase transitions, continual learning, existential risk from AI and what is currently happening in AI Alignment at Mila. Transcript: theinsideview.ai/irina Youtube: https://youtu.be/ZwvJn4x714s ...
2022-10-18
1h 26
The Inside View
Shahar Avin–Intelligence Rising, AI Governance
Shahar is a senior researcher at the Centre for the Study of Existential Risk in Cambridge. In his past life, he was a Google Engineer, though right now he spends most of his time thinking about how to prevent the risks that occur if companies like Google end up deploying powerful AI systems, by organizing AI Governance role-playing workshops. In this episode, we talk about a broad variety of topics, including how we could apply the lessons from running AI Governance workshops to governing transformative AI, AI Strategy, AI Governance, Trustworthy AI Development and end up answering...
2022-09-23
2h 04
The Inside View
Katja Grace on Slowing Down AI, AI Expert Surveys And Estimating AI Risk
Katja runs AI Impacts, a research project trying to incrementally answer decision-relevant questions about the future of AI. She is well known for a survey published in 2017 called When Will AI Exceed Human Performance? Evidence From AI Experts, and recently published a new survey of AI experts: What do ML researchers think about AI in 2022. We start this episode by discussing what Katja is currently thinking about, namely an answer to Scott Alexander on why slowing down AI Progress is an underexplored path to impact. Youtube: https://youtu.be/rSw3UVDZge0 Audio & Transcript: https://theinsideview.a...
2022-09-16
1h 41
The Inside View
Markus Anderljung–AI Policy
Markus Anderljung is the Head of AI Policy at the Centre for Governance of AI in Oxford and was previously seconded to the UK government office as a senior policy specialist. In this episode we discuss Jack Clark's AI Policy takes, answer questions about AI Policy from Twitter and explore what is happening in the AI Governance landscape more broadly. Youtube: https://youtu.be/DD303irN3ps Transcript: https://theinsideview.ai/markus Host: https://twitter.com/MichaelTrazzi Markus: https://twitter.com/manderljung OUTLINE (00:00) Highlights & Intro (00:57) J...
2022-09-09
1h 43
The Inside View
Alex Lawsen—Forecasting AI Progress
Alex Lawsen is an advisor at 80,000 Hours who released an Introduction to Forecasting YouTube series and has recently been thinking about forecasting AI progress, why you cannot just "update all the way bro" (discussed in my latest episode with Connor Leahy), and how to develop inside views about AI Alignment in general. Youtube: https://youtu.be/vLkasevJP5c Transcript: https://theinsideview.ai/alex Host: https://twitter.com/MichaelTrazzi Alex: https://twitter.com/lxrjl OUTLINE (00:00) Intro (00:31) How Alex Ended Up Making Forecasting Videos (02:43) Why You S...
2022-09-06
1h 04
The Inside View
Robert Long–Artificial Sentience
Robert Long is a research fellow at the Future of Humanity Institute. His work is at the intersection of the philosophy of AI Safety and consciousness of AI. We talk about the recent LaMDA controversy, Ilya Sutskever's slightly conscious tweet, the metaphysics and philosophy of consciousness, artificial sentience, and how a future filled with digital minds could get really weird. Youtube: https://youtu.be/K34AwhoQhb8 Transcript: https://theinsideview.ai/roblong Host: https://twitter.com/MichaelTrazzi Robert: https://twitter.com/rgblong Robert's blog: https://experiencemachines.substack.com O...
2022-08-28
1h 46
The Inside View
Ethan Perez–Inverse Scaling, Language Feedback, Red Teaming
Ethan Perez is a research scientist at Anthropic, working on large language models. He is the second Ethan working with large language models to come on the show, but in this episode we discuss why alignment, not scale, is actually what you need. We discuss three projects he pursued before joining Anthropic, namely the Inverse Scaling Prize, Red Teaming Language Models with Language Models, and Training Language Models with Language Feedback. Ethan Perez: https://twitter.com/EthanJPerez Transcript: https://theinsideview.ai/perez Host: https://twitter.com/MichaelTrazzi O...
2022-08-24
2h 01
The Inside View
Robert Miles–Youtube, AI Progress and Doom
Robert Miles has been making videos for Computerphile, then decided to create his own Youtube channel about AI Safety. Lately, he's been working on a Discord Community that uses Stampy the chatbot to answer Youtube comments. We also spend some time discussing recent AI Progress and why Rob is not that optimistic about humanity's survival. Transcript: https://theinsideview.ai/rob Youtube: https://youtu.be/DyZye1GZtfk Host: https://twitter.com/MichaelTrazzi Rob: https://twitter.com/robertskmiles OUTLINE (00:00:00) Intro (00:02:25) Youtube (00:28:30) Stampy (00:51:24) AI P...
2022-08-19
2h 51
The Inside View
Connor Leahy–EleutherAI, Conjecture
Connor was the first guest of this podcast. In the last episode, we talked a lot about EleutherAI, a grassroots collective of researchers he co-founded, who open-sourced GPT-3-sized models such as GPT-NeoX and GPT-J. Since then, Connor co-founded Conjecture, a company aiming to make AGI safe through scalable AI Alignment research. One of the goals of Conjecture is to reach a fundamental understanding of the internal mechanisms of current deep learning models using interpretability techniques. In this episode, we go through the famous AI Alignment compass memes, discuss Connor’s inside views about AI progress, how he a...
2022-07-22
2h 57
The Inside View
Raphaël Millière Contra Scaling Maximalism
Raphaël Millière is a Presidential Scholar in Society and Neuroscience at Columbia University. He previously completed a PhD in philosophy at Oxford, is interested in the philosophy of mind, cognitive science, and artificial intelligence, and has recently been discussing at length the current progress in AI with popular Twitter threads on GPT-3, DALL-E 2 and a thesis he called “scaling maximalism”. Raphaël is also co-organizing a workshop with Gary Marcus about compositionality in AI at the end of the month. Transcript: https://theinsideview.ai/raphael Video: https://youtu.be/2EHWzK10k...
2022-06-24
2h 27
The Inside View
Blake Richards–AGI Does Not Exist
Blake Richards is an Assistant Professor in the Montreal Neurological Institute and the School of Computer Science at McGill University and a Core Faculty Member at Mila. He thinks that AGI is not a coherent concept, which is why he ended up on a recent AGI political compass meme. When people asked on Twitter who the edgiest person at Mila was, his name actually got more likes than Ethan's, so hopefully this podcast will help re-establish the truth. Transcript: https://theinsideview.ai/blake Video: https://youtu.be/kWsHS7tXjSU Outline: (01:03) Highlights
2022-06-14
1h 15
Out Of The Blank
#1123 - Roman V. Yampolskiy
Roman V. Yampolskiy is a computer scientist at the University of Louisville, known for his work on behavioral biometrics, security of cyberworlds and artificial intelligence safety. Yampolskiy has warned of the possibility of existential risk from advanced artificial intelligence, and has advocated research into "boxing" artificial intelligence. More broadly, Yampolskiy and his collaborator, Michaël Trazzi, have proposed introducing "Achilles heels" into potentially dangerous AI. --- Support this podcast: https://anchor.fm/out-of-the-blank-podcast/support
2022-06-04
56 min
The Inside View
Ethan Caballero–Scale is All You Need
Ethan is known on Twitter as the edgiest person at MILA. We discuss all the gossip around scaling large language models in what will be later known as the Edward Snowden moment of Deep Learning. In his free time, Ethan is a Master’s degree student at MILA in Montreal, and has published papers on out-of-distribution generalization and robustness generalization, accepted both as oral presentations and spotlight presentations at ICML and NeurIPS. Ethan has recently been thinking about scaling laws, both as an organizer and speaker for the 1st Neural Scaling Laws Workshop. Transcript: https://th...
2022-05-05
51 min
The Inside View
10. Peter Wildeford on Forecasting
Peter is the co-CEO of Rethink Priorities, a fast-growing non-profit doing research on how to improve the long-term future. In his free time, Peter makes money in prediction markets and is quickly becoming one of the top forecasters on Metaculus. We talk about the probability of London getting nuked, Rethink Priorities and why EA should fund projects that scale. Check out the video and transcript here: https://theinsideview.github.io/peter
2022-04-13
51 min
The Inside View
9. Emil Wallner on Building a €25000 Machine Learning Rig
Emil is a resident at the Google Arts & Culture Lab where he explores the intersection between art and machine learning. He recently built his own Machine Learning server, or rig, which cost him €25,000. Emil's Story: https://www.emilwallner.com/p/ml-rig Youtube: https://youtu.be/njbPpxhE6W0 00:00 Intro 00:23 Building your own rig 06:11 The Nvidia GPU order hack 15:51 Inside Emil's rig 21:31 Motherboard 23:55 Cooling and datacenters 29:36 Deep Learning lessons from owning your hardware 36:20 Shared resources vs. personal GPUs 39:12 RAM, ch...
2022-03-23
56 min
The Inside View
8. Sonia Joseph on NFTs, Web 3 and AI Safety
Sonia is a graduate student applying ML to neuroscience at MILA. She was previously applying deep learning to neural data at Janelia, an NLP research engineer at a startup and graduated in computational neuroscience at Princeton University. Anonymous feedback: https://app.suggestionox.com/r/xOmqTW Twitter: https://twitter.com/MichaelTrazzi Sonia's December update: https://t.co/z0GRqDTnWm Sonia's Twitter: https://twitter.com/soniajoseph_ Orthogonality Thesis: https://www.youtube.com/watch?v=hEUO6pjwFOo Paperclip game: https://www.decisionproblem.com/paperclips/ Ngo & Yudkowsky on feedback loop...
2021-12-22
1h 25
The Nonlinear Library: Alignment Forum Top Posts
A Gym Gridworld Environment for the Treacherous Turn by Michaël Trazzi
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: A Gym Gridworld Environment for the Treacherous Turn, published by Michaël Trazzi on the AI Alignment Forum. This is a linkpost for EDIT: posted here for feedback and discussion. I plan to continue working on different models/environments, so feel free to suggest improvements. (tl;dr: In an attempt to better understand the treacherous turn, I created a gridworld environment where an agent learns to deceive an overseer by adopting a...
2021-12-03
04 min
The Inside View
7. Phil Trammell on Economic Growth under Transformative AI
Phil Trammell is an Oxford PhD student in economics and a research associate at the Global Priorities Institute. Phil is one of the smartest people I know when it comes to the intersection of the long-term future and economic growth. Funnily enough, Phil was my roommate a few years ago in Oxford, and the last time I called him he casually said that he had written an extensive report on the econ of AI. A few weeks ago, I decided that I would read that report (which actually is a literature review), and that I would translate everything that I learn along the...
2021-10-24
2h 09
The Inside View
6. Slava Bobrov on Brain Computer Interfaces
In this episode I discuss Brain Computer Interfaces with Slava Bobrov, a self-taught Machine Learning Engineer applying AI to neural biosignals to control robotic limbs. This episode will be of special interest to you if you're an engineer who wants to get started with brain computer interfaces, or just broadly interested in how this technology could enhance human intelligence. Fun fact: most of the questions I asked were sent by my Twitter followers, or come from a Discord I co-created on Brain Computer Interfaces. So if you want your questions to be on the next video or you're genuinely...
2021-10-06
1h 39
The Inside View
5. Charlie Snell on DALL-E and CLIP
We talk about AI-generated art with Charlie Snell, a Berkeley student who wrote extensively about AI art for ML@Berkeley's blog (https://ml.berkeley.edu/blog/). We look at multiple slides with art throughout our conversation, so I highly recommend watching the video (https://www.youtube.com/watch?v=gcwidpxeAHI). In the first part we go through Charlie's explanations of DALL-E, a model trained end-to-end by OpenAI to generate images from prompts. We then talk about CLIP + VQGAN, where CLIP is another model by OpenAI matching prompts and images, and VQGAN is a state-of-the-art GAN...
2021-09-16
2h 53
The Inside View
4. Sav Sidorov on Learning, Contrarianism and Robotics
I interview Sav Sidorov about top-down learning, contrarianism, religion, university, robotics, ego, education, Twitter, friends, psychedelics, B-values and beauty. Highlights & Transcript: https://insideview.substack.com/p/sav Watch the video: https://youtu.be/_Y6_TakG3d0
2021-09-06
3h 06
The Inside View
3. Evan Hubinger on Takeoff speeds, Risks from learned optimization & Interpretability
We talk about Evan’s background @ MIRI & OpenAI, Coconut, homogeneity in AI takeoff, reproducing SoTA & openness in multipolar scenarios, quantilizers & operationalizing strategy stealing, Risks from learned optimization & evolution, learned optimization in Machine Learning, clarifying Inner AI Alignment terminology, transparency & interpretability, 11 proposals for safe advanced AI, underappreciated problems in AI Alignment & surprising advances in AI.
2021-06-08
1h 44
The Sav Sidorov Podcast
02 | Michaël Trazzi | Learning by Doing, Perfecting Habits and the Future of AI
Michael is a programmer, blogger and overall interesting personality. We talk about his experience learning at School 42 - a school that focuses on learning by doing above all else, about habits and how to do effective work, as well as artificial intelligence. Michaël's Twitter: @MichaelTrazzi Michaël's blog posts: How to Deep Write in a Shallow World & The 5 Tribes of the ML world. Listen to his podcast, The Inside View. Find me on Twitter: @savsidorov My Blog: savsidorov.substack.com ...
2021-05-06
1h 40
The Inside View
2. Connor Leahy on GPT3, EleutherAI and AI Alignment
In the first part of the podcast we chat about how to speed up GPT-3 training, how Connor updated on recent announcements of large language models, why GPT-3 is AGI for some specific definitions of AGI [1], the obstacles in plugging planning into GPT-N and why the brain might approximate something like backprop. We end this first chat with Solomonoff priors [2], adversarial attacks such as Pascal's Mugging [3], and whether direct work on AI Alignment is currently tractable. In the second part, we chat about his current projects at EleutherAI [4][5], multipolar scenarios and reasons to work on technical AI Alignment research.
2021-05-04
1h 28
The Inside View
1. Does the world really need another podcast?
In this first episode I'm the one being interviewed. Questions: - Does the world really need another podcast? - Why call your podcast superintelligence? - What is the Inside view? The Outside view? - What could be the impact of podcast conversations? - Why would a public discussion on superintelligence be different? - What are the main reasons we listen to podcasts at all? - Explaining GPT-3 and how we could scale to GPT-4 - Could GPT-N write a PhD thesis? - What would a superintelligence need on...
2021-04-25
25 min