Showing episodes and shows of Andrey Fradkin and Seth Benzell

Shows

Justified Posteriors

What can we learn from AI exposure measures? (2025-07-28, 1h 11m)
In a Justified Posteriors first, hosts Seth Benzell and Andrey Fradkin sit down with economist Daniel Rock, assistant professor at Wharton and AI2050 Schmidt Science Fellow, to unpack his research on generative AI, productivity, exposure scores, and the future of work. The trio examines how exposure to AI reshapes job tasks and why the difference between exposure and automation matters deeply. Links to the referenced papers, as well as a lightly edited transcript of the conversation with timestamps, are below. Timestamps: [00:08] Meet Daniel Rock; [02:04] Why AI? The...

A Resource Curse for AI? (2025-07-14, 1h 09m)
In this episode, we tackle the provocative essay "The Intelligence Curse" by Luke Drago and Rudolf Laine. What if AI is less like a productivity booster and more like oil in a failed state? Drawing on economics, political theory, and dystopian sci-fi, we explore the analogy between AI-driven automation and the classic resource curse.
* [00:03:30] Introducing The Intelligence Curse, a speculative essay that blends LessWrong rationalism, macroeconomic theory, and political pessimism.
* [00:07:55] Running through the six economic mechanisms behind the curse, including volatility, Dutch disease, and institutional decay.
* [00:13:10] Prior #1: Will AI-enabled automation make e...

Robots for the retired? (2025-06-30, 1h 00m)
We examine the paper "Demographics and Automation" by economists Daron Acemoglu and Pascual Restrepo. Its central hypothesis is that aging societies, facing a scarcity of middle-aged labor for physical production tasks, are more likely to invest in industrial automation. Going in, we were split: one of us thought the idea made basic economic sense, while the other was skeptical, worrying that a vague trend of "modernity" might be the real force behind both aging populations and rising automation. The paper threw a mountain of data at...

When Humans and Machines Don't Say What They Think (2025-06-17, 1h 09m)
Andrey and Seth examine two papers exploring how both humans and AI systems don't always say what they think. They discuss Luca Braghieri's study on political correctness among UC San Diego students, which finds surprisingly small differences (0.1-0.2 standard deviations) between what students report privately versus publicly on hot-button issues. We then pivot to Anthropic's research showing that AI models can produce chain-of-thought reasoning that doesn't reflect their actual decision-making process. Throughout, we grapple with fundamental questions about truth, social conformity, and whether any intelligent system can fully understand or honestly represent its own thinking. Timestamps (Transcript...

Scaling Laws Meet Persuasion (2025-06-03, 40m)
We tackle the thorny question of AI persuasion with a fresh study: "Scaling Language Model Size Yields Diminishing Returns for Single-Message Political Persuasion." The headline? Bigger AI models plateau in persuasive power around the 70B-parameter mark (think LLaMA 2 70B or Qwen-1.5 72B). As you can imagine, this had us diving deep into what it means for AI safety and the future of digital influence. Seth came in worried that super-persuasive AIs might be the top existential risk (60% confidence!), while Andrey was far more skeptical (less than 1%). Before jumping into th...

Techno-prophets try macroeconomics: are they hallucinating? (2025-05-19, 1h 06m)
We tackle a brand-new paper from the folks at Epoch AI called the GATE model (Growth and AI Transition Endogenous model). It makes some bold claims; the headline grabber is that its default scenario projects a whopping 23% global GDP growth in 2027, which had us both (especially Andrey) practically falling out of our chairs. Before diving into GATE, Andrey shared a bit about the challenge of picking readings for his PhD course on AGI and business, a tough task when the future hasn't happened yet. Then we broke down the GATE model it...

Did Meta's Algorithms Swing the 2020 Election? (2025-05-05, 50m)
We hear it constantly: social media algorithms are driving polarization, feeding us echo chambers, and maybe even swinging elections. But what does the evidence actually say? In the darkest version of this narrative, social media platform owners are shadow kingmakers and puppet masters who can pick the winner of a close election by selectively promoting narratives, amorally disregarding the heightened political polarization and mental anxiety that result from their manipulation of the public psyche. In this episode, we dive into an important study published in Science (How do social media feed algorithms affect...

Claude Just Refereed the Anthropic Economic Index (2025-04-21, 1h 01m)
We dive into the paper "Which Economic Tasks Are Performed with AI: Evidence from Millions of Claude Conversations." We analyze Anthropic's effort to categorize how people use its Claude AI assistant across economic tasks and occupations, examining both the methodology and the implications with a critical eye. We came into this discussion expecting coding and writing to dominate AI usage patterns, and while the data largely confirms this, our conversation highlights several surprising insights. Why are computer and mathematical tasks so heavily overrepresented, while office and administrative work lags behind? What explains th...

How much should we invest in AI safety? (2025-04-07, 1h 09m)
We tackle one of the most pressing questions of our technological age: how much risk of human extinction should we accept in exchange for unprecedented economic growth from AI? We explore research by Stanford economist Chad Jones, who models scenarios where AI might deliver a staggering 10% annual GDP growth but carry a small probability of triggering an existential catastrophe. We dissect how our risk tolerance depends on fundamental assumptions about utility functions, time horizons, and what actually constitutes an "existential risk." We discuss how Jones' model yields some stark calculations: with ce...

Can AI make better decisions than an ER doctor? (2025-03-24, 43m)
Dive into the intersection of economics and healthcare: how much can AI systems enhance high-stakes medical decision-making? We explore the implications of the research paper "Diagnosing Physician Error: A Machine Learning Approach to Low-Value Health Care" by Sendhil Mullainathan and Ziad Obermeyer. The paper argues that physicians often make predictable and costly errors in deciding whom to test for heart attacks. The authors claim that incorporating machine learning could significantly improve the efficiency and outcomes of such tests, reducing the cost per life-year saved while maintaining or i...

If the Robots Are Coming, Why Aren't Interest Rates Higher? (2025-03-11, 59m)
We tackle an intriguing question inspired by a recent working paper: if artificial general intelligence (AGI) is imminent, why are real interest rates so low? The discussion centers on the provocative paper "Transformative AI, Existential Risk, and Real Interest Rates" by Trevor Chow, Basil Halperin, and J. Zachary Mazlish. It argues that today's historically low real interest rates signal market skepticism about the near-term arrival of transformative AI, defined here as technology generating either massive economic growth (over 30% annually) or catastrophic outcomes like total extinction. We found ourselves initially at od...

High Prices, Higher Welfare? The Auto Industry as a Case Study (2025-02-24, 53m)
Does the U.S. auto industry prioritize consumers or corporate profits? Hosts Seth Benzell and Andrey Fradkin explore the evidence through the lens of the research paper "The Evolution of Market Power and the U.S. Automobile Industry" by Paul Grieco, Charles Murry, and Ali Yurukoglu. Join them as they unpack trends in car prices, market concentration, and consumer surplus, critique the methodology, and consider how competition and innovation shape the auto industry. Could a different competitive structure have driven even greater innovation? Tune in to find out!

Scaling Laws in AI (2025-02-11, 59m)
Does scaling alone hold the key to transformative AI? We dive into scaling laws in artificial intelligence, discussing a set of paradigmatic papers. We examine the idea that as more compute, data, and parameters are added to machine learning models, their performance improves predictably. Referencing several pivotal papers, including early work from OpenAI and subsequent empirical studies, we explore how scaling laws translate into model performance and potential economic value. We also debate the ultimate usefulness and limitations of scaling laws, considering whether purely increasing compute...

Is Social Media a Trap? (2025-01-26, 37m)
Are we trapped by the social media we love? Hosts Seth Benzell and Andrey Fradkin discuss a research paper examining the social and economic impacts of TikTok and Instagram usage among college students. The paper, by Leonardo Bursztyn, Benjamin Handel, Rafael Jimenez, and Christopher Roth, suggests that these platforms may create a "collective trap": users would prefer a world in which no one used social media, despite the platforms' popularity. Through surveys, the researchers found that students place significant value on these platforms but also experience negative social externalities. The discussion explores...

Beyond Task Replacement (2025-01-11, 44m)
We discuss "Artificial Intelligence Technologies and Aggregate Growth Prospects" by Timothy Bresnahan.
* We contrast Bresnahan's paper on AI's impact on economic growth with Daron Acemoglu's task-replacement-focused approach from the previous episode.
* Bresnahan argues that AI's main economic benefits will come through reorganizing organizations and tasks, capital deepening (improving existing machine capabilities), and creating new products and services, rather than simply replacing human jobs.
* We discuss examples from big tech companies: Amazon's product recommendations, Google's search capabilities, voice...

The Simple Macroeconomics of AI (2024-12-21, 53m)
Will AI's impact be as modest as predicted, or could it exceed expectations in reshaping economic productivity? Hosts Seth Benzell and Andrey Fradkin discuss the paper "The Simple Macroeconomics of AI" by Daron Acemoglu, an economist and Institute Professor at MIT. Additional notes from friend of the podcast Daniel Rock of Wharton, coauthor of "GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models," one of the papers cited in the show and a main data source for Acemoglu's paper: (1) Acemoglu does not use the paper's 'main' est...

Situational Awareness (2024-12-07, 1h 02m)
How close are we to AGI, and what might its impact be on the global stage? Hosts Seth Benzell and Andrey Fradkin tackle the high-stakes world of artificial intelligence, focusing on the transformative potential of Artificial General Intelligence (AGI). The conversation is based on Leopold Aschenbrenner's essay "Situational Awareness," which argues that AI development follows predictable scaling laws that allow reliable projections of when AGI will emerge. The hosts also discuss Leopold's thoughts on the geopolitical implications of AGI, including the influence of AI on military and social conflicts. Sp...