podcastdetails.com
Showing episodes and shows of Arvind Narayanan
Shows
Capitalisn't
What Everyone’s Getting Wrong About AI, with Arvind Narayanan
Every major technological revolution has come with a bubble: railroads, electricity, dot-com. Is it AI’s turn? With investments skyrocketing and market valuations reaching the trillions, the stakes are enormous. But are we witnessing a genuine revolution—or the early stages of a spectacular crash? Princeton professor Arvind Narayanan joins Luigi Zingales and Bethany McLean to explain why he believes AI’s transformative impact is overstated. Drawing on his book AI Snake Oil, co-authored with Sayash Kapoor, Narayanan argues that capitalism’s incentives can distort technological progress, pushing hype faster than reality can deliver. They examine how deregula...
2025-10-16
48 min
Resources Radio
Measuring Emissions from Liquefied Natural Gas, with Arvind Ravikumar
In this week’s episode, host Daniel Raimi talks with Arvind Ravikumar, an assistant professor at the University of Texas at Austin, about recent federal deregulation of methane emissions in the United States; specifically, the effects on methane emissions from the production of natural gas and liquefied natural gas. Ravikumar highlights some of his recent research, which explores how all steps in the supply chain of natural gas can affect emissions intensity—including transportation of the energy source to end users—and the variation in methane emissions across countries from their natural gas supply chains. References and recommendations: “Tracking U.S. Lique...
2025-10-14
33 min
Poets & Thinkers
AI as Normal Technology: On superintelligence delusion, bogus claims and a humanistic AI future with Prof. Arvind Narayanan
What if the race toward “superintelligence” is misguided and what does a more humanistic vision for AI adoption actually look like? In this episode of Poets & Thinkers, we dive deep into the intersection of artificial intelligence, culture, and human agency with Prof. Arvind Narayanan, a computer science professor at Princeton University whose work has fundamentally challenged how we think about AI’s role in society. Named on TIME’s inaugural list of 100 most influential people in AI, Arvind brings decades of research experience studying the gap between tech industry promises and real-world impacts. Arvind takes us beyond the hype...
2025-09-02
46 min
How to Fix the Internet
Separating AI Hope from AI Hype
If you believe the hype, artificial intelligence will soon take all our jobs, or solve all our problems, or destroy all boundaries between reality and lies, or help us live forever, or take over the world and exterminate humanity. That’s a pretty wide spectrum, and leaves a lot of people very confused about what exactly AI can and can’t do. In this episode, we’ll help you sort that out: For example, we’ll talk about why even superintelligent AI cannot simply replace humans for most of what we do, nor can it perfect or ruin our world un...
2025-08-13
39 min
New Books in Technology
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
Princeton UP Ideas Podcast
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
18 min
New Books in Science
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
New Books in Big Ideas
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
New Books in Politics and Polemics
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
NBN Book of the Day
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
New Books in Science, Technology, and Society
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
New Books in Public Policy
On Bullshit in AI
Today we’re continuing our series on Harry Frankfurt’s seminal work, On Bullshit. I have the privilege to speak with Arvind Narayanan, co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What it Can’t, and How to Tell the Difference (Princeton University Press, 2024). Arvind is the perfect guest to explore the subject of bullshit in AI, as AI Snake Oil takes on the ridiculous hype ascribed to the promise of AI. AI chatbots often hallucinate, and many of the promoters of AI engage in the art of bullshit when selling people on wild and crazy AI ap...
2025-07-31
20 min
Interpreting India
Beyond Superintelligence: A Realist's Guide to AI
The episode begins with Kapoor explaining the origins of AI Snake Oil, tracing it back to his PhD research at Princeton on AI's limited predictive capabilities in social science domains. He shares how he and co-author Arvind Narayanan uncovered major methodological flaws in civil war prediction models, which later extended to other fields misapplying machine learning. The conversation then turns to the disconnect between academic findings and media narratives. Kapoor critiques the hype cycle around AI, emphasizing how its real-world adoption is slower, more fragmented, and often augmentative rather than fully automating human labor. He cites the...
2025-07-10
39 min
Bankless
LIMITLESS - AI DEBATE: Runaway Superintelligence or Normal Technology? | Daniel Kokotajlo vs Arvind
Two visions for the future of AI clash in this debate between Daniel Kokotajlo and Arvind Narayanan. Is AI a revolutionary new species destined for runaway superintelligence, or just another step in humanity’s technological evolution—like electricity or the internet? Daniel, a former OpenAI researcher and author of AI 2027, argues for a fast-approaching intelligence explosion. Arvind, a Princeton professor and co-author of AI Snake Oil, contends that AI is powerful but ultimately controllable and slow to reshape society. Moderated by Ryan and David, this conversation dives into the crux of capability vs. power, economic tran...
2025-06-04
00 min
Limitless Podcast
AI DEBATE: Runaway Superintelligence or Normal Technology? | Daniel Kokotajlo vs Arvind Narayanan
Two visions for the future of AI clash in this debate between Daniel Kokotajlo and Arvind Narayanan. Is AI a revolutionary new species destined for runaway superintelligence, or just another step in humanity’s technological evolution—like electricity or the internet? Daniel, a former OpenAI researcher and author of AI 2027, argues for a fast-approaching intelligence explosion. Arvind, a Princeton professor and co-author of AI Snake Oil, contends that AI is powerful but ultimately controllable and slow to reshape society. Moderated by Ryan and David, this conversation dives into the crux of capability vs. power, economic transf...
2025-06-04
1h 09
Sunchaser
Lessons From - AI Snake Oil by Arvind Narayanan and Sayash Kapoor
This time, we dive into the essential insights from "AI Snake Oil" by Princeton University's Arvind Narayanan and Sayash Kapoor. This book is a vital guide for anyone looking to navigate the complex world of artificial intelligence beyond the headlines. I unpack their groundbreaking analysis of three distinct types of AI, explaining why the first type works, why the second type only sort of works, and why the third type doesn't work at all. This discussion will equip you with an advanced level of understanding to discern legitimate AI applications from exaggerated claims, helping you...
2025-05-29
52 min
Tech Reviews
Navigating the Realities of AI: Snake Oil and Beyond
Professor Arvind Narayanan discusses the realities and misconceptions surrounding Artificial Intelligence (AI), particularly in the context of his book "AI Snake Oil." The discussion emphasizes the distinction between predictive AI, which often involves making consequential decisions about individuals with questionable accuracy and ethical implications, and generative AI, which, despite its limitations and irresponsible deployment practices, demonstrates potential for productive and even enjoyable applications. It expresses a shared interest in promoting a clear, rigorous, and technically informed understanding of AI development and deployment, advocating for steering technology towards socially beneficial directions while acknowledging potential harms, including impacts on labor and the...
2025-05-07
15 min
The enTalkenator Podcast
A roundtable on three AI articles: Normal tech, biological, or jagged super-intelligence
This episode begins with a brief summary (created by enTalkenator after asking it to generate a summary of multiple, related articles) of three recent articles and then (at 00:14:39) proceeds to a roundtable discussion, generated by modifying the built-in Critics’ Roundtable template. These are the articles discussed: Arvind Narayanan & Sayash Kapoor, AI As Normal Technology, https://knightcolumbia.org/content/ai-as-normal-technology. Jack Lindsey et al., On the Biology of a Large Language Model, https://transformer-circuits.pub/2025/attribution-graphs/biology.html. Ethan Mollick, On Jagged AGI: o3, Gemini 2.5, and Everything After, https://www.oneusefulthing.org/p/on-jagged-agi-o3-g...
2025-04-24
1h 01
How to Fix the Internet
Coming Soon: How to Fix the Internet Season Six
Now more than ever, we need to build, reinforce, and protect the tools and technology that support our freedom. EFF’s How to Fix the Internet returns with another season full of forward-looking and hopeful conversations with the smartest and most creative leaders, activists, technologists, policy makers, and thinkers around. People who are working to create a better internet – and world – for all of us. Co-hosts Executive Director Cindy Cohn and Activism Director Jason Kelley will speak with people like journalist Molly White, reproductive rights activist Kate Bertash, press freedom advocate Harlo Holmes, the Tor Project’s Isabela...
2025-04-23
01 min
In Conversation with Nathalie Nahai
149. Navigating The Complexities of AI: Ethics, Data Quality & Critical Thinking / Ismael Kherroubi Garcia
Today I have the pleasure of interviewing Ismael Kherroubi Garcia, the Founder & CEO of Kairoi, the AI Ethics & Research Governance Consultancy; the Founder of the Responsible Artificial Intelligence Network (RAIN) organised by Fellows of the RSA; and Associate Director of We and AI, promoting AI literacy for social inclusion. Ismael’s work in responsible AI has allowed him to draw on his diverse experiences, having worked across fintech, museums, theatre and academia, in sales, human resources and research; and at organisations as diverse as Bloomberg, the Royal Opera House and NASA. Ismael's training in Business Management and Phil...
2025-04-20
1h 10
Hard Fork
Meta on Trial + Is A.I. a ‘Normal’ Technology? + HatGPT
This week Meta is on trial, in a landmark case over whether it illegally snuffed out competition when it acquired Instagram and WhatsApp. We discuss some of the most surprising revelations from old email messages made public as evidence in the case, and explain why we think the F.T.C.’s argument has gotten weaker in the years since the lawsuit was filed. Then we hear from Princeton computer scientist Arvind Narayanan on why he believes it will take decades, not years, for A.I. to transform society in the ways the big A.I. labs predict. And fi...
2025-04-18
1h 20
The enTalkenator Podcast
Workshop on “Why an overreliance on AI-driven modelling is bad for science” by Arvind Narayanan and Sayash Kapoor
Arvind Narayanan and Sayash Kapoor, Why an overreliance on AI-driven modelling is bad for science, available at https://www.nature.com/articles/d41586-025-01067-2. This is a synthetic academic workshop generated using enTalkenator (selecting Google Gemini 2.5 Pro Experimental). The article discussed appeared as a comment in the journal Nature on April 7, 2025, with the subhead: “Without clear protocols to catch errors, artificial intelligence’s growing role in science could do more harm than good.”
2025-04-09
43 min
The Next Big Idea
The Next Big Idea Daily: What Can AI Really Do?
Two of TIME’s 100 Most Influential People in AI share what you need to know about AI — and how to defend yourself against bogus AI claims and products.
📱 Follow The Next Big Idea Daily on Apple Podcasts, Spotify, or wherever you listen
📕 AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan & Sayash Kapoor
📩 Want to transform your day in just 10 minutes? Sign up for our Book of the Day newsletter, and you’ll get daily, bite‑sized insights from the best new nonfiction books —...
2025-03-17
15 min
The Next Big Idea Daily
What Can AI Really Do?
Two of TIME’s 100 Most Influential People in AI share what you need to know about AI — and how to defend yourself against bogus AI claims and products.
📕 AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan & Sayash Kapoor
📱 Follow The Next Big Idea Daily on Apple Podcasts, Spotify, or wherever you listen
📩 Want more bite-sized insights from the best new nonfiction delivered straight to your inbox? Sign up for our Book of the Day newsletter Learn more about your ad...
2025-03-17
16 min
log out
#001: AI Snake Oil – Arvind Narayanan on AI Hype, Hopes, and False Promises
Is AI really as powerful as we’re told, or are we falling for hype-driven illusions? In this episode, Siara speaks with Arvind Narayanan, co-author of AI Snake Oil, who breaks down the myths shaping the AI industry—from hiring algorithms to criminal justice and beyond. He explains how AI is often marketed as a revolutionary solution when, in reality, many of these technologies don’t—and can’t—work as promised. We explore the regulatory landscape, the potential for an AI bubble, and why artificial general intelligence (AGI) may be further away than we think. Arvind also...
2025-03-13
1h 04
Agents of Intelligence
Beyond the Algorithm: Fairness and Machine Learning
Join us for a deep dive into the insights of Fairness and Machine Learning by Solon Barocas, Moritz Hardt, and Arvind Narayanan. We’ll explore how fairness is defined and measured, the legal and societal context that shapes it, and the power of causality and counterfactual reasoning in identifying discrimination. From testing bias in practice to understanding “merit and desert,” this episode unpacks the limitations and opportunities of automated decision-making—and why machine learning should never be viewed as a simple replacement for human judgment.
2025-02-19
22 min
Project Gamma
Episode 5 | AI-Engineered: Building Applications, Tackling Tech Debt, and Securing Software
Join host Warner Moore on Project Gamma as he chats with tech veteran Jason Montgomery about how GenAI is revolutionizing engineering and cybersecurity. Discover how GenAI accelerates development, builds applications from scratch, and creates AI agents to tackle technical debt. We also explore strategies for introducing GenAI into your engineering team, modern security practices, and the essentials of DevSecOps.
Podcast topics:
Using GenAI to accelerate engineering
Building an application from scratch using GenAI
Creating AI agents to tackle technical debt
Introducing GenAI to your engineering organization
Security...
2025-02-18
42 min
L'IA aujourd'hui !
L'IA aujourd'hui ! Episode of 2025-02-18
Hello and welcome to the podcast about AI, by AI, which keeps you up to date! Today: OpenAI unveils its new models, the challenges of predictive AI, and innovations in language models. Let's go! Let's start with OpenAI, which recently announced the GPT-4.5 and GPT-5 models. Sam Altman, CEO of OpenAI, revealed that GPT-4.5, nicknamed Orion, will be the last model without "chain of thought". GPT-5 will integrate advanced technologies, including the o3 model, to offer a unified experience. Us...
2025-02-18
03 min
The Jim Rutt Show
EP 283 Brian Chau on the Trump Administration and AI
Jim talks with Brian Chau about what the new administration could mean for AI development. They discuss recent actions by the Trump administration including repealing Biden's executive order & the Stargate infrastructure project, Biden's impact on AI, the formation of the Alliance for the Future, regulatory bureaucracy, state patchwork laws, censorship, the Gemini controversy & DEI in AI, safety restrictions in chat models, the meaning of DeepSeek, economic implications of model distillation, historical analogies for AI development, national security & sovereignty implications, 3 main lanes for AI development, democratized access vs gatekeeping, trust issues, "AI" vs "LLMs," and much more. Episode Transcript Alliance for t...
2025-02-11
1h 08
AI and Us: Exploring Our Future
AI Snake Oil: Separating Fact from Fiction
Join us for an illuminating conversation with Arvind Narayanan, Princeton University Computer Science Professor and Director of Princeton's Center for Information Technology Policy, as he unpacks the complex world of artificial intelligence claims and reality. In this episode, Narayanan discusses his new book "AI Snake Oil," offering critical insights into distinguishing genuine AI advances from overblown marketing promises.
Key highlights include:
The distinction between predictive AI and generative AI
Real-world examples of AI snake oil in hiring, criminal justice, and education
How institutional flaws contribute to the proliferation of questionable AI products
Practical frameworks for evaluating AI claims and...
2025-02-09
15 min
The Editor's Half Hour
Responsible AI Content Policies
AI is transforming the way we edit, write, and create—but where do we draw the ethical line? In this episode, we’re diving deep into the ethics of AI in publishing to explore some of the biggest ethical questions editors and writers face in the age of AI. Are AI tools a helpful assistant or a threat to creativity? How do we ensure fairness, accuracy, and integrity in AI-assisted editing? I interview Amy Frushour Kelly, editor and AI content policy consultant who is an advocate for the ethical use of generative AI in the...
2025-02-01
43 min
Unsupervised Learning with Jacob Effron
Ep 54: Princeton Researcher Arvind Narayanan on the Limitations of Agent Evals, AI’s Societal Impact & Important Lessons from History
Arvind Narayanan is one of the leading voices in AI when it comes to cutting through the hype. As a Princeton professor and co-author of AI Snake Oil, he’s one of the most thoughtful voices cautioning against both unfounded fears and overblown promises in AI. In this episode, Arvind dissects the future of AI in education, its parallels to past tech revolutions, and how our jobs are already shifting toward managing these powerful tools. Some of our favorite takeaways:
[0:00] Intro
[0:46] Reasoning Models and Their Uneven Progress
[2:46] Challenges in AI Benchmarks and Real-World App...
2025-01-30
57 min
CXOTalk
AI Snake Oil: Princeton Professor Exposes AI Truths | #867
In CXOTalk episode 867, Princeton professor Arvind Narayanan, co-author of AI Snake Oil, reveals why many AI products fail to deliver on their promises and how leaders can distinguish hype-driven solutions from those that create value. Exploring the landscape of AI advancements, deceptions, and limitations, Narayanan explains how to distinguish genuine AI innovations from overhyped and potentially harmful applications. We discuss real-world examples, ethical concerns, and the role of policy and regulation in mitigating AI snake oil. Tune in to learn actionable insights for consumers and businesses and explore how AI reshapes industries while posing unique challenges and opportunities. #enterpriseai...
2025-01-28
57 min
Deep Dive Book Summaries
AI Snake Oil
Arvind Narayanan and Sayash Kapoor's book AI Snake Oil: Hype, Myths and the Future of Artificial Intelligence critically examines the hype surrounding artificial intelligence, differentiating between genuinely useful AI applications and those that are ineffective or misleading. The book explores various AI types—generative, predictive, and content moderation—analyzing their capabilities and limitations through real-world examples. It exposes the pervasive myths and biases surrounding AI, often fueled by corporate interests and media misrepresentation. Ultimately, the authors advocate for responsible AI development and deployment, proposing regulatory frameworks and alternative decision-making models to mitigate potential harms and promote fairness. They emphasize the i...
2025-01-15
18 min
unSILOed with Greg LaBlanc
497. Spotting The Difference Between AI Innovation and AI Snake Oil feat. Arvind Narayanan
Where is the line between fact and fiction in the capabilities of AI? Which predictions or promises about the future of AI are reasonable and which are the creations of hype for the benefit of the industry or the company making outsized claims? Arvind Narayanan is a professor of computer science at Princeton University, the director of the Center for Information Technology Policy, and an author. His latest book is AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Greg and Arvind discuss the misconceptions about AI...
2025-01-08
46 min
Daybreak
AI Snakeoil ft. Chloe Moa — Monday, Dec. 16
Today, we cover AI policy with professors Arvind Narayanan and Sayash Kapoor, an update on the ongoing issue of mysterious objects flying over New York and New Jersey, and a number of serious weather incidents across the US over the weekend.
2024-12-16
04 min
Techtonic with Mark Hurst | WFMU
Arvind Narayanan, author, "AI Snake Oil" from Dec 9, 2024
Arvind Narayanan, author, "AI Snake Oil" Tomaš Dvořák - "Game Boy Tune" - "Mark's intro" - "Interview with Arvind Narayanan" [0:04:54] - "Mark's comments" [0:47:52] Waveshaper - "E.P.R.O.M." [0:55:33] https://www.wfmu.org/playlists/shows/146887
2024-12-10
00 min
Thinkers & Ideas
AI Snake Oil with Sayash Kapoor
In AI Snake Oil: What AI Can Do, What It Can’t, and How to Tell the Difference, Sayash Kapoor and his co-author Arvind Narayanan provide an essential understanding of how AI works and why some applications remain fundamentally beyond its capabilities. Kapoor was included in TIME’s inaugural list of the 100 most influential people in AI. As a researcher at Princeton University’s Center for Information Technology Policy, he examines the societal impacts of AI, with a focus on reproducibility, transparency, and accountability in AI systems. In his new book, he cuts through the hype to help r...
2024-12-03
27 min
We Are Not Saved
Mid-length Non-fiction Book Reviews: Volume 2
AI Snake Oil: What Artificial Intelligence Can Do, What It Can't, and How to Tell the Difference by: Arvind Narayanan and Sayash Kapoor
Country Driving: A Journey Through China from Farm to Factory by: Peter Hessler
On Grand Strategy by: John Lewis Gaddis
Leisure: The Basis of Culture by: Josef Pieper
Anatomy of the State by: Murray Rothbard
The ONE Thing: The Surprisingly Simple Truth Behind Extraordinary Results
Alone on the Ice: The Greatest Survival Story in the History of Exploration by: David Roberts
The Kille...
2024-11-30
24 min
Books In Dialogue
S4 - 1 - Decoding AI: Navigating the Hype and Reality of Artificial Intelligence
In this episode of Books in Dialogue, we explore AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference by Arvind Narayanan and Sayash Kapoor. This insightful book demystifies artificial intelligence, distinguishing genuine capabilities from exaggerated claims. Join us as we discuss how to critically assess AI technologies, understand their real-world applications, and recognize the limitations that often go unnoticed. Whether you're an AI enthusiast or a cautious skeptic, this conversation offers valuable perspectives on navigating the complex landscape of artificial intelligence.
2024-11-29
30 min
A Song Called Life
Episode #201: Dr. Arvind Narayanan
In Episode #201, computer scientist Arvind Narayanan joins Osi to discuss the myths and realities of AI, highlighting its limitations, potential harms, and the dangers of unchecked big tech influence. Arvind Narayanan is a computer scientist and professor at Princeton University, renowned for his pioneering research on the de-anonymization of data. He is the co-author of the book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, which explores the promises and limitations of AI technology.
2024-11-27
51 min
El Brieff
Sheinbaum's tariffs: The news for this Wednesday
In today's episode, we analyze the trade tensions between Mexico and the United States following Donald Trump's tariff threats and the response from President Claudia Sheinbaum. We also cover corruption investigations in Nuevo León, progress on the decriminalization of abortion in the State of Mexico, and the economic and political challenges facing the country. On the international front, we highlight the announcement of the ceasefire between Israel and Hezbollah, China's reaction to Trump's policies, and the latest develop...
2024-11-27
12 min
Untangled
Is AI snake oil?
Hi, I’m Charley, and this is Untangled, a newsletter about our sociotechnical world, and how to change it.
* Come work with me! The initiative I lead at Data & Society is hiring for a Community Manager. Learn more here.
* Check out my new course, Sociotechnical Systems Change in Practice. The first cohort will take place on January 11 and 12, and you can sign up here.
* Last week I interviewed Mozilla’s Jasmine Sun and Nik Marda on the potential of public AI, and the week prior I shared my conversation with AI reporter Karen Hao on O...
2024-11-24
41 min
Something You Should Know
The Real and False Promises of AI & What They Really Ate at the First Thanksgiving
How many photographs have been taken worldwide in the history of photography? And how many just this year? These are a few of the fascinating facts that begin this episode that I know you’ll end up repeating at upcoming holiday parties that will make you sound so interesting! Source: John Mitchinson author of 1227 Quite Interesting Facts to Blow Your Socks Off (https://amzn.to/4fP4vaX). To hear it said, artificial intelligence is the greatest thing in the world or the beginning of the end of civilization. So, what’s the truth about AI? What can it d...
2024-11-21
50 min
Bible News Prophecy Interviews
Artificial Intelligence Deadly Perils?
What are some of the benefits and risks of Artificial Intelligence (AI) that Arvind Narayanan, Professor of Computer Science at Princeton University mentioned at the Hindustan Times Leadership Summit. Will government regulations eliminate the risks? Why or why not? What are AI 'hallucinations'? Did a Google Gemini AI chat bot tell a student that he was a waste and should "Please die," despite programming that was supposed to prevent such? Was the response actually "non-sensical" …
2024-11-18
00 min
Láncreakció
#183 - Radical AI optimism or snake-oil syndrome?
It's a treat to read texts written by real AI experts who don't stop at platitudes. This week we found two such pieces: Dario Amodei, Anthropic's usually doom-minded CEO, has written a radically optimistic essay about the good deeds we could expect from AI in an ideal world. And two researchers from Princeton (Arvind Narayanan and Sayash Kapoor) have written a book titled "AI Snake Oil" about what AI can do and what it is no good for at all. Two thoughtful and thought-provoking texts...
2024-11-01
42 min
Interconnects
Interviewing Arvind Narayanan on making sense of AI hype
Arvind Narayanan is a leading voice disambiguating what AI does and does not do. His work, with Sayash Kapoor at AI Snake Oil, is one of the few beacons of reason in an AI media ecosystem with quite a few bad apples. Arvind is a professor of computer science at Princeton University and the director of the Center for Information Technology Policy. You can learn more about Arvind and his work on his website, X, or Google Scholar. This episode is all in on figuring out what current LLMs do and don’t do. We cover AGI, ag...
2024-10-17
54 min
Doom Debates
“AI Snake Oil” Prof. Arvind Narayanan Can't See AGI Coming | Liron Reacts
Today I’m reacting to Arvind Narayanan’s interview with Robert Wright on the Nonzero podcast: https://www.youtube.com/watch?v=MoB_pikM3NY Dr. Narayanan is a Professor of Computer Science and the Director of the Center for Information Technology Policy at Princeton. He just published a new book called AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Arvind claims AI is “normal technology like the internet”, and never sees fit to bring up the impact or urgency of AGI. So I’ll take it upon...
2024-10-13
1h 12
Economics Explored
UBI Experiment: Success or Failure? Insights from Sam Altman’s Trial - EP257
In this episode, Gene Tunny dives into a recent Universal Basic Income (UBI) experiment funded by Sam Altman, CEO of OpenAI. Gene explores the key findings of the randomised controlled trial and discusses whether the positive outcomes are enough to convince sceptics. Are UBI recipients more financially secure, or are there deeper concerns about its impact on labour force participation and long-term wealth? Get Gene’s balanced analysis of this major UBI trial and its broader implications.If you have any questions, comments, or suggestions for Gene, please email him at contact@economicsexplored.com or send a voi...
2024-10-10
45 min
The TWIML AI Podcast (formerly This Week in Machine Learning & Artificial Intelligence)
AI Agents: Substance or Snake Oil with Arvind Narayanan
Today, we're joined by Arvind Narayanan, professor of Computer Science at Princeton University to discuss his recent works, AI Agents That Matter and AI Snake Oil. In “AI Agents That Matter”, we explore the range of agentic behaviors, the challenges in benchmarking agents, and the ‘capability and reliability gap’, which creates risks when deploying AI agents in real-world applications. We also discuss the importance of verifiers as a technique for safeguarding agent behavior. We then dig into the AI Snake Oil book, which uncovers examples of problematic and overhyped claims in AI. Arvind shares various use cases of failed applicat...
2024-10-07
54 min
Factually! with Adam Conover
Two Computer Scientists Debunk A.I. Hype with Arvind Narayanan and Sayash Kapoor
The AI hype train has officially left the station, and it's speeding so fast it might just derail. This isn't because of what AI can actually do, it's all because of how it's marketed. This week, Adam sits with Arvind Narayanan and Sayash Kapoor, computer scientists at Princeton and co-authors of "AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference." Together, they break down everything from tech that's labeled as "AI" but really isn’t, to surprising cases where so-called "AI" is actually just low-paid human labor in disguise. Find Arvi...
2024-10-02
1h 11
BigIdeas.FM: Audiobooks delivered as conversational podcasts!
AI Snake Oil: AI won't get you laid, but..
[Big ideas from the recently launched book AI Snake Oil, by award-winning researchers Arvind Narayanan and Sayash Kapoor] Get the book: https://amzn.to/4ecK6MF ---------- Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere―and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of how AI works and why...
2024-10-01
13 min
The Tech Policy Press Podcast
AI Snake Oil: Separating Hype from Reality
Arvind Narayanan and Sayash Kapoor are the authors of AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, published September 24 by Princeton University Press. In this conversation, Justin Hendrix focuses in particular on the book's Chapter 6, "Why Can't AI Fix Social Media?"
2024-09-29
35 min
Enjoy This Vivid Full Audiobook And Feel The Difference.
AI Snake Oil by Sayash Kapoor, Arvind Narayanan
Please visit https://thebookvoice.com/podcasts/2/audible/16274 to listen to full audiobooks. Title: AI Snake Oil Author: Sayash Kapoor, Arvind Narayanan Narrator: Landon Woodson Format: mp3 Length: 9 hrs and 58 mins Release date: 09-24-24 Ratings: 4.5 out of 5 stars, 71 ratings Genres: Computer Science Publisher's Summary: Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of h...
2024-09-24
9h 58
AI DAILY: Breaking AI News Handpicked For The Curious Mind
SUPERINTELLIGENCE BY 2034
Like this? Get AIDAILY, delivered to your inbox, every weekday. Subscribe to our newsletter at https://aidaily.us
OpenAI CEO Predicts AI Superintelligence in a Decade
Sam Altman, CEO of OpenAI, predicts that AI superintelligence could emerge within the next 10 years, marking the start of "The Intelligence Age." While acknowledging potential labor market disruptions, he envisions AI revolutionizing fields like healthcare and education, driving global prosperity. Altman urges caution but remains optimistic about AI’s societal impact.
California Implements AI Laws on Deepfakes, Political Ads, and Robocalls
California Go...
2024-09-24
03 min
Immerse Yourself In This Ground-Breaking Full Audiobook And Feel The Difference.
AI Snake Oil by Sayash Kapoor, Arvind Narayanan
Please visit https://thebookvoice.com/podcasts/2/audible/16274 to listen to full audiobooks. Title: AI Snake Oil Author: Sayash Kapoor, Arvind Narayanan Narrator: Landon Woodson Format: mp3 Length: 9 hrs and 58 mins Release date: 09-24-24 Ratings: 4.5 out of 5 stars, 71 ratings Genres: Computer Science Publisher's Summary: Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of h...
2024-09-24
9h 58
Relish: This Best-Selling Full Audiobook For Curious Minds.
AI Snake Oil by Sayash Kapoor, Arvind Narayanan
Please visit https://thebookvoice.com/podcasts/2/audible/17670 to listen to full audiobooks. Title: AI Snake Oil Author: Sayash Kapoor, Arvind Narayanan Narrator: Landon Woodson Format: mp3 Length: 9 hrs and 58 mins Release date: 09-24-24 Ratings: 4.5 out of 5 stars, 71 ratings Genres: History & Culture Publisher's Summary: Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut through the confusion to give you an essential understanding of h...
2024-09-24
9h 58
Uncover A Audiobooks Digest Pack Built To Gain Confidence.
Sayash Kapoor, Arvind Narayanan interprets the audiobook AI Snake Oil
Please visit https://thebookvoice.com/podcasts/2/audible/16274 to listen to full audiobooks. Title: AI Snake Oil Author: Sayash Kapoor, Arvind Narayanan Narrator: Landon Woodson Format: mp3 Length: 9 hrs and 58 mins Release date: 09-24-24 Ratings: 4.5 out of 5 stars, 71 ratings Genres: Computer Science Publisher's Summary: Confused about AI and worried about what it means for your future and the future of the world? You’re not alone. AI is everywhere—and few things are surrounded by so much hype, misinformation, and misunderstanding. In AI Snake Oil, computer scientists Arvind Narayanan and Sayash Kapoor cut...
2024-09-24
9h 58
Ground Truths
AI Snake Oil—A New Book by 2 Princeton University Computer Scientists
Arvind Narayanan and Sayash Kapoor are well-regarded computer scientists at Princeton University who have just published a book with a provocative title, AI Snake Oil. Here I’ve interviewed Sayash and challenged him on this dismal title, for which he provides solid examples of predictive AI’s failures. Then we get into the promise of generative AI. Full videos of all Ground Truths podcasts can be seen on YouTube here. The audios are also available on Apple and Spotify. Transcript with links to audio and external links to key...
2024-09-24
39 min
Tech Policy Podcast
385: AI Snake Oil
Sayash Kapoor (Princeton) discusses the incoherence of precise p(doom) predictions and the pervasiveness of AI “snake oil.” Check out his and Arvind Narayanan’s new book, AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference. Topics include:
What’s a prediction, really?
p(doom): your guess is as good as anyone’s
Freakishly chaotic creatures (us, that is)
AI can’t predict the impact of AI
Gaming AI with invisible ink
Life is luck—let’s act like it
Superintelligence (us, that is)
The bitter lesson
AI danger: sweat the small stuff
Links:...
2024-09-23
53 min
Masters of Privacy
Daniel Jaye: non-deprecated cookies (II), hyper-federated data, p3p and publishers
This is our second interview analyzing the impact of Google’s decision not to deprecate third-party cookies on its Chrome browser. Daniel Jaye is a seasoned technology industry executive and currently is CEO and founder of Aqfer, a Marketing Data Platform on top of which businesses can build their own MarTech and AdTech solutions. Daniel has provided strategic, tactical and technology advisory services to a wide range of marketing technology and big data companies. Clients have included Brave Browser, Altiscale, ShareThis, Ghostery, OwnerIQ, Netezza, Akamai, and Tremor Media. He was the founder and CEO of Kor...
2024-09-16
23 min
Future Tense
AI snake oil — its limits, risks, and its thirst for resources
ChatGPT pioneer Sam Altman reckons democratic countries will need to re-write their social contracts once AI reaches its full potential, such is its power to shape the future. But to quote a famous political aphorism: "he would say that, wouldn't he?" Princeton computer scientist Arvind Narayanan joins us to talk about the hype, the reality and the true limits of Artificial Intelligence. His new book is called "AI Snake Oil: What Artificial Intelligence Can Do, What it Can't, and How to Tell the Difference". Also, AI's dirty secret – it's a huge consumer of both power and water. And th...
2024-09-13
29 min
The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: OpenAI's Newest Board Member, Zico Kolter on The Biggest Bottlenecks to the Performance of Foundation Models | The Biggest Questions and Concerns in AI Safety | How to Regulate an AI-Centric World
Zico Kolter is a Professor and the Director of the Machine Learning Department at Carnegie Mellon University. His research spans several topics in AI and machine learning, including work in AI safety and robustness, LLM security, the impact of data on models, implicit models, and more. He also serves on the Board of OpenAI, as a Chief Expert for Bosch, and as Chief Technical Advisor to Gray Swan, a startup in the AI safety space. In Today's Episode with Zico Kolter We Discuss: 1. Model Performance: What are the Bottlenecks:...
2024-09-04
1h 00
Doom Debates
Arvind Narayanan Makes AI Sound Normal | Liron Reacts
Today I’m reacting to the 20VC podcast with Harry Stebbings and Princeton professor Arvind Narayanan. Prof. Narayanan is known for his critical perspective on the misuse and over-hype of artificial intelligence, which he often refers to as “AI snake oil”. Narayanan’s critiques aim to highlight the gap between what AI can realistically achieve and the often misleading promises made by companies and researchers. I analyze Arvind’s takes on the comparative dangers of AI and nuclear weapons, the limitations of current AI models, and AI’s trajectory toward being a commodity rather than a superintel...
2024-08-29
1h 09
The Twenty Minute VC (20VC): Venture Capital | Startup Funding | The Pitch
20VC: AI Scaling Myths: More Compute is not the Answer | The Core Bottlenecks in AI Today: Data, Algorithms and Compute | The Future of Models: Open vs Closed, Small vs Large with Arvind Narayanan, Professor of Computer Science @ Princeton
Arvind Narayanan is a professor of Computer Science at Princeton and the director of the Center for Information Technology Policy. He is a co-author of the book AI Snake Oil and a prominent critic of AI scaling myths, including the claim that just adding more compute will keep driving progress. He is also the lead author of a textbook on the computer science of cryptocurrencies, which has been used in over 150 courses around the world, and an accompanying Coursera course that has had over 700,000 learners. In Today's Episode with Arvind Narayanan We Discuss:
2024-08-28
51 min
Scaling Theory
#9 – Arvind Narayanan: Myths and Policies in Scaling AI
My guest is Arvind Narayanan, a Professor of Computer Science at Princeton University, and the director of the Center for Information Technology Policy, also at Princeton. Arvind is renowned for his work on the societal impacts of digital technologies, including his textbook on fairness and machine learning, his online course on cryptocurrencies, his research on data de-anonymization, dark patterns, and more. He has already amassed over 30,000 citations on Google Scholar. In just a few days, in late September 2024, Arvind will release a book co-authored with Sayash Kapoor titled “AI Snake Oil: What Artificial Intelligence Can Do, What It...
2024-08-26
48 min
Fluidity
Artificial Neurons Considered Harmful, Part 2
The conclusion of this chapter. So-called “neural networks” are extremely expensive, poorly understood, unfixably unreliable, deceptive, data hungry, and inherently limited in capabilities. In short: they are bad. https://betterwithout.ai/artificial-neurons-considered-harmful Sayash Kapoor and Arvind Narayanan’s "The bait and switch behind AI risk prediction tools": https://aisnakeoil.substack.com/p/the-bait-and-switch-behind-ai-risk A video titled "Latent Space Walk": https://www.youtube.com/watch?v=bPgwwvjtX_g Another video showing a walk through latent space: https://www.youtube.com/watch?v=YnXiM97ZvOM You can support the podcast and get episodes a week early, by supporti...
2024-08-11
28 min
Doom Debates
P(Doom) Estimates Shouldn't Inform Policy??
Princeton Comp Sci Ph.D. candidate Sayash Kapoor co-authored a blog post last week with his professor Arvind Narayanan called "AI Existential Risk Probabilities Are Too Unreliable To Inform Policy". While some non-doomers embraced the arguments, I see it as contributing nothing to the discourse besides demonstrating a popular failure mode: a simple misunderstanding of the basics of Bayesian epistemology. I break down Sayash's recent episode of Machine Learning Street Talk point-by-point to analyze his claims from the perspective of the one true epistemology: Bayesian epistemology.
00:00 Introduction
03:40 Bayesian Reasoning
04:33...
2024-08-05
52 min
CSAIL Alliances Podcasts
Hype vs. Reality: the Current State of AI with Arvind Narayanan
Princeton Professor Arvind Narayanan, author of "AI Snake Oil," sheds light on the stark contrast between the public perception and actual capabilities of AI. In this podcast, he explores the significant gap between the excitement surrounding AI and its current limitations. Find out more about CSAIL Alliances, as well as a full transcript of this podcast, at https://cap.csail.mit.edu/podcasts/current-state-ai-arvind-narayanan. If you would like to learn more about CSAIL's Professional Development Courses, visit here: https://cap.csail.mit.edu/events-professional-programs. Podcast listeners save 10% on courses with code MITXPOD10.
2024-08-05
33 min
Machine Learning Street Talk (MLST)
Sayash Kapoor - How seriously should we take AI X-risk? (ICML 1/13)
How seriously should governments take the threat of existential risk from AI, given the lack of consensus among researchers? On the one hand, existential risks (x-risks) are necessarily somewhat speculative: by the time there is concrete evidence, it may be too late. On the other hand, governments must prioritize — after all, they don’t worry too much about x-risk from alien invasions. MLST is sponsored by Brave: The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perf...
2024-07-28
49 min
The Tech Aunties
AI's Unintended Consequences: A Data Privacy Architect's Take with Swati Popuri
Is AI pure hype? Or will it truly create a brighter digital future? In this episode, recorded live at SXSW, data privacy architect Swati Popuri sheds light on AI's incredible potential and unintended consequences. Swati provides expert insights on navigating AI's impact, from its power to streamline our lives to the risks of deepfakes and bias. Don't miss the lively Q&A! Don’t miss the Tech Aunties Podcast – Subscribe now to stay updated on all things related to technology, responsible AI, and product marketing. Expert insights like those from Swati Popuri will help guid...
2024-04-17
46 min
The Boring AppSec Podcast
S1E01 - Asset Inventory
Welcome to the Boring AppSec Podcast! In Episode 1, we discuss software inventories. What they are, why we need them, and what are our favorite ways to build them. References: We will try and add information about all the references we make here. Please enter rabbit holes at will :) Cartography - https://github.com/lyft/cartography GenAI + Cartography https://shinobi.security/#how-it-works https://github.com/samvas-codes/cspm-gpt Commercial asset inventory mentioned on the show: https://www.jupiterone.com/ Talk by Sandesh and Satyaki on automating asset inventory generation at Razorpay: https://www.youtube.com/watch?v=8q42...
2024-03-04
44 min
JAMA+ AI Conversations
AI Monitoring to Reduce Data-Based Disparities
Amid the surging buzz around artificial intelligence (AI), can we trust the AI hype, and more importantly, are we ready for its implications? In this Q&A, Arvind Narayanan, PhD, a professor of computer science at Princeton, joins JAMA's Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, to discuss AI's fairness, transparency, and accountability. Related Content: How to Navigate the Pitfalls of AI Hype in Health Care
2024-01-03
25 min
JAMA Medical News
AI and Clinical Practice—AI Monitoring to Reduce Data-Based Disparities
Amid the surging buzz around artificial intelligence (AI), can we trust the AI hype, and more importantly, are we ready for its implications? In this Q&A, Arvind Narayanan, PhD, a professor of computer science at Princeton, joins JAMA's Editor in Chief Kirsten Bibbins-Domingo, PhD, MD, MAS, to discuss AI's fairness, transparency, and accountability. Related Content: How to Navigate the Pitfalls of AI Hype in Health Care
2024-01-03
25 min
AI and the Future of Work: Artificial Intelligence in the Workplace, Business, Ethics, HR, and IT for AI Enthusiasts, Leaders and Academics
Josh Drean, co-founder of The Work3 Institute, author, and future of work authority, discusses human-centric employment in the era of AI
We've met future of work visionaries recently like Gary Bolles, Dr. John Boudreau, Mark McCrindle, and Josh Bersin. All have shared unique perspectives on how AI is redefining the employee experience. Today's guest belongs on that Mt. Rushmore of future of work luminaries. Josh Drean is Co-founder & Director of Employee Experience at The Work3 Institute, AI + Work Advisor at the Harvard Innovation Labs, and Co-author of Employment is Dead (Harvard Business Review Press, 2024). His work has been featured in Harvard Business Review, Forbes, Fast Company, and The Economist and he has made appearances on The...
2023-12-11
42 min
The Ezra Klein Show
A Lot Has Happened in A.I. Let’s Catch Up.
Thursday marked the one-year anniversary of the release of ChatGPT. A lot has happened since. OpenAI, the makers of ChatGPT, recently dominated headlines again after the nonprofit board of directors fired C.E.O. Sam Altman, only for him to return several days later. But that drama isn’t actually the most important thing going on in the A.I. world, which hasn’t slowed down over the past year, even as people are still discovering ChatGPT for the first time and reckoning with all of its implications. Tech journalists Kevin Roose and Casey Newton are...
2023-12-01
1h 10
Tech Mirror
Privacy: Move Fast and Regulate It
Curious about the future of privacy in Australia? Join us as we delve into the world of privacy regulation in Australia. Our expert panel shares their thoughts on the Government’s response to the Privacy Act Review Report. It’s a follow-up to episode #22 ‘Privacy is Not Dead’. Returning guest Anna Johnston, founder and Principal of Salinger Privacy, is joined by Ryan Black, Head of Policy for the Tech Council of Australia, and Kate Bower, a fellow at the UTS Human Technology Institute, currently on sabbatical from CHOICE as Consumer Data Advocate. The panel discus...
2023-11-27
55 min
One Thing Today in Tech
US President Biden moves to establish AI guardrails with Executive Order
In today’s episode we take a quick look at news of US President Joe Biden’s executive order to regulate AI, but first one other headline that’s caught everyone’s attention at home. Headlines: Several politicians from various opposition parties in India have been sent notifications by Apple that they were being targeted by “state-sponsored attackers,” according to multiple media reports. Among those who may have been targeted are members of parliament including TMC's Mahua Moitra, Shiv Sena (UBT)'s Priyanka Chaturvedi, Congress's Pawan Khera and Shashi Tharoor, AAP's Raghav Chadha, and CPIM's Sitar...
2023-11-01
05 min
Science Vs
AI: Is It Out Of Control?
Artificial Intelligence seems more human-like and capable than ever before — but how did it get so good so quickly? Today, we’re pulling back the curtain to find out exactly how AI works. And we'll dig into one of the biggest problems that scientists are worried about here: The ability of AI to trick us. We talk to Dr. Sasha Luccioni and Professor Seth Lazar about the science. This episode contains explicit language. There’s also a brief mention of suicide, so please take care when listening. Here are some c...
2023-06-08
42 min
Tech45
#607: The embellished moon
Follow-up: Ever heard of 'glitch tokens'? "PsyNetMessage", "RawDownload", "RandomRedditorWithNo". What's new in GPT-4? Clippy is coming back! Microsoft is building GPT-4 into Office. OpenAI is no longer open. Arvind Narayanan - Decoding the Hype About AI. Allen Pike - A 175-Billion-Parameter Goldfish. Topics: There are new Sonos speakers: the Sonos Era 100 and Era 300. Fake! Those Samsung 'Space Zoom' photos of the moon! But is that really a problem? Vulnerabilities in 'Android land': the baseband in Samsung's Exynos chip is vulnerable, and 'aCropalypse': the built-in photo markup app on Pixel phones has a vulnerability that leaks cropped data. The American space agency...
2023-03-22
57 min
Distributed AI Research Institute
Mystery AI Hype Theater 3000, Episode 3: "Can machines learn how to behave?" (Part 3)
Emily and Alex finally finish their reading of "Can machines learn how to behave?" by Blaise Aguera y Arcas. Citations: Inioluwa Deborah Raji, Emily M. Bender, Amandalynne Paullada, Emily Denton, Alex Hanna. "AI and the Everything in the Whole Wide World Benchmark" https://arxiv.org/abs/2111.15366 Janet Abbate. "Recoding Gender" https://mitpress.mit.edu/9780262304535/ Mar Hicks. "Programmed Inequality: How Britain Discarded Women Technologists and Lost Its Edge in Computing" https://mitpress.mit.edu/9780262535182/ Morgan Ames. "The Charisma Machine: A Deep Dive into One Laptop per Child"
2022-11-23
00 min
Notes From The Electronic Cottage | WERU 89.9 FM Blue Hill, Maine Local News and Public Affairs Archives
Notes from the Electronic Cottage 11/10/22: AI Snake Oil
Producer/Host: Jim Campbell Suppose you were an academic who posted some slides from a lecture on your university’s archive page, and suppose that tens of thousands of people found them and downloaded them and 2 million people read your Twitter feed on the subject. Would you be surprised? This really happened to Arvind Narayanan, and therein lies the source of a book in progress and a blog underway entitled “AI Snake Oil.” That title alone should make today’s episode of the Electronic Cottage worth listening to. Here are links to the sources mentioned in the program: AI Snakeoil...
2022-11-10
08 min
Reimagining the Internet
63 See Through AI Hype with Arvind Narayanan
Arvind Narayanan is a Princeton computer science professor who wants to make it easy for you to cut through the AI hype. In a fascinating and plain old helpful interview, Arvind runs through all the big claims made about AI today and makes them very simple to understand.
2022-10-05
36 min
Crossed Wires
Dub Dub Dee Cee
It’s been two weeks since Apple’s 2022 WWDC keynote, so myself and our newest team member, Jae, sat down to dissect some of the key announcements and the things that really got us excited. Because we’ve had a bit of time to do some more research on some of the new features, I hope that comes across and you enjoy our thoughts and observations. There’s lots we just didn’t have time to cover, for example there’s some very exciting things happening with HomeKit and Matter, but you can rest assured plans are in...
2022-06-20
1h 51
Brave New World -- hosted by Vasant Dhar
Solon Barocas on Removing Bias From Machines
Machine Learning can improve decision making in a big way -- but it can also reproduce human biases and discrimination. Solon Barocas joins Vasant Dhar in episode 24 of Brave New World to discuss the challenges of solving this problem. Useful resources: 1. Solon Barocas at his website, Cornell, Google Scholar and Twitter. 2. Fairness and Machine Learning -- Solon Barocas, Moritz Hardt and Arvind Narayanan. 3. Danger Ahead: Risk Assessment and the Future of Bail Reform: John Logan Koepke and David G. Robinson. 4. Fairness and Utilization in Allocating Resources with Uncertain Demand -- Kate Donahue and Jon Kleinberg. 5. Profiles, Probabilities, and Stereotypes...
2021-10-28
57 min
Cookies: Tech Security & Privacy
How Consumer Tech Can Manipulate You (and Take Your Data): Arvind Narayanan, associate professor of computer science, Princeton University, Part One (premiere episode)
While we're using electronic gadgets, apps, platforms and websites, they are often using us as well, including tracking our personal data. The premiere episode of our new podcast features Arvind Narayanan, associate professor of computer science here at the Princeton University School of Engineering and Applied Science. He is a widely recognized expert in the area of information privacy and fairness in machine learning. This conversation was so good, we split it into two episodes. This is the first half of our conversation. In this half, he discusses “cross-device tracking,” i...
2020-09-15
00 min
Cookies: Tech Security & Privacy
Why Online Media Platforms Get You Hooked: Arvind Narayanan, associate professor of computer science, Princeton University, Part Two
This is the second half of our conversation with Arvind Narayanan, associate professor of computer science here at the Princeton University School of Engineering and Applied Science. He is a widely recognized expert in the area of information privacy and fairness in machine learning, with a huge Twitter following and a knack for explaining tech privacy matters in terms anyone can understand. In this half of our conversation, he talks about why he’s so active on Twitter, but not the Facebook platforms. He talks about his research into...
2020-09-15
00 min
Shift Happens
The Great AI Bluff
Many companies promise better decisions and fairer selection processes thanks to artificial intelligence. Experts fear that AI's potential is being overestimated, not least because there is so much confusion about what AI actually can and cannot do. Overstating its capabilities has become a business model, one that could even hinder the development of AI. The presentation by Arvind Narayanan that we refer to in the podcast can be found here: https://www.cs.princeton.edu/~arvindn/talks/MIT-STS-AI-snakeoil.pdf Our book tips from this episode: Katharina Zweig: "Ein Algorithmus kennt kein Taktgefühl" Gary Marcus & Ernest Davis: "Rebooting AI: Building Ar...
2019-12-13
29 min
Three Minute Book Review
Bitcoin & Cryptocurrency Technologies, by Arvind Narayanan et al
A readable textbook on the workings of Bitcoin and related technologies. The information is still relevant to Bitcoin (minus the Lightning Network upgrade), but it only lightly covers other crypto technologies that have risen since this book was published. Overall, I recommend it! Book: https://www.amazon.com/Bitcoin-Cryptocurrency-Technologies-Comprehensive-Introduction/dp/0691171696 Class: https://www.coursera.org/learn/cryptocurrency This video is also available on YouTube and BitChute.
2018-09-17
03 min
More Than Just Code podcast - iOS and Swift development, news and advice
Episode 182: Bob Hamilton's Ride
This week we look back at Saul Mora's Core Data trick at RWDevCon 2015. We follow up on Apple's 2018 Q1 earnings, the last of a super cycle, and parking self-driving slippers. We discuss how Strava's heat map exposes popular exercise locations. We also talk about Mycroft AI's privacy-first smart speaker. We discuss app rejections stemming from the use of Apple's emojis in screenshots. Picks: Activating an Audio Session, Tim Horton's Scroll Up To Win. Support More Than Just Code podcast - iOS and Swift development, news and advice. Links: Tim Mitra on Twitter: "Let’s se...
2018-02-10
1h 22
What's The Point
WTP Best Of: Internet Tracking
Jody interviews Arvind Narayanan about the latest in online tracking, and what you can do to shield yourself.
2017-03-23
32 min
What's The Point
.59 Your Browser's Fingerprint
A new survey of one million websites reveals the latest tricks being used to track your online behavior. Arvind Narayanan of Princeton University discusses his research.
2016-09-01
31 min
The Privacy Advisor Podcast
The Privacy Advisor Podcast: Arvind Narayanan
Arvind Narayanan discusses his privacy research at Princeton University, most recently on web tracking.
2016-09-01
29 min
Epicenter - Learn about Crypto, Blockchain, Ethereum, Bitcoin and Distributed Technologies
Gavin Andresen: On the Blocksize and Bitcoin’s Governance
As the debate about the blocksize continues to roar through the Bitcoin community, Gavin Andresen joins us to take a step back and ask the big questions: How should these decisions be made in the first place? What does the governance of Bitcoin look like now and what do we want it to look like in the future? In a challenging time for Bitcoin, it’s a critical discussion to have with the Chief Scientist of the Bitcoin Foundation and successor of Satoshi. We cover everything from the current blocksize debate to the nature of forks to ho...
2015-08-31
1h 18
Center for Internet and Society
Arvind Narayanan - Hearsay Culture Show #238 - KZSU-FM (Stanford)
I am pleased to post Show # 238, May 27, my interview with Prof. Arvind Narayanan of Princeton University on Bitcoin, cryptography, privacy and web transparency. Arvind does a range of information policy-related research and writing as a professor affiliated with Princeton's Center for Information Technology Policy (CITP). [Note: I am a Visiting Research Collaborator at CITP]. Through studying the operation of and security challenges surrounding the cryptocurrency Bitcoin, Arvind has been able to assess cryptography as a privacy-enhancing and dis-intermediating technology. To that end, we had a wide-ranging discussion, from the varied roles of cryptography to commercial surveillance and transparency. Because Arvind...
2015-05-27
00 min
Hearsay Culture Network
Arvind Narayanan – Hearsay Culture Show #238
I am pleased to post Show # 238, May 27, my interview with Prof. Arvind Narayanan of Princeton University on Bitcoin, cryptography, privacy and web transparency. Arvind does a range […]
2015-05-27
57 min
Hearsay Culture Radio
Arvind Narayanan – Hearsay Culture Show #238
I am pleased to post Show # 238, May 27, my interview with Prof. Arvind Narayanan of Princeton University on Bitcoin, cryptography, privacy and web transparency. Arvind does a range […]
2015-05-27
57 min