Description

What happens when an AI system claims to maximize human welfare — but systematically harms specific individuals?

In this episode, Aakarsh Sharma sits down with Peter Singer, the Princeton philosopher, author of Animal Liberation, and a founding influence on the Effective Altruism movement, to examine whether AI is truly doing utilitarian ethics or just borrowing its language.

They dig into COMPAS, Optum's health-risk algorithm, and Allegheny County's family screening tool: three real AI systems that fail people at the individual level. Peter explains why training data built on historical bias can't produce fair outcomes, and draws a sharp line between what utilitarianism actually demands and what these systems deliver.

Plus: Peter reveals that he plans to interview Claude, Anthropic's AI, on his own podcast to ask whether it is conscious. And he doesn't rule out artificial consciousness within the next 10 to 20 years.

Topics covered:
- AI bias and the training data problem
- The Rawlsian critique of utilitarian AI
- The drowning child problem and proximity bias
- PeterSinger.ai and its limitations
- Effective Altruism vs. immediate AI harms
- AI accountability and the reasoning chain

Peter Singer is the Ira W. DeCamp Chair Emeritus in Bioethics at Princeton University, named one of TIME's 100 Most Influential People, and called "the most influential living philosopher" by The New Yorker.

Subscribe to Human Layer AI for conversations at the intersection of AI, ethics, and human judgment.