When Meta launched Vibes, an endless feed of AI-generated videos, the response was visceral disgust. "Gang nobody wants this," went one widely echoed reaction.

Yet OpenAI's Sora hit number one on the App Store within forty-eight hours of release. What we say we want diverges sharply from what we actually consume, and that divergence reveals something troubling about where we may be headed.

Twenty-four centuries ago, Plato warned that consuming imitations corrupts our ability to recognize truth. His hierarchy placed reality at the top, physical objects as imperfect copies below, and artistic representations at the bottom ("thrice removed from truth").

AI content extends this descent in ways Plato couldn't have imagined. Machines learn from digital copies of photographs of objects, then train on their own outputs, creating copies of copies of copies. Each iteration moves further from anything resembling reality.

Cambridge and Oxford researchers recently gave Plato's warning mathematical backing. They identified "model collapse": when AI trains on AI-generated content, quality degrades irreversibly.

Stanford researchers found that GPT-4's coding ability dropped eighty-one percent in three months, just as AI-generated content began flooding training datasets. Rice University researchers dubbed the phenomenon "Model Autophagy Disorder," comparing it to digital mad cow disease.

The deeper problem is what consuming this collapsed content does to us. Neuroscience describes a mere-exposure effect: encountering something ten to twenty times is enough to make us prefer it.

Through perceptual narrowing, we literally lose the ability to perceive distinctions we don't regularly encounter. Research on human-AI loops found that when humans interact with biased AI, they internalize and amplify those biases, even when explicitly warned about the effect.

Not all AI use is equally harmful. Human-curated, AI-assisted work often surpasses purely human creation. But you won't encounter primarily curated content. You'll encounter infinite automated feeds optimized for engagement, not quality.

Plato said recognizing imitations was the only antidote, but recognition may come too late. The real danger is not ignorance but indifference: knowing something is synthetic and scrolling anyway.

Key Topics:

• Is AI Slop Bad for Me? (00:00)

• Imitations All the Way Down (03:52)

• AI-Generated Content: The Fourth Imitation (06:20)

• When AI Forgets the World (07:35)

• Habituation as Education (11:42)

• How the Brain Learns to Love the Mediocre (15:18)

• The Real Harm of AI Slop (18:49)

• Conclusion: Plato’s Warning and Looking Forward (22:52)

More info, transcripts, and references can be found at ethical.fm.