
We tend to think words get their meaning from the world. You see a dog, you learn the word “dog,” and now the word points to the thing.

But large language models don’t see dogs. Or anything at all. They’ve learned only from text — just patterns of words predicting other words.

This week, I wrote about what that means for AI, language, and us. It’s a deep dive into the grounding problem — a philosophical puzzle that’s suddenly become very practical.

Some researchers think models like ChatGPT are edging closer to real understanding. Others say they’re just spinning in circles — stuck in a world of words about words.

So here’s the puzzle: Can a model talk about reality without ever touching it?
