Description

In this article, SE Gyges argues that the widely cited “stochastic parrots” critique of large language models is not only outdated but actively harmful to serious discussion of AI. The piece examines how the argument misunderstands modern AI systems, ignores advances such as multimodal training and reinforcement learning, and rests on a narrow definition of “meaning.” By walking through both the empirical evidence against the original claim and the conceptual flaws in its construction, Gyges contends that dismissing LLMs as mere parrots prevents society from grappling with the real ethical and political challenges posed by systems that demonstrably do real work.

* 00:00 - Introduction

* 02:31 - Even If True, The Argument Is Irrelevant

* 03:32 - The Argument Doesn’t Apply to Any Major Model Since 2023

* 06:45 - The Argument Was Already Obsolete When Published

* 08:05 - The Argument Is Empirically False

* 08:19 - The Octopus Test

* 12:05 - The Platonic Representation Hypothesis

* 13:24 - Form Carries Meaning

* 15:48 - The Argument Is Badly Constructed

* 16:07 - Parrots Are Amazing, Actually

* 16:56 - The Definition of Meaning Is Circular

* 19:28 - Conclusion

https://open.substack.com/pub/verysane/p/polly-wants-a-better-argument?utm_campaign=post-expanded-share&utm_medium=web

This is a public episode. If you'd like to discuss this with other subscribers or get access to bonus episodes, visit askwhocastsai.substack.com/subscribe