Description

Large Language Models: Building Blocks & Challenges
Hosted by Nathan Rigoni

In this episode we dive into the heart of today’s AI—large language models (LLMs). What makes these gigantic text‑predictors tick, and why do they sometimes hallucinate or run into bias? We’ll explore how LLMs are trained, what “next‑token prediction” really means, and the tricks (chain‑of‑thought prompting, reinforcement learning) that turn a raw predictor into a problem‑solving assistant. Can a model that has never seen a question truly reason to an answer, or is it just clever memorization?
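As a quick primer on what "next‑token prediction" means in practice, here is a minimal, self‑contained Python sketch (illustrative only, not code discussed in the episode): a made‑up bigram probability table stands in for a trained model, and generation greedily appends the most probable continuation one token at a time. Real LLMs score candidates with a transformer over the entire context window, but the generation loop follows the same idea.

```python
# Toy illustration of greedy next-token prediction.
# The probability table below is invented for illustration only.

# Hypothetical table: P(next_token | previous_token)
BIGRAM_PROBS = {
    "large": {"language": 0.7, "hadron": 0.2, "dog": 0.1},
    "language": {"models": 0.8, "arts": 0.2},
    "models": {"predict": 0.6, "sleep": 0.4},
    "predict": {"tokens": 0.9, "weather": 0.1},
}

def predict_next(token: str) -> str:
    """Return the most probable next token given the previous one."""
    candidates = BIGRAM_PROBS.get(token, {})
    if not candidates:
        return "<eos>"  # no known continuation: treat as end of sequence
    return max(candidates, key=candidates.get)

def generate(prompt: str, max_new_tokens: int = 5) -> str:
    """Greedily extend the prompt one token at a time."""
    tokens = prompt.split()
    for _ in range(max_new_tokens):
        nxt = predict_next(tokens[-1])
        if nxt == "<eos>":
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("large"))  # -> "large language models predict tokens"
```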

What you will learn
- How LLMs are trained and what "next‑token prediction" really means
- How tokenizers and the transformer architecture shape what a model can represent
- How chain‑of‑thought prompting, the ReAct framework, and reinforcement learning turn a raw predictor into a problem‑solving assistant
- Why context windows matter, and where hallucination and bias come from

Resources mentioned

Why this episode matters
Understanding LLM architecture demystifies why these models can generate coherent prose, write code, or answer complex queries—yet also why they can hallucinate, misinterpret spatial concepts, or inherit cultural bias. Grasping these strengths and limits is essential for anyone building AI products, evaluating model outputs, or simply wanting to use LLMs responsibly.

Subscribe for more AI deep dives, visit www.phronesis-analytics.com, or email nathan.rigoni@phronesis-analytics.com.

Keywords: large language models, next‑token prediction, tokenizer, transformer, chain of thought, ReAct framework, reinforcement learning, context window, AI hallucination, model bias.