Description

This episode explores Recursive Language Models (RLMs), a groundbreaking inference strategy that enables large language models to process prompts up to two orders of magnitude longer than their standard context windows. We discuss how RLMs treat long inputs as part of an external environment rather than as prompt text, using a Python REPL to programmatically decompose the data and recursively call the model over specific snippets. Learn how this approach effectively overcomes "context rot" and significantly outperforms base models and existing scaffolds on complex, information-dense tasks.
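To make the core idea concrete, here is a minimal sketch, not the authors' implementation: in a real RLM, the root model itself writes REPL code to decide how to decompose the context, whereas this toy version hard-codes a fixed chunking strategy. The `llm` callable is an assumed stand-in for any model client the caller supplies.

```python
from typing import Callable


def recursive_query(
    llm: Callable[[str], str],  # hypothetical model client: prompt in, answer out
    question: str,
    context: str,
    chunk_size: int = 8_000,
) -> str:
    """Answer `question` over a `context` that may exceed the model's window.

    The long context never enters the root model's prompt directly; it is
    split programmatically and each piece is handled by a recursive call,
    loosely mimicking how an RLM's REPL decomposes its environment variable.
    """
    if len(context) <= chunk_size:
        # Base case: the snippet fits, so query the model over it directly.
        return llm(f"Context:\n{context}\n\nQuestion: {question}")

    # Recursive case: decompose the context, answer over each chunk,
    # then synthesize the partial answers with one final model call.
    chunks = [context[i : i + chunk_size] for i in range(0, len(context), chunk_size)]
    partials = [recursive_query(llm, question, chunk, chunk_size) for chunk in chunks]
    summary = "\n".join(f"- {p}" for p in partials)
    return llm(
        f"Partial answers from context chunks:\n{summary}\n\n"
        f"Combine these into one answer to: {question}"
    )
```

The key design point the episode highlights survives even in this simplification: only chunk-sized snippets and short partial answers ever reach the model, so the full input can grow far beyond the context window without degrading any single call.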