Description

Join us as we dive into a groundbreaking study that systematically investigates the strengths and fundamental limitations of Large Reasoning Models (LRMs), the cutting-edge AI systems that generate explicit "thinking" traces through mechanisms like Chain-of-Thought with self-reflection.

Moving beyond traditional, often contaminated, mathematical and coding benchmarks, this research uses controllable puzzle environments like the Tower of Hanoi, Checker Jumping, River Crossing, and Blocks World to precisely manipulate problem complexity and offer unprecedented insights into how LRMs "think".
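To make "controllable complexity" concrete, here is a minimal sketch in Python (my own illustration, not the authors' code) of one such environment: Tower of Hanoi, where difficulty is dialed up simply by adding disks, the optimal solution length grows as 2^N - 1, and a simple simulator can replay and verify a model's proposed move sequence step by step.

```python
"""Minimal sketch of a controllable puzzle environment (Tower of Hanoi).
Complexity is set by the number of disks N; a verifier replays a proposed
move sequence and checks every intermediate state. Illustrative only."""


def hanoi_solution(n, src=0, aux=1, dst=2):
    """Return the optimal move list for n disks as (from_peg, to_peg) pairs."""
    if n == 0:
        return []
    return (hanoi_solution(n - 1, src, dst, aux)
            + [(src, dst)]
            + hanoi_solution(n - 1, aux, src, dst))


def is_valid_solution(n, moves):
    """Replay a proposed move sequence and verify every intermediate step."""
    pegs = [list(range(n, 0, -1)), [], []]   # largest disk n at the bottom of peg 0
    for src, dst in moves:
        if not pegs[src]:
            return False                      # illegal: moving from an empty peg
        disk = pegs[src][-1]
        if pegs[dst] and pegs[dst][-1] < disk:
            return False                      # illegal: larger disk onto a smaller one
        pegs[dst].append(pegs[src].pop())
    return pegs[2] == list(range(n, 0, -1))   # solved: all disks on the target peg


if __name__ == "__main__":
    for n in range(1, 8):                     # scale complexity one disk at a time
        moves = hanoi_solution(n)
        print(n, "disks:", len(moves), "moves, valid =", is_valid_solution(n, moves))
```

The same pattern, a single difficulty parameter plus a simulator that validates every intermediate move, is what lets the study measure precisely where reasoning breaks down rather than relying on potentially contaminated benchmark answers.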

You'll discover surprising findings, including:

Three distinct performance regimes: standard LLMs surprisingly outperform LRMs on low-complexity tasks, LRMs show a clear advantage at medium complexity, and both collapse completely beyond a critical complexity threshold.

A counter-intuitive scaling limit: as problems approach that critical threshold, LRMs actually reduce their reasoning effort despite having ample token budget remaining. This suggests a fundamental inference-time scaling limitation in their reasoning capabilities relative to problem complexity.

This episode challenges prevailing assumptions about LRM capabilities and raises crucial questions about their true reasoning potential, paving the way for future investigations into more robust AI reasoning.

Disclaimer: This podcast is generated by Roger Basler de Roca (contact) using AI. The voices are artificially generated and the discussion is based on public research data. I do not claim any ownership of the presented material, as it is for educational purposes only.

https://rogerbasler.ch/en/contact/