Description

This transcript explores skepticism about rapid AI progress, arguing that current scaling methods lack the human-like ability to learn on the job. While labs invest heavily in reinforcement learning to pre-bake specific skills into models, the author suggests this approach reflects a failure to generalize. True AGI requires continual learning, which would let agents build expertise through experience and situational judgment rather than rigid training cycles. The text concludes that while transformative economic impact is likely, it will be delayed until models can master new tasks on their own, without constant human intervention. Despite impressive benchmarks, the absence of trillions of dollars in revenue indicates that AI has not yet reached the level of a human knowledge worker.