This October 15, 2025 paper, a collaboration between Meta, UT Austin, UCL, UC Berkeley, Harvard University, and Periodic Labs, presents a systematic study of scaling compute for reinforcement learning (RL) in large language models (LLMs), aiming to bring predictability to the RL training phase. The authors introduce a principled framework that models the relationship between compute (GPU-hours) and performance (pass rate) with a sigmoidal curve, whose fitted parameters predict a run's asymptotic performance ceiling ($A$) and compute efficiency ($B$) from its early trajectory. Through extensive ablations, the research identifies ScaleRL, a robust recipe that combines best practices in asynchronous training, the CISPO loss function, and numerical-precision fixes, and demonstrates its stable, predictable scaling up to 100,000 GPU-hours. Figures contrast ScaleRL's predictable scaling curves with those of prevalent RL methods, showing how factors such as batch size, generation length, and model size influence both compute efficiency and the final performance ceiling.
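The centerpiece of the framework is the saturating compute-performance fit. Below is a minimal sketch of how such a curve could be fit in practice, assuming the sigmoidal form $R(C) = R_0 + (A - R_0)/(1 + (C_{\mathrm{mid}}/C)^B)$ with asymptote $A$, efficiency exponent $B$, and half-gain midpoint $C_{\mathrm{mid}}$; the measurements, initial guesses, and bounds are hypothetical illustrations, not values from the paper.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid_scaling(C, A, B, C_mid, R0=0.0):
    """Saturating pass-rate curve over compute C (GPU-hours):
    R0 is the starting pass rate, A the asymptotic ceiling,
    B the compute-efficiency exponent, C_mid the half-gain point."""
    return R0 + (A - R0) / (1.0 + (C_mid / C) ** B)

# Hypothetical early-run measurements of (GPU-hours, pass rate).
gpu_hours = np.array([500.0, 1_000, 2_000, 4_000, 8_000, 16_000])
pass_rate = np.array([0.18, 0.27, 0.38, 0.47, 0.54, 0.58])

# Fit A, B, C_mid on the early segment, then extrapolate the run.
(A, B, C_mid), _ = curve_fit(
    sigmoid_scaling, gpu_hours, pass_rate,
    p0=[0.6, 1.0, 4_000.0],                      # rough initial guesses
    bounds=([0.0, 0.1, 100.0], [1.0, 5.0, 1e6]),
)
print(f"ceiling A={A:.3f}, efficiency B={B:.2f}, midpoint C_mid={C_mid:.0f}")
print(f"extrapolated pass rate at 100k GPU-hours: "
      f"{sigmoid_scaling(1e5, A, B, C_mid):.3f}")
```

Fitting on the early portion of a run and extrapolating is what lets the framework judge a recipe's ceiling and efficiency without spending the full compute budget. On the loss side, CISPO truncates the importance-sampling weight and stops its gradient, so clipped tokens still contribute a down-weighted REINFORCE-style gradient instead of being dropped entirely as in PPO-style clipping. A minimal sketch of that objective as commonly described, with an assumed truncation bound `eps_max` rather than the paper's exact hyperparameters:

```python
import torch

def cispo_loss(logp_new, logp_old, advantages, eps_max=2.0):
    """CISPO-style token loss: detach the truncated IS ratio and use it
    to weight a REINFORCE gradient through the new log-probabilities."""
    ratio = torch.exp(logp_new - logp_old)              # pi_theta / pi_behavior
    weight = torch.clamp(ratio, max=eps_max).detach()   # truncate + stop-gradient
    return -(weight * advantages * logp_new).mean()
```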
Source:
https://arxiv.org/pdf/2510.13786