Description

This excerpt is from a research paper, dated September 26, 2025, that introduces a variational reasoning framework designed to enhance the reasoning capabilities of large language models (LLMs). The framework treats thinking traces as latent variables and optimizes them with variational inference, building on the Evidence Lower Bound (ELBO) and extending it to a tighter, multi-trace IWAE-style bound. Crucially, the paper proposes a forward-KL objective to stabilize training of the variational posterior, which samples high-quality thinking traces. The paper also interprets existing methods such as Rejection Sampling Finetuning (RFT) and binary-reward Reinforcement Learning (RL) as local forward-KL objectives, exposing a previously unrecognized bias toward easier questions in those approaches. Empirical validation on the Qwen 2.5 and Qwen 3 model families across diverse benchmarks shows that this principled probabilistic perspective yields consistent performance improvements and greater training stability compared to strong baselines.
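As a brief illustrative sketch (using standard latent-variable notation rather than the paper's exact symbols), let x be a question, y an answer, and z a thinking trace, with generative model p_\theta and variational posterior q_\phi. The ELBO over the latent trace then takes the familiar form

\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{q_\phi(z \mid x, y)}\!\left[\log p_\theta(y \mid x, z)\right] \;-\; \mathrm{KL}\!\left(q_\phi(z \mid x, y)\,\|\,p_\theta(z \mid x)\right),

and a tighter, multi-trace IWAE-style bound with K sampled traces would read

\log p_\theta(y \mid x) \;\ge\; \mathbb{E}_{z_1,\dots,z_K \sim q_\phi(z \mid x, y)}\!\left[\log \frac{1}{K}\sum_{k=1}^{K} \frac{p_\theta(z_k, y \mid x)}{q_\phi(z_k \mid x, y)}\right].

The forward-KL objective for the posterior would, in this notation, correspond to minimizing \mathrm{KL}\!\left(p_\theta(z \mid x, y)\,\|\,q_\phi(z \mid x, y)\right) over \phi, a mode-covering direction that is typically more stable to estimate from samples than the reverse KL; the paper's precise parameterization and estimator may differ from this sketch.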

Source:

https://arxiv.org/pdf/2509.22637