This August 2025 paper introduces UQ, an evaluation framework that challenges large language models (LLMs) with difficult, genuinely unsolved questions sourced from platforms such as Stack Exchange, where no definitive ground-truth answers yet exist. The framework has three components:

- UQ-Dataset: a collection of 500 hand-filtered, difficult, unsolved questions.
- UQ-Validators: a set of LLM-based validation strategies that assess candidate solutions, exploiting the observation that models are often better at verifying answers than at generating them.
- UQ-Platform: a platform that facilitates community engagement and human verification of candidate answers.

The paper highlights this generator-validator gap, showing that validation performance improves as model capability increases, and argues that UQ can accelerate research in domains that lack clear ground-truth verification. Ultimately, UQ offers a new paradigm for evaluating advanced AI: realistic, open-ended problems that push the boundaries of current model capabilities.
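To make the validator idea concrete, here is a minimal sketch of what an LLM-based validation pipeline in the spirit of UQ-Validators might look like. All names here (query_llm, VALIDATOR_PROMPT, the verdict format, the unanimous-acceptance rule) are illustrative assumptions, not the paper's actual API or strategy set.

```python
# Sketch of an LLM-as-validator pipeline: several models verify a
# candidate answer rather than regenerate it, and an answer is only
# surfaced if every validator accepts it. Illustrative only.

from dataclasses import dataclass


@dataclass
class Verdict:
    model: str
    accepted: bool
    rationale: str


def query_llm(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion call; swap in your own client.

    Stubbed with a canned reply so the sketch runs as-is.
    """
    return "VERDICT: reject\nRATIONALE: the candidate leaves step 3 unjustified."


VALIDATOR_PROMPT = (
    "You are verifying a candidate answer to an unsolved question.\n"
    "Question:\n{question}\n\n"
    "Candidate answer:\n{answer}\n\n"
    "Reply with 'VERDICT: accept' or 'VERDICT: reject' on the first line,\n"
    "then 'RATIONALE:' followed by your reasoning."
)


def validate(question: str, answer: str, validators: list[str]) -> list[Verdict]:
    """Ask each validator model to verify (not regenerate) the answer."""
    verdicts = []
    for model in validators:
        reply = query_llm(model, VALIDATOR_PROMPT.format(question=question, answer=answer))
        lines = reply.splitlines()
        accepted = lines[0].strip().lower().endswith("accept")
        rationale = "\n".join(lines[1:]).removeprefix("RATIONALE:").strip()
        verdicts.append(Verdict(model, accepted, rationale))
    return verdicts


def accept_if_unanimous(verdicts: list[Verdict]) -> bool:
    """Conservative aggregation: keep an answer only if all validators accept."""
    return all(v.accepted for v in verdicts)
```

The conservative unanimous-acceptance rule reflects the setting's constraint: with no ground truth available, false acceptances are costly, so it is plausible to demand agreement across validators before escalating an answer to human review.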
Source: https://arxiv.org/pdf/2508.17580