Evaluating robot policies is hard. Every lab has a different robot, so reproducible evaluations are difficult to run, and that makes it hard to know which policy-learning methods are likely to perform best in real-world scenarios. Taking a page from LLM evaluations like Chatbot Arena, RoboArena aims to address this problem by crowdsourcing evaluations across a network of different evaluators.
Watch Episode #34 of RoboPapers, hosted by Chris Paxton and Michael Cho, to learn more from authors Pranav Atreya and Karl Pertsch.
Abstract:
Comprehensive, unbiased, and comparable evaluation of modern generalist policies is uniquely challenging: existing approaches for robot benchmarking typically rely on heavy standardization, either by specifying fixed evaluation tasks and environments, or by hosting centralized "robot challenges", and do not readily scale to evaluating generalist policies across a broad range of tasks and environments. In this work, we propose RoboArena, a new approach for scalable evaluation of generalist robot policies in the real world. Instead of standardizing evaluations around fixed tasks, environments, or locations, we propose to crowd-source evaluations across a distributed network of evaluators. Importantly, evaluators can freely choose the tasks and environments they evaluate on, enabling easy scaling of diversity, but they are required to perform double-blind evaluations over pairs of policies. Then, by aggregating preference feedback from pairwise comparisons across diverse tasks and environments, we can derive a ranking of policies. We instantiate our approach across a network of evaluators at seven academic institutions using the DROID robot platform. Through more than 600 pairwise real-robot evaluation episodes across seven generalist policies, we demonstrate that our crowd-sourced approach can more accurately rank the performance of existing generalist policies than conventional, centralized evaluation approaches, while being more scalable, resilient, and trustworthy. We open our evaluation network to the community and hope that it can enable more accessible comparisons of generalist robot policies.
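To make the aggregation step concrete, here is a minimal sketch of how pairwise preference feedback can be turned into a policy ranking with a simple Bradley-Terry-style fit. This is illustrative only: the exact aggregation RoboArena uses may differ, and the policy names and comparison data below are made up for the example.

```python
# Minimal Bradley-Terry-style aggregation of pairwise preferences into a ranking.
# The comparison data and policy names below are illustrative, not from the paper.
import math
from collections import defaultdict

# Each record: (winner, loser) from one double-blind A/B evaluation episode.
comparisons = [
    ("policy_a", "policy_b"),
    ("policy_a", "policy_c"),
    ("policy_b", "policy_c"),
    ("policy_c", "policy_b"),
    ("policy_a", "policy_b"),
]

def bradley_terry(comparisons, iters=200, lr=0.1):
    """Fit log-strengths theta so that P(i beats j) = sigmoid(theta_i - theta_j)."""
    policies = sorted({p for pair in comparisons for p in pair})
    theta = {p: 0.0 for p in policies}
    for _ in range(iters):
        grad = defaultdict(float)
        for winner, loser in comparisons:
            # Probability the current model assigns to the observed outcome.
            p_win = 1.0 / (1.0 + math.exp(theta[loser] - theta[winner]))
            # Gradient of the log-likelihood with respect to each log-strength.
            grad[winner] += 1.0 - p_win
            grad[loser] -= 1.0 - p_win
        for p in policies:
            theta[p] += lr * grad[p]
        # Center the scores: only differences between policies are identifiable.
        mean = sum(theta.values()) / len(theta)
        theta = {p: v - mean for p, v in theta.items()}
    return theta

scores = bradley_terry(comparisons)
for rank, (policy, score) in enumerate(sorted(scores.items(), key=lambda kv: -kv[1]), 1):
    print(f"{rank}. {policy}: {score:+.3f}")
```

Because every comparison is double-blind and evaluators pick their own tasks, a preference-based fit like this lets results from very different environments contribute to a single ranking without requiring a shared success metric.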