Description

These sources collectively explore approaches to evaluating and improving large language models (LLMs). Several papers introduce benchmark datasets designed to test LLMs on complex reasoning tasks: the "BIG-Bench Hard (BBH)" suite, the competition-level "MATH" dataset for mathematical problem solving, the graduate-level, "Google-proof" science questions of "GPQA", and "MuSR" for multistep soft reasoning over natural-language narratives. A key technique discussed across these sources is Chain-of-Thought (CoT) prompting, which elicits step-by-step reasoning from the model before its final answer; on BBH, CoT prompting improves performance enough for the strongest models to surpass the average human-rater score on a number of tasks. Additionally, the "Instruction-Following Eval (IFEval)" contributes a reproducible benchmark of automatically verifiable instructions (for example, minimum word counts or required keywords), allowing objective assessment of how well an LLM follows explicit directives. "MMLU-Pro" extends this line of work with a larger, more challenging multi-task dataset spanning diverse disciplines, underscoring the need for robust evaluation metrics and harder test data to push the boundaries of AI reasoning.
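
To make the CoT idea concrete, the following is a minimal, self-contained Python sketch contrasting a direct-answer prompt with a chain-of-thought prompt. The word problems, the worked exemplar, and the build_* helper names are illustrative only and are not taken from any of the papers above; sending the prompts to an actual model is left out.

# Minimal sketch of Chain-of-Thought (CoT) prompting.
# The exemplar and questions are illustrative; in practice a CoT prompt
# includes a few worked examples whose answers are spelled out step by step.

EXEMPLAR_QUESTION = (
    "Q: A juggler has 16 balls. Half are golf balls, and half of the golf "
    "balls are blue. How many blue golf balls are there?"
)
EXEMPLAR_DIRECT = "A: The answer is 4."
EXEMPLAR_COT = (
    "A: Let's think step by step. Half of 16 balls are golf balls, so there "
    "are 8 golf balls. Half of the golf balls are blue, so there are 4 blue "
    "golf balls. The answer is 4."
)

QUESTION = (
    "Q: A library has 120 books. A quarter are checked out, and a third of "
    "the checked-out books are overdue. How many books are overdue?"
)

def build_direct_prompt(question: str) -> str:
    """Few-shot prompt whose exemplar gives only the final answer."""
    return f"{EXEMPLAR_QUESTION}\n{EXEMPLAR_DIRECT}\n\n{question}\nA:"

def build_cot_prompt(question: str) -> str:
    """Few-shot prompt whose exemplar shows intermediate reasoning,
    nudging the model to produce its own step-by-step rationale."""
    return (
        f"{EXEMPLAR_QUESTION}\n{EXEMPLAR_COT}\n\n"
        f"{question}\nA: Let's think step by step."
    )

if __name__ == "__main__":
    print("--- direct prompt ---")
    print(build_direct_prompt(QUESTION))
    print()
    print("--- chain-of-thought prompt ---")
    print(build_cot_prompt(QUESTION))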

Sources:

https://github.com/EleutherAI/lm-evaluation-harness

https://github.com/EleutherAI/lm-evaluation-harness/blob/main/lm_eval/tasks/leaderboard/README.md

https://arxiv.org/pdf/2103.03874 - Measuring Mathematical Problem Solving With the MATH Dataset

https://arxiv.org/pdf/2210.09261 - Challenging BIG-Bench Tasks and Whether Chain-of-Thought Can Solve Them

https://arxiv.org/pdf/2310.16049 - MuSR: Testing the Limits of Chain-of-Thought with Multistep Soft Reasoning

https://arxiv.org/pdf/2311.07911 - Instruction-Following Evaluation for Large Language Models

https://arxiv.org/pdf/2311.12022 - GPQA: A Graduate-Level Google-Proof Q&A Benchmark

https://arxiv.org/pdf/2406.01574 - MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding Benchmark
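
For readers who want to reproduce these evaluations, the lm-evaluation-harness repository linked above exposes a Python entry point, simple_evaluate. The sketch below assumes the harness is installed per its README, uses a placeholder Hugging Face model, and uses the leaderboard task names listed in the leaderboard README linked above; exact task identifiers can vary between harness versions.

# Sketch: running the benchmarks above with EleutherAI's lm-evaluation-harness.
# The model name is only a placeholder; swap in the model you want to evaluate.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",                                      # Hugging Face backend
    model_args="pretrained=EleutherAI/pythia-1.4b",  # placeholder model
    tasks=[
        "leaderboard_bbh",       # BIG-Bench Hard
        "leaderboard_gpqa",      # GPQA
        "leaderboard_ifeval",    # Instruction-Following Eval
        "leaderboard_musr",      # MuSR
        "leaderboard_mmlu_pro",  # MMLU-Pro
    ],
    batch_size=8,
)

# Per-task metrics (accuracy, exact match, etc.) live under results["results"].
for task, metrics in results["results"].items():
    print(task, metrics)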