Establishing Best Practices in Building Rigorous Agentic Benchmarks

arXiv:2507.02825 · 15 citations · #426 of 5,858 papers in NeurIPS 2025

Abstract

Benchmarks are essential for quantitatively tracking progress in AI. As AI agents become increasingly capable, researchers and practitioners have introduced agentic benchmarks to evaluate agents on complex, real-world tasks. These benchmarks typically measure agent capabilities by evaluating task outcomes via specific reward designs. However, we show that many agentic benchmarks have issues in task setup or reward design. For example, SWE-bench Verified uses insufficient test cases, while TAU-bench counts empty responses as successful. Such issues can lead to under- or overestimation of agents' performance by up to 100% in relative terms. To make agentic evaluation rigorous, we introduce the Agentic Benchmark Checklist (ABC), a set of guidelines that we synthesized from our benchmark-building experience, a survey of best practices, and previously reported issues. When applied to CVE-Bench, a benchmark with a particularly complex evaluation design, ABC reduces the performance overestimation by 33%.
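
To make the abstract's "up to 100% in relative terms" concrete, the sketch below shows how a relative estimation error is usually computed. The success rates are purely hypothetical for illustration and are not figures reported in the paper.

```python
def relative_error(measured_score: float, true_score: float) -> float:
    """Relative over-/underestimation of a measured benchmark score
    with respect to the agent's true task-success rate."""
    return (measured_score - true_score) / true_score

# Hypothetical example: a lenient reward check reports a 40% success
# rate, while careful manual grading finds the agent truly solved
# only 20% of tasks.
print(relative_error(0.40, 0.20))  # 1.0, i.e. a 100% relative overestimation
```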

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 12 (+12)
Jan 27, 2026: 12
Feb 3, 2026: 13 (+1)
Feb 13, 2026: 15 (+2)