Position: Explainable AI Cannot Advance Without Better User Studies

Abstract

In this position paper, we argue that user studies are key to understanding the value of explainable AI methods, because the end goal of explainable AI is to satisfy societal desiderata. We also argue that the current state of user studies is detrimental to the advancement of the field. We support this argument with a review of general and explainable AI-specific challenges, as well as an analysis of 607 explainable AI papers featuring user studies. We demonstrate that most user studies lack reproducibility, discussion of limitations, comparison with a baseline, or placebo explanations, and that they exhibit low fidelity to real-world users and application contexts. This, combined with an overreliance on functional evaluation, results in a lack of understanding of the value of explainable AI methods, which hinders the progress of the field. To address this issue, we call for higher methodological standards for user studies, greater appreciation of high-quality user studies in the AI community, and reduced reliance on functional evaluation.
