FACE: Faithful Automatic Concept Extraction

3 citations · #1580 of 5858 papers in NeurIPS 2025 · 4 top authors · 7 data points

Abstract

Interpreting deep neural networks through concept-based explanations offers a bridge between low-level features and high-level human-understandable semantics. However, existing automatic concept discovery methods often fail to align these extracted concepts with the model's true decision-making process, thereby compromising explanation faithfulness. In this work, we propose FACE (Faithful Automatic Concept Extraction), a novel framework that augments Non-negative Matrix Factorization (NMF) with a Kullback-Leibler (KL) divergence regularization term to ensure alignment between the model's original and concept-based predictions. Unlike prior methods that operate solely on encoder activations, FACE incorporates classifier supervision during concept learning, enforcing predictive consistency and enabling faithful explanations. We provide theoretical guarantees showing that minimizing the KL divergence bounds the deviation in predictive distributions, thereby promoting faithful local linearity in the learned concept space. Systematic evaluations on ImageNet, COCO, and CelebA datasets demonstrate that FACE outperforms existing methods across faithfulness and sparsity metrics.
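The abstract describes an NMF objective augmented with a KL-divergence term that aligns the classifier's predictions on the original activations with its predictions on the concept-based reconstruction. The following is a minimal sketch of that idea only, not the authors' implementation: the toy activations, the frozen linear `classifier`, the softplus parameterization of the factors, and the weight `lambda_kl` are all illustrative assumptions.

```python
# Hypothetical sketch: NMF with a KL-divergence faithfulness regularizer,
# in the spirit of the objective described in the abstract. All names,
# shapes, and hyperparameters below are assumptions for illustration.
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Toy setup: A holds non-negative encoder activations (n_samples x n_features);
# `classifier` stands in for the model's frozen classification head.
n_samples, n_features, n_classes, n_concepts = 256, 64, 10, 8
A = torch.rand(n_samples, n_features)
classifier = torch.nn.Linear(n_features, n_classes)
for p in classifier.parameters():
    p.requires_grad_(False)

# Non-negativity via softplus-parameterized factors: W (concept scores)
# and H (concept bases), so that A is approximated by W @ H with W, H >= 0.
W_raw = torch.randn(n_samples, n_concepts, requires_grad=True)
H_raw = torch.randn(n_concepts, n_features, requires_grad=True)
optimizer = torch.optim.Adam([W_raw, H_raw], lr=1e-2)

lambda_kl = 1.0  # assumed weight on the KL (faithfulness) term

with torch.no_grad():
    p_orig = F.softmax(classifier(A), dim=-1)  # model's original predictions

for step in range(500):
    W, H = F.softplus(W_raw), F.softplus(H_raw)
    A_hat = W @ H                                # concept-based reconstruction
    recon = F.mse_loss(A_hat, A)                 # standard NMF reconstruction term
    # KL(p_orig || p_hat): penalize deviation between predictions on the
    # original activations and on their concept-based reconstruction.
    log_p_hat = F.log_softmax(classifier(A_hat), dim=-1)
    kl = F.kl_div(log_p_hat, p_orig, reduction="batchmean")
    loss = recon + lambda_kl * kl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"reconstruction={recon.item():.4f}  kl={kl.item():.4f}")
```

Passing the classifier's outputs through the loss is what distinguishes this from plain NMF on encoder activations: the factorization is pulled toward reconstructions that leave the predictive distribution unchanged, which is the predictive-consistency property the abstract calls faithfulness.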

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 2 (+2)
Jan 27, 2026: 2
Feb 3, 2026: 3 (+1)
Feb 13, 2026: 3