"explainable ai" Papers

68 papers found • Page 2 of 2

EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time

Shengyao Lu, Bang Liu, Keith Mills et al.

ICML 2024 • arXiv:2405.01762 • 7 citations

Enhance Sketch Recognition’s Explainability via Semantic Component-Level Parsing

Guangming Zhu, Siyuan Wang, Tianci Wu et al.

AAAI 2024 • arXiv:2312.07875 • 2 citations

Faithful and Efficient Explanations for Neural Networks via Neural Tangent Kernel Surrogate Models

Andrew Engel, Zhichao Wang, Natalie Frank et al.

ICLR 2024 (spotlight) • arXiv:2305.14585 • 7 citations

Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals

Patrick Altmeyer, Mojtaba Farmanbar, Arie Van Deursen et al.

AAAI 2024 • arXiv:2312.10648 • 7 citations

Gaussian Process Neural Additive Models

Wei Zhang, Brian Barr, John Paisley

AAAI 2024 • arXiv:2402.12518 • 12 citations

Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks

Zhuomin Chen, Jiaxing Zhang, Jingchao Ni et al.

ICML 2024 • arXiv:2402.02036 • 7 citations

Good Teachers Explain: Explanation-Enhanced Knowledge Distillation

Amin Parchami, Moritz Böhle, Sukrut Rao et al.

ECCV 2024 • arXiv:2402.03119 • 19 citations

Graph Neural Network Explanations are Fragile

Jiate Li, Meng Pang, Yun Dong et al.

ICML 2024 • arXiv:2406.03193 • 18 citations

Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning

Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann et al.

AAAI 2024 • arXiv:2312.09783 • 8 citations

Layer-Wise Relevance Propagation with Conservation Property for ResNet

Seitaro Otsuki, Tsumugi Iida, Félix Doublet et al.

ECCV 2024 • arXiv:2407.09115 • 10 citations

Learning Performance Maximizing Ensembles with Explainability Guarantees

Vincent Pisztora, Jia Li

AAAI 2024 • arXiv:2312.12715

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution

Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.

ICML 2024 • arXiv:2405.09800 • 9 citations

On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box

Yi Cai, Gerhard Wunder

ICML 2024 • arXiv:2308.09381 • 3 citations

Position: Do Not Explain Vision Models Without Context

Paulina Tomaszewska, Przemyslaw Biecek

ICML 2024 • arXiv:2404.18316 • 1 citation

Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models

Hengyi Wang, Shiwei Tan, Hao Wang

ICML 2024 • arXiv:2406.12649 • 9 citations

Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer

Junyi Wu, Bin Duan, Weitai Kang et al.

CVPR 2024 • arXiv:2403.14552 • 16 citations

Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA

Chengen Lai, Shengli Song, Shiqi Meng et al.

AAAI 2024 • arXiv:2312.13594 • 10 citations

Using Stratified Sampling to Improve LIME Image Explanations

Muhammad Rashid, Elvio G. Amparore, Enrico Ferrari et al.

AAAI 2024 • arXiv:2403.17742 • 7 citations