Paper "explainable ai" Papers
20 conference papers found
AudioGenX: Explainability on Text-to-Audio Generative Models
Hyunju Kang, Geonhee Han, Yoonjae Jeong et al.
Constructing Fair Latent Space for Intersection of Fairness and Explainability
Hyungjun Joo, Hyeonggeun Han, Sehwan Kim et al.
Explaining Decisions of Agents in Mixed-Motive Games
Maayan Orner, Oleg Maksimov, Akiva Kleinerman et al.
Exploiting Symmetries in MUS Computation
Ignace Bleukx, Hélène Verhaeghe, Bart Bogaerts et al.
Higher Order Structures for Graph Explanations
Akshit Sinha, Sreeram Vennam, Charu Sharma et al.
NOMATTERXAI: Generating “No Matter What” Alterfactual Examples for Explaining Black-Box Text Classification Models
Tuc Van Nguyen, James Michels, Hua Shen et al.
PrivateXR: Defending Privacy Attacks in Extended Reality Through Explainable AI-Guided Differential Privacy
Ripan Kumar Kundu, Istiak Ahmed, Khaza Anuarul Hoque
Towards Unifying Evaluation of Counterfactual Explanations: Leveraging Large Language Models for Human-Centric Assessments
Marharyta Domnich, Julius Välja, Rasmus Moorits Veski et al.
Understanding Emotional Body Expressions via Large Language Models
Haifeng Lu, Jiuyi Chen, Feng Liang et al.
Understanding Individual Agent Importance in Multi-Agent System via Counterfactual Reasoning
Jianming Chen, Yawen Wang, Junjie Wang et al.
Accelerating the Global Aggregation of Local Explanations
Alon Mor, Yonatan Belinkov, Benny Kimelfeld
Beyond TreeSHAP: Efficient Computation of Any-Order Shapley Interactions for Tree Ensembles
Maximilian Muschalik, Fabian Fumagalli, Barbara Hammer et al.
CGS-Mask: Making Time Series Predictions Intuitive for All
Feng Lu, Wei Li, Yifei Sun et al.
Enhance Sketch Recognition’s Explainability via Semantic Component-Level Parsing
Guangming Zhu, Siyuan Wang, Tianci Wu et al.
Faithful Model Explanations through Energy-Constrained Conformal Counterfactuals
Patrick Altmeyer, Mojtaba Farmanbar, Arie van Deursen et al.
Gaussian Process Neural Additive Models
Wei Zhang, Brian Barr, John Paisley
Keep the Faith: Faithful Explanations in Convolutional Neural Networks for Case-Based Reasoning
Tom Nuno Wolf, Fabian Bongratz, Anne-Marie Rickmann et al.
Learning Performance Maximizing Ensembles with Explainability Guarantees
Vincent Pisztora, Jia Li
Towards More Faithful Natural Language Explanation Using Multi-Level Contrastive Learning in VQA
Chengen Lai, Shengli Song, Shiqi Meng et al.
Using Stratified Sampling to Improve LIME Image Explanations
Muhammad Rashid, Elvio G. Amparore, Enrico Ferrari et al.