Poster Papers Matching "explainable ai"
44 papers found
$\mathcal{X}^2$-DFD: A framework for e$\mathcal{X}$plainable and e$\mathcal{X}$tendable Deepfake Detection
Yize Chen, Zhiyuan Yan, Guangliang Cheng et al.
Advancing Interpretability of CLIP Representations with Concept Surrogate Model
Nhat Hoang-Xuan, Xiyuan Wei, Wanli Xing et al.
AI2TALE: An Innovative Information Theory-based Approach for Learning to Localize Phishing Attacks
Van Nguyen, Tingmin Wu, Xingliang Yuan et al.
AIGI-Holmes: Towards Explainable and Generalizable AI-Generated Image Detection via Multimodal Large Language Models
Ziyin Zhou, Yunpeng Luo, Yuanchen Wu et al.
A Unified, Resilient, and Explainable Adversarial Patch Detector
Vishesh Kumar, Akshay Agarwal
Contimask: Explaining Irregular Time Series via Perturbations in Continuous Time
Max Moebus, Björn Braun, Christian Holz
Data-centric Prediction Explanation via Kernelized Stein Discrepancy
Mahtab Sarvmaili, Hassan Sajjad, Ga Wu
Derivative-Free Diffusion Manifold-Constrained Gradient for Unified XAI
Won Jun Kim, Hyungjin Chung, Jaemin Kim et al.
Explainable Reinforcement Learning from Human Feedback to Improve Alignment
Shicheng Liu, Siyuan Xu, Wenjie Qiu et al.
Explainably Safe Reinforcement Learning
Sabine Rieder, Stefan Pranger, Debraj Chakraborty et al.
FakeShield: Explainable Image Forgery Detection and Localization via Multi-modal Large Language Models
Zhipei Xu, Xuanyu Zhang, Runyi Li et al.
F-Fidelity: A Robust Framework for Faithfulness Evaluation of Explainable AI
Xu Zheng, Farhad Shirani, Zhuomin Chen et al.
Fish-Vista: A Multi-Purpose Dataset for Understanding & Identification of Traits from Images
Kazi Sajeed Mehrab, M. Maruf, Arka Daw et al.
HEIE: MLLM-Based Hierarchical Explainable AIGC Image Implausibility Evaluator
Fan Yang, Ru Zhen, Jianing Wang et al.
Interpreting Language Reward Models via Contrastive Explanations
Junqi Jiang, Tom Bewley, Saumitra Mishra et al.
LeapFactual: Reliable Visual Counterfactual Explanation Using Conditional Flow Matching
Zhuo Cao, Xuan Zhao, Lena Krieger et al.
LOKI: A Comprehensive Synthetic Data Detection Benchmark using Large Multimodal Models
Junyan Ye, Baichuan Zhou, Zilong Huang et al.
Minimizing False-Positive Attributions in Explanations of Non-Linear Models
Anders Gjølbye, Stefan Haufe, Lars Kai Hansen
Mol-LLaMA: Towards General Understanding of Molecules in Large Molecular Language Model
Dongki Kim, Wonbin Lee, Sung Ju Hwang
On Logic-based Self-Explainable Graph Neural Networks
Alessio Ragno, Marc Plantevit, Céline Robardet
PathFinder: A Multi-Modal Multi-Agent System for Medical Diagnostic Decision-Making Applied to Histopathology
Fatemeh Ghezloo, Saygin Seyfioglu, Rustin Soraki et al.
RadZero: Similarity-Based Cross-Attention for Explainable Vision-Language Alignment in Chest X-ray with Zero-Shot Multi-Task Capability
Jonggwon Park, Byungmu Yoon, Soobum Kim et al.
Reconsidering Faithfulness in Regular, Self-Explainable and Domain Invariant GNNs
Steve Azzolin, Antonio Longa, Stefano Teso et al.
Regression-adjusted Monte Carlo Estimators for Shapley Values and Probabilistic Values
R. Teal Witter, Yurong Liu, Christopher Musco
Representational Difference Explanations
Neehar Kondapaneni, Oisin Mac Aodha, Pietro Perona
Scalable, Explainable and Provably Robust Anomaly Detection with One-Step Flow Matching
Zhong Li, Qi Huang, Yuxuan Zhu et al.
Seeing Through Deepfakes: A Human-Inspired Framework for Multi-Face Detection
Juan Hu, Shaojing Fan, Terence Sim
Smoothed Differentiation Efficiently Mitigates Shattered Gradients in Explanations
Adrian Hill, Neal McKee, Johannes Maeß et al.
Sound Logical Explanations for Mean Aggregation Graph Neural Networks
Matthew Morris, Ian Horrocks
Start Smart: Leveraging Gradients For Enhancing Mask-based XAI Methods
Buelent Uendes, Shujian Yu, Mark Hoogendoorn
Towards Synergistic Path-based Explanations for Knowledge Graph Completion: Exploration and Evaluation
Tengfei Ma, Xiang Song, Wen Tao et al.
VERA: Explainable Video Anomaly Detection via Verbalized Learning of Vision-Language Models
Muchao Ye, Weiyang Liu, Pan He
Attribution-based Explanations that Provide Recourse Cannot be Robust
Hidde Fokkema, Rianne de Heide, Tim van Erven
Counterfactual Metarules for Local and Global Recourse
Tom Bewley, Salim I. Amoukou, Saumitra Mishra et al.
EiG-Search: Generating Edge-Induced Subgraphs for GNN Explanation in Linear Time
Shengyao Lu, Bang Liu, Keith Mills et al.
Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks
Zhuomin Chen, Jiaxing Zhang, Jingchao Ni et al.
Good Teachers Explain: Explanation-Enhanced Knowledge Distillation
Amin Parchami, Moritz Böhle, Sukrut Rao et al.
Graph Neural Network Explanations are Fragile
Jiate Li, Meng Pang, Yun Dong et al.
Layer-Wise Relevance Propagation with Conservation Property for ResNet
Seitaro Otsuki, Tsumugi Iida, Félix Doublet et al.
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.
On Gradient-like Explanation under a Black-box Setting: When Black-box Explanations Become as Good as White-box
Yi Cai, Gerhard Wunder
Position: Do Not Explain Vision Models Without Context
Paulina Tomaszewska, Przemyslaw Biecek
Probabilistic Conceptual Explainers: Trustworthy Conceptual Explanations for Vision Foundation Models
Hengyi Wang, Shiwei Tan, Hao Wang
Token Transformation Matters: Towards Faithful Post-hoc Explanation for Vision Transformer
Junyi Wu, Bin Duan, Weitai Kang et al.