Understanding the Uncertainty of LLM Explanations: A Perspective Based on Reasoning Topology


Abstract

Understanding the uncertainty in large language model (LLM) explanations is important for evaluating their faithfulness and reasoning consistency, and thus provides insight into the reliability of LLM outputs. In this work, we propose a novel framework that quantifies uncertainty in LLM explanations through a formal reasoning topology perspective. By designing a structural elicitation strategy, we decompose an explanation into its knowledge and reasoning dimensions, which allows us not only to quantify reasoning uncertainty but also to assess knowledge redundancy and provide interpretable insights into the model's reasoning structure. Our method offers a systematic way to interpret the LLM reasoning process, analyze its limitations, and provide guidance for enhancing robustness and faithfulness. This work pioneers the use of graph-structured uncertainty measurement in LLM explanations, offering a new perspective on evaluating and improving reasoning capabilities.
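
The abstract does not spell out the measures, but the core idea of graph-structured uncertainty can be illustrated with a minimal sketch: assume each sampled explanation is elicited as a set of (premise → conclusion) step pairs plus a list of knowledge statements, then measure how much the reasoning structure varies across samples and how redundant the elicited knowledge is. The function names and the entropy/redundancy formulas below are illustrative assumptions, not the paper's actual definitions.

```python
import math
from collections import Counter

def topology_key(edges):
    """Canonical key for one reasoning graph, given as (premise, conclusion) step pairs."""
    return frozenset(edges)

def topology_entropy(sampled_graphs):
    """Shannon entropy (bits) over distinct reasoning topologies observed across
    several sampled explanations for the same question; 0 means fully consistent."""
    counts = Counter(topology_key(g) for g in sampled_graphs)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def knowledge_redundancy(knowledge_items):
    """Fraction of elicited knowledge statements that are repeats (0 = no redundancy)."""
    if not knowledge_items:
        return 0.0
    return 1.0 - len(set(knowledge_items)) / len(knowledge_items)

# Toy usage: three sampled explanations, two of which share the same reasoning structure.
samples = [
    [("water boils at 100C", "the kettle reaches 100C"), ("the kettle reaches 100C", "steam forms")],
    [("water boils at 100C", "the kettle reaches 100C"), ("the kettle reaches 100C", "steam forms")],
    [("water boils at 100C", "steam forms")],
]
print(topology_entropy(samples))  # ~0.918 bits: the reasoning structure varies across samples
print(knowledge_redundancy(["water boils at 100C", "water boils at 100C", "steam is a gas"]))  # ~0.333
```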
