Argumentative Large Language Models for Explainable and Contestable Claim Verification

Citations: 21
Rank: #168 of 3,028 papers in AAAI 2025
Top Authors: 6
Data Points: 4

Abstract

The profusion of knowledge encoded in large language models (LLMs) and their ability to apply this knowledge zero-shot in a range of settings make them promising candidates for use in decision-making. However, they are currently limited by their inability to provide outputs which can be faithfully explained and effectively contested to correct mistakes. In this paper, we attempt to reconcile these strengths and weaknesses by introducing argumentative LLMs (ArgLLMs), a method for augmenting LLMs with argumentative reasoning. Concretely, ArgLLMs construct argumentation frameworks, which then serve as the basis for formal reasoning in support of decision-making. The interpretable nature of these argumentation frameworks and formal reasoning means that any decision made by ArgLLMs may be explained and contested. We evaluate ArgLLMs' performance experimentally in comparison with state-of-the-art techniques, in the context of the decision-making task of claim verification. We also define novel properties to characterise contestability and assess ArgLLMs formally in terms of these properties.
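To make the abstract's pipeline concrete, the sketch below shows one way an argumentation framework of the kind described could be scored: a tree-structured quantitative bipolar argumentation framework whose final strength for the claim determines the verification decision. The use of a DF-QuAD-style gradual semantics, the probabilistic-sum aggregation, and the idea of taking base scores from LLM confidences are illustrative assumptions for this sketch, not a restatement of the paper's exact setup.

```python
# Minimal sketch, assuming a tree-structured quantitative bipolar argumentation
# framework (QBAF) scored with a DF-QuAD-style gradual semantics. Argument texts,
# base scores, and the choice of semantics are illustrative assumptions.
from dataclasses import dataclass, field
from typing import List


@dataclass
class Argument:
    text: str                  # natural-language content (e.g. generated by an LLM)
    base_score: float          # intrinsic strength in [0, 1] (e.g. an LLM confidence)
    attackers: List["Argument"] = field(default_factory=list)
    supporters: List["Argument"] = field(default_factory=list)


def aggregate(strengths: List[float]) -> float:
    """Probabilistic-sum aggregation of child strengths; 0 if there are none."""
    acc = 0.0
    for s in strengths:
        acc = acc + s - acc * s
    return acc


def strength(arg: Argument) -> float:
    """DF-QuAD-style influence: attackers pull the base score towards 0,
    supporters pull it towards 1, in proportion to their aggregated strength."""
    va = aggregate([strength(a) for a in arg.attackers])
    vs = aggregate([strength(s) for s in arg.supporters])
    if va >= vs:
        return arg.base_score * (1.0 - (va - vs))
    return arg.base_score + (1.0 - arg.base_score) * (vs - va)


# Toy example: a claim with one supporting and one attacking argument.
claim = Argument("The claim under verification", base_score=0.5)
claim.supporters.append(Argument("Evidence in favour", base_score=0.8))
claim.attackers.append(Argument("Counter-evidence", base_score=0.3))
print(f"final strength: {strength(claim):.3f}")  # 0.750 here: leans towards accepting
```

Because every argument, its base score, and its contribution to the final strength are explicit, a decision reached this way can be explained by tracing the tree and contested by adding or revising individual arguments, which is the behaviour the abstract attributes to ArgLLMs.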

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 21 (+21)