Is Factuality Enhancement a Free Lunch For LLMs? Better Factuality Can Lead to Worse Context-Faithfulness

arXiv:2404.00216
ICLR 2025 (ranked #1073 of 3827 papers)
16 citations

Abstract

As the modern tools of choice for text understanding and generation, large language models (LLMs) are expected to accurately output answers by leveraging the input context. This requires LLMs to possess both context-faithfulness and factual accuracy. While extensive efforts aim to reduce hallucinations through factuality enhancement methods, these methods also risk hindering context-faithfulness: factuality enhancement can lead LLMs to become overly confident in their parametric knowledge, causing them to overlook the relevant input context. In this work, we argue that current factuality enhancement methods can significantly undermine the context-faithfulness of LLMs. We first revisit the current factuality enhancement methods and evaluate their effectiveness in enhancing factual accuracy. Next, we evaluate their performance on knowledge editing tasks to assess the potential impact on context-faithfulness. The experimental results reveal that while these methods may yield inconsistent improvements in factual accuracy, they also cause a more severe decline in context-faithfulness, with the largest decrease reaching a striking 69.7%. To explain these declines, we analyze the hidden states and logit distributions for the tokens representing new knowledge and parametric knowledge, respectively, highlighting the limitations of current approaches. Our findings highlight the complex trade-offs inherent in enhancing LLMs. We therefore recommend that future research on factuality enhancement for LLMs also strive to minimize the sacrifice of context-faithfulness.
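The diagnostic mentioned in the abstract, comparing the logit mass a model assigns to the token carrying context-provided "new knowledge" versus the token carrying its parametric answer, can be illustrated with a small probe. The sketch below is not the paper's code; the model name, the example prompt, and the single-token comparison are illustrative assumptions.

```python
# Illustrative sketch (not the paper's implementation): compare the next-token
# logits a causal LM assigns to a context-supported answer token vs. the token
# for its parametric (pre-trained) answer. Model name and prompts are assumed.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "gpt2"  # placeholder; the paper evaluates larger LLMs
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

# The context asserts "new knowledge" that contradicts parametric memory.
context = "New fact: The Eiffel Tower is located in Rome."
question = "Q: In which city is the Eiffel Tower located? A:"
prompt = f"{context}\n{question}"

new_answer = " Rome"     # answer supported by the input context
param_answer = " Paris"  # answer stored in parametric knowledge

def first_token_logit(prompt: str, answer: str) -> float:
    """Logit assigned to the first token of `answer` at the next position."""
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    answer_id = tokenizer.encode(answer, add_special_tokens=False)[0]
    return logits[answer_id].item()

print(f"context answer logit:    {first_token_logit(prompt, new_answer):.2f}")
print(f"parametric answer logit: {first_token_logit(prompt, param_answer):.2f}")
# If a factuality-enhancement method pushes the parametric logit above the
# context-supported one, the model has become less context-faithful.
```

Running such a probe on the same prompts before and after applying a factuality-enhancement method would surface the trade-off the abstract describes.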

Citation History

Jan 25, 2026: 0
Jan 26, 2026: 0
Jan 28, 2026: 0
Feb 13, 2026: 16