$\mu$KE: Matryoshka Unstructured Knowledge Editing of Large Language Models

Abstract

Large language models (LLMs) have emerged as powerful knowledge bases yet are limited by static training data, leading to issues such as hallucinations and safety risks. Editing a model's internal knowledge through the locate-and-edit paradigm has proven a cost-effective alternative to retraining, though current unstructured approaches, especially window-based autoregressive methods, often disrupt the causal dependency between early memory updates and later output tokens. In this work, we first theoretically analyze these limitations and then introduce Matryoshka Unstructured Knowledge Editing ($\mu$KE), a novel memory update mechanism that preserves such dependencies via a Matryoshka-style objective and adaptive loss coefficients. Empirical evaluations on two models across five benchmarks demonstrate that $\mu$KE improves edit efficacy by up to 12.33% over state-of-the-art methods and remains robust when applied to diversely formatted edits, underscoring its potential for effective unstructured knowledge editing in LLMs.
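The abstract does not spell out the objective itself, but the name suggests nested (Matryoshka-style) losses over prefixes of the edit target, each weighted by an adaptive coefficient so that later output tokens remain conditioned on the memory updates driven by earlier tokens. The sketch below is a rough illustration under that assumption only; the function name, the fixed window size, and the inverse-loss form of the coefficients are hypothetical and not taken from the paper.

```python
import torch
import torch.nn.functional as F

def matryoshka_style_loss(logits, target_ids, window_size=4, eps=1e-8):
    """Illustrative sketch (not the paper's method): a nested-prefix objective
    with adaptive per-prefix coefficients for an unstructured edit target.

    logits:     (seq_len, vocab_size) model outputs for the edit target
    target_ids: (seq_len,) token ids of the unstructured edit target
    """
    seq_len = target_ids.size(0)
    # Per-token negative log-likelihood over the whole target sequence.
    token_losses = F.cross_entropy(logits, target_ids, reduction="none")

    total, norm = 0.0, 0.0
    # Nested prefixes: each window end defines a larger "doll" containing the earlier ones.
    for end in range(window_size, seq_len + window_size, window_size):
        end = min(end, seq_len)
        prefix_loss = token_losses[:end].mean()
        # Adaptive coefficient (assumed form): down-weight prefixes that already fit well.
        coeff = 1.0 / (prefix_loss.detach() + eps)
        total = total + coeff * prefix_loss
        norm = norm + coeff
    return total / norm
```

In such a scheme, every gradient step on a later window also revisits the earlier windows, which is one way to keep later output tokens causally consistent with earlier memory updates rather than optimizing each window in isolation.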
