GRU: Mitigating the Trade-off between Unlearning and Retention for LLMs

arXiv:2503.09117 · ICML 2025 · 10 citations

Abstract

Large language model (LLM) unlearning has demonstrated its essential role in removing privacy- and copyright-related responses, which is crucial for the legal and safe deployment of LLMs. However, the pursuit of complete unlearning often comes at a substantial cost, as it compromises the model's general functionality, leading to a notorious trade-off between unlearning and retention. This motivates the present paper to explore enhanced unlearning schemes that mitigate this trade-off. Specifically, we propose Gradient Rectified Unlearning (GRU), an improved framework that regulates the directions of gradient updates during the unlearning procedure so that their side effects on other, unrelated responses are minimized. GRU is simple and general to implement, and it demonstrates practical effectiveness across a variety of well-established unlearning benchmarks.
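
The abstract's core mechanism, rectifying the direction of the unlearning gradient so it does not harm retention, can be illustrated with a projection rule. The sketch below removes the component of the unlearning gradient that conflicts with a retention gradient; this is one plausible instantiation consistent with the abstract, not necessarily the paper's exact update, and the function name is illustrative.

```python
import torch

def rectified_gradient(g_unlearn: torch.Tensor, g_retain: torch.Tensor) -> torch.Tensor:
    """Rectify a flattened unlearning gradient against a flattened
    retention gradient.

    If the two gradients conflict (negative inner product), the
    component of the unlearning gradient pointing against retention
    is projected out, so the resulting update no longer increases
    the retention loss to first order. Illustrative sketch only; the
    paper's exact rectification rule may differ.
    """
    dot = torch.dot(g_unlearn, g_retain)
    if dot < 0:
        # Subtracting (dot / ||g_retain||^2) * g_retain makes the
        # rectified gradient orthogonal to the retention gradient.
        g_unlearn = g_unlearn - (dot / g_retain.dot(g_retain).clamp_min(1e-12)) * g_retain
    return g_unlearn

# Toy check: the conflicting component is removed.
g_u = torch.tensor([1.0, -2.0])   # unlearning gradient (flattened)
g_r = torch.tensor([0.0, 1.0])    # retention gradient (flattened)
g = rectified_gradient(g_u, g_r)  # -> tensor([1., 0.])
assert torch.dot(g, g_r) >= 0     # no longer opposes retention
```

In practice one would flatten per-parameter gradients of the unlearning and retention losses into single vectors, apply the rectification, and scatter the result back before the optimizer step.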

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 10