"knowledge retention" Papers
12 papers found
Ask and Remember: A Questions-Only Replay Strategy for Continual Visual Question Answering
Imad Eddine MAROUF, Enzo Tartaglione, Stéphane Lathuilière et al.
ICCV 2025 · arXiv:2502.04469 · 1 citation
Catastrophic Failure of LLM Unlearning via Quantization
Zhiwei Zhang, Fali Wang, Xiaomin Li et al.
ICLR 2025 · arXiv:2410.16454 · 49 citations
CMT: A Memory Compression Method for Continual Knowledge Learning of Large Language Models
Dongfang Li, Zetian Sun, Xinshuo Hu et al.
AAAI 2025 · arXiv:2412.07393 · 5 citations
ELDER: Enhancing Lifelong Model Editing with Mixture-of-LoRA
Jiaang Li, Quan Wang, Zhongnan Wang et al.
AAAI 2025 · arXiv:2408.11869 · 2 citations
MergeBench: A Benchmark for Merging Domain-Specialized LLMs
Yifei He, Siqi Zeng, Yuzheng Hu et al.
NeurIPS 2025 · arXiv:2505.10833 · 10 citations
Progressive Homeostatic and Plastic Prompt Tuning for Audio-Visual Multi-Task Incremental Learning
Jiong Yin, Liang Li, Jiehua Zhang et al.
ICCV 2025 · arXiv:2507.21588 · 1 citation
Spurious Forgetting in Continual Learning of Language Models
Junhao Zheng, Xidi Cai, Shengjie Qiu et al.
ICLR 2025 · arXiv:2501.13453 · 30 citations
Toward Efficient Data-Free Unlearning
Chenhao Zhang, Shaofei Shen, Weitong Chen et al.
AAAI 2025 · arXiv:2412.13790 · 3 citations
Fine-tuning Reinforcement Learning Models is Secretly a Forgetting Mitigation Problem
Maciej Wołczyk, Bartłomiej Cupiał, Mateusz Ostaszewski et al.
ICML 2024 (spotlight) · arXiv:2402.02868 · 26 citations
Junk DNA Hypothesis: Pruning Small Pre-Trained Weights Irreversibly and Monotonically Impairs "Difficult" Downstream Tasks in LLMs
Lu Yin, Ajay Jaiswal, Shiwei Liu et al.
ICML 2024
Modality Translation for Object Detection Adaptation Without Forgetting Prior Knowledge
Heitor Rapela Medeiros, Masih Aminbeidokhti, Fidel A Guerrero Pena et al.
ECCV 2024 · arXiv:2404.01492 · 4 citations
Semantic Segmentation in Multiple Adverse Weather Conditions with Domain Knowledge Retention
Xin Yang, Wending Yan, Yuan Yuan et al.
AAAI 2024 · arXiv:2401.07459 · 11 citations