Overcoming Generic Knowledge Loss with Selective Parameter Update

arXiv:2308.12462 · 23 citations · #1140 of 2716 papers in CVPR 2024

Abstract

Foundation models encompass an extensive knowledge base and offer remarkable transferability. However, this knowledge becomes outdated or insufficient over time. The challenge lies in continually updating foundation models to accommodate novel information while retaining their original capabilities. Leveraging the fact that foundation models have initial knowledge of various tasks and domains, we propose a novel approach that, instead of updating all parameters equally, localizes the updates to a sparse set of parameters relevant to the task being learned. We strike a balance between efficiency and new-task performance while maintaining the transferability and generalizability of foundation models. We extensively evaluate our method on foundational vision-language models across a diverse spectrum of continual learning tasks. Our method improves the accuracy of newly learned tasks by up to 7% while preserving the pretraining knowledge, with a negligible 0.9% decrease in accuracy on a representative control set.
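To make the idea of a selective parameter update concrete, below is a minimal PyTorch sketch. It is not the paper's implementation: the scoring rule (gradient magnitude on a small calibration batch), the `sparsity` ratio, and the function names are assumptions for illustration; the abstract does not specify how the task-relevant parameters are localized.

```python
import torch
import torch.nn as nn


def select_task_relevant_params(model, calibration_batch, loss_fn, sparsity=0.01):
    """Build boolean masks marking a sparse set of parameters as trainable.

    Assumed selection rule: rank entries of each parameter tensor by gradient
    magnitude on one calibration batch and keep the top `sparsity` fraction.
    """
    model.zero_grad()
    inputs, targets = calibration_batch
    loss = loss_fn(model(inputs), targets)
    loss.backward()

    masks = {}
    for name, param in model.named_parameters():
        if param.grad is None:
            # Parameter not involved in this task: keep it fully frozen.
            masks[name] = torch.zeros_like(param, dtype=torch.bool)
            continue
        scores = param.grad.abs().flatten()
        k = max(1, int(sparsity * scores.numel()))
        threshold = torch.topk(scores, k).values.min()
        masks[name] = param.grad.abs() >= threshold
    model.zero_grad()
    return masks


def sparse_update_step(model, batch, loss_fn, optimizer, masks):
    """One training step that zeroes gradients outside the selected sparse set,
    so only the localized parameters move and the rest of the pretrained
    weights are untouched."""
    inputs, targets = batch
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.grad is not None:
                param.grad.mul_(masks[name])
    optimizer.step()
    return loss.item()
```

The same masking idea applies regardless of how the sparse set is chosen; swapping the gradient-magnitude score for another relevance criterion only changes `select_task_relevant_params`, while the update step stays the same.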

Citation History: 0 citations as of Jan 28, 2026; 23 as of Feb 13, 2026.