BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning

4 citations · ranked #1555 of 2873 papers in CVPR 2025

Abstract

Continual learning requires models to learn tasks sequentially while maintaining a delicate balance between stability (retaining knowledge of previous tasks) and plasticity (adapting to new tasks). A key challenge is preventing inter-task interference, which degrades performance on previously learned tasks as new tasks are learned. Recent approaches leverage parameter-efficient fine-tuning (PEFT), which adapts pre-trained models by injecting a small number of learnable parameters. However, existing PEFT-based continual learning methods such as InfLoRA face a fundamental limitation: they rely on complex optimization procedures to learn orthogonal task-specific subspaces, which becomes increasingly difficult as tasks accumulate. We therefore propose a novel bilinear reformulation that fundamentally reimagines task separation through fixed orthogonal bases. Our key insight is that by expanding the parameter space quadratically through two fixed bases, we can achieve "almost orthogonal" task subspaces probabilistically, eliminating the need for explicit interference-elimination procedures. We provide theoretical guarantees that this approach reduces the probability of task interference from $\mathcal{O}\left((k/d)^2\right)$ to $\mathcal{O}\left((k/d^2)^2\right)$, ensuring reliable task separation without complex optimization. Through extensive experiments on ImageNet-R, CIFAR-100, and DomainNet, we validate our theoretical bounds and demonstrate state-of-the-art performance with a reduced parameter count. The code is available at: https://github.com/yifeiacc/BiLoRA.
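To make the construction concrete, below is a minimal PyTorch sketch of the bilinear idea as we read it from the abstract: each task keeps two fixed random bases U_t and V_t and trains only a small r×r core C_t, so the task update is ΔW_t = U_t C_t V_tᵀ. The class name, shapes, and initialization are our assumptions for illustration, not the authors' released implementation (see the repository above for that).

```python
import torch
import torch.nn as nn


class BiLoRALinear(nn.Module):
    """Sketch of a bilinear low-rank adapter for continual learning.

    A frozen base weight W receives per-task updates dW_t = U_t C_t V_t^T,
    where U_t and V_t are FIXED random bases and only the small r x r
    core C_t is trained. Names and shapes are assumptions for illustration.
    """

    def __init__(self, d_out: int, d_in: int, rank: int = 8):
        super().__init__()
        # Frozen pre-trained weight (stand-in for a backbone layer).
        self.weight = nn.Parameter(torch.empty(d_out, d_in), requires_grad=False)
        nn.init.kaiming_uniform_(self.weight)
        self.rank = rank
        self.cores = nn.ParameterList()  # one learnable r x r core per task

    def add_task(self):
        t = len(self.cores)
        d_out, d_in = self.weight.shape
        r = self.rank
        # Fixed Gaussian bases: independently drawn low-dimensional
        # subspaces in a high-dimensional space are nearly orthogonal
        # with high probability, so no explicit orthogonalization is run.
        self.register_buffer(f"U_{t}", torch.randn(d_out, r) / d_out ** 0.5)
        self.register_buffer(f"V_{t}", torch.randn(d_in, r) / d_in ** 0.5)
        # Zero-initialized core: the task update starts as a no-op.
        self.cores.append(nn.Parameter(torch.zeros(r, r)))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = x @ self.weight.T
        for t, core in enumerate(self.cores):
            U = getattr(self, f"U_{t}")
            V = getattr(self, f"V_{t}")
            # Bilinear update dW_t = U_t C_t V_t^T, applied as x @ dW_t^T.
            out = out + (x @ V) @ core.T @ U.T
        return out
```

In this reading, each task adds only r² trainable parameters (the core), while the pair of fixed bases places the update in a random slice of the d²-dimensional bilinear space; that quadratic expansion is what makes independently drawn task subspaces almost orthogonal, which the abstract's $\mathcal{O}\left((k/d^2)^2\right)$ interference bound formalizes.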

Citation History

Jan 26, 2026: 3
Feb 1, 2026: 3
Feb 6, 2026: 4 (+1)