"catastrophic forgetting" Papers
127 papers found • Page 3 of 3
Locality Sensitive Sparse Encoding for Learning World Models Online
Zichen Liu, Chao Du, Wee Sun Lee et al.
MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment
Kanglei Zhou, Liyuan Wang, Xingxing Zhang et al.
Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization
Yichen Wu, Hong Wang, Peilin Zhao et al.
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
Didi Zhu, Zhongyi Sun, Zexi Li et al.
Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye et al.
Neighboring Perturbations of Knowledge Editing on Large Language Models
Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang et al.
Non-Exemplar Domain Incremental Learning via Cross-Domain Concept Integration
Qiang Wang, Yuhang He, Songlin Dong et al.
Non-exemplar Online Class-Incremental Continual Learning via Dual-Prototype Self-Augment and Refinement
Fushuo Huo, Wenchao Xu, Jingcai Guo et al.
On the Diminishing Returns of Width for Continual Learning
Etash Guha, Vihan Lakshman
PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery
Fernando Julio Cendra, Bingchen Zhao, Kai Han
PromptFusion: Decoupling Stability and Plasticity for Continual Learning
Haoran Chen, Zuxuan Wu, Xintong Han et al.
Quantized Prompt for Efficient Generalization of Vision-Language Models
Tianxiang Hao, Xiaohan Ding, Juexiao Feng et al.
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze
Raymond L Wang, Jaedong Hwang, Akhilan Boopathy et al.
Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation
Zhilin Zhu, Xiaopeng Hong, Zhiheng Ma et al.
SDDGR: Stable Diffusion-based Deep Generative Replay for Class Incremental Object Detection
Junsu Kim, Hoseong Cho, Jihyeon Kim et al.
Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models
Yu-Chu Yu, Chi-Pin Huang, Jr-Jen Chen et al.
Self-Composing Policies for Scalable Continual Reinforcement Learning
Mikel Malagón, Josu Ceberio, Jose A Lozano
Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation
Jae-Hong Lee, Joon Hyuk Chang
STSP: Spatial-Temporal Subspace Projection for Video Class-incremental Learning
Hao Cheng, Siyuan Yang, Chong Wang et al.
Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning
Yusong Hu, De Cheng, Dingwen Zhang et al.
Towards Continual Knowledge Graph Embedding via Incremental Distillation
Jiajun Liu, Ke Wenjun, Peng Wang et al.
Towards Continual Learning Desiderata via HSIC-Bottleneck Orthogonalization and Equiangular Embedding
Depeng Li, Tianqi Wang, Junwei Chen et al.
Towards Efficient Replay in Federated Incremental Learning
Yichen Li, Qunwei Li, Haozhao Wang et al.
Understanding Forgetting in Continual Learning with Linear Regression
Meng Ding, Kaiyi Ji, Di Wang et al.
UNIKD: UNcertainty-Filtered Incremental Knowledge Distillation for Neural Implicit Representation
Mengqi Guo, Chen Li, Hanlin Chen et al.
What, How, and When Should Object Detectors Update in Continually Changing Test Domains?
Jayeon Yoo, Dongkwan Lee, Inseop Chung et al.
What Will My Model Forget? Forecasting Forgotten Examples in Language Model Refinement
Xisen Jin, Xiang Ren