Poster "catastrophic forgetting" Papers
94 papers found • Page 2 of 2
Vision and Language Synergy for Rehearsal Free Continual Learning
Muhammad Anwar Masum, Mahardhika Pratama, Savitha Ramasamy et al.
Adapt without Forgetting: Distill Proximity from Dual Teachers in Vision-Language Models
Mengyu Zheng, Yehui Tang, Zhiwei Hao et al.
An Effective Dynamic Gradient Calibration Method for Continual Learning
Weichen Lin, Jiaxiang Chen, Ruomin Huang et al.
Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning
Xinyuan Gao, Songlin Dong, Yuhang He et al.
Bridge Past and Future: Overcoming Information Asymmetry in Incremental Object Detection
Qijie Mo, Yipeng Gao, Shenghao Fu et al.
Bridging the Pathology Domain Gap: Efficiently Adapting CLIP for Pathology Image Analysis with Limited Labeled Data
Zhengfeng Lai, Joohi Chauhan, Brittany N. Dugger et al.
Class-Incremental Learning with CLIP: Adaptive Representation Adjustment and Parameter Fusion
Linlan Huang, Xusheng Cao, Haori Lu et al.
CLEO: Continual Learning of Evolving Ontologies
Shishir Muralidhara, Saqib Bukhari, Georg Schneider et al.
CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning
Junghun Oh, Sungyong Baik, Kyoung Mu Lee
CroMo-Mixup: Augmenting Cross-Model Representations for Continual Self-Supervised Learning
Erum Mushtaq, Duygu Nur Yaldiz, Yavuz Faruk Bakman et al.
Cs2K: Class-specific and Class-shared Knowledge Guidance for Incremental Semantic Segmentation
Wei Cong, Yang Cong, Yuyang Liu et al.
Defense without Forgetting: Continual Adversarial Defense with Anisotropic & Isotropic Pseudo Replay
Yuhang Zhou, Zhongyun Hua
Diffusion-Driven Data Replay: A Novel Approach to Combat Forgetting in Federated Class Continual Learning
Jinglin Liang, Jin Zhong, Hanlin Gu et al.
Disentangled Continual Graph Neural Architecture Search with Invariant Modular Supernet
Zeyang Zhang, Xin Wang, Yijian Qin et al.
Few-Shot Image Generation by Conditional Relaxing Diffusion Inversion
Yu Cao, Shaogang Gong
Flatness-aware Sequential Learning Generates Resilient Backdoors
Hoang Pham, The-Anh Ta, Anh Tran et al.
Generative Multi-modal Models are Good Class Incremental Learners
Xusheng Cao, Haori Lu, Linlan Huang et al.
Gradual Divergence for Seamless Adaptation: A Novel Domain Incremental Learning Method
Jeeveswaran Kishaan, Elahe Arani, Bahram Zonooz
Human Motion Forecasting in Dynamic Domain Shifts: A Homeostatic Continual Test-time Adaptation Framework
Qiongjie Cui, Huaijiang Sun, Bin Li et al.
Improving Plasticity in Online Continual Learning via Collaborative Learning
Maorong Wang, Nicolas Michel, Ling Xiao et al.
Layerwise Proximal Replay: A Proximal Point Method for Online Continual Learning
Jinsoo Yoo, Yunpeng Liu, Frank Wood et al.
Learning to Continually Learn with the Bayesian Principle
Soochan Lee, Hyeonseong Jeon, Jaehyeon Son et al.
Locality Sensitive Sparse Encoding for Learning World Models Online
Zichen Liu, Chao Du, Wee Sun Lee et al.
MAGR: Manifold-Aligned Graph Regularization for Continual Action Quality Assessment
Kanglei Zhou, Liyuan Wang, Xingxing Zhang et al.
Mitigating Catastrophic Forgetting in Online Continual Learning by Modeling Previous Task Interrelations via Pareto Optimization
Yichen Wu, Hong Wang, Peilin Zhao et al.
Model Tailor: Mitigating Catastrophic Forgetting in Multi-modal Large Language Models
Didi Zhu, Zhongyi Sun, Zexi Li et al.
Multi-layer Rehearsal Feature Augmentation for Class-Incremental Learning
Bowen Zheng, Da-Wei Zhou, Han-Jia Ye et al.
Neighboring Perturbations of Knowledge Editing on Large Language Models
Jun-Yu Ma, Zhen-Hua Ling, Ningyu Zhang et al.
Non-Exemplar Domain Incremental Learning via Cross-Domain Concept Integration
Qiang Wang, Yuhang He, Songlin Dong et al.
On the Diminishing Returns of Width for Continual Learning
Etash Guha, Vihan Lakshman
PromptCCD: Learning Gaussian Mixture Prompt Pool for Continual Category Discovery
Fernando Julio Cendra, Bingchen Zhao, Kai Han
PromptFusion: Decoupling Stability and Plasticity for Continual Learning
Haoran Chen, Zuxuan Wu, Xintong Han et al.
Quantized Prompt for Efficient Generalization of Vision-Language Models
Tianxiang Hao, Xiaohan Ding, Juexiao Feng et al.
Rapid Learning without Catastrophic Forgetting in the Morris Water Maze
Raymond L Wang, Jaedong Hwang, Akhilan Boopathy et al.
Reshaping the Online Data Buffering and Organizing Mechanism for Continual Test-Time Adaptation
Zhilin Zhu, Xiaopeng Hong, Zhiheng Ma et al.
Select and Distill: Selective Dual-Teacher Knowledge Transfer for Continual Learning on Vision-Language Models
Yu-Chu Yu, Chi-Pin Huang, Jr-Jen Chen et al.
Self-Composing Policies for Scalable Continual Reinforcement Learning
Mikel Malagón, Josu Ceberio, Jose A Lozano
Stationary Latent Weight Inference for Unreliable Observations from Online Test-Time Adaptation
Jae-Hong Lee, Joon Hyuk Chang
STSP: Spatial-Temporal Subspace Projection for Video Class-incremental Learning
Hao Cheng, Siyuan Yang, Chong Wang et al.
Task-aware Orthogonal Sparse Network for Exploring Shared Knowledge in Continual Learning
Yusong Hu, De Cheng, Dingwen Zhang et al.
Towards Efficient Replay in Federated Incremental Learning
Yichen Li, Qunwei Li, Haozhao Wang et al.
Understanding Forgetting in Continual Learning with Linear Regression
Meng Ding, Kaiyi Ji, Di Wang et al.
UNIKD: UNcertainty-Filtered Incremental Knowledge Distillation for Neural Implicit Representation
Mengqi Guo, Chen Li, Hanlin Chen et al.
What, How, and When Should Object Detectors Update in Continually Changing Test Domains?
Jayeon Yoo, Dongkwan Lee, Inseop Chung et al.