"low-rank adaptation" Papers

86 papers found • Page 2 of 2

SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning

Ziqi Wang, Chang Che, Qi Wang et al.

ICCV 2025 • arXiv:2411.13949
4 citations

S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning

Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.

NEURIPS 2025 • arXiv:2504.06426
2 citations

StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold

Zhizhong Li, Sina Sajadmanesh, Jingtao Li et al.

NEURIPS 2025 (spotlight) • arXiv:2510.01938
4 citations

The Primacy of Magnitude in Low-Rank Adaptation

Zicheng Zhang, Haoran Li, Yifeng Zhang et al.

NEURIPS 2025 (spotlight) • arXiv:2507.06558
2 citations

Towards Higher Effective Rank in Parameter-Efficient Fine-tuning using Khatri-Rao Product

Paul Albert, Frederic Zhang, Hemanth Saratchandran et al.

ICCV 2025 • arXiv:2508.00230
4 citations

Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

Sungmin Cha, Sungjun Cho, Dasol Hwang et al.

ICLR 2025 • arXiv:2408.06621
22 citations

Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models

Zerui Tao, Yuhta Takida, Naoki Murata et al.

ICCV 2025 • arXiv:2501.08727
3 citations

Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning

Chaoyang Li, Runze Ye, Jianyang Qin et al.

NEURIPS 2025

Uni-LoRA: One Vector is All You Need

Kaiyang Li, Shaobo Han, Qing Su et al.

NEURIPS 2025 (spotlight) • arXiv:2506.00799
3 citations

You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM

Binqian Xu, Haiyang Mei, Zechen Bai et al.

NEURIPS 2025

You Only Sample Once: Taming One-Step Text-to-Image Synthesis by Self-Cooperative Diffusion GANs

Yihong Luo, Xiaolong Chen, Xinghua Qu et al.

ICLR 2025 • arXiv:2403.12931
20 citations

Accurate LoRA-Finetuning Quantization of LLMs via Information Retention

Haotong Qin, Xudong Ma, Xingyu Zheng et al.

ICML 2024 • arXiv:2402.05445
74 citations

AquaLoRA: Toward White-box Protection for Customized Stable Diffusion Models via Watermark LoRA

Weitao Feng, Wenbo Zhou, Jiyan He et al.

ICML 2024 • arXiv:2405.11135
53 citations

Asymmetry in Low-Rank Adapters of Foundation Models

Jiacheng Zhu, Kristjan Greenewald, Kimia Nadjahi et al.

ICML 2024 • arXiv:2402.16842
68 citations

BECoTTA: Input-dependent Online Blending of Experts for Continual Test-time Adaptation

Daeun Lee, Jaehong Yoon, Sung Ju Hwang

ICML 2024 • arXiv:2402.08712
20 citations

Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation

Can Yaras, Peng Wang, Laura Balzano et al.

ICML 2024 • arXiv:2406.04112
25 citations

Customize-A-Video: One-Shot Motion Customization of Text-to-Video Diffusion Models

Yixuan Ren, Yang Zhou, Jimei Yang et al.

ECCV 2024 • arXiv:2402.14780
48 citations

DoRA: Weight-Decomposed Low-Rank Adaptation

Shih-Yang Liu, Chien-Yi Wang, Hongxu Yin et al.

ICML 2024 • arXiv:2402.09353
706 citations

Dropout Mixture Low-Rank Adaptation for Visual Parameters-Efficient Fine-Tuning

Zhengyi Fang, Yue Wang, Ran Yi et al.

ECCV 2024
5 citations

E²GAN: Efficient Training of Efficient GANs for Image-to-Image Translation

Yifan Gong, Zheng Zhan, Qing Jin et al.

ICML 2024 • arXiv:2401.06127

Exploiting Diffusion Prior for Generalizable Dense Prediction

Hsin-Ying Lee, Hung-Yu Tseng, Hsin-Ying Lee et al.

CVPR 2024 • arXiv:2311.18832
45 citations

Exploring Training on Heterogeneous Data with Mixture of Low-rank Adapters

Yuhang Zhou, Zhao Zihua, Siyuan Du et al.

ICML 2024 • arXiv:2406.09679
8 citations

Facial Affective Behavior Analysis with Instruction Tuning

Yifan Li, Anh Dao, Wentao Bao et al.

ECCV 2024 • arXiv:2404.05052
24 citations

Flora: Low-Rank Adapters Are Secretly Gradient Compressors

Yongchang Hao, Yanshuai Cao, Lili Mou

ICML 2024 • arXiv:2402.03293
96 citations

Frugal LMs Trained to Invoke Symbolic Solvers Achieve Parameter-Efficient Arithmetic Reasoning

Subhabrata Dutta, Ishan Pandey, Joykirat Singh et al.

AAAI 2024 • arXiv:2312.05571
7 citations

GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection

Jiawei Zhao, Zhenyu Zhang, Beidi Chen et al.

ICML 2024 • arXiv:2403.03507
371 citations

Harmonizing Generalization and Personalization in Federated Prompt Learning

Tianyu Cui, Hongxia Li, Jingya Wang et al.

ICML 2024 • arXiv:2405.09771
28 citations

Implicit Style-Content Separation using B-LoRA

Yarden Frenkel, Yael Vinker, Ariel Shamir et al.

ECCV 2024 • arXiv:2403.14572
117 citations

LAMPAT: Low-Rank Adaption for Multilingual Paraphrasing Using Adversarial Training

Khoi M. Le, Trinh Pham, Tho Quan et al.

AAAI 2024 • arXiv:2401.04348
11 citations

LoRA Training in the NTK Regime has No Spurious Local Minima

Uijeong Jang, Jason Lee, Ernest Ryu

ICML 2024 • arXiv:2402.11867
35 citations

Multi-Task Dense Prediction via Mixture of Low-Rank Experts

Yuqi Yang, Peng-Tao Jiang, Qibin Hou et al.

CVPR 2024 • arXiv:2403.17749
60 citations

Parameter-Efficient Fine-Tuning with Controls

Chi Zhang, Jingpu Cheng, Yanyu Xu et al.

ICML 2024

Parameter-Efficient Fine-Tuning with Discrete Fourier Transform

Ziqi Gao, Qichao Wang, Aochuan Chen et al.

ICML 2024 • arXiv:2405.03003
60 citations

Recovering the Pre-Fine-Tuning Weights of Generative Models

Eliahu Horwitz, Jonathan Kahana, Yedid Hoshen

ICML 2024 • arXiv:2402.10208
13 citations

Riemannian Preconditioned LoRA for Fine-Tuning Foundation Models

Fangzhao Zhang, Mert Pilanci

ICML 2024 • arXiv:2402.02347
35 citations

RoSA: Accurate Parameter-Efficient Fine-Tuning via Robust Adaptation

Mahdi Nikdan, Soroush Tabesh, Elvir Crnčević et al.

ICML 2024 • arXiv:2401.04679
48 citations