Poster "low-rank adaptation" Papers

73 papers found • Page 1 of 2

Accurate and Efficient Low-Rank Model Merging in Core Space

Aniello Panariello, Daniel Marczak, Simone Magistri et al.

NeurIPS 2025 • arXiv:2509.17786
3 citations

Cached Multi-LoRA Composition for Multi-Concept Image Generation

Xiandong Zou, Mingzhu Shen, Christos-Savvas Bouganis et al.

ICLR 2025 • arXiv:2502.04923
6 citations

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

Jiangpeng He, Zhihao Duan, Fengqing Zhu

CVPR 2025 • arXiv:2505.24816
8 citations

Continual Multiple Instance Learning with Enhanced Localization for Histopathological Whole Slide Image Analysis

Byung Hyun Lee, Wongi Jeong, Woojae Han et al.

ICCV 2025 • arXiv:2507.02395
2 citations

Continuous Subspace Optimization for Continual Learning

Quan Cheng, Yuanyu Wan, Lingyu Wu et al.

NeurIPS 2025 • arXiv:2505.11816
1 citation

Correlated Low-Rank Adaptation for ConvNets

Wu Ran, Weijia Zhang, ShuYang Pang et al.

NeurIPS 2025

dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation

Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.

ICLR 2025

Decoupling Angles and Strength in Low-rank Adaptation

Massimo Bini, Leander Girrbach, Zeynep Akata

ICLR 2025 • arXiv:2503.18225
8 citations

DEFT: Decompositional Efficient Fine-Tuning for Text-to-Image Models

Komal Kumar, Rao Anwer, Fahad Shahbaz Khan et al.

NeurIPS 2025 • arXiv:2509.22793

Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights

Zhiyuan Liang, Dongwen Tang, Yuhao Zhou et al.

NeurIPS 2025 • arXiv:2506.16406
3 citations

Ensembles of Low-Rank Expert Adapters

Yinghao Li, Vianne Gao, Chao Zhang et al.

ICLR 2025 • arXiv:2502.00089
6 citations

Evidential Knowledge Distillation

Liangyu Xiang, Junyu Gao, Changsheng Xu

ICCV 2025 • arXiv:2507.18366
1 citation

Federated Residual Low-Rank Adaption of Large Language Models

Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.

ICLR 2025
8 citations

HMVLM: Human Motion-Vision-Language Model via MoE LoRA

Lei Hu, Yongjing Ye, Shihong Xia

NeurIPS 2025

Image Referenced Sketch Colorization Based on Animation Creation Workflow

Dingkun Yan, Xinrui Wang, Zhuoru Li et al.

CVPR 2025 • arXiv:2502.19937
3 citations

Latent Space Factorization in LoRA

Shashi Kumar, Yacouba Kaloga, John Mitros et al.

NeurIPS 2025 • arXiv:2510.19640

LoRA3D: Low-Rank Self-Calibration of 3D Geometric Foundation Models

Ziqi Lu, Heng Yang, Danfei Xu et al.

ICLR 2025 • arXiv:2412.07746
11 citations

LoRA Done RITE: Robust Invariant Transformation Equilibration for LoRA Optimization

Jui-Nan Yen, Si Si, Zhao Meng et al.

ICLR 2025 • arXiv:2410.20625
16 citations

LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement

Jieming Bian, Lei Wang, Letian Zhang et al.

ICCV 2025 • arXiv:2411.14961
18 citations

LoRA Learns Less and Forgets Less

Jonathan Frankle, Jose Javier Gonzalez Ortiz, Cody Blakeney et al.

ICLR 2025 • arXiv:2405.09673
252 citations

LoRA-Pro: Are Low-Rank Adapters Properly Optimized?

Zhengbo Wang, Jian Liang, Ran He et al.

ICLR 2025 • arXiv:2407.18242
54 citations

LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning

Xuan Liu, Xiaobin Chang

CVPR 2025 • arXiv:2503.18985
9 citations

LoRA vs Full Fine-tuning: An Illusion of Equivalence

Reece Shuttleworth, Jacob Andreas, Antonio Torralba et al.

NeurIPS 2025 • arXiv:2410.21228
70 citations

LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation

Farzad Farhadzadeh, Debasmit Das, Shubhankar Borse et al.

ICLR 2025 • arXiv:2501.16559
6 citations

LuxDiT: Lighting Estimation with Video Diffusion Transformer

Ruofan Liang, Kai He, Zan Gojcic et al.

NeurIPS 2025 • arXiv:2509.03680
5 citations

Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation

Weibin Liao, Tianlong Wang, Yinghao Zhu et al.

NeurIPS 2025 • arXiv:2508.08730
1 citation

Merging LoRAs like Playing LEGO: Pushing the Modularity of LoRA to Extremes Through Rank-Wise Clustering

Ziyu Zhao, Tao Shen, Didi Zhu et al.

ICLR 2025 • arXiv:2409.16167
35 citations

MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models

Jingwei Xu, Junyu Lai, Yunpeng Huang

ICLR 2025 • arXiv:2405.13053
13 citations

MoORE: SVD-based Model MoE-ization for Conflict- and Oblivion-Resistant Multi-Task Adaptation

Shen Yuan, Yin Zheng, Taifeng Wang et al.

NeurIPS 2025 • arXiv:2506.14436
1 citation

Multi-Task Vehicle Routing Solver via Mixture of Specialized Experts under State-Decomposable MDP

Yuxin Pan, Zhiguang Cao, Chengyang GU et al.

NeurIPS 2025 • arXiv:2510.21453

On Large Language Model Continual Unlearning

Chongyang Gao, Lixu Wang, Kaize Ding et al.

ICLR 2025 • arXiv:2407.10223
30 citations

PaCA: Partial Connection Adaptation for Efficient Fine-Tuning

Sunghyeon Woo, Sol Namkung, SunWoo Lee et al.

ICLR 2025 • arXiv:2503.01905
3 citations

Parameter Efficient Fine-tuning via Explained Variance Adaptation

Fabian Paischer, Lukas Hauzenberger, Thomas Schmied et al.

NeurIPS 2025 • arXiv:2410.07170
6 citations

PLAN: Proactive Low-Rank Allocation for Continual Learning

Xiequn Wang, Zhan Zhuang, Yu Zhang

ICCV 2025 • arXiv:2510.21188

PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning

Song Wang, Xiaolu Liu, Lingdong Kong et al.

CVPR 2025 • arXiv:2504.16023
4 citations

PoLAR: Polar-Decomposed Low-Rank Adapter Representation

Kai Lion, Liang Zhang, Bingcong Li et al.

NeurIPS 2025 • arXiv:2506.03133
5 citations

Provable Meta-Learning with Low-Rank Adaptations

Jacob Block, Sundararajan Srinivasan, Liam Collins et al.

NeurIPS 2025 • arXiv:2410.22264

RAD: Region-Aware Diffusion Models for Image Inpainting

Sora Kim, Sungho Suh, Minsik Lee

CVPR 2025 • arXiv:2412.09191
8 citations

RaSA: Rank-Sharing Low-Rank Adaptation

Zhiwei He, Zhaopeng Tu, Xing Wang et al.

ICLR 2025 • arXiv:2503.12576
5 citations

Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning

Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.

NeurIPS 2025 • arXiv:2506.05568
3 citations

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA

Shuangyi Chen, Yuanxin Guo, Yue Ju et al.

NeurIPS 2025 • arXiv:2502.01755
7 citations

SD-LoRA: Scalable Decoupled Low-Rank Adaptation for Class Incremental Learning

Yichen Wu, Hongming Piao, Long-Kai Huang et al.

ICLR 2025 • arXiv:2501.13198
34 citations

SMoLoRA: Exploring and Defying Dual Catastrophic Forgetting in Continual Visual Instruction Tuning

Ziqi Wang, Chang Che, Qi Wang et al.

ICCV 2025 • arXiv:2411.13949
4 citations

S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning

Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.

NeurIPS 2025 • arXiv:2504.06426
2 citations

Towards Higher Effective Rank in Parameter-Efficient Fine-tuning using Khatri-Rao Product

Paul Albert, Frederic Zhang, Hemanth Saratchandran et al.

ICCV 2025 • arXiv:2508.00230
4 citations

Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

Sungmin Cha, Sungjun Cho, Dasol Hwang et al.

ICLR 2025 • arXiv:2408.06621
22 citations

Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models

Zerui Tao, Yuhta Takida, Naoki Murata et al.

ICCV 2025 • arXiv:2501.08727
3 citations

Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning

Chaoyang Li, Runze Ye, Jianyang Qin et al.

NeurIPS 2025

You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM

Binqian Xu, Haiyang Mei, Zechen Bai et al.

NeurIPS 2025

You Only Sample Once: Taming One-Step Text-to-Image Synthesis by Self-Cooperative Diffusion GANs

Yihong Luo, Xiaolong Chen, Xinghua Qu et al.

ICLR 2025 • arXiv:2403.12931
20 citations