"parameter-efficient fine-tuning" Papers

132 papers found • Page 2 of 3

LoRASuite: Efficient LoRA Adaptation Across Large Language Model Upgrades

Yanan Li, Fanxu Meng, Muhan Zhang et al.

NEURIPS 2025 • arXiv:2505.13515 • 2 citations

LoRA vs Full Fine-tuning: An Illusion of Equivalence

Reece Shuttleworth, Jacob Andreas, Antonio Torralba et al.

NEURIPS 2025 • arXiv:2410.21228 • 70 citations

LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation

Farzad Farhadzadeh, Debasmit Das, Shubhankar Borse et al.

ICLR 2025 • arXiv:2501.16559 • 6 citations

LT-Soups: Bridging Head and Tail Classes via Subsampled Model Soups

Masih Aminbeidokhti, Subhankar Roy, Eric Granger et al.

NEURIPS 2025 • arXiv:2511.10683 • 1 citation

Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation

Weibin Liao, Tianlong Wang, Yinghao Zhu et al.

NEURIPS 2025 • arXiv:2508.08730 • 1 citation

MetaWriter: Personalized Handwritten Text Recognition Using Meta-Learned Prompt Tuning

Wenhao Gu, Li Gu, Ching Suen et al.

CVPR 2025 • arXiv:2505.20513 • 1 citation

MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models

Jingwei Xu, Junyu Lai, Yunpeng Huang

ICLR 2025 • arXiv:2405.13053 • 13 citations

Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules

Yueqi Zhang, Peiwen Yuan, Yiwei Li et al.

NEURIPS 2025 • arXiv:2505.24292

MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning

Xu Han, Yuan Tang, Jinfeng Xu et al.

CVPR 2025 • arXiv:2503.18368 • 2 citations

Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs

Qi Wu, Yubo Zhao, Yifan Wang et al.

ICLR 2025 • arXiv:2405.17013 • 30 citations

Multi-Token Prediction Needs Registers

Anastasios Gerontopoulos, Spyridon Gidaris, Nikos Komodakis

NEURIPS 2025 • arXiv:2505.10518 • 4 citations

One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models

Yutao Zhu, Zhaoheng Huang, Zhicheng Dou et al.

AAAI 2025 • arXiv:2405.19670 • 9 citations

On the Robustness Tradeoff in Fine-Tuning

Kunyang Li, Jean-Charles Noirot Ferrand, Ryan Sheatsley et al.

ICCV 2025 • arXiv:2503.14836 • 1 citation

Optimization Inspired Few-Shot Adaptation for Large Language Models

Boyan Gao, Xin Wang, Yibo Yang et al.

NEURIPS 2025 (spotlight) • arXiv:2505.19107

PaCA: Partial Connection Adaptation for Efficient Fine-Tuning

Sunghyeon Woo, Sol Namkung, SunWoo Lee et al.

ICLR 2025 • arXiv:2503.01905 • 3 citations

Parameter Efficient Mamba Tuning via Projector-targeted Diagonal-centric Linear Transformation

Seokil Ham, Hee-Seon Kim, Sangmin Woo et al.

CVPR 2025 • arXiv:2411.15224 • 2 citations

PointLoRA: Low-Rank Adaptation with Token Selection for Point Cloud Learning

Song Wang, Xiaolu Liu, Lingdong Kong et al.

CVPR 2025 • arXiv:2504.16023 • 4 citations

PoLAR: Polar-Decomposed Low-Rank Adapter Representation

Kai Lion, Liang Zhang, Bingcong Li et al.

NEURIPS 2025 • arXiv:2506.03133 • 5 citations

PortLLM: Personalizing Evolving Large Language Models with Training-Free and Portable Model Patches

Rana Muhammad Shahroz Khan, Pingzhi Li, Sukwon Yun et al.

ICLR 2025 • arXiv:2410.10870 • 3 citations

Project-Probe-Aggregate: Efficient Fine-Tuning for Group Robustness

Beier Zhu, Jiequan Cui, Hanwang Zhang et al.

CVPR 2025 (highlight) • arXiv:2503.09487 • 3 citations

Promptable Anomaly Segmentation with SAM Through Self-Perception Tuning

Hui-Yue Yang, Hui Chen, Ao Wang et al.

AAAI 2025 • arXiv:2411.17217 • 9 citations

Provable Meta-Learning with Low-Rank Adaptations

Jacob Block, Sundararajan Srinivasan, Liam Collins et al.

NEURIPS 2025 • arXiv:2410.22264

QERA: an Analytical Framework for Quantization Error Reconstruction

Cheng Zhang, Jeffrey T. H. Wong, Can Xiao et al.

ICLR 2025 • arXiv:2410.06040 • 11 citations

Quantifying Elicitation of Latent Capabilities in Language Models

Elizabeth Donoway, Hailey Joren, Arushi Somani et al.

NEURIPS 2025

RaSA: Rank-Sharing Low-Rank Adaptation

Zhiwei He, Zhaopeng Tu, Xing Wang et al.

ICLR 2025 • arXiv:2503.12576 • 5 citations

Ravan: Multi-Head Low-Rank Adaptation for Federated Fine-Tuning

Arian Raje, Baris Askin, Divyansh Jhunjhunwala et al.

NEURIPS 2025 • arXiv:2506.05568 • 3 citations

Referring Expression Comprehension for Small Objects

Kanoko Goto, Takumi Hirose, Mahiro Ukai et al.

ICCV 2025 • arXiv:2510.03701 • 1 citation

Rethinking Token Reduction with Parameter-Efficient Fine-Tuning in ViT for Pixel-Level Tasks

Cheng Lei, Ao Li, Hu Yao et al.

CVPR 2025 • 2 citations

RILQ: Rank-Insensitive LoRA-Based Quantization Error Compensation for Boosting 2-Bit Large Language Model Accuracy

Geonho Lee, Janghwan Lee, Sukjin Hong et al.

AAAI 2025 • arXiv:2412.01129 • 5 citations

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA

Shuangyi Chen, Yuanxin Guo, Yue Ju et al.

NEURIPS 2025 • arXiv:2502.01755 • 7 citations

Seeking and Updating with Live Visual Knowledge

Mingyang Fu, Yuyang Peng, Dongping Chen et al.

NEURIPS 2025 • arXiv:2504.05288 • 7 citations

S'MoRE: Structural Mixture of Residual Experts for Parameter-Efficient LLM Fine-tuning

Hanqing Zeng, Yinglong Xia, Zhuokai Zhao et al.

NEURIPS 2025 • arXiv:2504.06426 • 2 citations

SMT: Fine-Tuning Large Language Models with Sparse Matrices

Haoze He, Juncheng Li, Xuan Jiang et al.

ICLR 2025 • 7 citations

Sparse-Dense Side-Tuner for efficient Video Temporal Grounding

David Pujol-Perich, Sergio Escalera, Albert Clapés

ICCV 2025 • arXiv:2507.07744 • 1 citation

Sparse MeZO: Less Parameters for Better Performance in Zeroth-Order LLM Fine-Tuning

Yong Liu, Zirui Zhu, Chaoyu Gong et al.

NEURIPS 2025 • arXiv:2402.15751 • 37 citations

StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold

Zhizhong Li, Sina Sajadmanesh, Jingtao Li et al.

NEURIPS 2025 (spotlight) • arXiv:2510.01938 • 4 citations

TADFormer: Task-Adaptive Dynamic TransFormer for Efficient Multi-Task Learning

Seungmin Baek, Soyul Lee, Hayeon Jo et al.

CVPR 2025 • arXiv:2501.04293 • 1 citation

Towards Higher Effective Rank in Parameter-Efficient Fine-tuning using Khatri-Rao Product

Paul Albert, Frederic Zhang, Hemanth Saratchandran et al.

ICCV 2025 • arXiv:2508.00230 • 4 citations

Towards Robust and Parameter-Efficient Knowledge Unlearning for LLMs

Sungmin Cha, Sungjun Cho, Dasol Hwang et al.

ICLR 2025 • arXiv:2408.06621 • 22 citations

Towards Scalable Exact Machine Unlearning Using Parameter-Efficient Fine-Tuning

Somnath Basu Roy Chowdhury, Krzysztof Choromanski, Arijit Sehanobish et al.

ICLR 2025 • arXiv:2406.16257 • 23 citations

Train with Perturbation, Infer after Merging: A Two-Stage Framework for Continual Learning

Haomiao Qiu, Miao Zhang, Ziyue Qiao et al.

NEURIPS 2025 • arXiv:2505.22389

Transformed Low-rank Adaptation via Tensor Decomposition and Its Applications to Text-to-image Models

Zerui Tao, Yuhta Takida, Naoki Murata et al.

ICCV 2025 • arXiv:2501.08727 • 3 citations

TR-PTS: Task-Relevant Parameter and Token Selection for Efficient Tuning

Siqi Luo, Haoran Yang, Yi Xin et al.

ICCV 2025 • arXiv:2507.22872 • 7 citations

Turning the Tables: Enabling Backward Transfer via Causal-Aware LoRA in Continual Learning

Chaoyang Li, Runze Ye, Jianyang Qin et al.

NEURIPS 2025

Uni-LoRA: One Vector is All You Need

Kaiyang Li, Shaobo Han, Qing Su et al.

NEURIPS 2025 (spotlight) • arXiv:2506.00799 • 3 citations

X-Mahalanobis: Transformer Feature Mixing for Reliable OOD Detection

Tong Wei, Bolin Wang, Jiang-Xin Shi et al.

NEURIPS 2025

You Only Communicate Once: One-shot Federated Low-Rank Adaptation of MLLM

Binqian Xu, Haiyang Mei, Zechen Bai et al.

NEURIPS 2025

A Multimodal, Multi-Task Adapting Framework for Video Action Recognition

Mengmeng Wang, Jiazheng Xing, Boyuan Jiang et al.

AAAI 2024 • arXiv:2401.11649 • 9 citations

Any2Point: Empowering Any-modality Transformers for Efficient 3D Understanding

Yiwen Tang, Renrui Zhang, Jiaming Liu et al.

ECCV 2024 • 19 citations

APT: Adaptive Pruning and Tuning Pretrained Language Models for Efficient Training and Inference

Bowen Zhao, Hannaneh Hajishirzi, Qingqing Cao

ICML 2024 • arXiv:2401.12200 • 28 citations