"parameter-efficient fine-tuning" Papers

132 papers found • Page 1 of 3

Accurate and Efficient Low-Rank Model Merging in Core Space

Aniello Panariello, Daniel Marczak, Simone Magistri et al.

NEURIPS 2025 • arXiv:2509.17786 • 3 citations

Achieving More with Less: Additive Prompt Tuning for Rehearsal-Free Class-Incremental Learning

Haoran Chen, Ping Wang, Zihan Zhou et al.

ICCV 2025 • arXiv:2503.07979 • 1 citation

Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models

Zekai Zhao, Qi Liu, Kun Zhou et al.

NEURIPS 2025 spotlight • arXiv:2505.17697 • 7 citations

AdaMSS: Adaptive Multi-Subspace Approach for Parameter-Efficient Fine-Tuning

Jingjing Zheng, Wanglong Lu, Yiming Dong et al.

NEURIPS 2025

Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models

Zeman Li, Xinwei Zhang, Peilin Zhong et al.

ICLR 2025 • arXiv:2410.06441 • 11 citations

AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping

Haonan Dong, Wenhao Zhu, Guojie Song et al.

NEURIPS 2025 spotlight • arXiv:2505.18738 • 2 citations

A Wander Through the Multimodal Landscape: Efficient Transfer Learning via Low-rank Sequence Multimodal Adapter

Zirun Guo, Xize Cheng, Yangyang Wu et al.

AAAI 2025 • arXiv:2412.08979 • 3 citations

BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning

Hao Zhu, Yifei Zhang, Junhao Dong et al.

CVPR 2025 • 4 citations

Black Sheep in the Herd: Playing with Spuriously Correlated Attributes for Vision-Language Recognition

Xinyu Tian, Shu Zou, Zhaoyuan Yang et al.

ICLR 2025 • arXiv:2502.15809 • 5 citations

CITI: Enhancing Tool Utilizing Ability in Large Language Models Without Sacrificing General Performance

Yupu Hao, Pengfei Cao, Zhuoran Jin et al.

AAAI 2025 • arXiv:2409.13202 • 5 citations

CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning

Jiangpeng He, Zhihao Duan, Fengqing Zhu

CVPR 2025 • arXiv:2505.24816 • 8 citations

CLOVER: Cross-Layer Orthogonal Vectors Pruning and Fine-Tuning

Fanxu Meng, Muhan Zhang

ICLR 2025 • arXiv:2411.17426 • 3 citations

Compress to Impress: Efficient LLM Adaptation Using a Single Gradient Step on 100 Samples

Shiva Sreeram, Alaa Maalouf, Pratyusha Sharma et al.

NEURIPS 2025 spotlight • arXiv:2510.20800

Controllable-LPMoE: Adapting to Challenging Object Segmentation via Dynamic Local Priors from Mixture-of-Experts

Yanguang Sun, Jiawei Lian, Jian Yang et al.

ICCV 2025 • arXiv:2510.21114 • 1 citation

CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning

Quanmin Wei, Penglin Dai, Wei Li et al.

AAAI 2025 • arXiv:2502.10705 • 5 citations

Correlated Low-Rank Adaptation for ConvNets

Wu Ran, Weijia Zhang, ShuYang Pang et al.

NEURIPS 2025

CrossSpectra: Exploiting Cross-Layer Smoothness for Parameter-Efficient Fine-Tuning

Yifei Zhang, Hao Zhu, Junhao Dong et al.

NEURIPS 2025

DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers

Li Ren, Chen Chen, Liqiang Wang et al.

CVPR 2025 • arXiv:2505.23694 • 8 citations

dEBORA: Efficient Bilevel Optimization-based low-Rank Adaptation

Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.

ICLR 2025

Diff-Prompt: Diffusion-driven Prompt Generator with Mask Supervision

Weicai Yan, Wang Lin, Zirun Guo et al.

ICLR 2025 • arXiv:2504.21423 • 7 citations

Distribution-Aligned Decoding for Efficient LLM Task Adaptation

Senkang Hu, Xudong Han, Jinqi Jiang et al.

NEURIPS 2025 • arXiv:2509.15888 • 5 citations

DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations

Krishna Sri Ipsit Mantri, Carola-Bibiane Schönlieb, Bruno Ribeiro et al.

CVPR 2025 • arXiv:2502.06029 • 5 citations

DLP: Dynamic Layerwise Pruning in Large Language Models

Yuli Chen, Bo Cheng, Jiale Han et al.

ICML 2025 • arXiv:2505.23807 • 2 citations

Don’t Forget the Enjoin: FocalLoRA for Instruction Hierarchical Alignment in Large Language Models

Zitong Shi, Guancheng Wan, Haixin Wang et al.

NEURIPS 2025

Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights

Zhiyuan Liang, Dongwen Tang, Yuhao Zhou et al.

NEURIPS 2025 • arXiv:2506.16406 • 3 citations

DuoLoRA: Cycle-consistent and Rank-disentangled Content-Style Personalization

Aniket Roy, Shubhankar Borse, Shreya Kadambi et al.

ICCV 2025 • arXiv:2504.13206 • 4 citations

Enhancing Visual Prompting through Expanded Transformation Space and Overfitting Mitigation

Shohei Enomoto

NEURIPS 2025 • arXiv:2510.07823

Ensembles of Low-Rank Expert Adapters

Yinghao Li, Vianne Gao, Chao Zhang et al.

ICLR 2025 • arXiv:2502.00089 • 6 citations

F-Adapter: Frequency-Adaptive Parameter-Efficient Fine-Tuning in Scientific Machine Learning

Hangwei Zhang, Chun Kang, Yan Wang et al.

NEURIPS 2025 • arXiv:2509.23173

Federated Residual Low-Rank Adaption of Large Language Models

Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.

ICLR 2025 • 8 citations

Fine-tuning with Reserved Majority for Noise Reduction

Shuyang Jiang, Yusheng Liao, Ya Zhang et al.

ICLR 2025 • 2 citations

From PEFT to DEFT: Parameter Efficient Finetuning for Reducing Activation Density in Transformers

Bharat Runwal, Tejaswini Pedapati, Pin-Yu Chen

AAAI 2025 • arXiv:2402.01911 • 8 citations

Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass

Tong Chen, Hao Fang, Patrick Xia et al.

ICLR 2025 • arXiv:2411.05877 • 10 citations

GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning

Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim et al.

NEURIPS 2025 spotlight • arXiv:2505.20355 • 2 citations

HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs

Saleh Ashkboos, Mahdi Nikdan, Rush Tabesh et al.

NEURIPS 2025 • arXiv:2501.02625 • 6 citations

HiMoLE: Towards OOD-Robust LoRA via Hierarchical Mixture of Experts

Yinuo Jiang, Yan Xiaodong, Keyan Ding et al.

NEURIPS 2025

HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts

Mengqi Liao, Wei Chen, Junfeng Shen et al.

ICLR 2025 • 8 citations

Improving Model Representation and Reducing KV Cache via Skip Connections with First Value Heads

Zhoutong Wu, Yuan Zhang, Yiming Dong et al.

NEURIPS 2025 • arXiv:2510.16807

Instant Adversarial Purification with Adversarial Consistency Distillation

Chun Tong Lei, Hon Ming Yam, Zhongliang Guo et al.

CVPR 2025 • arXiv:2408.17064 • 13 citations

IterIS: Iterative Inference-Solving Alignment for LoRA Merging

Hongxu Chen, Zhen Wang, Runshi Li et al.

CVPR 2025 • arXiv:2411.15231 • 5 citations

JEN-1 DreamStyler: Customized Musical Concept Learning via Pivotal Parameters Tuning

Boyu Chen, Peike Li, Yao Yao et al.

AAAI 2025 • arXiv:2406.12292 • 3 citations

Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition

Zheda Mai, Ping Zhang, Cheng-Hao Tu et al.

CVPR 2025 highlight • arXiv:2409.16434 • 10 citations

LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning

Minyoung Kim, Timothy Hospedales

ICLR 2025 • 1 citation

Linearization Explains Fine-Tuning in Large Language Models

Zahra Rahimi Afzal, Tara Esmaeilbeig, Mojtaba Soltanalian et al.

NEURIPS 2025

LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging

Ke Wang, Nikos Dimitriadis, Alessandro Favero et al.

ICLR 2025 • arXiv:2410.17146 • 27 citations

LLM Unlearning via Neural Activation Redirection

William Shen, Xinchi Qiu, Meghdad Kurmanji et al.

NEURIPS 2025 • arXiv:2502.07218 • 16 citations

LoKi: Low-dimensional KAN for Efficient Fine-tuning Image Models

Xuan Cai, Renjie Pan, Hua Yang

CVPR 2025 • 1 citation

LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement

Jieming Bian, Lei Wang, Letian Zhang et al.

ICCV 2025 • arXiv:2411.14961 • 18 citations

LoRA-Pro: Are Low-Rank Adapters Properly Optimized?

Zhengbo Wang, Jian Liang, Ran He et al.

ICLR 2025 • arXiv:2407.18242 • 54 citations

LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning

Xuan Liu, Xiaobin Chang

CVPR 2025 • arXiv:2503.18985 • 9 citations