Papers on "parameter-efficient fine-tuning"
109 papers found
Accurate and Efficient Low-Rank Model Merging in Core Space
Aniello Panariello, Daniel Marczak, Simone Magistri et al.
Achieving More with Less: Additive Prompt Tuning for Rehearsal-Free Class-Incremental Learning
Haoran Chen, Ping Wang, Zihan Zhou et al.
AdaMSS: Adaptive Multi-Subspace Approach for Parameter-Efficient Fine-Tuning
Jingjing Zheng, Wanglong Lu, Yiming Dong et al.
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
Zeman Li, Xinwei Zhang, Peilin Zhong et al.
BiLoRA: Almost-Orthogonal Parameter Spaces for Continual Learning
Hao Zhu, Yifei Zhang, Junhao Dong et al.
Black Sheep in the Herd: Playing with Spuriously Correlated Attributes for Vision-Language Recognition
Xinyu Tian, Shu Zou, Zhaoyuan Yang et al.
CL-LoRA: Continual Low-Rank Adaptation for Rehearsal-Free Class-Incremental Learning
Jiangpeng He, Zhihao Duan, Fengqing Zhu
CLOVER: Cross-Layer Orthogonal Vectors Pruning and Fine-Tuning
Fanxu Meng, Muhan Zhang
Controllable-LPMoE: Adapting to Challenging Object Segmentation via Dynamic Local Priors from Mixture-of-Experts
Yanguang Sun, Jiawei Lian, Jian Yang et al.
Correlated Low-Rank Adaptation for ConvNets
Wu Ran, Weijia Zhang, Shuyang Pang et al.
CrossSpectra: Exploiting Cross-Layer Smoothness for Parameter-Efficient Fine-Tuning
Yifei Zhang, Hao Zhu, Junhao Dong et al.
DA-VPT: Semantic-Guided Visual Prompt Tuning for Vision Transformers
Li Ren, Chen Chen, Liqiang Wang et al.
dEBORA: Efficient Bilevel Optimization-based Low-Rank Adaptation
Emanuele Zangrando, Sara Venturini, Francesco Rinaldi et al.
Diff-Prompt: Diffusion-driven Prompt Generator with Mask Supervision
Weicai Yan, Wang Lin, Zirun Guo et al.
Distribution-Aligned Decoding for Efficient LLM Task Adaptation
Senkang Hu, Xudong Han, Jinqi Jiang et al.
DiTASK: Multi-Task Fine-Tuning with Diffeomorphic Transformations
Krishna Sri Ipsit Mantri, Carola-Bibiane Schönlieb, Bruno Ribeiro et al.
DLP: Dynamic Layerwise Pruning in Large Language Models
Yuli Chen, Bo Cheng, Jiale Han et al.
Don’t Forget the Enjoin: FocalLoRA for Instruction Hierarchical Alignment in Large Language Models
Zitong Shi, Guancheng Wan, Haixin Wang et al.
Drag-and-Drop LLMs: Zero-Shot Prompt-to-Weights
Zhiyuan Liang, Dongwen Tang, Yuhao Zhou et al.
DuoLoRA: Cycle-consistent and Rank-disentangled Content-Style Personalization
Aniket Roy, Shubhankar Borse, Shreya Kadambi et al.
Enhancing Visual Prompting through Expanded Transformation Space and Overfitting Mitigation
Shohei Enomoto
Ensembles of Low-Rank Expert Adapters
Yinghao Li, Vianne Gao, Chao Zhang et al.
F-Adapter: Frequency-Adaptive Parameter-Efficient Fine-Tuning in Scientific Machine Learning
Hangwei Zhang, Chun Kang, Yan Wang et al.
Federated Residual Low-Rank Adaption of Large Language Models
Yunlu Yan, Chun-Mei Feng, Wangmeng Zuo et al.
Fine-tuning with Reserved Majority for Noise Reduction
Shuyang Jiang, Yusheng Liao, Ya Zhang et al.
Generative Adapter: Contextualizing Language Models in Parameters with A Single Forward Pass
Tong Chen, Hao Fang, Patrick Xia et al.
HALO: Hadamard-Assisted Lower-Precision Optimization for LLMs
Saleh Ashkboos, Mahdi Nikdan, Rush Tabesh et al.
HiMoLE: Towards OOD-Robust LoRA via Hierarchical Mixture of Experts
Yinuo Jiang, Yan Xiaodong, Keyan Ding et al.
HMoRA: Making LLMs More Effective with Hierarchical Mixture of LoRA Experts
Mengqi Liao, Wei Chen, Junfeng Shen et al.
Improving Model Representation and Reducing KV Cache via Skip Connections with First Value Heads
Zhoutong Wu, Yuan Zhang, Yiming Dong et al.
Instant Adversarial Purification with Adversarial Consistency Distillation
Chun Tong Lei, Hon Ming Yam, Zhongliang Guo et al.
IterIS: Iterative Inference-Solving Alignment for LoRA Merging
Hongxu Chen, Zhen Wang, Runshi Li et al.
LiFT: Learning to Fine-Tune via Bayesian Parameter Efficient Meta Fine-Tuning
Minyoung Kim, Timothy Hospedales
Linearization Explains Fine-Tuning in Large Language Models
Zahra Rahimi Afzal, Tara Esmaeilbeig, Mojtaba Soltanalian et al.
LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging
Ke Wang, Nikos Dimitriadis, Alessandro Favero et al.
LLM Unlearning via Neural Activation Redirection
William Shen, Xinchi Qiu, Meghdad Kurmanji et al.
LoKi: Low-dimensional KAN for Efficient Fine-tuning Image Models
Xuan Cai, Renjie Pan, Hua Yang
LoRA-FAIR: Federated LoRA Fine-Tuning with Aggregation and Initialization Refinement
Jieming Bian, Lei Wang, Letian Zhang et al.
LoRA-Pro: Are Low-Rank Adapters Properly Optimized?
Zhengbo Wang, Jian Liang, Ran He et al.
LoRA Subtraction for Drift-Resistant Space in Exemplar-Free Continual Learning
Xuan Liu, Xiaobin Chang
LoRASuite: Efficient LoRA Adaptation Across Large Language Model Upgrades
Yanan Li, Fanxu Meng, Muhan Zhang et al.
LoRA vs Full Fine-tuning: An Illusion of Equivalence
Reece Shuttleworth, Jacob Andreas, Antonio Torralba et al.
LoRA-X: Bridging Foundation Models with Training-Free Cross-Model Adaptation
Farzad Farhadzadeh, Debasmit Das, Shubhankar Borse et al.
LT-Soups: Bridging Head and Tail Classes via Subsampled Model Soups
Masih Aminbeidokhti, Subhankar Roy, Eric Granger et al.
Magical: Medical Lay Language Generation via Semantic Invariance and Layperson-tailored Adaptation
Weibin Liao, Tianlong Wang, Yinghao Zhu et al.
MetaWriter: Personalized Handwritten Text Recognition Using Meta-Learned Prompt Tuning
Wenhao Gu, Li Gu, Ching Suen et al.
MeteoRA: Multiple-tasks Embedded LoRA for Large Language Models
Jingwei Xu, Junyu Lai, Yunpeng Huang
Mind the Quote: Enabling Quotation-Aware Dialogue in LLMs via Plug-and-Play Modules
Yueqi Zhang, Peiwen Yuan, Yiwei Li et al.
MoST: Efficient Monarch Sparse Tuning for 3D Representation Learning
Xu Han, Yuan Tang, Jinfeng Xu et al.
Motion-Agent: A Conversational Framework for Human Motion Generation with LLMs
Qi Wu, Yubo Zhao, Yifan Wang et al.