Spotlight "parameter-efficient fine-tuning" Papers
7 papers found
Activation Control for Efficiently Eliciting Long Chain-of-thought Ability of Language Models
Zekai Zhao, Qi Liu, Kun Zhou et al.
NeurIPS 2025 (Spotlight) · arXiv:2505.17697 · 7 citations
AuroRA: Breaking Low-Rank Bottleneck of LoRA with Nonlinear Mapping
Haonan Dong, Wenhao Zhu, Guojie Song et al.
NeurIPS 2025 (Spotlight) · arXiv:2505.18738 · 2 citations
Compress to Impress: Efficient LLM Adaptation Using a Single Gradient Step on 100 Samples
Shiva Sreeram, Alaa Maalouf, Pratyusha Sharma et al.
NeurIPS 2025 (Spotlight) · arXiv:2510.20800
GraLoRA: Granular Low-Rank Adaptation for Parameter-Efficient Fine-Tuning
Yeonjoon Jung, Daehyun Ahn, Hyungjun Kim et al.
NeurIPS 2025 (Spotlight) · arXiv:2505.20355 · 2 citations
Optimization Inspired Few-Shot Adaptation for Large Language Models
Boyan Gao, Xin Wang, Yibo Yang et al.
NeurIPS 2025 (Spotlight) · arXiv:2505.19107
StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold
Zhizhong Li, Sina Sajadmanesh, Jingtao Li et al.
NeurIPS 2025 (Spotlight) · arXiv:2510.01938 · 4 citations
Uni-LoRA: One Vector is All You Need
Kaiyang Li, Shaobo Han, Qing Su et al.
NeurIPS 2025 (Spotlight) · arXiv:2506.00799 · 3 citations
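Several of the papers listed above (AuroRA, GraLoRA, StelLA, Uni-LoRA) build on the standard LoRA formulation, which freezes the pretrained weight W and learns only a low-rank correction BA scaled by alpha/r. For reference, here is a minimal sketch of that base update in Python; the class name, dimensions, initializations, and hyperparameters are illustrative assumptions and do not come from any of the listed papers.

# Minimal sketch of the standard LoRA update: y = W x + (alpha / r) * B A x.
# Names and hyperparameters are illustrative assumptions, not code from the papers above.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, in_features, out_features, r=8, alpha=16):
        super().__init__()
        # Frozen pretrained weight (stands in for a weight loaded from a checkpoint).
        self.weight = nn.Parameter(torch.empty(out_features, in_features), requires_grad=False)
        nn.init.normal_(self.weight, std=0.02)
        # Trainable low-rank factors: A is (r x in), B is (out x r).
        # B starts at zero so the adapter initially leaves the base model unchanged.
        self.lora_A = nn.Parameter(torch.randn(r, in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_features, r))
        self.scaling = alpha / r

    def forward(self, x):
        base = x @ self.weight.T                          # frozen path
        update = (x @ self.lora_A.T) @ self.lora_B.T      # low-rank path
        return base + self.scaling * update

The spotlight papers above modify different parts of this recipe, for example by adding nonlinearity between the factors, restructuring the low-rank blocks, constraining the subspace, or sharing parameters across adapters.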