"language model fine-tuning" Papers
15 papers found
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
Zeman Li, Xinwei Zhang, Peilin Zhong et al.
ICLR 2025 · arXiv:2410.06441 · 11 citations
Controllable Context Sensitivity and the Knob Behind It
Julian Minder, Kevin Du, Niklas Stoehr et al.
ICLR 2025 · arXiv:2411.07404 · 17 citations
Empirical Privacy Variance
Yuzheng Hu, Fan Wu, Ruicheng Xian et al.
ICML 2025 · arXiv:2503.12314 · 2 citations
Less is More: Local Intrinsic Dimensions of Contextual Language Models
Benjamin Matthias Ruppik, Julius von Rohrscheidt, Carel van Niekerk et al.
NeurIPS 2025 · arXiv:2506.01034
NaDRO: Leveraging Dual-Reward Strategies for LLMs Training on Noisy Data
Haolong Qian, Xianliang Yang, Ling Zhang et al.
NeurIPS 2025
PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li et al.
NeurIPS 2025 · arXiv:2506.03133 · 5 citations
Rethinking the Role of Verbatim Memorization in LLM Privacy
Tom Sander, Bargav Jayaraman, Mark Ibrahim et al.
NeurIPS 2025
Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts
Minh Le, Chau Nguyen, Huy Nguyen et al.
ICLR 2025 · arXiv:2410.02200 · 14 citations
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
Ulyana Piterbarg, Lerrel Pinto, Rob Fergus
ICLR 2025 · arXiv:2410.02749 · 7 citations
Cell2Sentence: Teaching Large Language Models the Language of Biology
Daniel Levine, Syed Rizvi, Sacha Lévy et al.
ICML 2024
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
Can Yaras, Peng Wang, Laura Balzano et al.
ICML 2024 · arXiv:2406.04112 · 25 citations
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
ICML 2024 · arXiv:2210.00036 · 55 citations
DPZero: Private Fine-Tuning of Language Models without Backpropagation
Liang Zhang, Bingcong Li, Kiran Thekumparampil et al.
ICML 2024 · arXiv:2310.09639 · 22 citations
Self-Rewarding Language Models
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho et al.
ICML 2024 · arXiv:2401.10020 · 497 citations
Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models
Tanmay Gautam, Youngsuk Park, Hao Zhou et al.
ICML 2024 · arXiv:2404.08080 · 39 citations