"language model fine-tuning" Papers
17 papers found
Addax: Utilizing Zeroth-Order Gradients to Improve Memory Efficiency and Performance of SGD for Fine-Tuning Language Models
Zeman Li, Xinwei Zhang, Peilin Zhong et al.
Blackbox Model Provenance via Palimpsestic Membership Inference
Rohith Kuditipudi, Jing Huang, Sally Zhu et al.
Controllable Context Sensitivity and the Knob Behind It
Julian Minder, Kevin Du, Niklas Stoehr et al.
Empirical Privacy Variance
Yuzheng Hu, Fan Wu, Ruicheng Xian et al.
Forging Time Series with Language: A Large Language Model Approach to Synthetic Data Generation
Cécile Rousseau, Tobia Boschi, Giandomenico Cornacchia et al.
Less is More: Local Intrinsic Dimensions of Contextual Language Models
Benjamin Matthias Ruppik, Julius von Rohrscheidt, Carel van Niekerk et al.
NaDRO: Leveraging Dual-Reward Strategies for LLMs Training on Noisy Data
Haolong Qian, Xianliang Yang, Ling Zhang et al.
PoLAR: Polar-Decomposed Low-Rank Adapter Representation
Kai Lion, Liang Zhang, Bingcong Li et al.
Rethinking the Role of Verbatim Memorization in LLM Privacy
Tom Sander, Bargav Jayaraman, Mark Ibrahim et al.
Revisiting Prefix-tuning: Statistical Benefits of Reparameterization among Prompts
Minh Le, Chau Nguyen, Huy Nguyen et al.
Training Language Models on Synthetic Edit Sequences Improves Code Synthesis
Ulyana Piterbarg, Lerrel Pinto, Rob Fergus
Cell2Sentence: Teaching Large Language Models the Language of Biology
Daniel Levine, Syed Rizvi, Sacha Lévy et al.
Compressible Dynamics in Deep Overparameterized Low-Rank Learning & Adaptation
Can Yaras, Peng Wang, Laura Balzano et al.
Differentially Private Bias-Term Fine-tuning of Foundation Models
Zhiqi Bu, Yu-Xiang Wang, Sheng Zha et al.
DPZero: Private Fine-Tuning of Language Models without Backpropagation
Liang Zhang, Bingcong Li, Kiran Thekumparampil et al.
Self-Rewarding Language Models
Weizhe Yuan, Richard Yuanzhe Pang, Kyunghyun Cho et al.
Variance-reduced Zeroth-Order Methods for Fine-Tuning Language Models
Tanmay Gautam, Youngsuk Park, Hao Zhou et al.