"instruction fine-tuning" Papers

12 papers found

AdaGrad under Anisotropic Smoothness

Yuxing Liu, Rui Pan, Tong Zhang

ICLR 2025 · arXiv:2406.15244 · 14 citations

Ensembles of Low-Rank Expert Adapters

Yinghao Li, Vianne Gao, Chao Zhang et al.

ICLR 2025 · arXiv:2502.00089 · 6 citations

Is In-Context Learning Sufficient for Instruction Following in LLMs?

Hao Zhao, Maksym Andriushchenko, Francesco Croce et al.

ICLR 2025 · arXiv:2405.19874 · 22 citations

Layer Swapping for Zero-Shot Cross-Lingual Transfer in Large Language Models

Lucas Bandarkar, Benjamin Muller, Pritish Yuvraj et al.

ICLR 2025 · arXiv:2410.01335 · 15 citations

Making Large Vision Language Models to Be Good Few-Shot Learners

Fan Liu, Wenwen Cai, Jian Huo et al.

AAAI 2025 · arXiv:2408.11297 · 6 citations

Rethinking the Role of Verbatim Memorization in LLM Privacy

Tom Sander, Bargav Jayaraman, Mark Ibrahim et al.

NeurIPS 2025

VCM: Vision Concept Modeling with Adaptive Vision Token Compression via Instruction Fine-Tuning

Run Luo, Renke Shan, Longze Chen et al.

NeurIPS 2025

Video-XL: Extra-Long Vision Language Model for Hour-Scale Video Understanding

Yan Shu, Zheng Liu, Peitian Zhang et al.

CVPR 2025 · arXiv:2409.14485 · 155 citations

Whose Instructions Count? Resolving Preference Bias in Instruction Fine-Tuning

Jiayu Zhang, Changbang Li, Yinan Peng et al.

NeurIPS 2025

Long Is More for Alignment: A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning

Hao Zhao, Maksym Andriushchenko, Francesco Croce et al.

ICML 2024 · arXiv:2402.04833 · 88 citations

Physics of Language Models: Part 3.1, Knowledge Storage and Extraction

Zeyuan Allen-Zhu, Yuanzhi Li

ICML 2024 (spotlight) · arXiv:2309.14316 · 244 citations

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models

Yongshuo Zong, Ondrej Bohdal, Tingyang Yu et al.

ICML 2024 · arXiv:2402.02207 · 123 citations