On Zero-Initialized Attention: Optimal Prompt and Gating Factor Estimation

5 citations · #987 of 3340 papers in ICML 2025

Abstract

LLaMA-Adapter has recently emerged as an efficient fine-tuning technique for LLaMA models, leveraging zero-initialized attention to stabilize training and enhance performance. However, despite its empirical success, the theoretical foundations of zero-initialized attention remain largely unexplored. In this paper, we provide a rigorous theoretical analysis, establishing a connection between zero-initialized attention and mixture-of-experts models. We prove that both linear and non-linear prompts, along with the gating functions, can be optimally estimated, with non-linear prompts offering greater flexibility for future applications. Empirically, we validate our findings on open LLM benchmarks, demonstrating that non-linear prompts outperform linear ones. Notably, even with limited training data, both prompt types consistently surpass vanilla attention, highlighting the robustness and adaptability of zero-initialized attention.
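
For context, the sketch below illustrates the zero-initialized (gated prompt) attention mechanism the abstract refers to, in the spirit of LLaMA-Adapter. The module name, tensor shapes, prompt length, and the reuse of the same key/value projections for the prompt tokens are illustrative assumptions rather than the authors' implementation; the essential point is that attention to learnable prompt tokens is scaled by a gating factor initialized to zero, so fine-tuning starts from the pretrained model's vanilla attention.

```python
# Minimal sketch of zero-initialized (gated prompt) attention.
# Shapes, names, and the single-head formulation are illustrative assumptions.
import torch
import torch.nn as nn


class ZeroInitAttention(nn.Module):
    def __init__(self, dim: int, num_prompts: int = 10):
        super().__init__()
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        # Learnable prompt tokens attended to by every query position.
        self.prompt = nn.Parameter(torch.randn(num_prompts, dim) * 0.02)
        # Gating factor initialized to zero: the prompt branch contributes
        # nothing at the start of fine-tuning, preserving pretrained behavior.
        self.gate = nn.Parameter(torch.zeros(1))
        self.scale = dim ** -0.5

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim)
        q = self.q_proj(x)
        k = self.k_proj(x)
        v = self.v_proj(x)

        # Vanilla self-attention over the input tokens.
        attn = torch.softmax(q @ k.transpose(-2, -1) * self.scale, dim=-1)
        out = attn @ v

        # Attention from the queries to the learnable prompt tokens,
        # scaled by tanh(gate) and added to the vanilla output.
        pk = self.k_proj(self.prompt)          # (num_prompts, dim)
        pv = self.v_proj(self.prompt)          # (num_prompts, dim)
        p_attn = torch.softmax(q @ pk.t() * self.scale, dim=-1)
        out = out + torch.tanh(self.gate) * (p_attn @ pv)
        return out
```

Because tanh(gate) is exactly zero at initialization, the layer reduces to vanilla attention at the start of training, and the prompt contribution is blended in only as the gating factor is learned.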

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 5