Training Deep Learning Models with Norm-Constrained LMOs

arXiv:2502.07529 · 72 citations · #54 of 3340 papers in ICML 2025

Abstract

In this work, we study optimization methods that leverage the linear minimization oracle (LMO) over a norm-ball. We propose a new stochastic family of algorithms that uses the LMO to adapt to the geometry of the problem and, perhaps surprisingly, show that they can be applied to unconstrained problems. The resulting update rule unifies several existing optimization methods under a single framework. Furthermore, we propose an explicit choice of norm for deep architectures, which, as a side benefit, leads to the transferability of hyperparameters across model sizes. Experimentally, we demonstrate significant speedups on nanoGPT training without any reliance on Adam. The proposed method is memory-efficient, requiring only one set of model weights and one set of gradients, which can be stored in half-precision.
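
To make the LMO idea concrete, below is a minimal sketch, not the paper's exact algorithm: the linear minimization oracle over an ℓ∞-ball returns a scaled sign of its argument, and over an ℓ2-ball a normalized negative gradient; querying the LMO at a momentum buffer and stepping the unconstrained iterate in that direction illustrates the kind of update rule the abstract describes. The function names (lmo_linf, lmo_l2, lmo_step), the toy objective, and all step-size and momentum settings are illustrative assumptions.

```python
import numpy as np

def lmo_linf(grad, radius=1.0):
    # LMO over an l-infinity ball of the given radius:
    # argmin_{||s||_inf <= radius} <grad, s> = -radius * sign(grad).
    return -radius * np.sign(grad)

def lmo_l2(grad, radius=1.0, eps=1e-12):
    # LMO over an l2 ball: a boundary point opposite the gradient direction.
    return -radius * grad / (np.linalg.norm(grad) + eps)

def lmo_step(params, grad, momentum, lr, beta=0.9, lmo=lmo_linf):
    # One illustrative update: accumulate gradients into a momentum buffer,
    # query the LMO at the momentum, and move the unconstrained iterate by lr
    # in the direction the oracle returns.
    momentum = beta * momentum + (1.0 - beta) * grad
    params = params + lr * lmo(momentum)
    return params, momentum

# Toy run: minimize ||w||^2 with the l-infinity LMO (a sign-like update).
rng = np.random.default_rng(0)
w = rng.standard_normal(5)
m = np.zeros_like(w)
for t in range(200):
    g = 2.0 * w                          # gradient of ||w||^2
    w, m = lmo_step(w, g, m, lr=0.5 / (t + 1))
print(np.round(w, 4))                    # entries shrink toward zero
```

Swapping in a different norm ball only changes the oracle, which is how a single framework can recover sign-based, normalized, and spectral-norm-style updates; the choice of norm per layer is what the paper proposes for deep architectures.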

Citation History

Jan 28, 2026: 0 · Feb 13, 2026: 72