Scaling Inference-Efficient Language Models

arXiv:2501.18107 · 12 citations · #472 of 3340 papers in ICML 2025

Abstract

Scaling laws are powerful tools to predict the performance of large language models. However, current scaling laws fall short of accounting for inference costs. In this work, we first show that model architecture affects inference latency: models of the same size can differ in latency by up to $3.5\times$. To tackle this challenge, we modify the Chinchilla scaling laws to co-optimize the model parameter count, the number of training tokens, and the model architecture. Because models with similar training loss exhibit gaps in downstream evaluation, we also propose a novel method to train inference-efficient models based on the revised scaling laws. We perform extensive empirical studies to fit and evaluate our inference-aware scaling laws, varying model parameters from 80M to 1B, training tokens from 1.6B to 30B, and model shapes, training 63 models in total. Guided by our inference-efficient scaling law and model-selection method, we release the Morph-1B model, which improves inference latency by $1.8\times$ while maintaining accuracy on downstream tasks compared to open-source models, pushing the Pareto frontier of the accuracy-latency tradeoff. Notably, our experiments reveal that wider and shallower models can yield efficiency gains while preserving accuracy.
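To make the co-optimization idea concrete, the sketch below illustrates the general shape of such a procedure; it is a hypothetical toy, not the paper's method. It assumes a Chinchilla-style loss surface $L(N, D) = E + A/N^{\alpha} + B/D^{\beta}$ with the published Hoffmann et al. (2022) coefficients as placeholders, a crude depth-based latency proxy, and a brute-force scan over (depth, width) shapes to recover an accuracy-latency Pareto frontier. The paper's fitted inference-aware law, its measured latencies, and the Morph-1B selection procedure are not reproduced here.

```python
import itertools

# Chinchilla-style loss surface L(N, D) = E + A / N**alpha + B / D**beta.
# The coefficients are the published Hoffmann et al. (2022) estimates, used
# only as illustrative placeholders -- NOT the inference-aware law fitted
# in this paper.
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def predicted_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss from parameter count and token count."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

def latency_proxy(depth: int, per_layer_cost: float = 1.0) -> float:
    """Hypothetical stand-in for measured latency: decode latency grows with
    the number of sequential layers, so for a fixed parameter budget
    wider-but-shallower shapes score lower."""
    return depth * per_layer_cost

def pareto_frontier(candidates: list[dict]) -> list[dict]:
    """Keep configurations not dominated in (predicted loss, latency proxy)."""
    frontier = []
    for c in candidates:
        dominated = any(
            (o["loss"] <= c["loss"] and o["latency"] < c["latency"]) or
            (o["loss"] < c["loss"] and o["latency"] <= c["latency"])
            for o in candidates if o is not c
        )
        if not dominated:
            frontier.append(c)
    return frontier

if __name__ == "__main__":
    n_tokens = 30e9  # upper end of the paper's 1.6B-30B token range
    candidates = []
    for depth, width in itertools.product([8, 16, 24, 32], [1024, 2048, 3072]):
        n_params = 12 * depth * width**2  # rough dense-transformer param count
        candidates.append({
            "depth": depth,
            "width": width,
            "params": n_params,
            "loss": predicted_loss(n_params, n_tokens),
            "latency": latency_proxy(depth),
        })
    for c in sorted(pareto_frontier(candidates), key=lambda c: c["latency"]):
        print(f"depth={c['depth']:2d} width={c['width']:4d} "
              f"params={c['params']/1e6:7.1f}M "
              f"loss={c['loss']:.3f} latency={c['latency']:.1f}")
```

Because the toy latency proxy penalizes only depth, the resulting frontier naturally favors wider, shallower shapes at comparable predicted loss, which echoes the abstract's observation; the paper's actual conclusion rests on its fitted scaling law and measured latencies rather than on this proxy.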

Citation History

Jan 28, 2026: 10
Feb 13, 2026: 12 (+2)