Optimizing Language Models for Inference Time Objectives using Reinforcement Learning

arXiv:2503.19595 · ICML 2025 (#236 of 3340 papers) · 23 citations

Abstract

In this work, we investigate the merits of explicitly optimizing for inference-time algorithmic performance during model training. We show how optimizing for inference-time performance can improve overall model efficacy. We consider generic inference-time objectives over $k$ samples, with a focus on pass@$k$ and majority voting as two main applications. Training language models on reasoning datasets, we showcase the performance trade-offs enabled by training with such objectives. On code generation tasks, we show that the approach significantly improves pass@$k$ compared to the baseline method.
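
For reference, the two inference-time objectives named in the abstract can be computed as sketched below. This is a minimal illustration using the standard unbiased pass@$k$ estimator, $1 - \binom{n-c}{k}/\binom{n}{k}$, and a simple plurality vote over sampled answers; it does not reproduce the paper's RL training procedure, and the example numbers are hypothetical.

```python
# Minimal sketch of pass@k and majority voting over k sampled completions.
# The RL training that optimizes these objectives is not shown here.
from collections import Counter
from math import comb


def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimate given n samples, of which c are correct."""
    if n - c < k:
        return 1.0  # every size-k subset contains at least one correct sample
    return 1.0 - comb(n - c, k) / comb(n, k)


def majority_vote(answers: list[str]) -> str:
    """Return the most frequent answer among the k sampled answers."""
    return Counter(answers).most_common(1)[0][0]


if __name__ == "__main__":
    # Hypothetical numbers, for illustration only.
    print(pass_at_k(n=16, c=3, k=4))                     # estimated pass@4
    print(majority_vote(["42", "41", "42", "42", "7"]))  # -> "42"
```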

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 23 citations