Progress or Regress? Self-Improvement Reversal in Post-training

19 citations · ranked #926 of 3,827 papers in ICLR 2025

Abstract

Self-improvement through post-training methods such as iterative preference learning has been credited with enhancing the problem-solving capabilities (e.g., mathematical reasoning) of Large Language Models (LLMs) without human intervention. However, as this line of work matures, it becomes crucial to assess whether these improvements genuinely signify progress on more challenging problems or instead introduce unintended regressions. To address this, we propose a comprehensive evaluative framework that goes beyond the superficial pass@1 metric to scrutinize the underlying effects of post-training paradigms for self-improvement. Rigorous experimentation and analysis across diverse problem-solving tasks reveal the phenomenon of self-improvement reversal: models that show improved benchmark performance paradoxically exhibit declines in broader, essential capabilities such as output diversity and out-of-distribution (OOD) generalization. These findings indicate that current self-improvement practices through post-training are inadequate for equipping models to tackle more complex problems. Furthermore, they underscore the necessity of our critical evaluation metrics in discerning the progress-or-regress dichotomy for self-improving LLMs.
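
The "beyond pass@1" evaluation the abstract alludes to can be made concrete with the standard unbiased pass@k estimator (Chen et al., 2021) together with a simple output-diversity proxy. The sketch below is illustrative only: the distinct_n measure is an assumed stand-in for whatever diversity metric the paper actually uses, not a reproduction of its framework.

```python
import numpy as np

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator (Chen et al., 2021): probability that at
    least one of k samples, drawn from n generations of which c are
    correct, solves the problem."""
    if n - c < k:
        return 1.0
    return 1.0 - np.prod(1.0 - k / np.arange(n - c + 1, n + 1))

def distinct_n(outputs: list[str], n: int = 2) -> float:
    """Fraction of unique n-grams across sampled outputs; an assumed,
    simple proxy for the output-diversity axis mentioned in the abstract."""
    ngrams = []
    for text in outputs:
        tokens = text.split()
        ngrams.extend(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    return len(set(ngrams)) / max(len(ngrams), 1)

# Example: 20 samples per problem, 8 of them correct.
print(pass_at_k(n=20, c=8, k=1))   # 0.40, i.e. plain pass@1
print(pass_at_k(n=20, c=8, k=10))  # near 1.0; collapses toward pass@1 if sampling diversity collapses
```

A widening gap between pass@1 and pass@k (or a shrinking diversity score) across self-improvement iterations is the kind of signal a pass@1-only evaluation would miss.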

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 18 (+18)
Feb 3, 2026: 19 (+1)
Feb 13, 2026: 19