The Lottery LLM Hypothesis: Rethinking What Abilities Should LLM Compression Preserve?

arXiv:2502.17535
11 citations
#1413 in ICLR 2025 (of 3827 papers)
Abstract

Motivated by reducing the computational and storage costs of LLMs, model compression and KV cache compression have attracted much attention from researchers. However, current methods predominantly emphasize maintaining the performance of compressed LLMs, as measured by perplexity or simple accuracy on common-sense knowledge QA and basic arithmetic reasoning tasks. In this blog, we present a brief review of recent advancements in LLMs related to retrieval-augmented generation, multi-step reasoning, external tools, and computational expressivity, all of which substantially enhance LLM performance. We then propose the lottery LLM hypothesis: for a given LLM and task, there exists a smaller lottery LLM capable of matching the performance of the original LLM with the assistance of multi-step reasoning and external tools. Based on this review of current progress in LLMs, we discuss and summarize the essential capabilities that the lottery LLM and KV cache compression must possess, which existing methods currently overlook.

Citation History

Jan 26, 2026: 10
Feb 1, 2026: 10
Feb 6, 2026: 11 (+1)
Feb 13, 2026: 11