Twilight: Adaptive Attention Sparsity with Hierarchical Top-$p$ Pruning

14 citations · #460 of 5,858 papers in NeurIPS 2025 · 9 Top Authors · 6 Data Points

Abstract

Leveraging attention sparsity to accelerate long-context large language models (LLMs) has been a hot research topic. However, current algorithms such as sparse attention or key-value (KV) cache compression tend to use a fixed token budget, which is difficult to deploy in practice: it ignores the dynamic nature of real-world workloads, where the optimal balance between accuracy and efficiency can vary greatly. In this paper, we find that borrowing top-$p$ sampling (nucleus sampling) for sparse attention can surprisingly achieve adaptive budgeting. Based on this, we propose Twilight, a framework that brings adaptive sparsity to any existing sparse attention algorithm without sacrificing its accuracy. Empirical results show that Twilight can adaptively prune up to 98% of redundant tokens, yielding a $15.4\times$ speedup in self-attention operations and a $3.9\times$ speedup in end-to-end per-token latency for long-context LLM decoding.
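The core idea, selecting the smallest set of keys whose softmax attention weights accumulate a target probability mass $p$ (analogous to nucleus sampling over a vocabulary), can be sketched as follows. This is a minimal illustrative sketch, not the authors' Twilight implementation; the function name `top_p_attention_mask`, the tensor shapes, and the example threshold are assumptions.

```python
import torch

def top_p_attention_mask(scores: torch.Tensor, p: float = 0.98) -> torch.Tensor:
    """Nucleus-style pruning of attention: keep the smallest set of keys
    whose softmax weights sum to at least p.

    scores: attention logits of shape (..., num_keys)
    returns: boolean mask of the same shape (True = keep this key)
    """
    probs = torch.softmax(scores, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, dim=-1, descending=True)
    cumulative = torch.cumsum(sorted_probs, dim=-1)
    # Keep a key if the probability mass accumulated *before* it is still below p;
    # this always retains at least the single highest-weight key.
    keep_sorted = (cumulative - sorted_probs) < p
    mask = torch.zeros_like(probs, dtype=torch.bool)
    mask.scatter_(-1, sorted_idx, keep_sorted)
    return mask

# Example (hypothetical shapes): the retained budget adapts per query/head.
logits = torch.randn(1, 8, 4096)            # (batch, heads, keys)
mask = top_p_attention_mask(logits, p=0.98)
print(mask.sum(dim=-1))                      # per-head adaptive budgets
```

The budget emerges from the score distribution itself: heads with peaked attention keep few keys while flat heads keep many, which is the adaptive behavior the abstract describes.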

Citation History

Jan 26, 2026: 11
Jan 27, 2026: 11
Feb 3, 2026: 12 (+1)
Feb 13, 2026: 14 (+2)
Feb 13, 2026: 14
Feb 13, 2026: 14