"kv cache pruning" Papers
2 papers found
DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads
Guangxuan Xiao, Jiaming Tang, Jingwei Zuo et al.
ICLR 2025 · arXiv:2410.10819 · 179 citations
MUSTAFAR: Promoting Unstructured Sparsity for KV Cache Pruning in LLM Inference
Donghyeon Joo, Helya Hosseini, Ramyad Hadidi et al.
NeurIPS 2025 · arXiv:2505.22913 · 2 citations