"long-sequence inference" Papers
2 papers found
Ada-KV: Optimizing KV Cache Eviction by Adaptive Budget Allocation for Efficient LLM Inference
Yuan Feng, Junlin Lv, Yukun Cao et al.
NeurIPS 2025 · arXiv:2407.11550
106 citations
FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
Xunhao Lai, Jianqiao Lu, Yao Luo et al.
ICLR 2025 · arXiv:2502.20766
62 citations