"large language model inference" Papers
2 papers found
DeFT: Decoding with Flash Tree-attention for Efficient Tree-structured LLM Inference
Jinwei Yao, Kaiqi Chen, Kexun Zhang et al.
ICLR 2025 · arXiv:2404.00242 · 9 citations
Polar Sparsity: High Throughput Batched LLM Inferencing with Scalable Contextual Sparsity
Susav Shrestha, Bradley Settlemyer, Nikoli Dryden et al.
NeurIPS 2025 · arXiv:2505.14884 · 3 citations