Papers by Yao Luo
3 papers found
FlexPrefill: A Context-Aware Sparse Attention Mechanism for Efficient Long-Sequence Inference
Xunhao Lai, Jianqiao Lu, Yao Luo et al.
ICLR 2025 · arXiv:2502.20766 · 62 citations
Model Merging in Pre-training of Large Language Models
Yunshui Li, Yiyuan Ma, Shen Yan et al.
NeurIPS 2025 · arXiv:2505.12082 · 21 citations
Why Does the Effective Context Length of LLMs Fall Short?
Chenxin An, Jun Zhang, Ming Zhong et al.
ICLR 2025 · arXiv:2410.18745 · 42 citations