"state space models" Papers
65 papers found • Page 2 of 2
Sports-Traj: A Unified Trajectory Generation Model for Multi-Agent Movement in Sports
Yi Xu, Yun Fu
ICLR 2025 (oral) · arXiv:2405.17680
11 citations
State Space Models are Provably Comparable to Transformers in Dynamic Token Selection
Naoki Nishikawa, Taiji Suzuki
ICLR 2025 · arXiv:2405.19036
6 citations
Stuffed Mamba: Oversized States Lead to the Inability to Forget
Yingfa Chen, Xinrong Zhang, Shengding Hu et al.
COLM 2025
3 citations
ThunderKittens: Simple, Fast, and Adorable Kernels
Benjamin Spector, Simran Arora, Aaryan Singhal et al.
ICLR 2025
3 citations
TRUST: Test-Time Refinement using Uncertainty-Guided SSM Traverses
Sahar Dastani, Ali Bahri, Gustavo Vargas Hakim et al.
NeurIPS 2025 · arXiv:2509.22813
VSSD: Vision Mamba with Non-Causal State Space Duality
Yuheng Shi, Mingjia Li, Minjing Dong et al.
ICCV 2025 · arXiv:2407.18559
30 citations
Zebra-Llama: Towards Extremely Efficient Hybrid Models
Mingyu Yang, Mehdi Rezagholizadeh, Guihong Li et al.
NeurIPS 2025 · arXiv:2505.17272
7 citations
From Generalization Analysis to Optimization Designs for State Space Models
Fusheng Liu, Qianxiao Li
ICML 2024 (oral) · arXiv:2405.02670
11 citations
Hierarchical State Space Models for Continuous Sequence-to-Sequence Modeling
Raunaq Bhirangi, Chenyu Wang, Venkatesh Pattabiraman et al.
ICML 2024 (oral) · arXiv:2402.10211
19 citations
Motion Mamba: Efficient and Long Sequence Motion Generation
Zeyu Zhang, Akide Liu, Ian Reid et al.
ECCV 2024 · arXiv:2403.07487
114 citations
Probabilistic Time Series Modeling with Decomposable Denoising Diffusion Model
Tijin Yan, Hengheng Gong, Yongping He et al.
ICML 2024
Repeat After Me: Transformers are Better than State Space Models at Copying
Samy Jelassi, David Brandfonbrener, Sham Kakade et al.
ICML 2024 · arXiv:2402.01032
162 citations
Short-Long Convolutions Help Hardware-Efficient Linear Attention to Focus on Long Sequences
Zicheng Liu, Siyuan Li, Li Wang et al.
ICML 2024 · arXiv:2406.08128
10 citations
VideoMamba: State Space Model for Efficient Video Understanding
Kunchang Li, Xinhao Li, Yi Wang et al.
ECCV 2024 · arXiv:2403.06977
407 citations
Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model
Lianghui Zhu, Bencheng Liao, Qian Zhang et al.
ICML 2024 · arXiv:2401.09417
1457 citations