"transformer expressivity" Papers
4 papers found
Characterizing the Expressivity of Fixed-Precision Transformer Language Models
Jiaoda Li, Ryan Cotterell
NeurIPS 2025 (oral) · arXiv:2505.23623
4 citations
Language Models Need Inductive Biases to Count Inductively
Yingshan Chang, Yonatan Bisk
ICLR 2025 · arXiv:2405.20131
20 citations
Learning Linear Attention in Polynomial Time
Morris Yau, Ekin Akyürek, Jiayuan Mao et al.
NeurIPS 2025 (oral) · arXiv:2410.10101
4 citations
Graph As Point Set
Xiyuan Wang, Pan Li, Muhan Zhang
ICML 2024 · arXiv:2405.02795
4 citations