2 citations · ranked #1626 of 3340 papers in ICML 2025
Abstract
We present Banyan, a model that efficiently learns semantic representations by leveraging explicit hierarchical structure. While transformers excel at scale, they struggle in low-resource settings. Conversely, recent structured models have shown promise as efficient learners, but lag in performance. Banyan bridges this gap with two key innovations: an entangled hierarchical tree structure and diagonalized message passing, enabling it to outperform larger transformer models with just 14 non-embedding parameters. It excels in low-resource settings, offering a viable alternative for under-represented languages and highlighting its potential for efficient, interpretable NLP in resource-constrained environments.
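The abstract names diagonalized message passing as one of the two innovations. As a rough illustration only, not the authors' implementation, the sketch below shows the general idea of a diagonal (element-wise) composition of child embeddings up a tree: each merge costs O(d) parameters instead of the O(d^2) of a dense weight matrix. The parameter names (w_left, w_right, bias) and the tanh nonlinearity are assumptions for the sketch.

```python
import torch

# Minimal sketch of diagonal message passing (illustrative, not the paper's code):
# each child embedding is scaled by a learned per-dimension vector rather than
# multiplied by a dense d x d matrix, so a merge needs only O(d) parameters.

d = 8  # embedding dimension (toy size)

w_left = torch.nn.Parameter(torch.ones(d))   # diagonal weights for left child
w_right = torch.nn.Parameter(torch.ones(d))  # diagonal weights for right child
bias = torch.nn.Parameter(torch.zeros(d))

def compose(left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
    """Merge two child embeddings into a parent embedding using
    element-wise (diagonal) weights; nonlinearity is an assumption."""
    return torch.tanh(w_left * left + w_right * right + bias)

# Bottom-up pass over a tiny binary tree ((a b) c):
a, b, c = (torch.randn(d) for _ in range(3))
parent_ab = compose(a, b)      # internal node covering (a b)
root = compose(parent_ab, c)   # root covering ((a b) c)
print(root.shape)              # torch.Size([8])
```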
Citation History
Jan 28, 2026: 0
Feb 13, 2026: 2