Scaling Transformers for Low-Bitrate High-Quality Speech Coding

62 citations · #254 of 3,827 papers in ICLR 2025 · 7 top authors · 7 data points

Abstract

The tokenization of speech with neural audio codec models is a vital part of modern AI pipelines for the generation or understanding of speech, alone or in a multimodal context. Traditionally, such tokenization models have concentrated on low parameter-count architectures using only components with strong inductive biases. In this work we show that by scaling a transformer architecture with a large parameter count to this problem, and applying a flexible Finite Scalar Quantization (FSQ) based bottleneck, it is possible to reach state-of-the-art speech quality at extremely low bitrates of $400$ or $700$ bits per second. The trained models strongly outperform existing baselines in both objective and subjective tests.
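To make the FSQ bottleneck mentioned in the abstract concrete, the following is a minimal NumPy sketch of Finite Scalar Quantization in general (bound each latent dimension, round it to a small fixed set of levels, and count bits as the sum of log2 of the per-dimension level counts). It is not the paper's implementation; the number of dimensions, the level counts, and the 25 Hz frame rate below are illustrative assumptions only.

```python
import numpy as np

def fsq_quantize(z, levels):
    """Finite Scalar Quantization (FSQ) of a latent vector.

    Each dimension i is squashed into a bounded range and rounded to one of
    levels[i] uniformly spaced values (odd level counts assumed here). The
    codebook is implicit: the code is just the tuple of per-dimension indices.
    """
    z = np.asarray(z, dtype=np.float64)
    half = (np.asarray(levels, dtype=np.float64) - 1.0) / 2.0
    bounded = np.tanh(z) * half        # bound each dimension to [-half, half]
    codes = np.round(bounded)          # integer code per dimension
    dequant = codes / half             # dequantized value back in [-1, 1]
    return codes, dequant

def bits_per_second(levels, frame_rate_hz):
    """Bits per frame is sum(log2(levels)); multiply by the frame rate for bps."""
    return frame_rate_hz * float(np.sum(np.log2(levels)))

if __name__ == "__main__":
    # Hypothetical configuration: 8 dimensions with 5 levels each at 25 frames
    # per second gives 25 * 8 * log2(5) ≈ 464 bps, i.e. the same order of
    # magnitude as the 400-700 bps operating points quoted in the abstract.
    rng = np.random.default_rng(0)
    codes, zq = fsq_quantize(rng.normal(size=8), levels=[5] * 8)
    print(codes, round(bits_per_second([5] * 8, 25.0), 1))
```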

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 54 (+54)
Jan 27, 2026: 54
Feb 3, 2026: 57 (+3)
Feb 13, 2026: 62 (+5)