Tensor-Parallelism with Partially Synchronized Activations

2 citations · #1951 of 5858 papers in NeurIPS 2025

Abstract

Training and inference of Large Language Models (LLMs) with tensor-parallelism requires substantial communication to synchronize activations. Our findings suggest that, with a few minor adjustments to current practices, LLMs can be trained without fully synchronizing activations, reducing bandwidth demands. We call this approach "Communication-Aware Architecture for Tensor-parallelism" (CAAT-Net). We train a 7B parameter CAAT-Net model and show that tensor-parallel communication can be reduced by up to 50% with no significant drop in pretraining accuracy across nearly all evaluated benchmarks. We also experiment with smaller 130M and 1.1B models to show the robustness and scalability of our method. We find that, in some scenarios, validation loss can even improve when reducing communication. Finally, we demonstrate how CAAT-Net accelerates both training and inference workloads across various settings and model sizes.
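
The abstract does not spell out the mechanism, but one concrete way to picture "partially synchronized activations" is a tensor-parallel block that all-reduces only a fraction of its output channels rather than the full activation tensor. The sketch below is an illustration under that assumption, not the authors' implementation; the class name `PartiallySyncedParallelMLP` and the `sync_fraction` parameter are invented for this example.

```python
# Hypothetical sketch of partial activation synchronization in a
# tensor-parallel MLP shard. This is NOT the CAAT-Net code; it only
# illustrates one plausible reading of the abstract, with made-up
# names such as `sync_fraction` and `PartiallySyncedParallelMLP`.

import torch
import torch.nn as nn
import torch.distributed as dist


class PartiallySyncedParallelMLP(nn.Module):
    def __init__(self, hidden: int, ffn: int, sync_fraction: float = 0.5):
        super().__init__()
        # Each tensor-parallel rank holds a shard of the FFN weights
        # (column-parallel up-projection, row-parallel down-projection).
        self.up = nn.Linear(hidden, ffn, bias=False)
        self.down = nn.Linear(ffn, hidden, bias=False)
        # Number of output channels whose partial sums are all-reduced.
        self.n_sync = int(hidden * sync_fraction)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.down(torch.nn.functional.gelu(self.up(x)))
        if dist.is_available() and dist.is_initialized():
            # Standard tensor parallelism would all-reduce all of `y`.
            # Here only the first `n_sync` channels are summed across
            # ranks, halving communication volume at sync_fraction = 0.5.
            synced = y[..., : self.n_sync].contiguous()
            dist.all_reduce(synced, op=dist.ReduceOp.SUM)
            y = torch.cat([synced, y[..., self.n_sync:]], dim=-1)
        return y


if __name__ == "__main__":
    # Single-process smoke test: without an initialized process group the
    # block behaves like a plain local MLP shard, so shapes can be checked.
    block = PartiallySyncedParallelMLP(hidden=512, ffn=2048, sync_fraction=0.5)
    print(block(torch.randn(2, 16, 512)).shape)  # torch.Size([2, 16, 512])
```

With `sync_fraction = 0.5`, the all-reduce payload is half that of a fully synchronized tensor-parallel MLP, which is consistent with the up-to-50% communication reduction reported in the abstract.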

Citation History

Jan 25, 2026: 1 citation
Feb 13, 2026: 2 citations