Statistical Inference for Decentralized Federated Learning

NeurIPS 2025

Abstract

This paper considers decentralized Federated Learning (FL) under heterogeneous distributions among distributed clients or data blocks for M-estimation. The mean squared error and consensus error across the estimators from different clients via the decentralized stochastic gradient descent (SGD) algorithm are derived. The asymptotic normality of the Polyak–Ruppert (PR) averaged estimator in the decentralized distributed setting is established, which shows that its statistical efficiency comes at a cost: the permitted number of clients is more restrictive than in distributed M-estimation. To overcome this restriction, a one-step estimator is proposed that permits a much larger number of clients while still achieving the same efficiency as the original PR-averaged estimator in the non-distributed setting. Confidence regions based on both the PR-averaged estimator and the proposed one-step estimator are constructed to facilitate statistical inference for decentralized FL.
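As a rough illustration of the algorithmic setup the abstract describes, below is a minimal sketch of decentralized SGD with online Polyak–Ruppert averaging on a toy least-squares M-estimation problem. The ring-topology mixing matrix `W`, the step-size schedule, and the data model are placeholder assumptions chosen for illustration, not the paper's actual specification.

```python
import numpy as np

rng = np.random.default_rng(0)
m, d, T = 4, 3, 2000             # clients, parameter dimension, iterations
theta_true = rng.normal(size=d)  # hypothetical ground-truth parameter

# Doubly stochastic mixing matrix for a ring of m clients (assumed topology)
W = np.zeros((m, m))
for i in range(m):
    W[i, i] = 0.5
    W[i, (i - 1) % m] = 0.25
    W[i, (i + 1) % m] = 0.25

theta = np.zeros((m, d))   # one local iterate per client
pr_avg = np.zeros((m, d))  # running Polyak-Ruppert averages of the iterates

for t in range(1, T + 1):
    eta = 0.5 / t**0.6  # diminishing step size (illustrative schedule)
    # Each client draws a fresh local sample; client-specific data
    # distributions would introduce the heterogeneity studied in the paper.
    x = rng.normal(size=(m, d))
    y = x @ theta_true + rng.normal(scale=0.5, size=m)
    # Per-client stochastic gradient of the least-squares loss
    grad = (np.sum(x * theta, axis=1) - y)[:, None] * x
    # Gossip step: mix with neighbors via W, then take a local SGD step
    theta = W @ theta - eta * grad
    # Online Polyak-Ruppert average of each client's trajectory
    pr_avg += (theta - pr_avg) / t

print("PR-averaged estimate (client 0):", pr_avg[0])
print("true parameter:                 ", theta_true)
```

In this kind of scheme, each client first averages its iterate with its neighbors' via `W` and then takes a local stochastic gradient step; the PR average is the running mean of the local iterates, which is the quantity whose limiting distribution and confidence regions the paper studies.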
