"distributed optimization" Papers

32 papers found

Accelerated Methods with Compressed Communications for Distributed Optimization Problems Under Data Similarity

Dmitry Bylinkin, Aleksandr Beznosikov

AAAI 2025 · paper · arXiv:2412.16414
3 citations

Communication-Efficient Language Model Training Scales Reliably and Robustly: Scaling Laws for DiLoCo

Zachary Charles, Gabriel Teston, Lucio Dery et al.

NEURIPS 2025 · spotlight · arXiv:2503.09799
14 citations

Computation and Memory-Efficient Model Compression with Gradient Reweighting

Zhiwei Li, Yuesen Liao, Binrui Wu et al.

NEURIPS 2025

Connecting Federated ADMM to Bayes

Siddharth Swaroop, Mohammad Emtiyaz Khan, Finale Doshi-Velez

ICLR 2025 · arXiv:2501.17325
4 citations

Deep Distributed Optimization for Large-Scale Quadratic Programming

Augustinos Saravanos, Hunter Kuperman, Alex Oshin et al.

ICLR 2025 · arXiv:2412.12156
14 citations

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin et al.

NEURIPS 2025 · arXiv:2410.16871
8 citations

FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning

Yunbo Li, Jiaping Gui, Zhihang Deng et al.

NEURIPS 2025 · arXiv:2510.07664

FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization

Seung-Wook Kim, Seongyeol Kim, Jiah Kim et al.

ICCV 2025 · arXiv:2506.23516

Graph Neural Networks Gone Hogwild

Olga Solodova, Nick Richardson, Deniz Oktay et al.

ICLR 2025 · arXiv:2407.00494
1 citation

Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence

Shayan Talaei, Matin Ansaripour, Giorgi Nadiradze et al.

AAAI 2025 · paper · arXiv:2210.07703
1 citation

Layer-wise Update Aggregation with Recycling for Communication-Efficient Federated Learning

Jisoo Kim, Sungmin Kang, Sunwoo Lee

NEURIPS 2025 · arXiv:2503.11146
1 citation

Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression

Michael Crawshaw, Blake Woodworth, Mingrui Liu

ICLR 2025 · arXiv:2501.13790
1 citation

Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing

Elad Romanov, Fangzhao Zhang, Mert Pilanci

ICLR 2025 · arXiv:2410.01374
2 citations

Revisiting Consensus Error: A Fine-grained Analysis of Local SGD under Second-order Data Heterogeneity

Kumar Kshitij Patel, Ali Zindari, Sebastian Stich et al.

NEURIPS 2025

Tackling Intertwined Data and Device Heterogeneities in Federated Learning with Unlimited Staleness

Haoming Wang, Wei Gao

AAAI 2025 · paper · arXiv:2309.13536
2 citations

Tight Bounds for Maximum Weight Matroid Independent Set and Matching in the Zero Communication Model

Ilan Doron-Arad

NEURIPS 2025

Understanding Outer Learning Rates in Local SGD

Ahmed Khaled, Satyen Kale, Arthur Douillard et al.

NEURIPS 2025

A New Theoretical Perspective on Data Heterogeneity in Federated Optimization

Jiayi Wang, Shiqiang Wang, Rong-Rong Chen et al.

ICML 2024 · arXiv:2407.15567
3 citations

A Primal-Dual Algorithm for Hybrid Federated Learning

Tom Overman, Garrett Blum, Diego Klabjan

AAAI 2024 · paper · arXiv:2210.08106
9 citations

A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle

Nadav Hallak, Kfir Levy

ICML 2024

Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients

Mengmeng Ma, Tang Li, Xi Peng

ICML 2024 · arXiv:2407.04949
7 citations

Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates

Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui et al.

ICML 2024 · arXiv:2402.12780
13 citations

Distributed Bilevel Optimization with Communication Compression

Yutong He, Jie Hu, Xinmeng Huang et al.

ICML 2024 · arXiv:2405.18858
2 citations

Faster Adaptive Decentralized Learning Algorithms

Feihu Huang, Jianyu Zhao

ICML 2024 · spotlight · arXiv:2408.09775
3 citations

FedASMU: Efficient Asynchronous Federated Learning with Dynamic Staleness-Aware Model Update

Ji Liu, Juncheng Jia, Tianshi Che et al.

AAAI 2024 · paper · arXiv:2312.05770
75 citations

Federated Optimization with Doubly Regularized Drift Correction

Xiaowen Jiang, Anton Rodomanov, Sebastian Stich

ICML 2024 · arXiv:2404.08447
14 citations

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise

Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova et al.

ICML 2024 · arXiv:2310.01860
25 citations

LASER: Linear Compression in Wireless Distributed Optimization

Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels et al.

ICML 2024 · arXiv:2310.13033
7 citations

Lessons from Generalization Error Analysis of Federated Learning: You May Communicate Less Often!

Milad Sefidgaran, Romain Chor, Abdellatif Zaidi et al.

ICML 2024 · arXiv:2306.05862
10 citations

On the Complexity of Finite-Sum Smooth Optimization under the Polyak–Łojasiewicz Condition

Yunyan Bai, Yuxing Liu, Luo Luo

ICML 2024 · spotlight · arXiv:2402.02569
2 citations

Reweighted Solutions for Weighted Low Rank Approximation

David Woodruff, Taisuke Yasuda

ICML 2024 · arXiv:2406.02431
2 citations

Robust Beamforming for Downlink Multi-Cell Systems: A Bilevel Optimization Perspective

Xingdi Chen, Yu Xiong, Kai Yang

AAAI 2024 · paper · arXiv:2401.11409
4 citations