Poster "distributed optimization" Papers

23 papers found

Computation and Memory-Efficient Model Compression with Gradient Reweighting

Zhiwei Li, Yuesen Liao, Binrui Wu et al.

NEURIPS 2025

Connecting Federated ADMM to Bayes

Siddharth Swaroop, Mohammad Emtiyaz Khan, Finale Doshi-Velez

ICLR 2025 · arXiv:2501.17325 · 4 citations

Deep Distributed Optimization for Large-Scale Quadratic Programming

Augustinos Saravanos, Hunter Kuperman, Alex Oshin et al.

ICLR 2025 · arXiv:2412.12156 · 14 citations

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin et al.

NEURIPS 2025 · arXiv:2410.16871 · 8 citations

FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning

Yunbo Li, Jiaping Gui, Zhihang Deng et al.

NEURIPS 2025 · arXiv:2510.07664

FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization

Seung-Wook Kim, Seongyeol Kim, Jiah Kim et al.

ICCV 2025 · arXiv:2506.23516

Graph Neural Networks Gone Hogwild

Olga Solodova, Nick Richardson, Deniz Oktay et al.

ICLR 2025 · arXiv:2407.00494 · 1 citation

Layer-wise Update Aggregation with Recycling for Communication-Efficient Federated Learning

Jisoo Kim, Sungmin Kang, Sunwoo Lee

NEURIPS 2025 · arXiv:2503.11146 · 1 citation

Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression

Michael Crawshaw, Blake Woodworth, Mingrui Liu

ICLR 2025 · arXiv:2501.13790 · 1 citation

Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing

Elad Romanov, Fangzhao Zhang, Mert Pilanci

ICLR 2025 · arXiv:2410.01374 · 2 citations

Revisiting Consensus Error: A Fine-grained Analysis of Local SGD under Second-order Data Heterogeneity

Kumar Kshitij Patel, Ali Zindari, Sebastian Stich et al.

NEURIPS 2025

Tight Bounds for Maximum Weight Matroid Independent Set and Matching in the Zero Communication Model

Ilan Doron-Arad

NEURIPS 2025

Understanding outer learning rates in Local SGD

Ahmed Khaled, Satyen Kale, Arthur Douillard et al.

NEURIPS 2025

A New Theoretical Perspective on Data Heterogeneity in Federated Optimization

Jiayi Wang, Shiqiang Wang, Rong-Rong Chen et al.

ICML 2024 · arXiv:2407.15567 · 3 citations

A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle

Nadav Hallak, Kfir Levy

ICML 2024

Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients

Mengmeng Ma, Tang Li, Xi Peng

ICML 2024 · arXiv:2407.04949 · 7 citations

Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates

Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui et al.

ICML 2024 · arXiv:2402.12780 · 13 citations

Distributed Bilevel Optimization with Communication Compression

Yutong He, Jie Hu, Xinmeng Huang et al.

ICML 2024 · arXiv:2405.18858 · 2 citations

Federated Optimization with Doubly Regularized Drift Correction

Xiaowen Jiang, Anton Rodomanov, Sebastian Stich

ICML 2024 · arXiv:2404.08447 · 14 citations

High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise

Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova et al.

ICML 2024 · arXiv:2310.01860 · 25 citations

LASER: Linear Compression in Wireless Distributed Optimization

Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels et al.

ICML 2024 · arXiv:2310.13033 · 7 citations

Lessons from Generalization Error Analysis of Federated Learning: You May Communicate Less Often!

Milad Sefidgaran, Romain Chor, Abdellatif Zaidi et al.

ICML 2024 · arXiv:2306.05862 · 10 citations

Reweighted Solutions for Weighted Low Rank Approximation

David Woodruff, Taisuke Yasuda

ICML 2024 · arXiv:2406.02431 · 2 citations