Poster "distributed optimization" Papers
23 papers found
Computation and Memory-Efficient Model Compression with Gradient Reweighting
Zhiwei Li, Yuesen Liao, Binrui Wu et al.
Connecting Federated ADMM to Bayes
Siddharth Swaroop, Mohammad Emtiyaz Khan, Finale Doshi-Velez
Deep Distributed Optimization for Large-Scale Quadratic Programming
Augustinos Saravanos, Hunter Kuperman, Alex Oshin et al.
Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum
Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin et al.
FedQS: Optimizing Gradient and Model Aggregation for Semi-Asynchronous Federated Learning
Yunbo Li, Jiaping Gui, Zhihang Deng et al.
FedWSQ: Efficient Federated Learning with Weight Standardization and Distribution-Aware Non-Uniform Quantization
Seung-Wook Kim, Seongyeol Kim, Jiah Kim et al.
Graph Neural Networks Gone Hogwild
Olga Solodova, Nick Richardson, Deniz Oktay et al.
Layer-wise Update Aggregation with Recycling for Communication-Efficient Federated Learning
Jisoo Kim, Sungmin Kang, Sunwoo Lee
Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression
Michael Crawshaw, Blake Woodworth, Mingrui Liu
Newton Meets Marchenko-Pastur: Massively Parallel Second-Order Optimization with Hessian Sketching and Debiasing
Elad Romanov, Fangzhao Zhang, Mert Pilanci
Revisiting Consensus Error: A Fine-grained Analysis of Local SGD under Second-order Data Heterogeneity
Kumar Kshitij Patel, Ali Zindari, Sebastian Stich et al.
Tight Bounds for Maximum Weight Matroid Independent Set and Matching in the Zero Communication Model
Ilan Doron-Arad
Understanding outer learning rates in Local SGD
Ahmed Khaled, Satyen Kale, Arthur Douillard et al.
A New Theoretical Perspective on Data Heterogeneity in Federated Optimization
Jiayi Wang, Shiqiang Wang, Rong-Rong Chen et al.
A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle
Nadav Hallak, Kfir Levy
Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients
Mengmeng Ma, Tang Li, Xi Peng
Byzantine-Robust Federated Learning: Impact of Client Subsampling and Local Updates
Youssef Allouah, Sadegh Farhadkhani, Rachid Guerraoui et al.
Distributed Bilevel Optimization with Communication Compression
Yutong He, Jie Hu, Xinmeng Huang et al.
Federated Optimization with Doubly Regularized Drift Correction
Xiaowen Jiang, Anton Rodomanov, Sebastian Stich
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise
Eduard Gorbunov, Abdurakhmon Sadiev, Marina Danilova et al.
LASER: Linear Compression in Wireless Distributed Optimization
Ashok Vardhan Makkuva, Marco Bondaschi, Thijs Vogels et al.
Lessons from Generalization Error Analysis of Federated Learning: You May Communicate Less Often!
Milad Sefidgaran, Romain Chor, Abdellatif Zaidi et al.
Reweighted Solutions for Weighted Low Rank Approximation
David Woodruff, Taisuke Yasuda