Peter Richtarik
27 papers · 1,160 total citations

Papers (27)
Lower Bounds and Optimal Algorithms for Personalized Federated Learning · NEURIPS 2020 · arXiv · 208 citations
Random Reshuffling: Simple Analysis with Vast Improvements · NEURIPS 2020 · arXiv · 151 citations
Optimal and Practical Algorithms for Smooth and Strongly Convex Decentralized Optimization · NEURIPS 2020 · arXiv · 99 citations
Linearly Converging Error Compensated SGD · NEURIPS 2020 · arXiv · 86 citations
BEER: Fast $O(1/T)$ Rate for Decentralized Nonconvex Optimization with Communication Compression · NEURIPS 2022 · arXiv · 65 citations
Error Compensated Distributed SGD Can Be Accelerated · NEURIPS 2021 · arXiv · 56 citations
Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex Decentralized Optimization Over Time-Varying Networks · NEURIPS 2021 · arXiv · 48 citations
A Guide Through the Zoo of Biased SGD · NEURIPS 2023 · arXiv · 47 citations
Primal Dual Interpretation of the Proximal Stochastic Gradient Langevin Algorithm · NEURIPS 2020 · arXiv · 44 citations
Variance Reduced ProxSkip: Algorithm, Theory and Application to Federated Learning · NEURIPS 2022 · arXiv · 44 citations
Momentum Provably Improves Error Feedback! · NEURIPS 2023 · arXiv · 40 citations
Accelerated Primal-Dual Gradient Method for Smooth and Convex-Concave Saddle-Point Problems with Bilinear Coupling · NEURIPS 2022 · arXiv · 39 citations
Optimal Algorithms for Decentralized Stochastic Variational Inequalities · NEURIPS 2022 · arXiv · 38 citations
CANITA: Faster Rates for Distributed Convex Optimization with Communication Compression · NEURIPS 2021 · arXiv · 36 citations
Smoothness Matrices Beat Smoothness Constants: Better Communication Compression Techniques for Distributed Optimization · NEURIPS 2021 · arXiv · 27 citations
EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization · NEURIPS 2022 · arXiv · 26 citations
High-Probability Convergence for Composite and Distributed Stochastic Minimization and Variational Inequalities with Heavy-Tailed Noise · ICML 2024 · arXiv · 25 citations
Distributed Methods with Compressed Communication for Solving Variational Inequalities, with Theoretical Guarantees · NEURIPS 2022 · arXiv · 22 citations
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model · NEURIPS 2023 · arXiv · 22 citations
Theoretically Better and Numerically Faster Distributed Optimization with Smoothness-Aware Quantization Techniques · NEURIPS 2022 · arXiv · 11 citations
Towards a Better Theoretical Understanding of Independent Subnetwork Training · ICML 2024 · arXiv · 8 citations
2Direction: Theoretically Faster Distributed Training with Bidirectional Communication Compression · NEURIPS 2023 · arXiv · 7 citations
A Computation and Communication Efficient Method for Distributed Nonconvex Problems in the Partial Participation Setting · NEURIPS 2023 · arXiv · 6 citations
Minibatch Stochastic Three Points Method for Unconstrained Smooth Minimization · AAAI 2024 · arXiv · 5 citations
EF21: A New, Simpler, Theoretically Better, and Practically Faster Error Feedback · NEURIPS 2021 · 0 citations
A Damped Newton Method Achieves Global $\mathcal O \left(\frac{1}{k^2}\right)$ and Local Quadratic Convergence Rate · NEURIPS 2022 · 0 citations
Communication Acceleration of Local Gradient Methods via an Accelerated Primal-Dual Algorithm with an Inexact Prox · NEURIPS 2022 · 0 citations