Wotao Yin
16 papers · 1,185 total citations

Papers (16)
- FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting (NeurIPS 2022, arXiv; 290 citations)
- An Improved Analysis of Stochastic Gradient Descent with Momentum (NeurIPS 2020, arXiv; 288 citations)
- An Improved Analysis of (Variance-Reduced) Policy Gradient and Natural Policy Gradient Methods (NeurIPS 2020, arXiv; 124 citations)
- Revisiting Zeroth-Order Optimization for Memory-Efficient LLM Fine-Tuning: A Benchmark (ICML 2024, arXiv; 107 citations)
- Exponential Graph is Provably Efficient for Decentralized Deep Training (NeurIPS 2021, arXiv; 107 citations)
- DecentLaM: Decentralized Momentum SGD for Large-Batch Deep Training (ICCV 2021, arXiv; 69 citations)
- Learned Robust PCA: A Scalable Deep Unfolding Approach for High-Dimensional Outlier Detection (NeurIPS 2021, arXiv; 56 citations)
- Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate (NeurIPS 2022, arXiv; 43 citations)
- Lower Bounds and Nearly Optimal Algorithms in Distributed Learning with Communication Compression (NeurIPS 2022, arXiv; 38 citations)
- Hyperparameter Tuning is All You Need for LISTA (NeurIPS 2021, arXiv; 30 citations)
- An Improved Analysis and Rates for Variance Reduction under Without-replacement Sampling Orders (NeurIPS 2021, arXiv; 16 citations)
- Provably Efficient Exploration for Reinforcement Learning Using Unsupervised Learning (NeurIPS 2020, arXiv; 7 citations)
- Efficient Algorithms for Sum-Of-Minimum Optimization (ICML 2024, arXiv; 7 citations)
- Block Acceleration Without Momentum: On Optimal Stepsizes of Block Gradient Descent for Least-Squares (ICML 2024, arXiv; 2 citations)
- Subsampled Ensemble Can Improve Generalization Tail Exponentially (NeurIPS 2025, arXiv; 1 citation)
- Closing the Gap: Tighter Analysis of Alternating Stochastic Gradient Methods for Bilevel Problems (NeurIPS 2021; 0 citations)