"nonconvex optimization" Papers
24 papers found
Adaptive backtracking for faster optimization
Joao V. Cavalcanti, Laurent Lessard, Ashia Wilson
ADMM for Structured Fractional Minimization
Ganzhao Yuan
Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum
Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin et al.
Escaping saddle points without Lipschitz smoothness: the power of nonlinear preconditioning
Alexander Bodard, Panagiotis Patrinos
Finite-Time Analysis of Stochastic Nonconvex Nonsmooth Optimization on the Riemannian Manifolds
Emre Sahinoglu, Youbang Sun, Shahin Shahrampour
Improving Convergence Guarantees of Random Subspace Second-order Algorithm for Nonconvex Optimization
Rei Higuchi, Pierre-Louis Poirion, Akiko Takeda
Langevin Multiplicative Weights Update with Applications in Polynomial Portfolio Management
Yi Feng, Xiao Wang, Tian Xie
Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping
Zijian Liu, Zhengyuan Zhou
On the Crucial Role of Initialization for Matrix Factorization
Bingcong Li, Liang Zhang, Aryan Mokhtari et al.
Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods
Daniil Vankov, Anton Rodomanov, Angelia Nedich et al.
Problem-Parameter-Free Federated Learning
Wenjing Yan, Kai Zhang, Xiaolu Wang et al.
Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$
Bowei Zhu, Shaojie Li, Mingyang Yi et al.
Zeroth-Order Methods for Nonconvex Stochastic Problems with Decision-Dependent Distributions
Yuya Hikima, Akiko Takeda
A Doubly Recursive Stochastic Compositional Gradient Descent Method for Federated Multi-Level Compositional Optimization
Hongchang Gao
A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle
Nadav Hallak, Kfir Levy
Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms
Yuchen Li, Laura Balzano, Deanna Needell et al.
Convergence Guarantees for the DeepWalk Embedding on Block Models
Christopher Harker, Aditya Bhaskara
Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization
Ruizhong Qiu, Hanghang Tong
Quantum Algorithms and Lower Bounds for Finite-Sum Optimization
Yexin Zhang, Chenyi Zhang, Cong Fang et al.
SPABA: A Single-Loop and Probabilistic Stochastic Bilevel Algorithm Achieving Optimal Sample Complexity
Tianshu Chu, Dachuan Xu, Wei Yao et al.
Towards Certified Unlearning for Deep Neural Networks
Binchi Zhang, Yushun Dong, Tianhao Wang et al.
Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape
Juno Kim, Taiji Suzuki
Tuning-Free Stochastic Optimization
Ahmed Khaled, Chi Jin
Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization
Zhuanghua Liu, Cheng Chen, Luo Luo et al.