"nonconvex optimization" Papers

24 papers found

Adaptive backtracking for faster optimization

Joao V. Cavalcanti, Laurent Lessard, Ashia Wilson

ICLR 2025
3 citations

ADMM for Structured Fractional Minimization

Ganzhao Yuan

ICLR 2025 · arXiv:2411.07496
1 citation

Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum

Sarit Khirirat, Abdurakhmon Sadiev, Artem Riabinin et al.

NeurIPS 2025 · arXiv:2410.16871
8 citations

Escaping saddle points without Lipschitz smoothness: the power of nonlinear preconditioning

Alexander Bodard, Panagiotis Patrinos

NeurIPS 2025 · spotlight · arXiv:2509.15817
2 citations

Finite-Time Analysis of Stochastic Nonconvex Nonsmooth Optimization on the Riemannian Manifolds

Emre Sahinoglu, Youbang Sun, Shahin Shahrampour

NeurIPS 2025 · arXiv:2510.21468

Improving Convergence Guarantees of Random Subspace Second-order Algorithm for Nonconvex Optimization

Rei Higuchi, Pierre-Louis Poirion, Akiko Takeda

ICLR 2025 · arXiv:2406.14337
1 citation

Langevin Multiplicative Weights Update with Applications in Polynomial Portfolio Management

Yi Feng, Xiao Wang, Tian Xie

AAAI 2025 · arXiv:2502.19210
1 citation

Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping

Zijian Liu, Zhengyuan Zhou

ICLR 2025 · arXiv:2412.19529
28 citations

On the Crucial Role of Initialization for Matrix Factorization

Bingcong Li, Liang Zhang, Aryan Mokhtari et al.

ICLR 2025 · arXiv:2410.18965
11 citations

Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods

Daniil Vankov, Anton Rodomanov, Angelia Nedich et al.

ICLR 2025 · arXiv:2410.10800
24 citations

Problem-Parameter-Free Federated Learning

Wenjing Yan, Kai Zhang, Xiaolu Wang et al.

ICLR 2025

Stability and Sharper Risk Bounds with Convergence Rate $\tilde{O}(1/n^2)$

Bowei Zhu, Shaojie Li, Mingyang Yi et al.

NeurIPS 2025 · arXiv:2410.09766
1 citation

Zeroth-Order Methods for Nonconvex Stochastic Problems with Decision-Dependent Distributions

Yuya Hikima, Akiko Takeda

AAAI 2025 · arXiv:2412.20330
3 citations

A Doubly Recursive Stochastic Compositional Gradient Descent Method for Federated Multi-Level Compositional Optimization

Hongchang Gao

ICML 2024

A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle

Nadav Hallak, Kfir Levy

ICML 2024

Convergence and Complexity Guarantee for Inexact First-order Riemannian Optimization Algorithms

Yuchen Li, Laura Balzano, Deanna Needell et al.

ICML 2024 · arXiv:2405.03073
1 citation

Convergence Guarantees for the DeepWalk Embedding on Block Models

Christopher Harker, Aditya Bhaskara

ICML 2024 · arXiv:2410.20248

Gradient Compressed Sensing: A Query-Efficient Gradient Estimator for High-Dimensional Zeroth-Order Optimization

Ruizhong Qiu, Hanghang Tong

ICML 2024 · arXiv:2405.16805
11 citations

Quantum Algorithms and Lower Bounds for Finite-Sum Optimization

Yexin Zhang, Chenyi Zhang, Cong Fang et al.

ICML 2024 · arXiv:2406.03006
5 citations

SPABA: A Single-Loop and Probabilistic Stochastic Bilevel Algorithm Achieving Optimal Sample Complexity

Tianshu Chu, Dachuan Xu, Wei Yao et al.

ICML 2024 · arXiv:2405.18777
6 citations

Towards Certified Unlearning for Deep Neural Networks

Binchi Zhang, Yushun Dong, Tianhao Wang et al.

ICML 2024 · arXiv:2408.00920
25 citations

Transformers Learn Nonlinear Features In Context: Nonconvex Mean-field Dynamics on the Attention Landscape

Juno Kim, Taiji Suzuki

ICML 2024 · arXiv:2402.01258
38 citations

Tuning-Free Stochastic Optimization

Ahmed Khaled, Chi Jin

ICML 2024 · spotlight · arXiv:2402.07793
13 citations

Zeroth-Order Methods for Constrained Nonconvex Nonsmooth Stochastic Optimization

Zhuanghua Liu, Cheng Chen, Luo Luo et al.

ICML 2024