"convex optimization" Papers

35 papers found

Adaptive backtracking for faster optimization

João V. Cavalcanti, Laurent Lessard, Ashia Wilson

ICLR 2025
3 citations

Approximating Metric Magnitude of Point Sets

Rayna Andreeva, James Ward, Primož Škraba et al.

AAAI 2025 · arXiv:2409.04411
3 citations

Controlling the Flow: Stability and Convergence for Stochastic Gradient Descent with Decaying Regularization

Sebastian Kassing, Simon Weissmann, Leif Döring

NeurIPS 2025 · arXiv:2505.11434
2 citations

Convergence of Clipped SGD on Convex $(L_0,L_1)$-Smooth Functions

Ofir Gaash, Kfir Y. Levy, Yair Carmon

NeurIPS 2025 · arXiv:2502.16492
5 citations

Convergence of Distributed Adaptive Optimization with Local Updates

Ziheng Cheng, Margalit Glasgow

ICLR 2025 · arXiv:2409.13155
5 citations

Descent with Misaligned Gradients and Applications to Hidden Convexity

Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar et al.

ICLR 2025

Exploiting Hidden Symmetry to Improve Objective Perturbation for DP Linear Learners with a Nonsmooth L1-Norm

Du Chen, Geoffrey A. Chua

ICLR 2025

Gradient correlation is a key ingredient to accelerate SGD with momentum

Julien Hermant, Marien Renaud, Jean-François Aujol et al.

ICLR 2025 · arXiv:2410.07870
3 citations

Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence

Shayan Talaei, Matin Ansaripour, Giorgi Nadiradze et al.

AAAI 2025 · arXiv:2210.07703
1 citation

Isotropic Noise in Stochastic and Quantum Convex Optimization

Annie Marsden, Liam O'Carroll, Aaron Sidford et al.

NeurIPS 2025 · arXiv:2510.20745

Langevin Monte Carlo Beyond Lipschitz Gradient Continuity

Matej Benko, Iwona Chlebicka, Jørgen Endal et al.

AAAI 2025 · arXiv:2412.09698
2 citations

Max Entropy Moment Kalman Filter for Polynomial Systems with Arbitrary Noise

Sangli Teng, Harry Zhang, David Jin et al.

NeurIPS 2025 · arXiv:2506.00838
2 citations

MixMax: Distributional Robustness in Function Space via Optimal Data Mixtures

Anvith Thudi, Chris Maddison

ICLR 2025 · arXiv:2406.01477
1 citation

New Perspectives on the Polyak Stepsize: Surrogate Functions and Negative Results

Francesco Orabona, Ryan D'Orazio

NeurIPS 2025 · arXiv:2505.20219
6 citations

Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods

Daniil Vankov, Anton Rodomanov, Angelia Nedich et al.

ICLR 2025 · arXiv:2410.10800
24 citations

When Confidence Fails: Revisiting Pseudo-Label Selection in Semi-supervised Semantic Segmentation

Pan Liu, Jinshi Liu

ICCV 2025 (highlight) · arXiv:2509.16704
1 citation

Adaptive Proximal Gradient Methods Are Universal Without Approximation

Konstantinos Oikonomidis, Emanuel Laude, Puya Latafat et al.

ICML 2024 (spotlight) · arXiv:2402.06271
13 citations

A New Branch-and-Bound Pruning Framework for $\ell_0$-Regularized Problems

Theo Guyard, Cédric Herzet, Clément Elvira et al.

ICML 2024 · arXiv:2406.03504
6 citations

A Universal Transfer Theorem for Convex Optimization Algorithms Using Inexact First-order Oracles

Phillip Kerger, Marco Molinaro, Hongyi Jiang et al.

ICML 2024 · arXiv:2406.00576

Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning

Charles Dickens, Changyu Gao, Connor Pryor et al.

ICML 2024

Differentially Private Domain Adaptation with Theoretical Guarantees

Raef Bassily, Corinna Cortes, Anqi Mao et al.

ICML 2024 · arXiv:2306.08838

Gaussian Process Neural Additive Models

Wei Zhang, Brian Barr, John Paisley

AAAI 2024 · arXiv:2402.12518
12 citations

How Free is Parameter-Free Stochastic Optimization?

Amit Attia, Tomer Koren

ICML 2024 (spotlight) · arXiv:2402.03126
11 citations

Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm

Batiste Le Bars, Aurélien Bellet, Marc Tommasi et al.

ICML 2024 · arXiv:2306.02939
11 citations

Minimally Modifying a Markov Game to Achieve Any Nash Equilibrium and Value

Young Wu, Jeremy McMahan, Yiding Chen et al.

ICML 2024 · arXiv:2311.00582
3 citations

MoMo: Momentum Models for Adaptive Learning Rates

Fabian Schaipp, Ruben Ohana, Michael Eickenberg et al.

ICML 2024 · arXiv:2305.07583
20 citations

New Sample Complexity Bounds for Sample Average Approximation in Heavy-Tailed Stochastic Programming

Hongcheng Liu, Jindong Tong

ICML 2024

On the Last-Iterate Convergence of Shuffling Gradient Methods

Zijian Liu, Zhengyuan Zhou

ICML 2024 · arXiv:2403.07723
8 citations

Performative Prediction with Bandit Feedback: Learning through Reparameterization

Yatong Chen, Wei Tang, Chien-Ju Ho et al.

ICML 2024 · arXiv:2305.01094
12 citations

Privacy Amplification by Iteration for ADMM with (Strongly) Convex Objective Functions

T-H. Hubert Chan, Hao Xie, Mengshi Zhao

AAAI 2024 · arXiv:2312.08685
1 citation

Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization

Wei Jiang, Sifan Yang, Wenhao Yang et al.

ICML 2024 · arXiv:2406.03787
4 citations

Quantum Algorithms and Lower Bounds for Finite-Sum Optimization

Yexin Zhang, Chenyi Zhang, Cong Fang et al.

ICML 2024 · arXiv:2406.03006
5 citations

Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features

Aleksandr Beznosikov, David Dobre, Gauthier Gidel

ICML 2024 · arXiv:2304.11737
8 citations

Stability and Generalization for Stochastic Recursive Momentum-based Algorithms for (Strongly-)Convex One to $K$-Level Stochastic Optimizations

Xiaokang Pan, Xingyu Li, Jin Liu et al.

ICML 2024 · arXiv:2407.05286

Tuning-Free Stochastic Optimization

Ahmed Khaled, Chi Jin

ICML 2024 (spotlight) · arXiv:2402.07793
13 citations