"convex optimization" Papers
35 papers found
Adaptive backtracking for faster optimization
João V. Cavalcanti, Laurent Lessard, Ashia Wilson
Approximating Metric Magnitude of Point Sets
Rayna Andreeva, James Ward, Primoz Skraba et al.
Controlling the Flow: Stability and Convergence for Stochastic Gradient Descent with Decaying Regularization
Sebastian Kassing, Simon Weissmann, Leif Döring
Convergence of Clipped SGD on Convex $(L_0,L_1)$-Smooth Functions
Ofir Gaash, Kfir Y. Levy, Yair Carmon
Convergence of Distributed Adaptive Optimization with Local Updates
Ziheng Cheng, Margalit Glasgow
Descent with Misaligned Gradients and Applications to Hidden Convexity
Aditya Bhaskara, Ashok Cutkosky, Ravi Kumar et al.
Exploiting Hidden Symmetry to Improve Objective Perturbation for DP Linear Learners with a Nonsmooth L1-Norm
Du Chen, Geoffrey A. Chua
Gradient correlation is a key ingredient to accelerate SGD with momentum
Julien Hermant, Marien Renaud, Jean-François Aujol et al.
Hybrid Decentralized Optimization: Leveraging Both First- and Zeroth-Order Optimizers for Faster Convergence
Shayan Talaei, Matin Ansaripour, Giorgi Nadiradze et al.
Isotropic Noise in Stochastic and Quantum Convex Optimization
Annie Marsden, Liam O'Carroll, Aaron Sidford et al.
Langevin Monte Carlo Beyond Lipschitz Gradient Continuity
Matej Benko, Iwona Chlebicka, Jørgen Endal et al.
Max Entropy Moment Kalman Filter for Polynomial Systems with Arbitrary Noise
Sangli Teng, Harry Zhang, David Jin et al.
MixMax: Distributional Robustness in Function Space via Optimal Data Mixtures
Anvith Thudi, Chris Maddison
New Perspectives on the Polyak Stepsize: Surrogate Functions and Negative Results
Francesco Orabona, Ryan D'Orazio
Optimizing $(L_0, L_1)$-Smooth Functions by Gradient Methods
Daniil Vankov, Anton Rodomanov, Angelia Nedich et al.
When Confidence Fails: Revisiting Pseudo-Label Selection in Semi-supervised Semantic Segmentation
Pan Liu, Jinshi Liu
Adaptive Proximal Gradient Methods Are Universal Without Approximation
Konstantinos Oikonomidis, Emanuel Laude, Puya Latafat et al.
A New Branch-and-Bound Pruning Framework for $\ell_0$-Regularized Problems
Théo Guyard, Cédric Herzet, Clément Elvira et al.
A Universal Transfer Theorem for Convex Optimization Algorithms Using Inexact First-order Oracles
Phillip Kerger, Marco Molinaro, Hongyi Jiang et al.
Convex and Bilevel Optimization for Neural-Symbolic Inference and Learning
Charles Dickens, Changyu Gao, Connor Pryor et al.
Differentially Private Domain Adaptation with Theoretical Guarantees
Raef Bassily, Corinna Cortes, Anqi Mao et al.
Gaussian Process Neural Additive Models
Wei Zhang, Brian Barr, John Paisley
How Free is Parameter-Free Stochastic Optimization?
Amit Attia, Tomer Koren
Improved Stability and Generalization Guarantees of the Decentralized SGD Algorithm
Batiste Le Bars, Aurélien Bellet, Marc Tommasi et al.
Minimally Modifying a Markov Game to Achieve Any Nash Equilibrium and Value
Young Wu, Jeremy McMahan, Yiding Chen et al.
MoMo: Momentum Models for Adaptive Learning Rates
Fabian Schaipp, Ruben Ohana, Michael Eickenberg et al.
New Sample Complexity Bounds for Sample Average Approximation in Heavy-Tailed Stochastic Programming
Hongcheng Liu, Jindong Tong
On the Last-Iterate Convergence of Shuffling Gradient Methods
Zijian Liu, Zhengyuan Zhou
Performative Prediction with Bandit Feedback: Learning through Reparameterization
Yatong Chen, Wei Tang, Chien-Ju Ho et al.
Privacy Amplification by Iteration for ADMM with (Strongly) Convex Objective Functions
T-H. Hubert Chan, Hao Xie, Mengshi Zhao
Projection-Free Variance Reduction Methods for Stochastic Constrained Multi-Level Compositional Optimization
Wei Jiang, Sifan Yang, Wenhao Yang et al.
Quantum Algorithms and Lower Bounds for Finite-Sum Optimization
Yexin Zhang, Chenyi Zhang, Cong Fang et al.
Sarah Frank-Wolfe: Methods for Constrained Optimization with Best Rates and Practical Features
Aleksandr Beznosikov, David Dobre, Gauthier Gidel
Stability and Generalization for Stochastic Recursive Momentum-based Algorithms for (Strongly-)Convex One to $K$-Level Stochastic Optimizations
Xiaokang Pan, Xingyu Li, Jin Liu et al.
Tuning-Free Stochastic Optimization
Ahmed Khaled, Chi Jin