"convergence analysis" Papers

47 papers found

Accelerated Vertical Federated Adversarial Learning through Decoupling Layer-Wise Dependencies

Tianxing Man, Yu Bai, Ganyu Wang et al.

NEURIPS 2025

Accelerating Model-Free Optimization via Averaging of Cost Samples

Guido Carnevale, Giuseppe Notarstefano

NEURIPS 2025

ADAM Optimization with Adaptive Batch Selection

Gyu Yeol Kim, Min-hwan Oh

ICLR 2025 · arXiv:2512.06795
2 citations

A Principled Path to Fitted Distributional Evaluation

Sungee Hong, Jiayi Wang, Zhengling Qi et al.

NEURIPS 2025 (spotlight) · arXiv:2506.20048

Broadening Target Distributions for Accelerated Diffusion Models via a Novel Analysis Approach

Yuchen Liang, Peizhong Ju, Yingbin Liang et al.

ICLR 2025 · arXiv:2402.13901
12 citations

Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis

Zikun Zhang, Zixiang Chen, Quanquan Gu

ICLR 2025 · arXiv:2410.02321
14 citations

Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees

Shahryar Zehtabi, Dong-Jun Han, Rohit Parasnis et al.

ICLR 2025 · arXiv:2402.03448
6 citations

DUO: No Compromise to Accuracy Degradation

Jinda Jia, Cong Xie, Hanlin Lu et al.

NEURIPS 2025

Dynamic Loss-Based Sample Reweighting for Improved Large Language Model Pretraining

Daouda Sow, Herbert Woisetschläger, Saikiran Bulusu et al.

ICLR 2025 · arXiv:2502.06733
13 citations

Efficient Federated Learning against Byzantine Attacks and Data Heterogeneity via Aggregating Normalized Gradients

Shiyuan Zuo, Xingrun Yan, Rongfei Fan et al.

NEURIPS 2025 · arXiv:2408.09539
3 citations

FedWMSAM: Fast and Flat Federated Learning via Weighted Momentum and Sharpness-Aware Minimization

Tianle Li, Yongzhi Huang, Linshan Jiang et al.

NEURIPS 2025
2 citations

Flow matching achieves almost minimax optimal convergence

Kenji Fukumizu, Taiji Suzuki, Noboru Isobe et al.

ICLR 2025 · arXiv:2405.20879
13 citations

Global Well-posedness and Convergence Analysis of Score-based Generative Models via Sharp Lipschitz Estimates

Connor Mooney, Zhongjian Wang, Jack Xin et al.

ICLR 2025 · arXiv:2405.16104
4 citations

LiD-FL: Towards List-Decodable Federated Learning

Hong Liu, Liren Shan, Han Bao et al.

AAAI 2025 · arXiv:2408.04963

Local Steps Speed Up Local GD for Heterogeneous Distributed Logistic Regression

Michael Crawshaw, Blake Woodworth, Mingrui Liu

ICLR 2025 · arXiv:2501.13790
1 citation

Memory-Reduced Meta-Learning with Guaranteed Convergence

Honglin Yang, Ji Ma, Xiao Yu

AAAI 2025 · arXiv:2412.12030
1 citation

New Perspectives on the Polyak Stepsize: Surrogate Functions and Negative Results

Francesco Orabona, Ryan D'Orazio

NEURIPS 2025 · arXiv:2505.20219
6 citations

Nonconvex Stochastic Optimization under Heavy-Tailed Noises: Optimal Convergence without Gradient Clipping

Zijian Liu, Zhengyuan Zhou

ICLR 2025 · arXiv:2412.19529
28 citations

Online robust locally differentially private learning for nonparametric regression

Chenfei Gu, Qiangqiang Zhang, Ting Li et al.

NEURIPS 2025

On the Convergence of Projected Policy Gradient for Any Constant Step Sizes

Jiacai Liu, Wenye Li, Dachao Lin et al.

NEURIPS 2025 · arXiv:2311.01104
4 citations

On the Convergence of Stochastic Smoothed Multi-Level Compositional Gradient Descent Ascent

Xinwen Zhang, Hongchang Gao

NEURIPS 2025

Robust Federated Finetuning of LLMs via Alternating Optimization of LoRA

Shuangyi Chen, Yuanxin Guo, Yue Ju et al.

NEURIPS 2025 · arXiv:2502.01755
7 citations

SPFL: Sequential updates with Parallel aggregation for Enhanced Federated Learning under Category and Domain Shifts

Haoyuan Liang, Shilei Cao, Li et al.

NEURIPS 2025

Stepsize anything: A unified learning rate schedule for budgeted-iteration training

Anda Tang, Yiming Dong, Yutao Zeng et al.

NEURIPS 2025 · arXiv:2505.24452
1 citation

Tuning-Free Bilevel Optimization: New Algorithms and Convergence Analysis

Yifan Yang, Hao Ban, Minhui Huang et al.

ICLR 2025 · arXiv:2410.05140
6 citations

Value Improved Actor Critic Algorithms

Yaniv Oren, Moritz Zanger, Pascal van der Vaart et al.

NEURIPS 2025 · arXiv:2406.01423
1 citation

A New Theoretical Perspective on Data Heterogeneity in Federated Optimization

Jiayi Wang, Shiqiang Wang, Rong-Rong Chen et al.

ICML 2024 · arXiv:2407.15567
3 citations

A Persuasive Approach to Combating Misinformation

Safwan Hossain, Andjela Mladenovic, Yiling Chen et al.

ICML 2024 · arXiv:2310.12065
1 citation

A Primal-Dual Algorithm for Hybrid Federated Learning

Tom Overman, Garrett Blum, Diego Klabjan

AAAI 2024 · arXiv:2210.08106
9 citations

Constrained Bayesian Optimization under Partial Observations: Balanced Improvements and Provable Convergence

Shengbo Wang, Ke Li

AAAI 2024 · arXiv:2312.03212
19 citations

Convergence of Online Learning Algorithm for a Mixture of Multiple Linear Regressions

Yujing Liu, Zhixin Liu, Lei Guo

ICML 2024

Convergence of Some Convex Message Passing Algorithms to a Fixed Point

Václav Voráček, Tomáš Werner

ICML 2024 (spotlight)

Delving into the Convergence of Generalized Smooth Minimax Optimization

Wenhan Xian, Ziyi Chen, Heng Huang

ICML 2024

Demystifying SGD with Doubly Stochastic Gradients

Kyurae Kim, Joohwan Ko, Yian Ma et al.

ICML 2024 · arXiv:2406.00920
2 citations

Distributed Bilevel Optimization with Communication Compression

Yutong He, Jie Hu, Xinmeng Huang et al.

ICML 2024 · arXiv:2405.18858
2 citations

FADAS: Towards Federated Adaptive Asynchronous Optimization

Yujia Wang, Shiqiang Wang, Songtao Lu et al.

ICML 2024 · arXiv:2407.18365
13 citations

Faster Adaptive Decentralized Learning Algorithms

Feihu Huang, Jianyu Zhao

ICML 2024 (spotlight) · arXiv:2408.09775
3 citations

FedHide: Federated Learning by Hiding in the Neighbors

Hyunsin Park, Sungrack Yun

ECCV 2024 · arXiv:2409.07808

Generalized Smooth Variational Inequalities: Methods with Adaptive Stepsizes

Daniil Vankov, Angelia Nedich, Lalitha Sankar

ICML 2024

Locally Differentially Private Decentralized Stochastic Bilevel Optimization with Guaranteed Convergence Accuracy

Ziqin Chen, Yongqiang Wang

ICML 2024

MADA: Meta-Adaptive Optimizers Through Hyper-Gradient Descent

Kaan Ozkara, Can Karakus, Parameswaran Raman et al.

ICML 2024 · arXiv:2401.08893
6 citations

On Convergence of Incremental Gradient for Non-convex Smooth Functions

Anastasiia Koloskova, Nikita Doikov, Sebastian Stich et al.

ICML 2024 · arXiv:2305.19259
6 citations

On the Role of Server Momentum in Federated Learning

Jianhui Sun, Xidong Wu, Heng Huang et al.

AAAI 2024 · arXiv:2312.12670
23 citations

SF-DQN: Provable Knowledge Transfer using Successor Feature for Deep Reinforcement Learning

Shuai Zhang, Heshan Fernando, Miao Liu et al.

ICML 2024 · arXiv:2405.15920
3 citations

Sliced-Wasserstein Estimation with Spherical Harmonics as Control Variates

Rémi Leluc, Aymeric Dieuleveut, François Portier et al.

ICML 2024 · arXiv:2402.01493
9 citations

Spectral Preconditioning for Gradient Methods on Graded Non-convex Functions

Nikita Doikov, Sebastian Stich, Martin Jaggi

ICML 2024 · arXiv:2402.04843
8 citations

Understanding Adam Optimizer via Online Learning of Updates: Adam is FTRL in Disguise

Kwangjun Ahn, Zhiyu Zhang, Yunbum Kook et al.

ICML 2024 · arXiv:2402.01567
22 citations