"sharpness-aware minimization" Papers

20 papers found
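For context, the papers below build on the sharpness-aware minimization (SAM) objective of Foret et al. (2021), which minimizes the worst-case loss within an L2 ball of radius ρ around the weights. A minimal sketch of one SAM step on a toy quadratic loss (the toy problem, function names, and hyperparameters are illustrative, not taken from any paper listed here):

```python
import numpy as np

# Toy quadratic loss and its gradient; stand-ins for a real
# model's loss/grad, used only to illustrate the update rule.
def loss(w):
    return 0.5 * np.sum(w ** 2)

def grad(w):
    return w

def sam_step(w, rho=0.05, lr=0.1):
    """One SAM update: ascend to the worst-case perturbation,
    then descend using the gradient at the perturbed point."""
    g = grad(w)
    # Worst-case perturbation within an L2 ball of radius rho
    # (first-order approximation of the inner maximization).
    eps = rho * g / (np.linalg.norm(g) + 1e-12)
    # Outer minimization step with the "sharpness-aware" gradient.
    return w - lr * grad(w + eps)

w = np.array([1.0, -2.0])
for _ in range(50):
    w = sam_step(w)
```

On this toy problem the iterates contract toward the (flat) minimum at the origin; the papers listed below vary the perturbation model, the optimization formulation, or the setting (federated, multimodal, model merging) around this same two-step template.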

An Analysis of Concept Bottleneck Models: Measuring, Understanding, and Mitigating the Impact of Noisy Annotations

Seonghwan Park, Jueun Mun, Donghyun Oh et al.

NeurIPS 2025 · arXiv:2505.16705 · 2 citations

FedWMSAM: Fast and Flat Federated Learning via Weighted Momentum and Sharpness-Aware Minimization

Tianle Li, Yongzhi Huang, Linshan Jiang et al.

NeurIPS 2025 · 2 citations

Improving Model-Based Reinforcement Learning by Converging to Flatter Minima

Shrinivas Ramasubramanian, Benjamin Freed, Alexandre Capone et al.

NeurIPS 2025

Mitigating Parameter Interference in Model Merging via Sharpness-Aware Fine-Tuning

Yeoreum Lee, Jinwook Jung, Sungyong Baik

ICLR 2025 · arXiv:2504.14662 · 8 citations

Modality-Aware SAM: Sharpness-Aware-Minimization Driven Gradient Modulation for Harmonized Multimodal Learning

Hossein Rajoli Nowdeh, Jie Ji, Xiaolong Ma et al.

NeurIPS 2025 · arXiv:2510.24919

Noise Stability Optimization for Finding Flat Minima: A Hessian-based Regularization Approach

Haotian Ju, Hongyang Zhang, Dongyue Li

ICLR 2025 · arXiv:2306.08553 · 12 citations

Sharpness-Aware Minimization: General Analysis and Improved Rates

Dimitris Oikonomou, Nicolas Loizou

ICLR 2025 · arXiv:2503.02225 · 8 citations

SharpZO: Hybrid Sharpness-Aware Vision Language Model Prompt Tuning via Forward-Only Passes

Yifan Yang, Zhen Zhang, Rupak Vignesh Swaminathan et al.

NeurIPS 2025 · arXiv:2506.20990 · 1 citation

The Devil is in Low-Level Features for Cross-Domain Few-Shot Segmentation

Yuhan Liu, Yixiong Zou, Yuhua Li et al.

CVPR 2025 · arXiv:2503.21150 · 4 citations

A Universal Class of Sharpness-Aware Minimization Algorithms

Behrooz Tahmasebi, Ashkan Soleymani, Dara Bahri et al.

ICML 2024 · arXiv:2406.03682 · 12 citations

Flatness-aware Sequential Learning Generates Resilient Backdoors

Hoang Pham, The-Anh Ta, Anh Tran et al.

ECCV 2024 · arXiv:2407.14738 · 1 citation

Forget Sharpness: Perturbed Forgetting of Model Biases Within SAM Dynamics

Ankit Vani, Frederick Tung, Gabriel Oliveira et al.

ICML 2024 · arXiv:2406.06700

How to Escape Sharp Minima with Random Perturbations

Kwangjun Ahn, Ali Jadbabaie, Suvrit Sra

ICML 2024 · arXiv:2305.15659 · 14 citations

Improving SAM Requires Rethinking its Optimization Formulation

Wanyun Xie, Fabian Latorre, Kimon Antonakopoulos et al.

ICML 2024 · arXiv:2407.12993 · 4 citations

Improving Sharpness-Aware Minimization by Lookahead

Runsheng Yu, Youzhi Zhang, James Kwok

ICML 2024

Locally Estimated Global Perturbations are Better than Local Perturbations for Federated Sharpness-aware Minimization

Ziqing Fan, Shengchao Hu, Jiangchao Yao et al.

ICML 2024 (spotlight) · arXiv:2405.18890 · 33 citations

Lookbehind-SAM: k steps back, 1 step forward

Gonçalo Mordido, Pranshu Malviya, Aristide Baratin et al.

ICML 2024 · arXiv:2307.16704 · 3 citations

On the Duality Between Sharpness-Aware Minimization and Adversarial Training

Yihao Zhang, Hangzhou He, Jingyu Zhu et al.

ICML 2024 · arXiv:2402.15152 · 25 citations

Rethinking the Flat Minima Searching in Federated Learning

Taehwan Lee, Sung Whan Yoon

ICML 2024

SAMformer: Unlocking the Potential of Transformers in Time Series Forecasting with Sharpness-Aware Minimization and Channel-Wise Attention

Romain Ilbert, Ambroise Odonnat, Vasilii Feofanov et al.

ICML 2024 · arXiv:2402.10198 · 56 citations