"robustness evaluation" Papers

24 papers found

AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations?

Shouwei Ruan, Hanqing Liu, Yao Huang et al.

ICCV 2025 (highlight) · arXiv:2412.03002 · 2 citations

Adversarial Attacks on Event-Based Pedestrian Detectors: A Physical Approach

Guixu Lin, Muyao Niu, Qingtian Zhu et al.

AAAI 2025 · arXiv:2503.00377 · 4 citations

Breakpoint: Stress-testing systems-level reasoning in LLM agents

Kaivalya Hariharan, Uzay Girit, Zifan Wang et al.

COLM 2025

Circumventing Shortcuts in Audio-visual Deepfake Detection Datasets with Unsupervised Learning

Stefan Smeu, Dragos-Alexandru Boldisor, Dan Oneata et al.

CVPR 2025 (highlight) · arXiv:2412.00175 · 9 citations

DiffBreak: Is Diffusion-Based Purification Robust?

Andre Kassis, Urs Hengartner, Yaoliang Yu

NeurIPS 2025 · arXiv:2411.16598 · 1 citation

Disentangling Safe and Unsafe Image Corruptions via Anisotropy and Locality

Ramchandran Muthukumar, Ambar Pal, Jeremias Sulam et al.

CVPR 2025

Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs

Jie Zhang, Zhongqi Wang, Mengqi Lei et al.

ICLR 2025 · arXiv:2406.18849 · 3 citations

Evaluating Robustness of Monocular Depth Estimation with Procedural Scene Perturbations

Jack Nugent, Siyang Wu, Zeyu Ma et al.

NeurIPS 2025 · arXiv:2507.00981

FedGPS: Statistical Rectification Against Data Heterogeneity in Federated Learning

Zhiqin Yang, Yonggang Zhang, Chenxin Li et al.

NeurIPS 2025 · arXiv:2510.20250

MVGBench: a Comprehensive Benchmark for Multi-view Generation Models

Xianghui Xie, Jan Lenssen, Gerard Pons-Moll

ICCV 2025 · 3 citations

On the Robustness of Distributed Machine Learning Against Transfer Attacks

Sebastien Andreina, Pascal Zimmer, Ghassan Karame

AAAI 2025 · arXiv:2412.14080 · 1 citation

The Fluorescent Veil: A Stealthy and Effective Physical Adversarial Patch Against Traffic Sign Recognition

Shuai Yuan, Xingshuo Han, Hongwei Li et al.

NeurIPS 2025 · arXiv:2409.12394 · 5 citations

Transstratal Adversarial Attack: Compromising Multi-Layered Defenses in Text-to-Image Models

Chunlong Xie, Kangjie Chen, Shangwei Guo et al.

NeurIPS 2025 (spotlight)

Truth over Tricks: Measuring and Mitigating Shortcut Learning in Misinformation Detection

Herun Wan, Jiaying Wu, Minnan Luo et al.

NeurIPS 2025 · arXiv:2506.02350 · 6 citations

When Are Concepts Erased From Diffusion Models?

Kevin Lu, Nicky Kriplani, Rohit Gandikota et al.

NeurIPS 2025 · arXiv:2505.17013 · 5 citations

Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data

Yujun Zhou, Yufei Han, Haomin Zhuang et al.

ICML 2024

CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks

Shashank Agnihotri, Steffen Jung, Margret Keuper

ICML 2024 · arXiv:2302.02213 · 30 citations

MathAttack: Attacking Large Language Models towards Math Solving Ability

Zihao Zhou, Qiufeng Wang, Mingyu Jin et al.

AAAI 2024 · arXiv:2309.01686 · 37 citations

Position: TrustLLM: Trustworthiness in Large Language Models

Yue Huang, Lichao Sun, Haoran Wang et al.

ICML 2024

PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization

Zining Chen, Weiqiu Wang, Zhicheng Zhao et al.

CVPR 2024 · arXiv:2404.09011 · 22 citations

Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders

Yi Yu, Yufei Wang, Song Xia et al.

ICML 2024 · arXiv:2405.01460 · 21 citations

Rethinking Label Poisoning for GNNs: Pitfalls and Attacks

Vijay Chandra Lingam, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski

ICLR 2024 · 8 citations

TETRIS: Towards Exploring the Robustness of Interactive Segmentation

Andrey Moskalenko, Vlad Shakhuro, Anna Vorontsova et al.

AAAI 2024 · arXiv:2402.06132 · 3 citations

Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models

Francesco Croce, Naman D. Singh, Matthias Hein

ECCV 2024 · arXiv:2306.12941 · 12 citations