"robustness evaluation" Papers
24 papers found
AdvDreamer Unveils: Are Vision-Language Models Truly Ready for Real-World 3D Variations?
Shouwei Ruan, Hanqing Liu, Yao Huang et al.
Adversarial Attacks on Event-Based Pedestrian Detectors: A Physical Approach
Guixu Lin, Muyao Niu, Qingtian Zhu et al.
Breakpoint: Stress-testing systems-level reasoning in LLM agents
Kaivalya Hariharan, Uzay Girit, Zifan Wang et al.
Circumventing Shortcuts in Audio-visual Deepfake Detection Datasets with Unsupervised Learning
Stefan Smeu, Dragos-Alexandru Boldisor, Dan Oneata et al.
DiffBreak: Is Diffusion-Based Purification Robust?
Andre Kassis, Urs Hengartner, Yaoliang Yu
Disentangling Safe and Unsafe Image Corruptions via Anisotropy and Locality
Ramchandran Muthukumar, Ambar Pal, Jeremias Sulam et al.
Dysca: A Dynamic and Scalable Benchmark for Evaluating Perception Ability of LVLMs
Jie Zhang, Zhongqi Wang, Mengqi Lei et al.
Evaluating Robustness of Monocular Depth Estimation with Procedural Scene Perturbations
Jack Nugent, Siyang Wu, Zeyu Ma et al.
FedGPS: Statistical Rectification Against Data Heterogeneity in Federated Learning
Zhiqin Yang, Yonggang Zhang, Chenxin Li et al.
MVGBench: A Comprehensive Benchmark for Multi-view Generation Models
Xianghui Xie, Jan Lenssen, Gerard Pons-Moll
On the Robustness of Distributed Machine Learning Against Transfer Attacks
Sebastien Andreina, Pascal Zimmer, Ghassan Karame
The Fluorescent Veil: A Stealthy and Effective Physical Adversarial Patch Against Traffic Sign Recognition
Shuai Yuan, Xingshuo Han, Hongwei Li et al.
Transstratal Adversarial Attack: Compromising Multi-Layered Defenses in Text-to-Image Models
Chunlong Xie, Kangjie Chen, Shangwei Guo et al.
Truth over Tricks: Measuring and Mitigating Shortcut Learning in Misinformation Detection
Herun Wan, Jiaying Wu, Minnan Luo et al.
When Are Concepts Erased From Diffusion Models?
Kevin Lu, Nicky Kriplani, Rohit Gandikota et al.
Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data
Yujun Zhou, Yufei Han, Haomin Zhuang et al.
CosPGD: an efficient white-box adversarial attack for pixel-wise prediction tasks
Shashank Agnihotri, Steffen Jung, Margret Keuper
MathAttack: Attacking Large Language Models towards Math Solving Ability
Zihao Zhou, Qiufeng Wang, Mingyu Jin et al.
Position: TrustLLM: Trustworthiness in Large Language Models
Yue Huang, Lichao Sun, Haoran Wang et al.
PracticalDG: Perturbation Distillation on Vision-Language Models for Hybrid Domain Generalization
Zining Chen, Weiqiu Wang, Zhicheng Zhao et al.
Purify Unlearnable Examples via Rate-Constrained Variational Autoencoders
Yi Yu, Yufei Wang, Song Xia et al.
Rethinking Label Poisoning for GNNs: Pitfalls and Attacks
Vijay Chandra Lingam, Mohammad Sadegh Akhondzadeh, Aleksandar Bojchevski
TETRIS: Towards Exploring the Robustness of Interactive Segmentation
Andrey Moskalenko, Vlad Shakhuro, Anna Vorontsova et al.
Towards Reliable Evaluation and Fast Training of Robust Semantic Segmentation Models
Francesco Croce, Naman D. Singh, Matthias Hein