Poster "adversarial attacks" Papers

83 papers found • Page 2 of 2

Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks

Zhewei Wu, Ruilong Yu, Qihe Liu et al.

ECCV 2024 • arXiv:2402.17976 • 4 citations

Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data

Yanmeng Yao, Xiaohan Zhao, Bin Gu

ECCV 2024 • 9 citations

Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions

Jon Vadillo, Roberto Santana, Jose A Lozano

ICML 2024 • arXiv:2004.06383 • 1 citation

Fast Adversarial Attacks on Language Models In One GPU Minute

Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan et al.

ICML 2024 • arXiv:2402.15570 • 72 citations

GLOW: Global Layout Aware Attacks on Object Detection

Jun Bao, Buyu Liu, Kui Ren et al.

CVPR 2024 • arXiv:2302.14166

Graph Neural Network Explanations are Fragile

Jiate Li, Meng Pang, Yun Dong et al.

ICML 2024 • arXiv:2406.03193 • 18 citations

Improved Dimensionality Dependence for Zeroth-Order Optimisation over Cross-Polytopes

Weijia Shao

ICML 2024

Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution

Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.

ICML 2024 • arXiv:2405.09800 • 9 citations

MedBN: Robust Test-Time Adaptation against Malicious Test Samples

Hyejin Park, Jeongyeon Hwang, Sunung Mun et al.

CVPR 2024 • arXiv:2403.19326 • 8 citations

MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models

Xin Liu, Yichen Zhu, Jindong Gu et al.

ECCV 2024 • arXiv:2311.17600 • 199 citations

MultiDelete for Multimodal Machine Unlearning

Jiali Cheng, Hadi Amiri

ECCV 2024 • arXiv:2311.12047 • 13 citations

On the Duality Between Sharpness-Aware Minimization and Adversarial Training

Yihao Zhang, Hangzhou He, Jingyu Zhu et al.

ICML 2024 • arXiv:2402.15152 • 25 citations

On the Robustness of Large Multimodal Models Against Image Adversarial Attacks

Xuanming Cui, Alejandro Aparcedo, Young Kyun Jang et al.

CVPR 2024 • arXiv:2312.03777 • 89 citations

Rethinking Adversarial Robustness in the Context of the Right to be Forgotten

Chenxu Zhao, Wei Qian, Yangyi Li et al.

ICML 2024

Rethinking Independent Cross-Entropy Loss For Graph-Structured Data

Rui Miao, Kaixiong Zhou, Yili Wang et al.

ICML 2024 • arXiv:2405.15564 • 5 citations

Revisiting Character-level Adversarial Attacks for Language Models

Elias Abad Rocamora, Yongtao Wu, Fanghui Liu et al.

ICML 2024 • arXiv:2405.04346 • 6 citations

RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content

Zhuowen Yuan, Zidi Xiong, Yi Zeng et al.

ICML 2024 • arXiv:2403.13031 • 67 citations

Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models

Christian Schlarmann, Naman Singh, Francesco Croce et al.

ICML 2024 • arXiv:2402.12336 • 88 citations

Robustness Tokens: Towards Adversarial Robustness of Transformers

Brian Pulfer, Yury Belousov, Slava Voloshynovskiy

ECCV 2024 • arXiv:2503.10191

Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models

Yongshuo Zong, Ondrej Bohdal, Tingyang Yu et al.

ICML 2024 • arXiv:2402.02207 • 123 citations

Shedding More Light on Robust Classifiers under the lens of Energy-based Models

Mujtaba Hussain Mirza, Maria Rosaria Briglia, Senad Beadini et al.

ECCV 2024 • arXiv:2407.06315 • 7 citations

SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding

Chanho Park, Namyoon Lee

ICML 2024 • arXiv:2402.01340 • 5 citations

SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization

Xixu Hu, Runkai Zheng, Jindong Wang et al.

ECCV 2024 • arXiv:2402.03317 • 5 citations

The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks

Ziquan Liu, Yufei Cui, Yan Yan et al.

ICML 2024 • arXiv:2405.08886 • 9 citations

Towards Physical World Backdoor Attacks against Skeleton Action Recognition

Qichen Zheng, Yi Yu, Siyuan Yang et al.

ECCV 2024 • arXiv:2408.08671 • 9 citations

Towards the Theory of Unsupervised Federated Learning: Non-asymptotic Analysis of Federated EM Algorithms

Ye Tian, Haolei Weng, Yang Feng

ICML 2024 • arXiv:2310.15330 • 7 citations

Transferable 3D Adversarial Shape Completion using Diffusion Models

Xuelong Dai, Bin Xiao

ECCV 2024 • arXiv:2407.10077 • 1 citation

Trustworthy Actionable Perturbations

Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon

ICML 2024 • arXiv:2405.11195 • 2 citations

Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection

Nils Palumbo, Yang Guo, Xi Wu et al.

ICML 2024 • arXiv:2305.17528

Universal Robustness via Median Randomized Smoothing for Real-World Super-Resolution

Zakariya Chaouai, Mohamed Tamaazousti

CVPR 2024 • arXiv:2405.14934 • 5 citations

Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs

Sara Ahmadian, Edith Cohen

ICML 2024 • arXiv:2405.17780 • 6 citations

UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers

Duo Peng, Qiuhong Ke, Jun Liu

ICML 2024 • arXiv:2405.11336 • 8 citations

WAVES: Benchmarking the Robustness of Image Watermarks

Bang An, Mucong Ding, Tahseen Rabbani et al.

ICML 2024 • arXiv:2401.08573 • 72 citations