Poster papers matching "adversarial perturbations"

18 papers found

Adversarial Attention Perturbations for Large Object Detection Transformers

Zachary Yahn, Selim Tekin, Fatih Ilhan et al.

ICCV 2025 · arXiv:2508.02987 · 2 citations

Adversarial Perturbations Cannot Reliably Protect Artists From Generative AI

Robert Hönig, Javier Rando, Nicholas Carlini et al.

ICLR 2025 · arXiv:2406.12027 · 35 citations

AdvPaint: Protecting Images from Inpainting Manipulation via Adversarial Attention Disruption

Joonsung Jeon, Woo Jae Kim, Suhyeon Ha et al.

ICLR 2025 · arXiv:2503.10081 · 4 citations

Algorithmic Stability Based Generalization Bounds for Adversarial Training

Runzhi Tian, Yongyi Mao

ICLR 2025 · 2 citations

BadVLA: Towards Backdoor Attacks on Vision-Language-Action Models via Objective-Decoupled Optimization

Xueyang Zhou, Guiyao Tie, Guowen Zhang et al.

NeurIPS 2025 · arXiv:2505.16640 · 13 citations

Edit Away and My Face Will not Stay: Personal Biometric Defense against Malicious Generative Editing

Hanhui Wang, Yihua Zhang, Ruizheng Bai et al.

CVPR 2025 · arXiv:2411.16832 · 8 citations

ETA: Energy-based Test-time Adaptation for Depth Completion

Younjoon Chung, Hyoungseob Park, Patrick Rim et al.

ICCV 2025 · arXiv:2508.05989 · 2 citations

Lie Detector: Unified Backdoor Detection via Cross-Examination Framework

Xuan Wang, Siyuan Liang, Dongping Liao et al.

NeurIPS 2025 · arXiv:2503.16872 · 4 citations

LORE: Lagrangian-Optimized Robust Embeddings for Visual Encoders

Borna Khodabandeh, Amirabbas Afzali, Amirhossein Afsharrad et al.

NeurIPS 2025 · arXiv:2505.18884

NullSwap: Proactive Identity Cloaking Against Deepfake Face Swapping

Tianyi Wang, Shuaicheng Niu, Harry Cheng et al.

ICCV 2025 · arXiv:2503.18678 · 8 citations

On the Adversarial Vulnerability of Label-Free Test-Time Adaptation

Shahriar Rifat, Jonathan Ashdown, Michael De Lucia et al.

ICLR 2025 · 1 citation

Phase and Amplitude-aware Prompting for Enhancing Adversarial Robustness

Yibo Xu, Dawei Zhou, Decheng Liu et al.

ICML 2025

Robust Satisficing Gaussian Process Bandits Under Adversarial Attacks

Artun Saday, Yaşar Cahit Yıldırım, Cem Tekin

NeurIPS 2025 · arXiv:2506.01625

Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks

Jiawei Wang, Yushen Zuo, Yuanjun Chai et al.

ICCV 2025 · arXiv:2504.01308

Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks

Yong Xie, Weijie Zheng, Hanxun Huang et al.

CVPR 2025 · arXiv:2411.15210 · 2 citations

Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data

Yujun Zhou, Yufei Han, Haomin Zhuang et al.

ICML 2024

Rethinking Fast Adversarial Training: A Splitting Technique To Overcome Catastrophic Overfitting

Masoumeh Zareapoor, Pourya Shamsolmoali

ECCV 2024

Using My Artistic Style? You Must Obtain My Authorization

Xiuli Bi, Haowei Liu, Weisheng Li et al.

ECCV 2024 · 5 citations