Poster "adversarial attacks" Papers
83 papers found • Page 2 of 2
Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks
Zhewei Wu, Ruilong Yu, Qihe Liu et al.
Exploring Vulnerabilities in Spiking Neural Networks: Direct Adversarial Attacks on Raw Event Data
Yanmeng Yao, Xiaohan Zhao, Bin Gu
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Jon Vadillo, Roberto Santana, Jose A Lozano
Fast Adversarial Attacks on Language Models In One GPU Minute
Vinu Sankar Sadasivan, Shoumik Saha, Gaurang Sriramanan et al.
GLOW: Global Layout Aware Attacks on Object Detection
Jun Bao, Buyu Liu, Kui Ren et al.
Graph Neural Network Explanations are Fragile
Jiate Li, Meng Pang, Yun Dong et al.
Improved Dimensionality Dependence for Zeroth-Order Optimisation over Cross-Polytopes
Weijia Shao
Manifold Integrated Gradients: Riemannian Geometry for Feature Attribution
Eslam Zaher, Maciej Trzaskowski, Quan Nguyen et al.
MedBN: Robust Test-Time Adaptation against Malicious Test Samples
Hyejin Park, Jeongyeon Hwang, Sunung Mun et al.
MM-SafetyBench: A Benchmark for Safety Evaluation of Multimodal Large Language Models
Xin Liu, Yichen Zhu, Jindong Gu et al.
MultiDelete for Multimodal Machine Unlearning
Jiali Cheng, Hadi Amiri
On the Duality Between Sharpness-Aware Minimization and Adversarial Training
Yihao Zhang, Hangzhou He, Jingyu Zhu et al.
On the Robustness of Large Multimodal Models Against Image Adversarial Attacks
Xuanming Cui, Alejandro Aparcedo, Young Kyun Jang et al.
Rethinking Adversarial Robustness in the Context of the Right to be Forgotten
Chenxu Zhao, Wei Qian, Yangyi Li et al.
Rethinking Independent Cross-Entropy Loss For Graph-Structured Data
Rui Miao, Kaixiong Zhou, Yili Wang et al.
Revisiting Character-level Adversarial Attacks for Language Models
Elias Abad Rocamora, Yongtao Wu, Fanghui Liu et al.
RigorLLM: Resilient Guardrails for Large Language Models against Undesired Content
Zhuowen Yuan, Zidi Xiong, Yi Zeng et al.
Robust CLIP: Unsupervised Adversarial Fine-Tuning of Vision Embeddings for Robust Large Vision-Language Models
Christian Schlarmann, Naman Singh, Francesco Croce et al.
Robustness Tokens: Towards Adversarial Robustness of Transformers
Brian Pulfer, Yury Belousov, Slava Voloshynovskiy
Safety Fine-Tuning at (Almost) No Cost: A Baseline for Vision Large Language Models
Yongshuo Zong, Ondrej Bohdal, Tingyang Yu et al.
Shedding More Light on Robust Classifiers under the lens of Energy-based Models
Mujtaba Hussain Mirza, Maria Rosaria Briglia, Senad Beadini et al.
SignSGD with Federated Defense: Harnessing Adversarial Attacks through Gradient Sign Decoding
Chanho Park, Namyoon Lee
SpecFormer: Guarding Vision Transformer Robustness via Maximum Singular Value Penalization
Xixu Hu, Runkai Zheng, Jindong Wang et al.
The Pitfalls and Promise of Conformal Inference Under Adversarial Attacks
Ziquan Liu, Yufei Cui, Yan Yan et al.
Towards Physical World Backdoor Attacks against Skeleton Action Recognition
Qichen Zheng, Yi Yu, Siyuan Yang et al.
Towards the Theory of Unsupervised Federated Learning: Non-asymptotic Analysis of Federated EM Algorithms
Ye Tian, Haolei Weng, Yang Feng
Transferable 3D Adversarial Shape Completion using Diffusion Models
Xuelong Dai, Bin Xiao
Trustworthy Actionable Perturbations
Jesse Friedbaum, Sudarshan Adiga, Ravi Tandon
Two Heads are Actually Better than One: Towards Better Adversarial Robustness via Transduction and Rejection
Nils Palumbo, Yang Guo, Xi Wu et al.
Universal Robustness via Median Randomized Smoothing for Real-World Super-Resolution
Zakariya Chaouai, Mohamed Tamaazousti
Unmasking Vulnerabilities: Cardinality Sketches under Adaptive Inputs
Sara Ahmadian, Edith Cohen
UPAM: Unified Prompt Attack in Text-to-Image Generation Models Against Both Textual Filters and Visual Checkers
Duo Peng, Qiuhong Ke, Jun Liu
WAVES: Benchmarking the Robustness of Image Watermarks
Bang An, Mucong Ding, Tahseen Rabbani et al.