"adversarial robustness" Papers
129 papers found • Page 2 of 3
Resolution Attack: Exploiting Image Compression to Deceive Deep Neural Networks
Wangjia Yu, Xiaomeng Fu, Qiao Li et al.
Robust Conformal Prediction with a Single Binary Certificate
Soroush H. Zargarbashi, Aleksandar Bojchevski
Robust Contextual Pricing
Anupam Gupta, Guru Guruganesh, Renato Leme et al.
Robust Feature Learning for Multi-Index Models in High Dimensions
Alireza Mousavi-Hosseini, Adel Javanmard, Murat A Erdogdu
Robust SuperAlignment: Weak-to-Strong Robustness Generalization for Vision-Language Models
Junhao Dong, Cong Zhang, Xinghua Qu et al.
R-TPT: Improving Adversarial Robustness of Vision-Language Models through Test-Time Prompt Tuning
Lijun Sheng, Jian Liang, Zilei Wang et al.
Support is All You Need for Certified VAE Training
Changming Xu, Debangshu Banerjee, Deepak Vasisht et al.
Synergy Between the Strong and the Weak: Spiking Neural Networks are Inherently Self-Distillers
Yongqi Ding, Lin Zuo, Mengmeng Jing et al.
Towards Adversarially Robust Dataset Distillation by Curvature Regularization
Eric Xue, Yijiang Li, Haoyang Liu et al.
Towards Adversarial Robustness via Debiased High-Confidence Logit Alignment
Kejia Zhang, Juanjuan Weng, Zhiming Luo et al.
Towards Robust Knowledge Unlearning: An Adversarial Framework for Assessing and Improving Unlearning Robustness in Large Language Models
Hongbang Yuan, Zhuoran Jin, Pengfei Cao et al.
Two is Better than One: Efficient Ensemble Defense for Robust and Compact Models
Yoojin Jung, Byung Cheol Song
UIBDiffusion: Universal Imperceptible Backdoor Attack for Diffusion Models
Yuning Han, Bingyin Zhao, Rui Chu et al.
Understanding and Improving Adversarial Robustness of Neural Probabilistic Circuits
Weixin Chen, Han Zhao
WMCopier: Forging Invisible Watermarks on Arbitrary Images
Ziping Dong, Chao Shuai, Zhongjie Ba et al.
Your Text Encoder Can Be An Object-Level Watermarking Controller
Naresh Kumar Devulapally, Mingzhen Huang, Vishal Asnani et al.
Zero-cost Proxy for Adversarial Robustness Evaluation
Yuqi Feng, Yuwei Ou, Jiahao Fan et al.
Adaptive Hierarchical Certification for Segmentation using Randomized Smoothing
Alaa Anani, Tobias Lorenz, Bernt Schiele et al.
Adversarial Attacks on Combinatorial Multi-Armed Bandits
Rishab Balasubramanian, Jiawei Li, Tadepalli Prasad et al.
Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework
Haonan Huang, Guoxu Zhou, Yanghang Zheng et al.
Adversarially Robust Distillation by Reducing the Student-Teacher Variance Gap
Junhao Dong, Piotr Koniusz, Junxi Chen et al.
Adversarially Robust Hypothesis Transfer Learning
Yunjuan Wang, Raman Arora
Adversarial Prompt Tuning for Vision-Language Models
Jiaming Zhang, Xingjun Ma, Xin Wang et al.
Adversarial Robustification via Text-to-Image Diffusion Models
Daewon Choi, Jongheon Jeong, Huiwon Jang et al.
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Brian Bartoldson, James Diffenderfer, Konstantinos Parasyris et al.
Attack-free Evaluating and Enhancing Adversarial Robustness on Categorical Data
Yujun Zhou, Yufei Han, Haomin Zhuang et al.
BadCLIP: Trigger-Aware Prompt Learning for Backdoor Attacks on CLIP
Jiawang Bai, Kuofeng Gao, Shaobo Min et al.
BadPart: Unified Black-box Adversarial Patch Attacks against Pixel-wise Regression Tasks
Zhiyuan Cheng, Zhaoyi Liu, Tengda Guo et al.
Better Safe than Sorry: Pre-training CLIP against Targeted Data Poisoning and Backdoor Attacks
Wenhan Yang, Jingdong Gao, Baharan Mirzasoleiman
Be Your Own Neighborhood: Detecting Adversarial Examples by the Neighborhood Relations Built on Self-Supervised Learning
Zhiyuan He, Yijun Yang, Pin-Yu Chen et al.
Boosting Adversarial Training via Fisher-Rao Norm-based Regularization
Xiangyu Yin, Wenjie Ruan
Breaking the Barrier: Enhanced Utility and Robustness in Smoothed DRL Agents
Chung-En Sun, Sicun Gao, Lily Weng
Can Implicit Bias Imply Adversarial Robustness?
Hancheng Min, Rene Vidal
Catastrophic Overfitting: A Potential Blessing in Disguise
MN Zhao, Lihe Zhang, Yuqiu Kong et al.
Causality Based Front-door Defense Against Backdoor Attack on Language Models
Yiran Liu, Xiaoang Xu, Zhiyi Hou et al.
Certifiably Robust Image Watermark
Zhengyuan Jiang, Moyang Guo, Yuepeng Hu et al.
Characterizing Model Robustness via Natural Input Gradients
Adrian Rodriguez-Munoz, Tongzhou Wang, Antonio Torralba
Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval
Qiwei Tian, Chenhao Lin, Zhengyu Zhao et al.
Comparing the Robustness of Modern No-Reference Image- and Video-Quality Metrics to Adversarial Attacks
Anastasia Antsiferova, Khaled Abud, Aleksandr Gushchin et al.
Compositional Curvature Bounds for Deep Neural Networks
Taha Entesari, Sina Sharifi, Mahyar Fazlyab
Consistent Adversarially Robust Linear Classification: Non-Parametric Setting
Elvis Dohmatob
Coupling Graph Neural Networks with Fractional Order Continuous Dynamics: A Robustness Study
Qiyu Kang, Kai Zhao, Yang Song et al.
DataFreeShield: Defending Adversarial Attacks without Training Data
Hyeyoon Lee, Kanghyun Choi, Dain Kwon et al.
Defense Against Adversarial Attacks on No-Reference Image Quality Models with Gradient Norm Regularization
Yujia Liu, Chenxi Yang, Dingquan Li et al.
DiffuseMix: Label-Preserving Data Augmentation with Diffusion Models
Khawar Islam, Muhammad Zaigham Zaheer, Arif Mahmood et al.
DiG-IN: Diffusion Guidance for Investigating Networks - Uncovering Classifier Differences, Neuron Visualisations and Visual Counterfactual Explanations
Maximilian Augustin, Yannic Neuhaus, Matthias Hein
Enhancing Adversarial Robustness in SNNs with Sparse Gradients
Yujia Liu, Tong Bu, Jianhao Ding et al.
Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples
Andrew C. Cullen, Shijie Liu, Paul Montague et al.
Extending Adversarial Attacks to Produce Adversarial Class Probability Distributions
Jon Vadillo, Roberto Santana, Jose A Lozano
Geometry-Aware Instrumental Variable Regression
Heiner Kremer, Bernhard Schölkopf