"adversarial training" Papers
82 papers found • Page 1 of 2
A²RNet: Adversarial Attack Resilient Network for Robust Infrared and Visible Image Fusion
Jiawei Li, Hongwei Yu, Jiansheng Chen et al.
Accelerated Vertical Federated Adversarial Learning through Decoupling Layer-Wise Dependencies
Tianxing Man, Yu Bai, Ganyu Wang et al.
Adversarial Exploitation of Data Diversity Improves Visual Localization
Sihang Li, Siqi Tan, Bowen Chang et al.
Adversarial Generative Flow Network for Solving Vehicle Routing Problems
Ni Zhang, Jingfeng Yang, Zhiguang Cao et al.
Adversarially Robust Anomaly Detection through Spurious Negative Pair Mitigation
Hossein Mirzaei Sadeghlou, Mojtaba Nafez, Jafar Habibi et al.
ALBAR: Adversarial Learning approach to mitigate Biases in Action Recognition
Joseph Fioresi, Ishan Rajendrakumar Dave, Mubarak Shah
Algorithmic Stability Based Generalization Bounds for Adversarial Training
Runzhi Tian, Yongyi Mao
Breaking Latent Prior Bias in Detectors for Generalizable AIGC Image Detection
Yue Zhou, Xinan He, Kaiqing Lin et al.
Distributional LLM-as-a-Judge
Luyu Chen, Zeyu Zhang, Haoran Tan et al.
Enhancing Robustness in Incremental Learning with Adversarial Training
Seungju Cho, Hongsin Lee, Changick Kim
FrameShield: Adversarially Robust Video Anomaly Detection
Mojtaba Nafez, Mobina Poulaei, Nikan Vasei et al.
Generating Less Certain Adversarial Examples Improves Robust Generalization
Minxing Zhang, Michael Backes, Xiao Zhang
Improved Diffusion-based Generative Model with Better Adversarial Robustness
Zekun Wang, Mingyang Yi, Shuchen Xue et al.
Improving Generalization and Robustness in SNNs Through Signed Rate Encoding and Sparse Encoding Attacks
Bhaskar Mukhoty, Hilal AlQuabeh, Bin Gu
Indirect Gradient Matching for Adversarial Robust Distillation
Hongsin Lee, Seungju Cho, Changick Kim
Lifelong Safety Alignment for Language Models
Haoyu Wang, Yifei Zhao, Zeyu Qin et al.
Long-tailed Adversarial Training with Self-Distillation
Seungju Cho, Hongsin Lee, Changick Kim
MEIcoder: Decoding Visual Stimuli from Neural Activity by Leveraging Most Exciting Inputs
Jan Sobotka, Luca Baroni, Ján Antolík
NitroFusion: High-Fidelity Single-Step Diffusion through Dynamic Adversarial Training
Dar-Yen Chen, Hmrishav Bandyopadhyay, Kai Zou et al.
On the Alignment between Fairness and Accuracy: from the Perspective of Adversarial Robustness
Junyi Chai, Taeuk Jang, Jing Gao et al.
Out-of-Distribution Generalized Graph Anomaly Detection with Homophily-aware Environment Mixup
Sibo Tian, Xin Wang, Zeyang Zhang et al.
PBCAT: Patch-Based Composite Adversarial Training against Physically Realizable Attacks on Object Detection
Xiao Li, Yiming Zhu, Yifan Huang et al.
PN-GAIL: Leveraging Non-optimal Information from Imperfect Demonstrations
Qiang Liu, Huiqiao Fu, Kaiqiang Tang et al.
Rethinking Invariance Regularization in Adversarial Training to Improve Robustness-Accuracy Trade-off
Futa Waseda, Ching-Chun Chang, Isao Echizen
Robust LLM safeguarding via refusal feature adversarial training
Lei Yu, Virginie Do, Karen Hambardzumyan et al.
Short-length Adversarial Training Helps LLMs Defend Long-length Jailbreak Attacks: Theoretical and Empirical Evidence
Shaopeng Fu, Liang Ding, Jingfeng Zhang et al.
Solving Neural Min-Max Games: The Role of Architecture, Initialization & Dynamics
Deep Patel, Emmanouil-Vasileios Vlatakis-Gkaragkounis
Stealthy Yet Effective: Distribution-Preserving Backdoor Attacks on Graph Classification
Xiaobao Wang, Ruoxiao Sun, Yujun Zhang et al.
Towards Adversarially Robust Dataset Distillation by Curvature Regularization
Eric Xue, Yijiang Li, Haoyang Liu et al.
Towards Adversarial Robustness via Debiased High-Confidence Logit Alignment
Kejia Zhang, Juanjuan Weng, Zhiming Luo et al.
Understanding and Improving Fast Adversarial Training against $l_0$ Bounded Perturbations
Xuyang Zhong, Yixiao Huang, Chen Liu
Unlearning Concepts in Diffusion Model via Concept Domain Correction and Concept Preserving Gradient
Yongliang Wu, Shiji Zhou, Mingzhuo Yang et al.
VLMs can Aggregate Scattered Training Patches
Zhanhui Zhou, Lingjie Chen, Chao Yang et al.
ZEBRA: Towards Zero-Shot Cross-Subject Generalization for Universal Brain Visual Decoding
Haonan Wang, Jingyu Lu, Hongrui Li et al.
ACT-Diffusion: Efficient Adversarial Consistency Training for One-step Diffusion Models
Fei Kong, Jinhao Duan, Lichao Sun et al.
Adversarially Robust Deep Multi-View Clustering: A Novel Attack and Defense Framework
Haonan Huang, Guoxu Zhou, Yanghang Zheng et al.
Adversarially Robust Hypothesis Transfer Learning
Yunjuan Wang, Raman Arora
Adversarial Robustness Limits via Scaling-Law and Human-Alignment Studies
Brian Bartoldson, James Diffenderfer, Konstantinos Parasyris et al.
A Theoretical Analysis of Backdoor Poisoning Attacks in Convolutional Neural Networks
Boqi Li, Weiwei Liu
Benign Overfitting in Adversarial Training of Neural Networks
Yunjuan Wang, Kaibo Zhang, Raman Arora
Bias-Conflict Sample Synthesis and Adversarial Removal Debias Strategy for Temporal Sentence Grounding in Video
Zhaobo Qi, Yibo Yuan, Xiaowen Ruan et al.
Boosting Adversarial Training via Fisher-Rao Norm-based Regularization
Xiangyu Yin, Wenjie Ruan
Catastrophic Overfitting: A Potential Blessing in Disguise
MN Zhao, Lihe Zhang, Yuqiu Kong et al.
CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection
Gyusam Chang, Wonseok Roh, Sujin Jang et al.
Collapse-Aware Triplet Decoupling for Adversarially Robust Image Retrieval
Qiwei Tian, Chenhao Lin, Zhengyu Zhao et al.
Data-Free Hard-Label Robustness Stealing Attack
Xiaojian Yuan, Kejiang Chen, Wen Huang et al.
Delving into the Convergence of Generalized Smooth Minimax Optimization
Wenhan Xian, Ziyi Chen, Heng Huang
E2E-AT: A Unified Framework for Tackling Uncertainty in Task-Aware End-to-End Learning
Wangkun Xu, Jianhong Wang, Fei Teng
Enhancing Tracking Robustness with Auxiliary Adversarial Defense Networks
Zhewei Wu, Ruilong Yu, Qihe Liu et al.
Exploiting Supervised Poison Vulnerability to Strengthen Self-Supervised Defense
Jeremy Styborski, Mingzhi Lyu, Yi Huang et al.