Xingjun Ma
Affiliations
Fudan University
22 papers · 2,359 total citations

Papers (22)
Title | Venue | Citations
Reflection Backdoor: A Natural Backdoor Attack on Deep Neural Networks | ECCV 2020 (arXiv) | 581
Anti-Backdoor Learning: Training Clean Models on Poisoned Data | NeurIPS 2021 (arXiv) | 412
Clean-Label Backdoor Attacks on Video Recognition Models | CVPR 2020 (arXiv) | 315
α-IoU: A Family of Power Intersection over Union Losses for Bounding Box Regression | NeurIPS 2021 (arXiv) | 306
Adversarial Camouflage: Hiding Physical-World Attacks With Natural Styles | CVPR 2020 (arXiv) | 257
Revisiting Adversarial Robustness Distillation: Robust Soft Labels Make Student Better | ICCV 2021 (arXiv) | 130
Exploring Architectural Ingredients of Adversarially Robust Deep Neural Networks | NeurIPS 2021 (arXiv) | 113
Short-Term and Long-Term Context Aggregation Network for Video Inpainting | ECCV 2020 (arXiv) | 50
CalFAT: Calibrated Federated Adversarial Training with Label Skewness | NeurIPS 2022 (arXiv) | 44
Unlearnable Clusters: Towards Label-Agnostic Unlearnable Examples | CVPR 2023 (arXiv) | 34
Adversarial Prompt Tuning for Vision-Language Models | ECCV 2024 (arXiv) | 34
BlueSuffix: Reinforced Blue Teaming for Vision-Language Models Against Jailbreak Attacks | ICLR 2025 (arXiv) | 20
TAPT: Test-Time Adversarial Prompt Tuning for Robust Inference in Vision-Language Models | CVPR 2025 (arXiv) | 15
Anyattack: Towards Large-scale Self-supervised Adversarial Attacks on Vision-language Models | CVPR 2025 (arXiv) | 15
LDReg: Local Dimensionality Regularized Self-Supervised Learning | ICLR 2024 (arXiv) | 10
IDEATOR: Jailbreaking and Benchmarking Large Vision-Language Models Using Themselves | ICCV 2025 (arXiv) | 9
Free-Form Motion Control: Controlling the 6D Poses of Camera and Objects in Video Generation | ICCV 2025 (arXiv) | 4
AIM: Additional Image Guided Generation of Transferable Adversarial Attacks | AAAI 2025 (arXiv) | 4
HoneypotNet: Backdoor Attacks Against Model Extraction | AAAI 2025 (arXiv) | 4
Towards Million-Scale Adversarial Robustness Evaluation With Stronger Individual Attacks | CVPR 2025 (arXiv) | 2
StolenLoRA: Exploring LoRA Extraction Attacks via Synthetic Data | ICCV 2025 (arXiv) | 0
Gradient Driven Rewards to Guarantee Fairness in Collaborative Machine Learning | NeurIPS 2021 | 0