Kaidi Xu
17 papers · 448 total citations

Papers (17)
General Cutting Planes for Bound-Propagation-Based Neural Network Verification
  NeurIPS 2022 · arXiv · 127 citations

Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack
  ICCV 2023 · arXiv · 57 citations

Decoding Compressed Trust: Scrutinizing the Trustworthiness of Efficient LLMs Under Compression
  ICML 2024 · arXiv · 49 citations

Can Protective Perturbation Safeguard Personal Data from Being Exploited by Stable Diffusion?
  CVPR 2024 · arXiv · 39 citations

Adversarial T-shirt! Evading Person Detectors in A Physical World
  ECCV 2020 · arXiv · 31 citations

ScaleCert: Scalable Certified Defense against Adversarial Patches with Sparse Superficial Layers
  NeurIPS 2021 · arXiv · 26 citations

Light-weight Calibrator: A Separable Component for Unsupervised Domain Adaptation
  CVPR 2020 · arXiv · 23 citations

Toward Robust Spiking Neural Network Against Adversarial Perturbation
  NeurIPS 2022 · arXiv · 22 citations

Stable Unlearnable Example: Enhancing the Robustness of Unlearnable Examples via Stable Error-Minimizing Noise
  AAAI 2024 · arXiv · 17 citations

Unveiling Typographic Deceptions: Insights of the Typographic Vulnerability in Large Vision-Language Models
  ECCV 2024 · arXiv · 15 citations

Automatic Perturbation Analysis for Scalable Certified Robustness and Beyond
  NeurIPS 2020 · arXiv · 15 citations

Jailbreak-AudioBench: In-Depth Evaluation and Analysis of Jailbreak Threats for Large Audio Language Models
  NeurIPS 2025 · arXiv · 8 citations

Not Just Text: Uncovering Vision Modality Typographic Threats in Image Generation Models
  CVPR 2025 · arXiv · 7 citations

TruthPrInt: Mitigating Large Vision-Language Models Object Hallucination Via Latent Truthful-Guided Pre-Intervention
  ICCV 2025 · 7 citations

ACT-Diffusion: Efficient Adversarial Consistency Training for One-step Diffusion Models
  CVPR 2024 · arXiv · 5 citations

Beta-CROWN: Efficient Bound Propagation with Per-neuron Split Constraints for Neural Network Robustness Verification
  NeurIPS 2021 · 0 citations

Position: TrustLLM: Trustworthiness in Large Language Models
  ICML 2024 · 0 citations