Search results for "jailbreak attacks": 2 papers found
FigStep: Jailbreaking Large Vision-Language Models via Typographic Visual Prompts
Yichen Gong, Delong Ran, Jinyuan Liu et al.
AAAI 2025 · arXiv:2311.05608 · 302 citations
Perception-Guided Jailbreak Against Text-to-Image Models
Yihao Huang, Le Liang, Tianlin Li et al.
AAAI 2025 · arXiv:2408.10848 · 27 citations