Rethinking Vision-Language Model in Face Forensics: Multi-Modal Interpretable Forged Face Detector

arXiv:2503.20188
26 citations · ranked #287 of 2,873 papers in CVPR 2025

Abstract

Deepfake detection is a long-established research topic vital for mitigating the spread of malicious misinformation. Unlike prior methods that provide either binary classification results or textual explanations, but not both, we introduce a method that generates the two simultaneously. Our approach harnesses the multi-modal learning capability of pre-trained CLIP and the interpretability of large language models (LLMs) to improve both the generalization and the explainability of deepfake detection. Specifically, we introduce a multi-modal face forgery detector (M2F2-Det) that employs tailored face forgery prompt learning on top of pre-trained CLIP to improve generalization to unseen forgeries. M2F2-Det also incorporates an LLM that produces detailed textual explanations of its detection decisions, enhancing interpretability by bridging the gap between natural language and the subtle cues of facial forgery. Empirically, M2F2-Det achieves state-of-the-art performance on both the detection and the explanation-generation tasks, demonstrating its effectiveness in identifying and explaining diverse forgeries.
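The abstract does not spell out the architecture, but the core detection idea (tailored prompt learning on a frozen, pre-trained CLIP, with the similarity between image and prompt embeddings serving as the real/fake score) can be sketched in a few lines. The sketch below is illustrative only: FrozenEncoder is a toy stand-in for CLIP's image and text towers, and all names, dimensions, and the CoOp-style per-class learnable context are assumptions, not the authors' M2F2-Det implementation. The LLM explanation branch, which would feed the CLIP-derived forgery features into a language model, is omitted.

```python
# Minimal sketch of a CLIP-style detector with learnable forgery prompts.
# Everything here is an illustrative stand-in, not the M2F2-Det code:
# FrozenEncoder is a toy placeholder for pre-trained CLIP towers.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMBED_DIM = 512

class FrozenEncoder(nn.Module):
    """Stand-in for a frozen pre-trained CLIP image/text tower."""
    def __init__(self, in_dim):
        super().__init__()
        self.proj = nn.Linear(in_dim, EMBED_DIM)
        for p in self.parameters():
            p.requires_grad = False  # keep the pre-trained tower frozen

    def forward(self, x):
        return F.normalize(self.proj(x), dim=-1)  # unit-norm embeddings

class PromptedForgeryDetector(nn.Module):
    """CoOp-style prompt learning: one learnable prompt per class
    (real / fake). Cosine similarity between the image embedding and
    each prompt embedding acts as the class logit."""
    def __init__(self, n_ctx=8, ctx_dim=512):
        super().__init__()
        self.image_encoder = FrozenEncoder(in_dim=3 * 224 * 224)
        self.text_encoder = FrozenEncoder(in_dim=n_ctx * ctx_dim)
        # Learnable context vectors for the "real" and "fake" prompts.
        self.ctx = nn.Parameter(torch.randn(2, n_ctx, ctx_dim) * 0.02)
        self.logit_scale = nn.Parameter(torch.tensor(4.6))  # ~ln(100)

    def forward(self, images):
        img = self.image_encoder(images.flatten(1))      # (B, D)
        txt = self.text_encoder(self.ctx.flatten(1))     # (2, D)
        return self.logit_scale.exp() * img @ txt.t()    # (B, 2) logits

detector = PromptedForgeryDetector()
faces = torch.randn(4, 3, 224, 224)          # dummy batch of face crops
probs = detector(faces).softmax(dim=-1)
print(probs[:, 1])  # per-image probability that the face is forged
```

Because only the context vectors and the logit scale are trained while CLIP stays frozen, the detector inherits the pre-trained model's broad visual priors, which is the property the abstract credits for generalization to unseen forgeries.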

Citation History

Jan 26, 2026: 24
Jan 31, 2026: 24
Feb 6, 2026: 25 (+1)
Feb 13, 2026: 26 (+1)