OOTDiffusion: Outfitting Fusion Based Latent Diffusion for Controllable Virtual Try-On

138 citations · #8 of 3028 papers in AAAI 2025 · 4 authors

Abstract

We present OOTDiffusion, a novel network architecture for realistic and controllable image-based virtual try-on (VTON). We leverage the power of pretrained latent diffusion models, designing an outfitting UNet to learn the garment detail features. Without a redundant warping process, the garment features are precisely aligned with the target human body via the proposed outfitting fusion in the self-attention layers of the denoising UNet. To further enhance controllability, we introduce outfitting dropout to the training process, which enables us to adjust the strength of the garment features through classifier-free guidance. Our comprehensive experiments on the VITON-HD and Dress Code datasets demonstrate that OOTDiffusion efficiently generates high-quality try-on results for arbitrary human and garment images, outperforming other VTON methods in both realism and controllability and indicating an impressive breakthrough in virtual try-on. Our source code is available at https://github.com/levihsu/OOTDiffusion.
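To make the two mechanisms named in the abstract more concrete, below is a minimal PyTorch-style sketch of how outfitting fusion and the guidance-based garment-strength control could look. The function names, tensor layout, the `self_attn` module interface, and the guidance formula are assumptions made for illustration, not the authors' released implementation (see the linked repository for that).

```python
import torch
import torch.nn as nn


def outfitting_fusion(person_tokens: torch.Tensor,
                      garment_tokens: torch.Tensor,
                      self_attn: nn.Module) -> torch.Tensor:
    """Sketch: fuse garment features into a self-attention layer of the
    denoising UNet by concatenating garment tokens to the person tokens
    along the spatial axis, attending jointly, then keeping only the
    person half of the output (tensor shapes are assumed)."""
    # person_tokens, garment_tokens: (batch, seq_len, channels)
    fused = torch.cat([person_tokens, garment_tokens], dim=1)   # (B, 2L, C)
    attended = self_attn(fused)                                 # joint self-attention
    return attended[:, : person_tokens.shape[1], :]             # drop the garment half


def garment_guidance(eps_uncond: torch.Tensor,
                     eps_garment: torch.Tensor,
                     guidance_scale: float) -> torch.Tensor:
    """Sketch of classifier-free guidance over the garment condition:
    outfitting dropout during training provides the unconditional branch,
    so garment strength can be dialed at inference via guidance_scale."""
    return eps_uncond + guidance_scale * (eps_garment - eps_uncond)
```

In this reading, a larger `guidance_scale` pushes the denoised result toward the garment-conditioned prediction, which is one plausible way the "strength of the garment features" could be adjusted as the abstract describes.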
