USP: Unified Self-Supervised Pretraining for Image Generation and Understanding

arXiv:2503.06132 · 17 citations · ranked #216 of 2701 papers in ICCV 2025

Abstract

Recent studies have highlighted the interplay between diffusion models and representation learning. Intermediate representations from diffusion models can be leveraged for downstream visual tasks, while self-supervised vision models can enhance the convergence and generation quality of diffusion models. However, transferring pretrained weights from vision models to diffusion models is challenging due to input mismatches and the use of latent spaces. To address these challenges, we propose Unified Self-supervised Pretraining (USP), a framework that initializes diffusion models via masked latent modeling in a Variational Autoencoder (VAE) latent space. USP achieves comparable performance on understanding tasks while significantly improving the convergence speed and generation quality of diffusion models. Our code will be publicly available at https://github.com/AMAP-ML/USP.
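
To make the core objective concrete, below is a minimal MAE-style sketch of masked latent modeling in a VAE latent space. This is an illustration under stated assumptions, not USP's actual implementation: the module names, dimensions, mask ratio, and encoder-on-visible-tokens design are all placeholders, and in practice `z` would come from a frozen pretrained VAE encoder, with the pretrained transformer weights then used to initialize a diffusion backbone.

```python
# Hypothetical MAE-style masked latent modeling sketch; not USP's exact recipe.
import torch
import torch.nn as nn

class MaskedLatentModel(nn.Module):
    def __init__(self, latent_ch=4, patch=2, num_tokens=256, dim=256,
                 depth=4, heads=8, mask_ratio=0.75):
        super().__init__()
        self.patch = patch
        self.mask_ratio = mask_ratio
        self.patchify = nn.Conv2d(latent_ch, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, num_tokens, dim))  # learned positions
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.head = nn.Linear(dim, latent_ch * patch * patch)  # predict raw latent patches

    def forward(self, z):
        # z: latents from a (frozen) VAE encoder, shape (B, C, H, W).
        tokens = self.patchify(z).flatten(2).transpose(1, 2) + self.pos  # (B, N, D)
        B, N, D = tokens.shape
        n_keep = int(N * (1 - self.mask_ratio))
        ids = torch.rand(B, N, device=z.device).argsort(dim=1)  # random shuffle per sample
        keep, masked = ids[:, :n_keep], ids[:, n_keep:]
        visible = tokens.gather(1, keep.unsqueeze(-1).expand(-1, -1, D))
        enc = self.encoder(visible)                       # encode visible tokens only
        # Scatter encoded tokens back; masked slots get the shared mask token.
        full = self.mask_token.expand(B, N, D).clone()
        full.scatter_(1, keep.unsqueeze(-1).expand(-1, -1, D), enc)
        pred = self.head(full)                            # (B, N, C*patch*patch)
        # Ground truth: the raw latent patches, in the same row-major patch order.
        p = self.patch
        target = z.unfold(2, p, p).unfold(3, p, p)        # (B, C, H/p, W/p, p, p)
        target = target.permute(0, 2, 3, 1, 4, 5).reshape(B, N, -1)
        mask = torch.zeros(B, N, device=z.device)
        mask.scatter_(1, masked, 1.0)                     # 1 on masked positions
        # MSE reconstruction loss, computed on masked patches only.
        return (((pred - target) ** 2).mean(-1) * mask).sum() / mask.sum()

model = MaskedLatentModel()
z = torch.randn(2, 4, 32, 32)  # stand-in for VAE latents of 256x256 RGB images
loss = model(z)
loss.backward()
```

Because the reconstruction loss lives in the VAE latent space rather than pixel space, the pretrained weights see the same inputs a latent diffusion model does, which is the input mismatch the abstract describes.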

Citation History

Jan 25, 2026: 16
Jan 31, 2026: 17
Feb 5, 2026: 17
Feb 13, 2026: 17