Information Theoretic Text-to-Image Alignment

4 citations · #2351 of 3827 papers in ICLR 2025

Abstract

Diffusion models for Text-to-Image (T2I) conditional generation have recently achieved tremendous success. Yet, aligning these models with users' intentions still involves a laborious trial-and-error process, and this challenging alignment problem has attracted considerable attention from the research community. In this work, instead of relying on fine-grained linguistic analyses of prompts, human annotation, or auxiliary vision-language models, we use Mutual Information (MI) to guide model alignment. In brief, our method uses self-supervised fine-tuning and relies on point-wise MI estimation between prompts and images to create a synthetic fine-tuning set that improves model alignment. Our analysis shows that our method outperforms the state-of-the-art, yet it requires only the pre-trained denoising network of the T2I model itself to estimate MI, together with a simple fine-tuning strategy that improves alignment while preserving image quality. Code is available at https://github.com/Chao0511/mitune.
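The core idea of scoring prompt-image pairs with the denoiser can be sketched as follows. This is a hedged toy illustration, not the paper's exact estimator: `denoise` stands in for the pre-trained denoising network, and the point-wise MI proxy is the gap between unconditional and conditional denoising errors averaged over random noise levels (a common diffusion-based density-ratio heuristic). All names and the noising schedule here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def denoise(x_t, t, cond=None):
    # Hypothetical stand-in for the pre-trained denoising network
    # eps_theta(x_t, t, c); nudges its prediction toward the
    # conditioning vector when one is supplied.
    if cond is None:
        return 0.1 * x_t
    return 0.1 * x_t + 0.05 * cond

def pointwise_mi(image, prompt_emb, n_steps=10):
    """Monte-Carlo proxy for log p(x|c) - log p(x): the gap between
    unconditional and conditional denoising errors, averaged over
    random timesteps (illustrative, not the paper's estimator)."""
    total = 0.0
    for _ in range(n_steps):
        t = rng.uniform(0.1, 1.0)
        eps = rng.standard_normal(image.shape)
        x_t = image + t * eps  # toy noising schedule
        err_uncond = np.sum((denoise(x_t, t) - eps) ** 2)
        err_cond = np.sum((denoise(x_t, t, prompt_emb) - eps) ** 2)
        total += err_uncond - err_cond
    return float(total / n_steps)

# Rank candidate images for one prompt; the best-scoring pair would
# join the synthetic fine-tuning set.
prompt = rng.standard_normal(8)
candidates = [rng.standard_normal(8) for _ in range(4)]
scores = [pointwise_mi(img, prompt) for img in candidates]
best = candidates[int(np.argmax(scores))]
```

In this reading, fine-tuning then proceeds on the selected high-MI pairs, so no human labels or auxiliary vision-language models are needed.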

Citation History

Jan 26, 2026: 3
Jan 27, 2026: 3
Feb 3, 2026: 3
Feb 13, 2026: 4