CONFORM: Contrast is All You Need for High-Fidelity Text-to-Image Diffusion Models

arXiv:2312.06059 · CVPR 2024 · 51 citations · ranked #545 of 2716 CVPR 2024 papers

Abstract

Images produced by text-to-image diffusion models do not always faithfully represent the semantic intent of the provided text prompt: the model may overlook or entirely fail to produce certain objects. Existing solutions often require functions custom-tailored to each of these problems, leading to sub-optimal results, especially for complex prompts. Our work introduces a novel perspective by tackling this challenge in a contrastive context. Our approach intuitively promotes the segregation of objects in attention maps while ensuring that pairs of related attributes are kept close to each other. We conduct extensive experiments across a wide variety of scenarios, each involving unique combinations of objects, attributes, and scenes. These experiments showcase the versatility, efficiency, and flexibility of our method with both latent- and pixel-based diffusion models, including Stable Diffusion and Imagen. We also publicly share our source code to facilitate further research.
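To make the abstract's contrastive idea concrete, below is a minimal sketch of an InfoNCE-style objective over per-token cross-attention maps: maps for an attribute and its object are treated as a positive pair and pulled together, while maps for other tokens act as negatives and are pushed apart. This is an illustrative assumption of how such a loss could look, not the authors' implementation; the function name, pairing scheme, and temperature are all hypothetical.

```python
# Hypothetical sketch of a contrastive loss over cross-attention maps,
# loosely following the abstract's description. Names and hyperparameters
# are assumptions, not the paper's actual code.
import torch
import torch.nn.functional as F

def contrastive_attention_loss(attn_maps, positive_pairs, temperature=0.07):
    """InfoNCE-style loss over per-token cross-attention maps.

    attn_maps: (T, H, W) tensor, one spatial attention map per text token.
    positive_pairs: list of (i, j) token indices that should attend to the
        same region (e.g. an attribute and the object it modifies); all
        other tokens are treated as negatives.
    """
    T = attn_maps.shape[0]
    # Flatten and L2-normalize each map so dot products are cosine similarities.
    flat = F.normalize(attn_maps.reshape(T, -1), dim=-1)
    sim = flat @ flat.t() / temperature  # (T, T) similarity matrix

    loss = 0.0
    for i, j in positive_pairs:
        # Exclude self-similarity from the softmax denominator.
        logits = sim[i].clone()
        logits[i] = float("-inf")
        # Cross-entropy pulls map i toward its positive map j and pushes it
        # away from every other token's map (the "segregation" effect).
        loss = loss + F.cross_entropy(logits.unsqueeze(0), torch.tensor([j]))
    return loss / max(len(positive_pairs), 1)

# Toy usage: 4 token maps on a 16x16 latent grid; tokens (0, 1) and (2, 3)
# are attribute-object pairs whose attention should overlap.
maps = torch.rand(4, 16, 16, requires_grad=True)
loss = contrastive_attention_loss(maps, [(0, 1), (2, 3)])
loss.backward()  # gradients could steer the latent at each denoising step
```

In a guidance-style setup, a loss like this would presumably be evaluated at each denoising step and its gradient used to update the latent before continuing sampling; the exact update rule here is an assumption.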

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 51 citations