Grounded Text-to-Image Synthesis with Attention Refocusing

arXiv:2306.05427 · 162 citations · #148 of 2716 papers in CVPR 2024

Abstract

Driven by scalable diffusion models trained on large-scale datasets, text-to-image synthesis methods have shown compelling results. However, these models still fail to precisely follow text prompts involving multiple objects, attributes, or spatial compositions. In this paper, we identify potential causes of these failures in the diffusion model's cross-attention and self-attention layers. We propose two novel losses that refocus the attention maps according to a given spatial layout during sampling. Since creating layouts manually requires additional effort and can be tedious, we explore using large language models (LLMs) to produce these layouts for our method. We conduct extensive experiments on the DrawBench, HRS, and TIFA benchmarks, and show that the proposed attention refocusing effectively improves the controllability of existing approaches.
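The abstract does not spell out the loss formulation, but a common pattern in layout-guided diffusion work is to encourage each object token's cross-attention mass to concentrate inside its assigned bounding box. The PyTorch sketch below is a minimal illustration of that idea under stated assumptions; the function name, the exact loss form, and the normalized box format are illustrative choices, not the paper's definition of its two losses.

```python
import torch

def cross_attention_refocus_loss(attn_maps, boxes):
    """Hedged sketch of a layout-based attention loss (assumed formulation).

    attn_maps: (num_tokens, H, W) cross-attention maps, one per object token.
    boxes:     (num_tokens, 4) boxes as (x0, y0, x1, y1), normalized to [0, 1].

    Penalizes each token's attention mass that falls outside its box; this
    stands in for the paper's losses, which it does not reproduce.
    """
    num_tokens, H, W = attn_maps.shape
    loss = attn_maps.new_zeros(())
    for k in range(num_tokens):
        x0, y0, x1, y1 = boxes[k]
        # Rasterize the box into a binary mask on the attention grid.
        mask = torch.zeros(H, W, device=attn_maps.device)
        r0, r1 = int(y0 * H), max(int(y1 * H), int(y0 * H) + 1)
        c0, c1 = int(x0 * W), max(int(x1 * W), int(x0 * W) + 1)
        mask[r0:r1, c0:c1] = 1.0
        # Normalize the map to a distribution, then measure in-box mass.
        a = attn_maps[k] / (attn_maps[k].sum() + 1e-8)
        inside = (a * mask).sum()
        loss = loss + (1.0 - inside) ** 2
    return loss / num_tokens
```

In a guided-sampling loop, a loss like this would be evaluated on the attention maps at each denoising step and its gradient used to update the latent before the next step; the paper applies analogous refocusing to both cross-attention and self-attention.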

Citation History

Jan 27, 2026: 157
Feb 7, 2026: 159 (+2)
Feb 13, 2026: 162 (+3)