MobileDiffusion: Instant Text-to-Image Generation on Mobile Devices

36 citations
#343 in ECCV 2024 (of 2387 papers)
5 Top Authors
5 Data Points

Abstract

The deployment of large-scale text-to-image diffusion models on mobile devices is impeded by their substantial model size and slow inference speed. In this paper, we propose MobileDiffusion, a highly efficient text-to-image diffusion model obtained through extensive optimizations in both architecture and sampling techniques. We conduct a comprehensive examination of model architecture design to reduce redundancy, enhance computational efficiency, and minimize the model's parameter count, while preserving image generation quality. Additionally, we employ distillation and diffusion-GAN finetuning techniques on MobileDiffusion to achieve 8-step and 1-step inference, respectively. Empirical studies, conducted both quantitatively and qualitatively, demonstrate the effectiveness of our proposed techniques. MobileDiffusion achieves a remarkable sub-second inference speed for generating a 512×512 image on mobile devices, establishing a new state of the art.
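To make the 8-step setting concrete, below is a minimal Python sketch of a fixed-step sampling loop of the kind a distilled diffusion model enables. Everything here is a placeholder under stated assumptions: the `denoiser` function, the latent shape, the linear schedule, and the Euler-style update are illustrative stand-ins, not the authors' MobileDiffusion implementation.

```python
import numpy as np

# Hypothetical stand-in for a distilled text-conditioned denoiser.
# A real model would predict the noise (or clean latent) from x_t at step t.
def denoiser(x_t, t, text_embedding):
    return 0.1 * x_t  # placeholder prediction

def few_step_sample(text_embedding, num_steps=8, shape=(64, 64, 4), seed=0):
    """Sample with a small, fixed number of denoising steps.

    Illustrative only: the step count (8) mirrors the distilled setting in
    the abstract; the schedule and update rule are generic assumptions.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)              # start from pure Gaussian noise
    timesteps = np.linspace(1.0, 0.0, num_steps + 1)
    for t_cur, t_next in zip(timesteps[:-1], timesteps[1:]):
        eps = denoiser(x, t_cur, text_embedding)    # predicted noise
        x = x + (t_next - t_cur) * eps              # simple Euler-style update
    return x

latents = few_step_sample(text_embedding=None)
print(latents.shape)  # (64, 64, 4) latent, to be decoded into an image
```

The point of the sketch is only that the per-image cost scales with the number of denoiser calls, which is why reducing from tens of steps to 8 (or 1) steps is what makes sub-second on-device generation plausible.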

Citation History

Jan 26, 2026: 34
Feb 1, 2026: 35 (+1)
Feb 6, 2026: 35
Feb 13, 2026: 36 (+1)