Scaling Properties of Diffusion Models For Perceptual Tasks

arXiv:2411.08034
15 citations
#545 in CVPR 2025 (of 2873 papers)

Abstract

In this paper, we argue that iterative computation with diffusion models offers a powerful paradigm not only for generation but also for visual perception tasks. We unify tasks such as depth estimation, optical flow, and amodal segmentation under the framework of image-to-image translation, and show how diffusion models benefit from scaling training and test-time compute on these perceptual tasks. Through a careful analysis of these scaling properties, we formulate compute-optimal training and inference recipes for scaling diffusion models on visual perception tasks. Our models achieve performance competitive with state-of-the-art methods using significantly less data and compute. To access our code and models, see https://scaling-diffusion-perception.github.io.
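The abstract's central knob, test-time compute, amounts to the number of iterative denoising steps run at inference. A minimal sketch of that idea is below; this is not the paper's implementation, and `toy_denoiser` is a hypothetical stand-in for the learned network that would predict noise conditioned on the input image.

```python
import numpy as np

def toy_denoiser(x_t, cond_image, t):
    # Hypothetical stand-in for a learned noise-prediction network.
    # A real model would condition on cond_image (the RGB input) to
    # predict the noise in x_t (e.g. a noisy depth map) at time t.
    return 0.1 * x_t

def ddim_translate(cond_image, num_steps, seed=0):
    """Image-to-image translation by iterative denoising.

    num_steps is the test-time compute knob: more denoising
    iterations spend more compute refining the prediction.
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(cond_image.shape)  # start from pure noise
    for i in range(num_steps, 0, -1):
        t = i / num_steps
        eps = toy_denoiser(x, cond_image, t)
        x = x - (1.0 / num_steps) * eps  # one deterministic update step
    return x

img = np.zeros((4, 4))
coarse = ddim_translate(img, num_steps=4)    # low test-time compute
fine = ddim_translate(img, num_steps=32)     # 8x more test-time compute
```

The output shape always matches the conditioning image; only the number of refinement iterations (and hence inference cost) changes between the two calls.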

Citation History

Jan 26, 2026: 15
Feb 2, 2026: 15
Feb 13, 2026: 15