Can We Achieve Efficient Diffusion Without Self-Attention? Distilling Self-Attention into Convolutions

ICCV 2025

Abstract

Contemporary diffusion models built upon U-Net or Diffusion Transformer (DiT) architectures have revolutionized image generation through transformer-based attention mechanisms. The prevailing paradigm commonly employs self-attention, with its quadratic computational complexity, to capture global spatial relationships in complex images and thereby synthesize high-fidelity images with coherent visual semantics. Contrary to conventional wisdom, our systematic layer-wise analysis reveals an interesting discrepancy: self-attention in pre-trained diffusion models predominantly exhibits localized attention patterns, closely resembling convolutional inductive biases. This suggests that global interactions in self-attention may be less critical than commonly assumed. Driven by this observation, we propose $\Delta$ConvFusion, which replaces conventional self-attention modules with Pyramid Convolution Blocks ($\Delta$ConvBlocks). By distilling attention patterns into localized convolutional operations while keeping other components frozen, $\Delta$ConvFusion achieves performance comparable to transformer-based counterparts while reducing computational cost by $6929\times$ and surpassing LinFusion by $5.42\times$ in efficiency, all without compromising generative fidelity.
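The abstract describes swapping each self-attention module for a Pyramid Convolution Block and training that block to mimic the frozen attention layer's outputs while the rest of the network stays fixed. The sketch below is a hypothetical PyTorch illustration of such a distillation step; the `PyramidConvBlock` design, kernel sizes, feature layout, and MSE loss are assumptions for illustration, not the paper's actual $\Delta$ConvBlock or training objective.

```python
# Minimal sketch of attention-to-convolution distillation.
# Assumptions: multi-branch depthwise convolutions, MSE feature matching,
# and (B, C, H, W) feature maps; none of this is confirmed by the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PyramidConvBlock(nn.Module):
    """Stand-in for a ΔConvBlock: parallel depthwise convolutions with
    increasing kernel sizes approximate the localized patterns that the
    paper reports for self-attention."""
    def __init__(self, channels, kernel_sizes=(3, 5, 7)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(channels, channels, k, padding=k // 2, groups=channels)
            for k in kernel_sizes
        )
        self.proj = nn.Conv2d(channels, channels, 1)  # pointwise fusion of branches

    def forward(self, x):  # x: (B, C, H, W)
        return self.proj(sum(branch(x) for branch in self.branches))

def distill_step(teacher_attn, student_conv, feats, optimizer):
    """One distillation step: train the convolutional replacement to match
    the frozen self-attention layer's output on the same features.
    `teacher_attn` is assumed to accept (B, C, H, W) feature maps here;
    a real attention layer would typically operate on flattened tokens."""
    with torch.no_grad():
        target = teacher_attn(feats)   # frozen teacher output
    pred = student_conv(feats)         # trainable ΔConvBlock output
    loss = F.mse_loss(pred, target)    # feature-level distillation loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a full pipeline, the intermediate features would presumably be collected from the frozen pre-trained diffusion model across noise timesteps, with one such student block trained per replaced attention layer while all other weights remain untouched.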
