MagicDrive-V2: High-Resolution Long Video Generation for Autonomous Driving with Adaptive Control

52 citations · #43 of 2,701 papers in ICCV 2025

Abstract

The rapid advancement of diffusion models has greatly improved video synthesis, especially controllable video generation, which is vital for applications such as autonomous driving. Although the DiT architecture with a 3D VAE has become a standard framework for video generation, it introduces challenges for controllable driving video generation, especially geometry control, rendering existing control methods ineffective. To address these issues, we propose MagicDrive-V2, a novel approach that integrates an MVDiT block and spatial-temporal conditional encoding to enable multi-view video generation and precise geometric control. Additionally, we introduce an efficient method for obtaining contextual descriptions of videos to support diverse textual control, along with a progressive training strategy using mixed video data to enhance training efficiency and generalizability. Consequently, MagicDrive-V2 enables multi-view driving video synthesis at $3.3\times$ the resolution and $4\times$ the frame count of the current SOTA, with rich contextual control and precise geometric control. Extensive experiments demonstrate MagicDrive-V2's capabilities, unlocking broader applications in autonomous driving.
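
To make the multi-view architecture idea concrete, the sketch below shows one plausible reading of a spatio-temporal transformer block for multi-camera video latents: separate self-attention passes over spatial tokens, frames, and camera views. Everything here is an assumption for illustration (the class and helper names, the (batch, views, frames, tokens, channels) tensor layout, and the use of plain multi-head attention); it is a minimal sketch of the general technique, not the paper's MVDiT implementation.

```python
# Hypothetical sketch of a multi-view spatio-temporal transformer block.
# Tensor layout: (B, V, T, N, C) = batch, camera views, frames, spatial tokens, channels.
# All names are illustrative; they are not taken from the MagicDrive-V2 release.
import torch
import torch.nn as nn


def _attend_over(x: torch.Tensor, attn: nn.MultiheadAttention, axis: int) -> torch.Tensor:
    """Self-attention along one axis of (B, V, T, N, C), folding the rest into the batch."""
    B, V, T, N, C = x.shape
    # Move the attention axis to the sequence position and flatten the other axes.
    perm = [d for d in range(4) if d != axis] + [axis, 4]
    x_perm = x.permute(*perm).contiguous()
    lead, seq = x_perm.shape[:3], x_perm.shape[3]
    flat = x_perm.view(-1, seq, C)
    out, _ = attn(flat, flat, flat, need_weights=False)
    out = out.view(*lead, seq, C)
    # Invert the permutation to restore (B, V, T, N, C).
    inv = [0] * 5
    for i, p in enumerate(perm):
        inv[p] = i
    return out.permute(*inv).contiguous()


class MVDiTBlockSketch(nn.Module):
    """Spatial, temporal, and cross-view attention applied in sequence, with a final MLP."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.view_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norms = nn.ModuleList(nn.LayerNorm(dim) for _ in range(4))
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, V, T, N, C) latent tokens from a (hypothetical) 3D-VAE encoder.
        x = x + _attend_over(self.norms[0](x), self.spatial_attn, axis=3)   # over tokens N
        x = x + _attend_over(self.norms[1](x), self.temporal_attn, axis=2)  # over frames T
        x = x + _attend_over(self.norms[2](x), self.view_attn, axis=1)      # over views V
        return x + self.mlp(self.norms[3](x))


if __name__ == "__main__":
    block = MVDiTBlockSketch(dim=64)
    latents = torch.randn(1, 6, 4, 16, 64)  # 6 camera views, 4 frames, 16 spatial tokens
    print(block(latents).shape)  # torch.Size([1, 6, 4, 16, 64])
```

Factoring the attention into per-axis passes keeps memory linear in each axis length rather than in their product, which is one common way such blocks scale to long, high-resolution multi-view sequences; geometric and textual conditioning would enter through additional cross-attention or modulation layers not shown here.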

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 44 (+44)
Jan 27, 2026: 44
Feb 3, 2026: 46 (+2)
Feb 13, 2026: 52 (+6)