DrivingGPT: Unifying Driving World Modeling and Planning with Multi-modal Autoregressive Transformers

arXiv:2412.18607
51 citations
Ranked #44 of 2701 papers in ICCV 2025

Abstract

World model-based searching and planning are widely recognized as a promising path toward human-level physical intelligence. However, current driving world models primarily rely on video diffusion models, which specialize in visual generation but lack the flexibility to incorporate other modalities like action. In contrast, autoregressive transformers have demonstrated exceptional capability in modeling multimodal data. Our work aims to unify both driving model simulation and trajectory planning into a single sequence modeling problem. We introduce a multimodal driving language based on interleaved image and action tokens, and develop DrivingGPT to learn joint world modeling and planning through standard next-token prediction. Our DrivingGPT demonstrates strong performance in both action-conditioned video generation and end-to-end planning, outperforming strong baselines on large-scale nuPlan and NAVSIM benchmarks.
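The abstract describes interleaving image tokens and action tokens into a single "driving language" sequence that a GPT-style transformer learns with standard next-token prediction. The sketch below is a minimal, assumption-laden illustration of that idea, not the authors' implementation: the vocabulary sizes, tokens per frame, action discretization, and the tiny transformer configuration are all made up for the example.

```python
# Minimal sketch (not the authors' code) of an interleaved image/action token
# sequence trained with next-token prediction. All sizes are illustrative.
import torch
import torch.nn as nn

IMG_VOCAB = 1024       # assumed image-token codebook size (e.g. from a VQ tokenizer)
ACT_VOCAB = 128        # assumed number of discretized action bins
VOCAB = IMG_VOCAB + ACT_VOCAB  # shared vocabulary; action ids are offset by IMG_VOCAB
TOKENS_PER_FRAME = 64  # assumed image tokens per frame
ACTIONS_PER_STEP = 2   # assumed action tokens per step (e.g. longitudinal + lateral)

def interleave(image_tokens, action_tokens):
    """Build one driving-language sequence: [img_1, act_1, img_2, act_2, ...]."""
    chunks = []
    for img, act in zip(image_tokens, action_tokens):
        chunks.append(img)              # (TOKENS_PER_FRAME,) image token ids
        chunks.append(act + IMG_VOCAB)  # shift action ids into the shared vocabulary
    return torch.cat(chunks)

class TinyDrivingGPT(nn.Module):
    """Small GPT-style decoder over the shared image/action vocabulary."""
    def __init__(self, d_model=256, n_layers=4, n_heads=4, max_len=2048):
        super().__init__()
        self.tok = nn.Embedding(VOCAB, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 4 * d_model, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, VOCAB)

    def forward(self, seq):  # seq: (B, T) token ids
        B, T = seq.shape
        x = self.tok(seq) + self.pos(torch.arange(T, device=seq.device))
        # Causal mask so each position only attends to earlier tokens.
        mask = torch.triu(torch.full((T, T), float("-inf"), device=seq.device), diagonal=1)
        x = self.blocks(x, mask=mask)
        return self.head(x)  # (B, T, VOCAB) next-token logits

# Toy training step: predict every next token, whether it is an image or action token.
frames = [torch.randint(0, IMG_VOCAB, (TOKENS_PER_FRAME,)) for _ in range(4)]
actions = [torch.randint(0, ACT_VOCAB, (ACTIONS_PER_STEP,)) for _ in range(4)]
seq = interleave(frames, actions).unsqueeze(0)  # (1, T)
model = TinyDrivingGPT()
logits = model(seq[:, :-1])
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB), seq[:, 1:].reshape(-1))
loss.backward()
```

Under this setup, predicting the next image tokens given past actions plays the role of world modeling, while predicting the next action tokens given past frames plays the role of planning, which is how a single next-token objective can cover both tasks.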

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 44 (+44)
Jan 27, 2026: 44
Feb 3, 2026: 44
Feb 13, 2026: 50 (+6)
Feb 13, 2026: 51 (+1)
Feb 13, 2026: 51