SOLVE: Synergy of Language-Vision and End-to-End Networks for Autonomous Driving

arXiv:2505.16805
15 citations
#545 of 2873 papers in CVPR 2025

Abstract

The integration of Vision-Language Models (VLMs) into autonomous driving systems has shown promise in addressing key challenges such as learning complexity, interpretability, and common-sense reasoning. However, existing approaches often struggle with efficient integration and real-time decision-making due to computational demands. In this paper, we introduce SOLVE, an innovative framework that synergizes VLMs with end-to-end (E2E) models to enhance autonomous vehicle planning. Our approach emphasizes knowledge sharing at the feature level through a shared visual encoder, enabling comprehensive interaction between VLM and E2E components. We propose a Trajectory Chain-of-Thought (T-CoT) paradigm, which progressively refines trajectory predictions, reducing uncertainty and improving accuracy. Through a temporal decoupling strategy, SOLVE achieves efficient cooperation, aligning high-quality VLM outputs with E2E real-time performance. Evaluated on the nuScenes dataset, our method demonstrates significant improvements in trajectory prediction accuracy, paving the way for more robust and reliable autonomous driving systems.
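
The abstract names three mechanisms: a shared visual encoder feeding both the VLM and E2E branches, progressive trajectory refinement (T-CoT), and temporal decoupling of the slow VLM from the real-time planner. The minimal PyTorch sketch below shows one way these pieces could fit together; every module, dimension, and the `vlm_hint` blending step are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn

class SharedEncoderPlanner(nn.Module):
    """Illustrative sketch: one visual encoder feeds both a (slow) VLM branch
    and a (fast) E2E planning branch; the trajectory is refined in stages,
    loosely mirroring the Trajectory Chain-of-Thought (T-CoT) idea.
    All module choices and dimensions are assumptions, not SOLVE's design."""

    def __init__(self, feat_dim=256, horizon=6, n_refine_steps=3):
        super().__init__()
        # Shared visual encoder (a tiny CNN stands in for the real backbone).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # E2E branch: predicts a coarse trajectory of (x, y) waypoints.
        self.coarse_head = nn.Linear(feat_dim, horizon * 2)
        # Refinement head: consumes features plus the current trajectory
        # estimate and outputs a residual correction at each T-CoT step.
        self.refine_head = nn.Linear(feat_dim + horizon * 2, horizon * 2)
        self.horizon = horizon
        self.n_refine_steps = n_refine_steps

    def forward(self, images, vlm_hint=None):
        feats = self.encoder(images)            # (B, feat_dim)
        traj = self.coarse_head(feats)          # (B, horizon * 2)
        # Temporal decoupling (assumed): a cached VLM output computed for an
        # earlier frame is blended in without blocking the real-time branch.
        if vlm_hint is not None:
            traj = traj + vlm_hint
        # Progressive refinement: each step predicts a residual update.
        for _ in range(self.n_refine_steps):
            traj = traj + self.refine_head(torch.cat([feats, traj], dim=-1))
        return traj.view(-1, self.horizon, 2)   # (B, horizon, 2)

if __name__ == "__main__":
    model = SharedEncoderPlanner()
    waypoints = model(torch.randn(2, 3, 224, 224))
    print(waypoints.shape)  # torch.Size([2, 6, 2])
```

The residual-update loop is the key pattern here: each refinement pass conditions on the previous trajectory estimate, so uncertainty can shrink step by step, while the cached VLM hint lets the fast branch run at frame rate independently of VLM latency.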

Citation History

Jan 24, 2026: 15
Jan 27, 2026: 15
Feb 4, 2026: 15
Feb 13, 2026: 15