ProtoCar: Learning 3D Vehicle Prototypes from Single-View and Unconstrained Driving Scene Images
Abstract
Reconstructing 3D models from sensor data is a valuable and promising direction for developing testing and validation environments in applications such as autonomous driving. However, existing methods for 3D modeling often rely on extensive multi-view data or controlled capture conditions, making them difficult and expensive to scale. Furthermore, these methods, particularly those based on neural radiance fields, typically produce implicit models that are challenging to manipulate and slow to render. In this paper, we introduce ProtoCar, a novel approach that overcomes these limitations by learning 3D vehicle prototypes from single-view images captured under diverse and unconstrained visual conditions. ProtoCar uses real-world driving data from LiDAR and image sensors and employs 3D Gaussian splatting to represent geometry and texture explicitly. Extensive experiments demonstrate that ProtoCar generates high-quality 3D models and adapts well to various vehicle types and challenging visual scenarios, offering a scalable and effective solution for 3D modeling in environments with limited and variable visual information.
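As a rough illustration of the explicit representation the abstract refers to, the sketch below shows the standard per-primitive parameterization of 3D Gaussian splatting (mean, anisotropic scale, rotation quaternion, opacity, color), following Kerbl et al.'s formulation rather than ProtoCar's actual code; all class and variable names are ours, and Python is used only for clarity.

```python
# A minimal sketch (not the authors' implementation) of the explicit
# 3D Gaussian splatting primitive that ProtoCar builds on. Names are
# illustrative; the parameter layout follows the standard 3DGS formulation.
from dataclasses import dataclass
import numpy as np

@dataclass
class Gaussian3D:
    mean: np.ndarray      # (3,) center position in world space
    scale: np.ndarray     # (3,) per-axis standard deviations
    rotation: np.ndarray  # (4,) unit quaternion (w, x, y, z)
    opacity: float        # alpha in [0, 1]
    color: np.ndarray     # RGB here; SH coefficients in the full method

    def covariance(self) -> np.ndarray:
        """Sigma = R S S^T R^T, positive semi-definite by construction."""
        w, x, y, z = self.rotation / np.linalg.norm(self.rotation)
        R = np.array([
            [1 - 2*(y*y + z*z), 2*(x*y - z*w),     2*(x*z + y*w)],
            [2*(x*y + z*w),     1 - 2*(x*x + z*z), 2*(y*z - x*w)],
            [2*(x*z - y*w),     2*(y*z + x*w),     1 - 2*(x*x + y*y)],
        ])
        S = np.diag(self.scale)
        return R @ S @ S.T @ R.T

# A vehicle prototype is then a collection of such primitives, which is
# what makes the model explicit and easy to edit or place in a scene.
prototype = [
    Gaussian3D(mean=np.array([0.0, 0.0, 0.5]),
               scale=np.array([0.8, 0.4, 0.3]),
               rotation=np.array([1.0, 0.0, 0.0, 0.0]),
               opacity=0.9,
               color=np.array([0.6, 0.1, 0.1])),
]
print(prototype[0].covariance())
```

Because each primitive is a plain set of geometric parameters rather than weights of a neural field, the resulting models can be inspected, edited, and rasterized quickly, which is the manipulability and rendering-speed advantage the abstract claims over implicit radiance-field methods.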