STI-Bench: Are MLLMs Ready for Precise Spatial-Temporal World Understanding?

38 citations · #70 of 2701 papers in ICCV 2025 · 7 Top Authors · 7 Data Points

Abstract

The use of Multimodal Large Language Models (MLLMs) as an end-to-end solution for Embodied AI and Autonomous Driving has become a prevailing trend. While MLLMs have been extensively studied for visual semantic understanding tasks, their ability to perform precise and quantitative spatial-temporal understanding in real-world applications remains largely unexamined, leaving their prospects uncertain. To evaluate models' Spatial-Temporal Intelligence, we introduce STI-Bench, a benchmark that assesses MLLMs' spatial-temporal understanding through challenging tasks such as estimating and predicting the appearance, pose, displacement, and motion of objects. Our benchmark encompasses a wide range of robot and vehicle operations across desktop, indoor, and outdoor scenarios. Extensive experiments reveal that state-of-the-art MLLMs still struggle with real-world spatial-temporal understanding, especially in tasks requiring precise distance estimation and motion analysis.
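The abstract does not detail how quantitative answers are scored. As a purely illustrative sketch (not STI-Bench's actual protocol), the snippet below shows one way a distance- or displacement-estimation answer could be compared against ground truth using relative error and an accuracy-at-threshold metric; all names (`DistanceSample`, `accuracy_at_threshold`, the 10% threshold) are assumptions introduced here for illustration.

```python
# Hypothetical sketch of scoring distance-estimation answers against ground truth.
# Names and thresholds are illustrative assumptions, not taken from the STI-Bench paper.

from dataclasses import dataclass


@dataclass
class DistanceSample:
    """One hypothetical benchmark item: a question about object displacement."""
    question: str
    ground_truth_m: float   # ground-truth displacement in meters
    model_answer_m: float   # numeric value parsed from the MLLM's textual answer


def relative_error(sample: DistanceSample) -> float:
    """Absolute relative error between the model's answer and ground truth."""
    return abs(sample.model_answer_m - sample.ground_truth_m) / max(sample.ground_truth_m, 1e-6)


def accuracy_at_threshold(samples: list[DistanceSample], threshold: float = 0.1) -> float:
    """Fraction of answers whose relative error falls within the threshold (e.g. 10%)."""
    if not samples:
        return 0.0
    hits = sum(1 for s in samples if relative_error(s) <= threshold)
    return hits / len(samples)


if __name__ == "__main__":
    samples = [
        DistanceSample("How far did the robot arm move?", ground_truth_m=0.42, model_answer_m=0.50),
        DistanceSample("How far did the vehicle travel between frames?", ground_truth_m=12.0, model_answer_m=9.0),
    ]
    print(f"Accuracy@10%: {accuracy_at_threshold(samples):.2f}")
```

A threshold-based accuracy like this is one common way to turn continuous estimation errors into a single comparable score across tasks; the paper's own metrics may differ.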

Citation History

Jan 26, 2026: 36
Jan 27, 2026: 36
Feb 3, 2026: 36
Feb 13, 2026: 38 (+2)