"multi-modal dataset" Papers
6 papers found
AutoHood3D: A Multi‑Modal Benchmark for Automotive Hood Design and Fluid–Structure Interaction
Vansh Sharma, Harish Ganesh, Maryam Akram et al.
NeurIPS 2025 (oral) · arXiv:2511.05596
DetectiumFire: A Comprehensive Multi-modal Dataset Bridging Vision and Language for Fire Understanding
Zixuan Liu, Siavash H. Khajavi, Guangkai Jiang
NeurIPS 2025 · arXiv:2511.02495
mmWalk: Towards Multi-modal Multi-view Walking Assistance
Kedi Ying, Ruiping Liu, Chongyan Chen et al.
NeurIPS 2025 · arXiv:2510.11520
V2X-Radar: A Multi-modal Dataset with 4D Radar for Cooperative Perception
Lei Yang, Xinyu Zhang, Jun Li et al.
NeurIPS 2025 (spotlight) · arXiv:2411.10962
20 citations
EarthVQA: Towards Queryable Earth via Relational Reasoning-Based Remote Sensing Visual Question Answering
Junjue Wang, Zhuo Zheng, Zihang Chen et al.
AAAI 2024 (paper) · arXiv:2312.12222
53 citations
VinT-6D: A Large-Scale Object-in-hand Dataset from Vision, Touch and Proprioception
Zhaoliang Wan, Yonggen Ling, Senlin Yi et al.
ICML 2024 · arXiv:2501.00510
9 citations