"multi-modal sensor fusion" Papers
7 papers found
Ego4o: Egocentric Human Motion Capture and Understanding from Multi-Modal Input
Jian Wang, Rishabh Dabral, Diogo Luvizon et al.
CVPR 2025 · arXiv:2504.08449
6 citations
EgoLM: Multi-Modal Language Model of Egocentric Motions
Fangzhou Hong, Vladimir Guzov, Hyo Jin Kim et al.
CVPR 2025 · arXiv:2409.18127
12 citations
EVT: Efficient View Transformation for Multi-Modal 3D Object Detection
Yongjin Lee, Hyeon-Mun Jeong, Yurim Jeon et al.
ICCV 2025 · arXiv:2411.10715
2 citations
Pixel-aligned RGB-NIR Stereo Imaging and Dataset for Robot Vision
Jinnyeong Kim, Seung-Hwan Baek
CVPR 2025 · arXiv:2411.18025
3 citations
ThermalGen: Style-Disentangled Flow-Based Generative Models for RGB-to-Thermal Image Translation
Jiuhong Xiao, Roshan Nayak, Ning Zhang et al.
NeurIPS 2025 · arXiv:2509.24878
CMDA: Cross-Modal and Domain Adversarial Adaptation for LiDAR-Based 3D Object Detection
Gyusam Chang, Wonseok Roh, Sujin Jang et al.
AAAI 2024 · arXiv:2403.03721
6 citations
Nymeria: A Massive Collection of Egocentric Multi-modal Human Motion in the Wild
Lingni Ma, Yuting Ye, Rowan Postyeni et al.
ECCV 2024