OmniVec2 - A Novel Transformer based Network for Large Scale Multimodal and Multitask Learning

arXiv:2507.13364 · 39 citations · #718 of 2716 papers in CVPR 2024

Abstract

We present a novel multimodal multitask network and an associated training algorithm. The method is capable of ingesting data from approximately 12 different modalities, namely image, video, audio, text, depth, point cloud, time series, tabular, graph, X-ray, infrared, IMU, and hyperspectral. The proposed approach uses modality-specialized tokenizers, a shared transformer architecture, and cross-attention mechanisms to project data from the different modalities into a unified embedding space. It addresses multimodal and multitask scenarios by incorporating modality-specific task heads for the different tasks in the respective modalities. We propose a novel pretraining strategy with iterative modality switching to initialize the network, and a training algorithm that trades off fully joint training over all modalities against training on pairs of modalities at a time. We provide a comprehensive evaluation across 25 datasets from 12 modalities and show state-of-the-art performance, demonstrating the effectiveness of the proposed architecture, pretraining strategy, and adapted multitask training.
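The abstract describes a training algorithm that alternates between fully joint training over all modalities and training on pairs of modalities at a time. A minimal sketch of such a schedule is below; the modality list comes from the abstract, but the function name, the `joint_every` parameter, and the round-robin pairing policy are illustrative assumptions, not the authors' implementation.

```python
import itertools

# Modalities listed in the abstract.
MODALITIES = [
    "image", "video", "audio", "text", "depth", "point_cloud", "time_series",
    "tabular", "graph", "xray", "infrared", "imu", "hyperspectral",
]

def training_schedule(modalities, joint_every=5):
    """Yield the modalities active at each training step.

    Most steps train on one pair of modalities (cycled round-robin over all
    pairs); every `joint_every`-th step is fully joint over all modalities.
    The interleaving ratio is an assumed hyperparameter.
    """
    pairs = itertools.cycle(itertools.combinations(modalities, 2))
    step = 0
    while True:
        step += 1
        if step % joint_every == 0:
            yield tuple(modalities)  # fully joint step
        else:
            yield next(pairs)        # pairwise step

sched = training_schedule(MODALITIES, joint_every=5)
first_steps = [next(sched) for _ in range(5)]
```

Here `first_steps` holds four pairwise steps followed by one fully joint step; an actual trainer would draw batches only from the modalities yielded at each step.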

Citation History

Jan 28, 2026: 0
Feb 13, 2026: 39