Learning Streaming Video Representation via Multitask Training

4 citations · #755 of 2701 papers in ICCV 2025

Abstract

Understanding continuous video streams plays a fundamental role in real-time applications including embodied AI and autonomous driving. Unlike offline video understanding, streaming video understanding requires the ability to process video streams frame by frame, preserve historical information, and make low-latency decisions. To address these challenges, our main contributions are three-fold. (i) We develop a novel streaming video backbone, termed StreamFormer, by incorporating causal temporal attention into a pre-trained vision transformer. This enables efficient streaming video processing while maintaining image representation capability. (ii) To train StreamFormer, we propose to unify diverse spatial-temporal video understanding tasks within a multitask visual-language alignment framework. Hence, StreamFormer learns global semantics, temporal dynamics, and fine-grained spatial relationships simultaneously. (iii) We conduct extensive experiments on online action detection, online video instance segmentation, and video question answering. StreamFormer achieves competitive results while maintaining efficiency, demonstrating its potential for real-time applications.
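The key architectural idea in the abstract is causal temporal attention: each frame's representation may attend only to itself and earlier frames, so no recomputation over future frames is needed when a new frame arrives. The following is a minimal NumPy sketch of that masking pattern over pooled per-frame features; it is an illustrative assumption, not the paper's actual StreamFormer implementation (which operates inside a vision transformer with learned projections).

```python
import numpy as np

def causal_temporal_attention(frames):
    """Hypothetical sketch of causal self-attention over per-frame features.

    frames: (T, D) array, one pooled feature vector per video frame.
    Frame t attends only to frames 0..t, so its output never depends on
    future frames -- the property a streaming backbone needs.
    """
    T, D = frames.shape
    scores = frames @ frames.T / np.sqrt(D)            # (T, T) attention logits
    future = np.triu(np.ones((T, T), dtype=bool), k=1)  # strictly-upper = future
    scores[future] = -np.inf                            # block attention to future frames
    # numerically stable softmax over each row's visible (past) positions
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ frames                             # causally mixed features
```

Because of the mask, running the function on a length-`t` prefix gives exactly the first `t` rows of the full-sequence output, which is what allows frame-by-frame processing with cached history.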

Citation History

- Jan 26, 2026: 0
- Jan 26, 2026: 0
- Jan 27, 2026: 3 (+3)
- Feb 3, 2026: 3
- Feb 13, 2026: 4 (+1)
- Feb 13, 2026: 4
- Feb 13, 2026: 4