VideoCutLER: Surprisingly Simple Unsupervised Video Instance Segmentation

arXiv:2308.14710
37 citations
#764 of 2716 papers in CVPR 2024

Abstract

Existing approaches to unsupervised video instance segmentation typically rely on motion estimates and experience difficulties tracking small or divergent motions. We present VideoCutLER, a simple method for unsupervised multi-instance video segmentation without using motion-based learning signals like optical flow or training on natural videos. Our key insight is that using high-quality pseudo masks and a simple video synthesis method for model training is surprisingly sufficient to enable the resulting video model to effectively segment and track multiple instances across video frames. We show the first competitive unsupervised learning results on the challenging YouTubeVIS-2019 benchmark, achieving 50.7% AP^video_50, surpassing the previous state-of-the-art by a large margin. VideoCutLER can also serve as a strong pretrained model for supervised video instance segmentation tasks, exceeding DINO by 15.9% on YouTubeVIS-2019 in terms of AP^video.
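To illustrate the "simple video synthesis" idea mentioned in the abstract, here is a minimal sketch (not the authors' actual pipeline): a pseudo-masked instance is pasted back onto the source image with a small per-frame shift, producing a short synthetic clip with perfectly known instance masks for training. The function and parameter names (synthesize_clip, num_frames, max_shift) are illustrative assumptions.

```python
import numpy as np

def synthesize_clip(image, mask, num_frames=8, max_shift=10, rng=None):
    """Sketch only: image is HxWx3 uint8, mask is an HxW bool pseudo mask
    for one instance. Returns (frames, masks) stacked along a time axis."""
    rng = rng or np.random.default_rng()
    frames, masks = [], []
    dy, dx = 0, 0
    for _ in range(num_frames):
        # Random walk of the instance position to mimic object motion.
        dy += int(rng.integers(-max_shift, max_shift + 1))
        dx += int(rng.integers(-max_shift, max_shift + 1))
        # np.roll wraps around image borders; a real pipeline would crop/pad instead.
        shifted_mask = np.roll(np.roll(mask, dy, axis=0), dx, axis=1)
        shifted_img = np.roll(np.roll(image, dy, axis=0), dx, axis=1)
        frame = image.copy()
        frame[shifted_mask] = shifted_img[shifted_mask]  # paste instance at its new location
        frames.append(frame)
        masks.append(shifted_mask)
    return np.stack(frames), np.stack(masks)
```

The synthetic clip and its per-frame masks can then supervise a video instance segmentation model directly, with no optical flow or natural-video annotations involved.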

Citation History

Jan 27, 2026: 36 citations
Feb 13, 2026: 37 citations (+1)