Video-ColBERT: Contextualized Late Interaction for Text-to-Video Retrieval

9 citations · #914 of 2873 papers in CVPR 2025 · 10 top authors · 4 data points

Abstract

In this work, we tackle the problem of text-to-video retrieval (T2VR). Inspired by the success of late interaction techniques in text-document, text-image, and text-video retrieval, our approach, Video-ColBERT, introduces a simple and efficient mechanism for fine-grained similarity assessment between queries and videos. Video-ColBERT is built upon three main components: a fine-grained spatial and temporal token-wise interaction, query and visual expansions, and a dual sigmoid loss during training. We find that this interaction and training paradigm leads to strong individual, yet compatible, representations for encoding video content. These representations improve performance on common text-to-video retrieval benchmarks compared to other bi-encoder methods.
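As a rough illustration of the ColBERT-style late interaction and sigmoid training objective the abstract refers to, the sketch below scores a query against a video by matching each contextualized query token to its most similar visual token (a MaxSim reduction) and then applies a SigLIP-style pairwise sigmoid loss over a batch of query-video scores. This is a minimal sketch under generic assumptions: the function names, the temperature and bias values, and the way Video-ColBERT combines spatial and temporal interactions or forms its dual sigmoid loss are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def late_interaction_score(query_tokens: torch.Tensor,
                           video_tokens: torch.Tensor) -> torch.Tensor:
    """ColBERT-style MaxSim late interaction (illustrative sketch).

    query_tokens: (Nq, D) contextualized query token embeddings
    video_tokens: (Nv, D) visual token embeddings, e.g. per-frame or
                  per-patch features gathered over the video
    Each query token is matched to its most similar visual token and the
    per-token maxima are summed into a single query-video score.
    """
    q = F.normalize(query_tokens, dim=-1)
    v = F.normalize(video_tokens, dim=-1)
    sim = q @ v.T                      # (Nq, Nv) token-to-token cosine sims
    return sim.max(dim=-1).values.sum()

def sigmoid_retrieval_loss(scores: torch.Tensor,
                           temperature: float = 10.0,
                           bias: float = -10.0) -> torch.Tensor:
    """SigLIP-style pairwise sigmoid loss over a batch of query-video scores.

    scores: (B, B) matrix where scores[i, j] is the late-interaction score
            between query i and video j; matched pairs sit on the diagonal.
    """
    labels = 2.0 * torch.eye(scores.size(0), device=scores.device) - 1.0
    logits = scores * temperature + bias
    return -F.logsigmoid(labels * logits).mean()
```

A batch score matrix for the loss can be built by applying `late_interaction_score` to every query-video pair in the batch; the "dual" variant in the paper presumably combines two such sigmoid losses (for example, over spatial-only and spatio-temporal scores), but that pairing is an assumption here, not a detail stated in the abstract.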

Citation History

Jan 24, 2026: 9
Feb 13, 2026: 9