Object-Shot Enhanced Grounding Network for Egocentric Video

Abstract

Egocentric video grounding is a crucial task for embodied intelligence applications and differs substantially from exocentric video moment localization. Existing methods focus primarily on the distributional differences between egocentric and exocentric videos, but often neglect the key characteristics of egocentric footage and the fine-grained information emphasized by question-type queries. To address these limitations, we propose OSGNet, an Object-Shot enhanced Grounding Network for egocentric video. Specifically, we extract object information from videos to enrich the video representation, particularly for objects highlighted in the textual query but not directly captured in the video features. In addition, we analyze the frequent shot movements inherent to egocentric video, leveraging them to extract the wearer's attention cues and thereby strengthen the model's cross-modal alignment. Experiments on three datasets demonstrate that OSGNet achieves state-of-the-art performance, validating the effectiveness of our approach. Our code is available at https://github.com/Yisen-Feng/OSGNet.
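
The abstract describes two enhancements: injecting detected-object features into the frame representation, and deriving a wearer-attention signal from egocentric shot (camera) movement. The snippet below is a minimal PyTorch sketch of that fusion idea only; all names (ObjectShotFusion, shot_motion, etc.) are hypothetical assumptions and do not reflect the authors' actual OSGNet implementation, which is in the repository linked above.

```python
# Illustrative sketch, not the authors' code: frames attend to detected
# objects (so query-relevant objects weak in the clip features get injected),
# then a shot-motion cue gates frames as a proxy for wearer attention.
import torch
import torch.nn as nn


class ObjectShotFusion(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Cross-attention: per-frame features query the pooled object features.
        self.obj_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Map a scalar per-frame camera-motion magnitude to a gating weight.
        self.shot_gate = nn.Sequential(nn.Linear(1, dim), nn.Sigmoid())
        self.norm = nn.LayerNorm(dim)

    def forward(self, frames, objects, shot_motion):
        # frames:      (B, T, D)  per-frame video features
        # objects:     (B, K, D)  features of detected objects
        # shot_motion: (B, T, 1)  per-frame shot-movement magnitude
        obj_ctx, _ = self.obj_attn(frames, objects, objects)
        fused = self.norm(frames + obj_ctx)          # object-enhanced frames
        return fused * self.shot_gate(shot_motion)   # attention-weighted


if __name__ == "__main__":
    B, T, K, D = 2, 32, 8, 256
    model = ObjectShotFusion(dim=D)
    out = model(torch.randn(B, T, D), torch.randn(B, K, D), torch.rand(B, T, 1))
    print(out.shape)  # torch.Size([2, 32, 256])
```

The enhanced features would then feed a standard moment-localization head; how OSGNet actually combines these signals with the query is specified in the paper and code, not here.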
