"temporal grounding" Papers

12 papers found

EgoTextVQA: Towards Egocentric Scene-Text Aware Video Question Answering

Sheng Zhou, Junbin Xiao, Qingyun Li et al.

CVPR 2025 · arXiv:2502.07411
30 citations

Factorized Learning for Temporally Grounded Video-Language Models

Wenzheng Zeng, Difei Gao, Mike Zheng Shou et al.

ICCV 2025 · arXiv:2512.24097

MAGNET: A Multi-agent Framework for Finding Audio-Visual Needles by Reasoning over Multi-Video Haystacks

Sanjoy Chowdhury, Mohamed Elmoghany, Yohan Abeysinghe et al.

NeurIPS 2025 (oral) · arXiv:2506.07016
5 citations

Multi-Scale Contrastive Learning for Video Temporal Grounding

Thong Thanh Nguyen, Yi Bin, Xiaobao Wu et al.

AAAI 2025 · arXiv:2412.07157
3 citations

ReVisionLLM: Recursive Vision-Language Model for Temporal Grounding in Hour-Long Videos

Tanveer Hannan, Md Mohaiminul Islam, Jindong Gu et al.

CVPR 2025 · arXiv:2411.14901
9 citations

SEAL: Semantic Attention Learning for Long Video Representation

Lan Wang, Yujia Chen, Wen-Sheng Chu et al.

CVPR 2025 · arXiv:2412.01798
7 citations

Seq2Time: Sequential Knowledge Transfer for Video LLM Temporal Grounding

Andong Deng, Zhongpai Gao, Anwesa Choudhuri et al.

CVPR 2025 · arXiv:2411.16932
7 citations

TOGA: Temporally Grounded Open-Ended Video QA with Weak Supervision

Ayush Gupta, Anirban Roy, Rama Chellappa et al.

ICCV 2025 · arXiv:2506.09445

Tracking and Understanding Object Transformations

Yihong Sun, Xinyu Yang, Jennifer Sun et al.

NeurIPS 2025 (oral) · arXiv:2511.04678

Youku Dense Caption: A Large-scale Chinese Video Dense Caption Dataset and Benchmarks

Zixuan Xiong, Guangwei Xu, Wenkai Zhang et al.

ICLR 2025

Can I Trust Your Answer? Visually Grounded Video Question Answering

Junbin Xiao, Angela Yao, Yicong Li et al.

CVPR 2024 (highlight) · arXiv:2309.01327
113 citations

TimeChat: A Time-sensitive Multimodal Large Language Model for Long Video Understanding

Shuhuai Ren, Linli Yao, Shicheng Li et al.

CVPR 2024 · arXiv:2312.02051
372 citations