Enhancing Video-LLM Reasoning via Agent-of-Thoughts Distillation

arXiv:2412.01694 | CVPR 2025 | 23 citations (ranked #339 of 2873 CVPR 2025 papers)

Abstract

This paper tackles the problem of video question answering (VideoQA), a task that often requires multi-step reasoning and a profound understanding of spatial-temporal dynamics. While large video-language models perform well on benchmarks, they often lack explainability and spatial-temporal grounding. In this paper, we propose Agent-of-Thoughts Distillation (AoTD), a method that enhances models by incorporating automatically generated Chains-of-Thought (CoTs) into the instruction-tuning process. Specifically, we leverage an agent-based system to decompose complex questions into sub-tasks and address them with specialized vision models; the intermediate results are then treated as reasoning chains. We also introduce a verification mechanism using a large language model (LLM) to ensure the reliability of the generated CoTs. Extensive experiments demonstrate that AoTD improves performance on both multiple-choice and open-ended benchmarks.

Citation History

Date          Citations
Jan 24, 2026  22
Jan 27, 2026  22
Feb 3, 2026   22
Feb 13, 2026  23 (+1)