VTQA: Visual Text Question Answering via Entity Alignment and Cross-Media Reasoning

20 citations · #1255 of 2,716 papers in CVPR 2024

Abstract

The ideal form of Visual Question Answering requires understanding, grounding, and reasoning in the joint space of vision and language, and serves as a proxy for the AI task of scene understanding. However, most existing VQA benchmarks are limited to selecting the answer from a pre-defined set of options and pay little attention to text. We present a new challenge with a dataset that contains 23,781 questions based on 10,124 image-text pairs. Specifically, the task requires the model to align multimedia representations of the same entity, perform multi-hop reasoning between image and text, and finally answer the question in natural language. The aim of this challenge is to develop and benchmark models that are capable of multimedia entity alignment, multi-step reasoning, and open-ended answer generation.
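As a rough illustration of the entity-alignment step the abstract describes, below is a minimal sketch that matches text-entity mentions to image regions by cosine similarity in a shared embedding space. The embeddings, entity names, and dimensionality here are hypothetical placeholders, not part of the VTQA dataset or any released baseline.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pairwise cosine similarity between rows of a and rows of b."""
    a_norm = a / np.linalg.norm(a, axis=1, keepdims=True)
    b_norm = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a_norm @ b_norm.T

# Hypothetical pre-computed embeddings: 3 image regions and 2 text-entity
# mentions, each in a shared 4-dimensional space (values are illustrative).
image_regions = np.array([
    [0.9, 0.1, 0.0, 0.2],   # region 0
    [0.1, 0.8, 0.3, 0.0],   # region 1
    [0.0, 0.2, 0.9, 0.1],   # region 2
])
text_entities = np.array([
    [0.8, 0.2, 0.1, 0.1],   # mention of "the red car"
    [0.1, 0.1, 0.9, 0.2],   # mention of "the dog"
])

# Align each text entity to its best-matching image region; a real model
# would learn this shared space end-to-end rather than use fixed vectors.
sim = cosine_similarity(text_entities, image_regions)
best_region = sim.argmax(axis=1)
for i, r in enumerate(best_region):
    print(f"text entity {i} -> image region {r} (sim={sim[i, r]:.2f})")
```

This toy example only shows the alignment operation itself; the multi-hop reasoning and open-ended answer generation the task requires would sit on top of such aligned representations.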

Citation History

Jan 28, 2026: 19
Feb 13, 2026: 20