MASS: Overcoming Language Bias in Image-Text Matching

AAAI 2025 · 0 citations · ranked #2074 of 3028 AAAI 2025 papers

Abstract

Pretrained visual-language models have made significant advances in multimodal tasks, including image-text retrieval. However, a major challenge in image-text matching is language bias: models rely predominantly on language priors and fail to adequately consider the visual content. We therefore present the Multimodal ASsociation Score (MASS), a framework that reduces reliance on language priors to achieve better visual accuracy in image-text matching. It can be seamlessly incorporated into existing visual-language models without additional training. Our experiments show that MASS effectively lessens language bias without losing the ability to understand linguistic compositionality. Overall, MASS offers a promising solution for enhancing image-text matching performance in visual-language models.
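
The abstract does not give the exact formula behind MASS, so the sketch below is only an illustration of the general idea it describes: scoring an image-text pair while discounting the text-only language prior, with no extra training. It uses a PMI-style adjustment, log p(text | image) − log p(text); the callables `vlm_log_likelihood` and `lm_log_likelihood` are hypothetical stand-ins for an existing vision-language model and a text-only prior, not the paper's actual interface.

```python
# Illustrative sketch only; not the paper's published formulation.
# A PMI-style score rewards captions that become more likely once the image
# is observed, which discounts captions that are merely fluent or typical.
from typing import Callable, Sequence


def prior_discounted_scores(
    image: object,
    captions: Sequence[str],
    vlm_log_likelihood: Callable[[object, str], float],  # log p(text | image), hypothetical
    lm_log_likelihood: Callable[[str], float],            # log p(text), text-only prior, hypothetical
) -> list[float]:
    """Score each caption against the image while discounting the language prior."""
    scores = []
    for caption in captions:
        conditional = vlm_log_likelihood(image, caption)  # likelihood given visual evidence
        prior = lm_log_likelihood(caption)                # likelihood from language alone
        scores.append(conditional - prior)                # PMI-style association score
    return scores


# Usage: rank candidate captions for one image and keep the best-associated one.
# scores = prior_discounted_scores(image, captions, vlm_log_likelihood, lm_log_likelihood)
# best_caption = captions[max(range(len(captions)), key=lambda i: scores[i])]
```

Because the adjustment only combines log-likelihoods that pretrained models already produce, it can be bolted onto an existing scorer at inference time, which is consistent with the training-free claim in the abstract.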
