From Local Details to Global Context: Advancing Vision-Language Models with Attention-Based Selection


Abstract

Pretrained vision-language models (VLMs), e.g., CLIP, demonstrate impressive zero-shot capabilities on downstream tasks. Prior research highlights the crucial role of visual augmentation techniques, such as random cropping, in aligning with fine-grained class descriptions generated by large language models (LLMs), significantly enhancing zero-shot performance by incorporating multi-view information. However, the inherent randomness of these augmentations can introduce background artifacts and cause models to overly focus on local details, compromising global semantic understanding. To address these issues, we propose an Attention-Based Selection (ABS) method from local details to global context, which applies attention-guided cropping in both the raw image and the feature space, and supplements global semantic information through strategic feature selection. Additionally, we introduce a soft matching technique to effectively filter LLM descriptions for better alignment. ABS achieves state-of-the-art performance on out-of-distribution generalization and zero-shot classification tasks. Notably, ABS is training-free and even rivals few-shot and test-time adaptation methods.
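For intuition, below is a minimal sketch of the image-space half of attention-guided cropping: it uses a Hugging Face CLIP vision encoder's final-layer CLS attention to locate the most-attended patch and crops a window around it. The model checkpoint, fixed window size, and single-peak heuristic are illustrative assumptions, not the authors' exact ABS procedure.

```python
# Illustrative sketch of attention-guided cropping with CLIP.
# The crop-window size and the use of head-averaged, final-layer CLS
# attention are assumptions for demonstration, not the ABS method itself.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def attention_guided_crop(image: Image.Image, window: int = 128) -> Image.Image:
    """Crop the region the vision encoder attends to most."""
    inputs = processor(images=image, return_tensors="pt")
    with torch.no_grad():
        out = model.vision_model(inputs["pixel_values"], output_attentions=True)
    # Final layer, head-averaged attention from the CLS token to the patch grid.
    cls_attn = out.attentions[-1].mean(dim=1)[0, 0, 1:]   # (num_patches,)
    grid = int(cls_attn.numel() ** 0.5)                   # 7 for ViT-B/32 at 224px
    row, col = divmod(int(cls_attn.argmax()), grid)
    # Map the peak patch back to original-image coordinates.
    cx = (col + 0.5) / grid * image.width
    cy = (row + 0.5) / grid * image.height
    left = max(0, min(image.width - window, int(cx - window / 2)))
    top = max(0, min(image.height - window, int(cy - window / 2)))
    return image.crop((left, top, left + window, top + window))
```

Unlike random cropping, a crop centered on the attention peak is less likely to contain only background, which is the failure mode the abstract attributes to random augmentation.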
