Weighted Point Set Embedding for Multimodal Contrastive Learning Toward Optimal Similarity Metric

3 citations; ranked #2552 of 3827 papers in ICLR 2025

Abstract

In typical multimodal contrastive learning, such as CLIP, encoders produce one point in the latent representation space for each input. However, a one-point representation has difficulty capturing the relationships and similarity structure among the huge number of instances in the real world. To express richer classes of similarity, we propose using weighted point sets, namely, sets of pairs of a weight and a vector, as representations of instances. In this work, we theoretically show the benefit of our proposed method through a new understanding of the contrastive loss of CLIP, which we call symmetric InfoNCE. We clarify that the optimal similarity minimizing symmetric InfoNCE is the pointwise mutual information, and derive an upper bound on the excess risk in downstream classification tasks for representations that achieve the optimal similarity. In addition, we show that our proposed similarity based on weighted point sets can consistently achieve this optimal similarity. To verify the effectiveness of our proposed method, we demonstrate the pretraining of text-image representation models and evaluate them on classification tasks over common benchmarks.
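The abstract names two ingredients: a similarity defined between weighted point sets, and symmetric InfoNCE, i.e., the standard CLIP objective averaged over the image-to-text and text-to-image directions. The PyTorch sketch below illustrates one plausible instantiation. The log-sum-exp form of `weighted_point_set_sim` is an assumption made here for illustration (the paper's exact similarity may differ), and both function names are hypothetical; `symmetric_infonce` itself follows the standard CLIP formulation.

```python
import torch
import torch.nn.functional as F

def weighted_point_set_sim(wa, va, wb, vb):
    """Similarity between two batches of weighted point sets.

    wa: (B, K) nonnegative weights summing to 1 per set; va: (B, K, D) vectors
    wb: (B, K) weights for the second modality;          vb: (B, K, D) vectors
    Returns a (B, B) matrix whose (i, j) entry is
        log sum_{k,l} wa[i,k] * wb[j,l] * exp(<va[i,k], vb[j,l]>),
    a log-sum-exp over all cross-set point pairs (an assumed form, not
    necessarily the paper's exact definition).
    """
    # Inner products between every point of set i and every point of set j:
    # shape (B, B, K, K).
    dots = torch.einsum('ikd,jld->ijkl', va, vb)
    # Fold the weight products in log-space for numerical stability.
    log_w = torch.log(wa.clamp_min(1e-12)).unsqueeze(1).unsqueeze(-1) \
          + torch.log(wb.clamp_min(1e-12)).unsqueeze(0).unsqueeze(2)
    return torch.logsumexp(dots + log_w, dim=(-2, -1))

def symmetric_infonce(sim, temperature=0.07):
    """Symmetric InfoNCE (the CLIP loss) over a (B, B) similarity matrix
    whose diagonal entries correspond to the positive pairs."""
    logits = sim / temperature
    labels = torch.arange(sim.size(0), device=sim.device)
    return 0.5 * (F.cross_entropy(logits, labels)
                  + F.cross_entropy(logits.t(), labels))

# Toy usage with batch size 8, K = 4 points per set, 64-dim vectors.
B, K, D = 8, 4, 64
wa = torch.softmax(torch.randn(B, K), dim=-1)
wb = torch.softmax(torch.randn(B, K), dim=-1)
va, vb = torch.randn(B, K, D), torch.randn(B, K, D)
loss = symmetric_infonce(weighted_point_set_sim(wa, va, wb, vb))
```

With a single point per set and uniform weights (K = 1), the similarity reduces to a plain inner product and the loss reduces to the usual CLIP objective, so the weighted-point-set representation strictly generalizes the one-point case.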

Citation History

Date           Citations
Jan 26, 2026   1
Feb 1, 2026    2 (+1)
Feb 6, 2026    3 (+1)
Feb 13, 2026   3