Unified Multimodal Understanding via Byte-Pair Visual Encoding

ICCV 2025 · 8 citations · ranked #427 of 2701 papers

Abstract

Multimodal large language models (MLLMs) have made significant progress in vision-language understanding, yet effectively aligning different modalities remains a fundamental challenge. We present a framework that unifies multimodal understanding by applying byte-pair encoding to visual tokens. Unlike conventional approaches that rely on modality-specific encoders, our method directly incorporates structural information into visual tokens, mirroring successful tokenization strategies in text-only language models. We introduce a priority-guided encoding scheme that considers both frequency and spatial consistency, coupled with a multi-stage training procedure based on curriculum-driven data composition. These enhancements enable the transformer model to better capture cross-modal relationships and reason with visual information. Comprehensive experiments demonstrate improved performance across diverse vision-language tasks. By bridging the gap between visual and textual representations, our approach contributes to the advancement of more capable and efficient multimodal foundation models.

Citation History

Jan 25, 2026: 0 citations
Jan 26, 2026: 7 citations
Feb 13, 2026: 8 citations