X-Fusion: Introducing New Modality to Frozen Large Language Models

9 citations · #381 of 2,701 papers in ICCV 2025

Abstract

We propose X-Fusion, a framework that extends pretrained Large Language Models (LLMs) for multimodal tasks while preserving their language capabilities. X-Fusion employs a dual-tower design with modality-specific weights, keeping the LLM's parameters frozen while integrating vision-specific information for both understanding and generation. Our experiments demonstrate that X-Fusion consistently outperforms alternative architectures on both image-to-text and text-to-image tasks. We find that incorporating understanding-focused data improves generation quality, reducing image data noise enhances overall performance, and feature alignment accelerates convergence for smaller models but has minimal impact on larger ones. Our findings provide valuable insights into building efficient unified multimodal models.
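The dual-tower idea described above can be sketched in a few lines of PyTorch. This is a hypothetical illustration based only on the abstract, not the paper's code: the class name `DualTowerLayer`, the routing via a boolean `is_image` mask, and all hyperparameters are assumptions. The core idea it shows is that each transformer layer keeps a frozen language tower alongside a trainable vision tower, and each token takes the output of its own modality's tower while both towers attend over the full mixed sequence.

```python
import torch
import torch.nn as nn

class DualTowerLayer(nn.Module):
    """Sketch of a dual-tower transformer layer (assumed design).

    The language tower is a frozen pretrained LLM layer; the vision
    tower is a trainable layer of the same shape that handles image
    tokens. Names and routing here are illustrative assumptions.
    """

    def __init__(self, lang_layer: nn.TransformerEncoderLayer, d_model: int):
        super().__init__()
        self.lang_layer = lang_layer
        for p in self.lang_layer.parameters():
            p.requires_grad = False  # keep the LLM's weights frozen
        # Trainable vision tower; initializing it fresh is one possible
        # choice, not necessarily the paper's.
        self.vision_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, batch_first=True
        )

    def forward(self, x: torch.Tensor, is_image: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model); is_image: (batch, seq) bool mask.
        # Both towers see the full mixed sequence, so information flows
        # across modalities, but each token keeps its own tower's output.
        lang_out = self.lang_layer(x)
        vis_out = self.vision_layer(x)
        return torch.where(is_image.unsqueeze(-1), vis_out, lang_out)


# Usage: a 16-token sequence whose last 8 tokens are image tokens.
d_model = 512
base = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
layer = DualTowerLayer(base, d_model)
x = torch.randn(2, 16, d_model)
is_image = torch.zeros(2, 16, dtype=torch.bool)
is_image[:, 8:] = True
out = layer(x, is_image)  # (2, 16, 512); only vision weights get gradients
```

Because the language tower never receives gradient updates, the LLM's original text behavior is preserved by construction, while the vision tower absorbs all modality-specific learning.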
