ReMamber: Referring Image Segmentation with Mamba Twister

50 citations
#255 of 2387 papers in ECCV 2024
6 Top Authors
7 Data Points

Abstract

Referring Image Segmentation (RIS) leveraging transformers has achieved great success in interpreting complex visual-language tasks. However, the quadratic computational cost makes capturing long-range visual-language dependencies resource-intensive. Fortunately, Mamba addresses this with efficient, linear-complexity processing. Yet directly applying Mamba to multi-modal interaction presents challenges, primarily due to inadequate channel interactions for effective fusion of multi-modal data. In this paper, we propose ReMamber, a novel RIS architecture that integrates the power of Mamba with a multi-modal Mamba Twister block. The Mamba Twister explicitly models image-text interaction and fuses textual and visual features through its unique channel and spatial twisting mechanism. We achieve competitive results on three challenging benchmarks with a simple and efficient architecture. Moreover, we conduct thorough analyses of ReMamber and discuss other fusion designs using Mamba, providing valuable perspectives for future research. The code has been released at: https://github.com/yyh-rain-song/ReMamber.
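The authors' actual implementation is in the linked repository. As a rough illustration of the kind of channel-and-spatial fusion the abstract describes, the following is a minimal, hypothetical PyTorch sketch. The module name TwisterFusionSketch, the use of plain linear and depthwise-convolution mixers in place of Mamba (SSM) layers, and all dimensions are assumptions for illustration only, not the ReMamber design.

```python
# Hypothetical sketch (NOT the authors' code): fuse visual and textual features
# by alternating a channel-mixing step and a spatial-mixing step, the rough idea
# behind a "twisting" fusion. ReMamber uses Mamba (SSM) layers; here plain
# linear and depthwise-conv layers stand in to keep the example self-contained.
import torch
import torch.nn as nn


class TwisterFusionSketch(nn.Module):
    def __init__(self, dim: int, text_dim: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, dim)  # align text channels with image channels
        self.channel_mix = nn.Sequential(          # mixes information along the channel axis
            nn.Linear(2 * dim, dim), nn.GELU(), nn.Linear(dim, dim)
        )
        # depthwise conv mixes information along the spatial (H x W) axes
        self.spatial_mix = nn.Conv2d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, img: torch.Tensor, txt: torch.Tensor) -> torch.Tensor:
        # img: (B, C, H, W) visual features; txt: (B, L, D) text token embeddings
        B, C, H, W = img.shape
        txt_global = self.text_proj(txt.mean(dim=1))              # (B, C) pooled text feature
        txt_map = txt_global[:, :, None, None].expand(B, C, H, W)  # broadcast over space
        fused = torch.cat([img, txt_map], dim=1)                  # (B, 2C, H, W)
        fused = fused.permute(0, 2, 3, 1)                         # (B, H, W, 2C) for channel mixing
        fused = self.channel_mix(fused).permute(0, 3, 1, 2)       # back to (B, C, H, W)
        return img + self.spatial_mix(fused)                      # residual + spatial mixing


# Usage with made-up shapes:
# img_feats = torch.randn(2, 256, 32, 32); txt_feats = torch.randn(2, 20, 768)
# out = TwisterFusionSketch(dim=256, text_dim=768)(img_feats, txt_feats)  # (2, 256, 32, 32)
```

The two mixing steps echo the abstract's point that fusion needs both channel interaction (text and image channels interleaved) and spatial interaction (information propagated across the feature map); consult the released code for how the paper realizes this with Mamba blocks.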

Citation History

Jan 26, 2026: 0
Jan 26, 2026: 49 (+49)
Jan 27, 2026: 49
Feb 3, 2026: 49
Feb 13, 2026: 50 (+1)
Feb 13, 2026: 50
Feb 13, 2026: 50