SAM2Act: Integrating Visual Foundation Model with A Memory Architecture for Robotic Manipulation

arXiv:2501.18564 · 35 citations · #148 of 3,340 papers in ICML 2025

Abstract

Robotic manipulation systems operating in diverse, dynamic environments must exhibit three critical abilities: multitask interaction, generalization to unseen scenarios, and spatial memory. While significant progress has been made in robotic manipulation, existing approaches often fall short in generalizing to complex environmental variations and in addressing memory-dependent tasks. To bridge this gap, we introduce SAM2Act, a multi-view robotic transformer-based policy that leverages multi-resolution upsampling with visual representations from a large-scale foundation model. SAM2Act achieves a state-of-the-art average success rate of 86.8% across 18 tasks in the RLBench benchmark, and demonstrates robust generalization on The Colosseum benchmark, with only a 4.3% performance gap under diverse environmental perturbations. Building on this foundation, we propose SAM2Act+, a memory-based architecture inspired by SAM2, which incorporates a memory bank, an encoder, and an attention mechanism to enhance spatial memory. To address the need for evaluating memory-dependent tasks, we introduce MemoryBench, a novel benchmark designed to assess spatial memory and action recall in robotic manipulation. SAM2Act+ achieves an average success rate of 94.3% on memory-based tasks in MemoryBench, significantly outperforming existing approaches and pushing the boundaries of memory-based robotic systems. Project page: sam2act.github.io.
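
The abstract describes SAM2Act+ as combining a memory bank, a memory encoder, and an attention mechanism in the spirit of SAM2's streaming memory. The sketch below is a minimal, hypothetical illustration of that general pattern (encode each observation into compact memory tokens, then cross-attend from current features to the stored bank); the module names, tensor shapes, and FIFO eviction policy are assumptions for illustration only and are not drawn from the SAM2Act+ implementation.

```python
# Conceptual sketch (not the authors' code) of a SAM2-style memory readout:
# observations are compressed into a memory bank, and the current frame's
# features attend over stored memories via cross-attention.
import torch
import torch.nn as nn


class MemoryEncoder(nn.Module):
    """Compresses per-step visual features into a few memory tokens."""

    def __init__(self, dim: int = 256, n_tokens: int = 4):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool1d(n_tokens)
        self.proj = nn.Linear(dim, dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, D) patch features for one observation
        pooled = self.pool(feats.transpose(1, 2)).transpose(1, 2)  # (B, n_tokens, D)
        return self.proj(pooled)


class MemoryReadout(nn.Module):
    """Cross-attention from current features to a bank of past memories."""

    def __init__(self, dim: int = 256, heads: int = 8, bank_size: int = 8):
        super().__init__()
        self.bank_size = bank_size
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.bank: list[torch.Tensor] = []  # FIFO memory bank (assumed policy)

    def write(self, mem_tokens: torch.Tensor) -> None:
        self.bank.append(mem_tokens)
        if len(self.bank) > self.bank_size:
            self.bank.pop(0)  # evict the oldest memory

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        if not self.bank:
            return feats  # no memory yet: pass features through unchanged
        memory = torch.cat(self.bank, dim=1)  # (B, T * n_tokens, D)
        attended, _ = self.attn(query=feats, key=memory, value=memory)
        return feats + attended  # residual memory-conditioned features


if __name__ == "__main__":
    enc, readout = MemoryEncoder(), MemoryReadout()
    for _ in range(3):  # simulate three observation steps
        feats = torch.randn(1, 64, 256)
        conditioned = readout(feats)      # read: condition on past memories
        readout.write(enc(conditioned))   # write: store compressed memory
    print(conditioned.shape)  # torch.Size([1, 64, 256])
```

The read-then-write loop in the usage example is the key design point: conditioning the current step on the bank lets the policy recall earlier observations, which is what memory-dependent tasks in MemoryBench are meant to test.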

Citation History

0 citations (Jan 28, 2026) → 35 citations (Feb 13, 2026)