Spike2Former: Efficient Spiking Transformer for High-performance Image Segmentation

14 citations · #297 of 3028 papers in AAAI 2025

Abstract

Spiking Neural Networks (SNNs) offer a low-power advantage but perform poorly on image segmentation tasks. The reason is that directly converting neural networks with complex architectural designs for segmentation tasks into spiking versions leads to performance degradation and non-convergence. To address this challenge, we first identify the modules in the architecture design that cause a severe reduction in spike firing, make targeted improvements, and propose the Spike2Former architecture. Second, we propose normalized integer spiking neurons to solve the training-stability problem of SNNs with complex architectures. We set a new state-of-the-art for SNNs on various semantic segmentation datasets, with significant improvements of +12.7% mIoU and 5.0× efficiency on ADE20K, +14.3% mIoU and 5.2× efficiency on VOC2012, and +9.1% mIoU and 6.6× efficiency on CityScapes.
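
The abstract does not spell out how the proposed neuron works. As a rough, hypothetical illustration of what a normalized integer spiking neuron could look like, the PyTorch sketch below emits an integer spike count clipped to {0, ..., d_max} with a straight-through gradient and normalizes the output; the class name, the range d_max, and the straight-through estimator are assumptions for illustration, not the paper's actual formulation.

    import torch

    class NormalizedIntegerLIF(torch.nn.Module):
        # Hypothetical sketch: emits an integer spike count in
        # {0, ..., d_max}, then normalizes it so downstream
        # activations stay in [0, 1].
        def __init__(self, d_max: int = 4):
            super().__init__()
            self.d_max = d_max  # assumed maximum spike count per timestep

        def forward(self, u: torch.Tensor) -> torch.Tensor:
            # Clamp the membrane potential to the representable range.
            u = torch.clamp(u, 0.0, float(self.d_max))
            # Round to an integer count; the straight-through estimator
            # keeps the rounding differentiable during training.
            spikes = u + (torch.round(u) - u).detach()
            # Normalize the integer count to stabilize training.
            return spikes / self.d_max

For example, NormalizedIntegerLIF(d_max=4)(torch.randn(2, 16)) yields values in {0, 0.25, 0.5, 0.75, 1.0}, i.e., integer spike counts rescaled to the unit interval.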

Citation History

Jan 27, 2026: 13
Feb 4, 2026: 14 (+1)
Feb 13, 2026: 14