SuperNormal: Neural Surface Reconstruction via Multi-View Normal Integration

arXiv:2312.04803
20 citations · ranked #1255 of 2716 papers in CVPR 2024
Abstract

We present SuperNormal, a fast, high-fidelity approach to multi-view 3D reconstruction using surface normal maps. Within a few minutes, SuperNormal produces detailed surfaces on par with 3D scanners. We harness volume rendering to optimize a neural signed distance function (SDF) powered by multi-resolution hash encoding. To accelerate training, we propose directional finite difference and patch-based ray marching to approximate the SDF gradients numerically. While not compromising reconstruction quality, this strategy is nearly twice as efficient as analytical gradients and about three times faster than axis-aligned finite difference. Experiments on the benchmark dataset demonstrate the superiority of SuperNormal in efficiency and accuracy compared to existing multi-view photometric stereo methods. On our captured objects, SuperNormal produces more fine-grained geometry than recent neural 3D reconstruction methods.
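
To make the directional finite difference idea concrete, below is a minimal sketch (not the authors' implementation) of estimating an SDF gradient from central differences along an orthonormal basis whose first vector is the ray direction. The `sdf_sphere` function stands in for the paper's hash-encoded neural SDF; the speed-up in the paper comes from sharing the samples along the ray direction with patch-based ray marching, which this toy example does not reproduce.

```python
import torch

# Hypothetical stand-in for the hash-encoded neural SDF from the paper;
# any callable mapping (N, 3) points to (N,) signed distances works here.
def sdf_sphere(x: torch.Tensor, radius: float = 0.5) -> torch.Tensor:
    return x.norm(dim=-1) - radius

def directional_fd_gradient(sdf, x: torch.Tensor, basis: torch.Tensor,
                            eps: float = 1e-3) -> torch.Tensor:
    """Approximate grad f(x) via central differences along an orthonormal
    basis (3, 3). If one basis vector is the viewing-ray direction, samples
    along it could be reused from ray marching. x: (N, 3) query points."""
    grads = torch.zeros_like(x)
    for d in basis:                       # d: (3,) basis direction
        f_plus = sdf(x + eps * d)         # (N,)
        f_minus = sdf(x - eps * d)        # (N,)
        deriv = (f_plus - f_minus) / (2.0 * eps)
        grads += deriv.unsqueeze(-1) * d  # directional derivative times direction
    return grads

if __name__ == "__main__":
    x = torch.randn(4, 3)
    ray_dir = torch.tensor([0.0, 0.0, 1.0])
    # Orthonormal basis whose first vector is (parallel to) the ray direction.
    seed = torch.stack([ray_dir,
                        torch.tensor([1.0, 0.0, 0.0]),
                        torch.tensor([0.0, 1.0, 0.0])]).T
    basis = torch.linalg.qr(seed)[0].T
    n_fd = directional_fd_gradient(sdf_sphere, x, basis)
    n_exact = x / x.norm(dim=-1, keepdim=True)  # analytic normal of a sphere SDF
    print("max abs error:", (n_fd - n_exact).abs().max().item())
```

Summing the directional derivatives times their directions recovers the full gradient because the basis is orthonormal; axis-aligned central differences are the special case where the basis is the identity.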

Citation History

Jan 27, 2026: 17 citations
Feb 13, 2026: 19 citations (+2)
Feb 13, 2026: 20 citations (+1)