G-HOP: Generative Hand-Object Prior for Interaction Reconstruction and Grasp Synthesis

arXiv:2404.12383
33 citations
#852 of 2716 papers in CVPR 2024

Abstract

We propose G-HOP, a denoising-diffusion-based generative prior for hand-object interactions that allows modeling both the 3D object and a human hand, conditioned on the object category. To learn a 3D spatial diffusion model that can capture this joint distribution, we represent the human hand via a skeletal distance field, yielding a representation aligned with the (latent) signed distance field for the object. We show that this hand-object prior can then serve as generic guidance to facilitate other tasks, such as reconstruction from an interaction clip and human grasp synthesis. We believe that our model, trained by aggregating seven diverse real-world interaction datasets spanning 155 categories, represents a first approach that allows jointly generating both hand and object. Our empirical evaluations demonstrate the benefit of this joint prior in video-based reconstruction and human grasp synthesis, outperforming current task-specific baselines. Project website: https://judyye.github.io/ghop-www
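The key representational idea in the abstract is that the hand is encoded as a skeletal distance field living on the same spatial lattice as the object's (latent) signed distance grid. As a minimal sketch of that idea, the code below computes an unsigned distance-to-bone field for a hand skeleton over a regular 3D grid. The function names, toy skeleton, and grid extent are illustrative assumptions, not the paper's implementation; G-HOP's actual field may carry richer per-bone or articulation information.

```python
import numpy as np

def point_to_segment_distance(points, a, b):
    """Distance from each query point (N, 3) to the bone segment (a, b)."""
    ab = b - a                                   # bone direction, shape (3,)
    t = (points - a) @ ab / (ab @ ab + 1e-12)    # projection parameter along the bone
    t = np.clip(t, 0.0, 1.0)                     # clamp to the segment endpoints
    closest = a + t[:, None] * ab                # (N, 3) closest point on the bone
    return np.linalg.norm(points - closest, axis=1)

def skeletal_distance_field(grid_points, joints, bones):
    """
    Unsigned distance from each grid point to the nearest hand bone.
    grid_points: (N, 3) query locations of a 3D grid.
    joints:      (J, 3) hand joint positions.
    bones:       list of (parent, child) joint-index pairs.
    """
    dists = np.stack([
        point_to_segment_distance(grid_points, joints[p], joints[c])
        for p, c in bones
    ], axis=0)
    return dists.min(axis=0)  # (N,) nearest-bone distance per point

# A D^3 grid over a cube, so the hand field is sampled on the same
# lattice one would use for the object's signed distance grid.
D = 32
axis = np.linspace(-0.15, 0.15, D)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), -1).reshape(-1, 3)

# Toy two-bone skeleton (hypothetical joint layout, not MANO indices).
joints = np.array([[0.0, 0.0, 0.0], [0.04, 0.0, 0.0], [0.07, 0.0, 0.0]])
bones = [(0, 1), (1, 2)]
hand_field = skeletal_distance_field(grid, joints, bones)  # shape (D**3,)
```

Because the hand field and the object's latent SDF share one voxel grid, a single diffusion model can denoise them jointly; that alignment is what lets the prior act as guidance for both reconstruction and grasp synthesis.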

Citation History

Jan 28, 2026: 0 citations
Feb 13, 2026: 33 citations (+33)