MirrorPose: Enabling Full-body Gestures Interaction for Head-mounted Devices with a Full-length Mirror

0 citations · #33 of 229 papers in ISMAR 2025

Abstract

Human-computer interaction based on full-body gestures has been successfully adopted in various applications, such as motion-sensing games. Typically, full-body gestures are captured using vision-based pose estimation or multiple inertial measurement units (IMUs) attached to the limbs. Gesture-based interactions in virtual and augmented reality environments allow for seamless and intuitive engagement across virtual and real domains. However, due to the design of head-mounted devices, only partial body tracking—such as hand tracking—is typically available for interaction. Capturing full-body pose with head-mounted sensors is inherently challenging due to device placement constraints. Furthermore, the limited computational resources of AR devices (e.g., constrained processing power and memory bandwidth) pose significant challenges for the real-time deployment of sophisticated 3D human pose estimation architectures. To address these challenges, we propose MirrorPose, a lightweight framework that integrates a 3D pose estimation network (PoseARNet) optimized for the resource constraints of AR headsets and the dynamic viewpoint changes inherent in mirror-mediated spatial perception. This design enables practical, full-body gesture interaction on AR devices. To demonstrate its practicality, we developed a 3D virtual teaching application on Microsoft HoloLens 2 to enhance students' understanding of human poses. Extensive experiments and evaluations confirm that our system provides users with accurate and timely feedback. The code and dataset are available at https://github.com/zhchlong/mirror_pose.

Citation History

Jan 27, 2026: 0
Feb 2, 2026: 0