A Study of Multimodal Pen + Gaze Interaction Techniques for Shape Point Translation in Extended Reality

Ranked #33 of 229 papers at ISMAR 2025

Abstract

Eye-tracking offers new ways to augment our interaction possibilities in extended reality. This paper investigates how gaze can assist pen users in translating shape points within graphical models. By leveraging gaze, we can support typical design activities with an option where objects are selected and repositioned through eye movements, with the pen serving as a confirmation tool. This can reduce manual effort and improve efficiency and ergonomics. To evaluate its effectiveness, we compare four interaction techniques: two pen-based baselines (direct and ray-based) and two gaze-supported methods (gaze for selection and/or object dragging), using a probability-based selection scheme. In a user study, 16 participants carried out a shape point translation task, and we measured their performance, effort, and user experience. The results highlight the performance trade-offs of each technique: while the gaze-based dragging method introduced marginally more errors, it significantly reduced task time. Our findings offer comparative insights into the strengths and limitations of gaze- and pen-based interaction methods, supporting the design of future multimodal 3D design tools.
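The abstract mentions a probability-based selection scheme for resolving which shape point the user's gaze targets. The paper does not specify its formulation, so the following is only a minimal sketch of one plausible variant: candidates are scored by the angular distance between the gaze ray and the direction to each point, and a softmax turns those scores into a selection distribution. The function name, the `sharpness` parameter, and the softmax choice are all illustrative assumptions, not the authors' method.

```python
import math

def gaze_selection_probabilities(gaze_dir, candidate_dirs, sharpness=8.0):
    """Hypothetical probability-based gaze selection (not the paper's scheme).

    gaze_dir, candidate_dirs: unit 3D direction vectors from the eye position.
    Returns a softmax distribution over candidates; the highest-probability
    point would be highlighted for pen confirmation.
    """
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))

    # Angular distance (radians) between the gaze ray and each candidate.
    angles = [math.acos(max(-1.0, min(1.0, dot(gaze_dir, d))))
              for d in candidate_dirs]
    # Softmax over negative angles: closer to the gaze ray -> higher probability.
    weights = [math.exp(-sharpness * a) for a in angles]
    total = sum(weights)
    return [w / total for w in weights]
```

A scheme like this degrades gracefully under eye-tracker noise: instead of a hard nearest-target snap, ambiguous fixations yield a flat distribution that the pen confirmation step can then disambiguate.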
