How Spatial Ability Affects Response to Gaze-Adaptive Cueing in Mixed Reality Spatial Navigation
Abstract
Gaze-adaptive interfaces are increasingly employed in mixed reality (MR) to support attention-aware interaction. However, it remains unclear how the usability and effectiveness of such interfaces are influenced by individual differences in spatial ability, cognition, or prior experience. This paper addresses this gap by focusing on users' spatial navigation ability and its role in modulating their interaction with gaze-adaptive cues during spatial navigation. We compare gaze-adaptive cues to proximity-triggered cues in an MR navigation task and leverage eye tracking to examine how users' gaze patterns relate to their spatial ability and spatial knowledge acquisition under each cue type. This enables us to explore content-independent gaze measures as a potential objective approach to assessing spatial ability and learning outcomes in MR. Our findings show that gaze-adaptive cueing affects users differently depending on their spatial ability. In particular, when gaze-adaptive cues were added, individuals with higher spatial ability engaged with the cues more and exhibited longer fixations. Furthermore, higher spatial ability and better learning outcomes were linked to more effective gaze patterns, characterized by longer fixation durations, larger saccadic amplitudes, and reduced gaze transition entropy, with effects varying by cue type. Our findings highlight the potential of gaze transition entropy for assessing spatial ability differences in MR spatial tasks.