Jiayu He, Yifan Li, Oliver R. Runswick, Peter D. Hodkinson, Jarle Steinberg, Felix Gorbatsevich, Yang Gao
Creative Commons Attribution 4.0 International license
Extravehicular activities (EVAs) are increasingly frequent in human spaceflight, particularly in spacecraft maintenance, scientific research, and planetary exploration. Spacesuits are essential for sustaining astronauts in the harsh environment of space, making their design a key factor in the success of EVA missions. The development of spacesuit technology has traditionally been driven by highly engineered solutions focused on life support, mission adaptability, and operational efficiency. Modern spacesuits prioritize maintaining optimal internal temperature, humidity, and pressure, as well as withstanding extreme temperature fluctuations and providing robust protection against micrometeoroid impacts and space debris. However, their bulkiness and rigidity impose significant physical strain on astronauts, reducing mobility and dexterity, particularly in tasks requiring fine motor control. The restricted field of view further complicates situational awareness, increasing the cognitive load during high-precision operations. While traditional spacesuits support basic EVA tasks, the shift of future space exploration toward long-duration lunar and Martian surface missions demands more adaptive, intelligent, and astronaut-centric designs to overcome current constraints. To explore a next-generation spacesuit, this paper proposes an in-process eye-tracking embedded Augmented Reality (AR) Spacesuit System to enhance astronaut-environment interactions. By leveraging Segment-Anything Models (SAM) and Vision-Language Models (VLMs), we demonstrate a four-step approach enabling top-down gaze detection to minimize erroneous fixation data, gaze-based segmentation of objects of interest, real-time contextual assistance via AR overlays, and hands-free operation within the spacesuit. This approach enhances real-time situational awareness and improves EVA task efficiency. We conclude with an exploration of the AR Helmet System's potential to revolutionize human-space interaction paradigms for future long-duration deep-space missions and discuss the further optimization of eye-tracking interactions using VLMs to predict astronaut intent and highlight relevant objects preemptively.
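To make the gaze-based segmentation step of the described pipeline concrete, the sketch below shows how a single eye-tracking fixation could be used as a point prompt to the publicly released Segment Anything Model (SAM), whose mask could then feed an AR highlight or a VLM query. This is a minimal illustration under assumptions, not the authors' implementation: the checkpoint path, helmet-camera frame, and fixation coordinates are hypothetical placeholders.

```python
# Illustrative sketch (not the paper's code): prompt SAM with one gaze fixation
# point to segment the object an astronaut is looking at.
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a SAM backbone from a local checkpoint (placeholder path).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b.pth")
predictor = SamPredictor(sam)

# Helmet-camera frame (placeholder file); SAM expects an RGB array.
frame = cv2.cvtColor(cv2.imread("helmet_frame.png"), cv2.COLOR_BGR2RGB)
predictor.set_image(frame)

# One gaze fixation, already mapped into image coordinates by the eye tracker
# (hypothetical values). Label 1 marks it as a foreground prompt.
gaze_point = np.array([[812, 406]])
point_label = np.array([1])

# Ask SAM for candidate masks around the fixation and keep the highest-scoring one.
masks, scores, _ = predictor.predict(
    point_coords=gaze_point,
    point_labels=point_label,
    multimask_output=True,
)
best_mask = masks[int(np.argmax(scores))]

# The binary mask could then be rendered as an AR overlay on the visor display,
# or the masked region cropped and sent to a VLM for contextual assistance.
print("Mask covers", int(best_mask.sum()), "pixels")
```

In such a pipeline, prompting SAM with a fixation point rather than a bounding box keeps the interaction hands-free, which is the constraint the abstract emphasizes for operation inside the spacesuit.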
@InProceedings{he_et_al:OASIcs.SpaceCHI.2025.29,
author = {He, Jiayu and Li, Yifan and Runswick, Oliver R. and Hodkinson, Peter D. and Steinberg, Jarle and Gorbatsevich, Felix and Gao, Yang},
title = {{Gaze Beyond Limits: Integrating Eye-Tracking and Augmented Reality for Next-Generation Spacesuit Interaction}},
booktitle = {Advancing Human-Computer Interaction for Space Exploration (SpaceCHI 2025)},
pages = {29:1--29:15},
series = {Open Access Series in Informatics (OASIcs)},
ISBN = {978-3-95977-384-3},
ISSN = {2190-6807},
year = {2025},
volume = {130},
editor = {Bensch, Leonie and Nilsson, Tommy and Nisser, Martin and Pataranutaporn, Pat and Schmidt, Albrecht and Sumini, Valentina},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/OASIcs.SpaceCHI.2025.29},
URN = {urn:nbn:de:0030-drops-240197},
doi = {10.4230/OASIcs.SpaceCHI.2025.29},
annote = {Keywords: Augmented Reality (AR), Eye-Tracking, Cognitive Load/Workload, Segment Anything Model (SAM), Visual Language Models (VLMs)}
}