Unbound Human-Machine Interfaces for Interaction in Weightless Environments
Abstract
User interfaces are subject to the rules of physics (e.g., Newton's and Archimedes' laws) relevant to the environment they operate in. As such, most interfaces and interaction techniques have been designed for use on the Earth's surface. However, when interacting with technology in weightless environments, such as in space, both human and machine will be subject to different physical constraints. For instance, underwater or in space, people can experience spatial disorientation, which in turn affects how they use a system. This position paper conceptualizes Unbound Human-Machine Interfaces (HMIs) as interfaces where either the human or the machine, or both, are located beyond the Earth's surface. In particular, it describes how traditional HCI needs to be rethought for interaction in weightless environments and how theoretical models such as joint cognition can support future developments of Unbound interfaces.
Keywords and phrases: human-robot interaction, gravity, space, interaction technique
2012 ACM Subject Classification: Computer systems organization → External interfaces for robotics; Human-centered computing → HCI theory, concepts and models; Human-centered computing → Interaction paradigms
Editors: Leonie Bensch, Tommy Nilsson, Martin Nisser, Pat Pataranutaporn, Albrecht Schmidt, and Valentina Sumini
Series and Publisher: Open Access Series in Informatics (OASIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Conceptualizing Unbound Interfaces
Deploying autonomous devices to support humans in their tasks has been a longstanding ambition of scientists across numerous fields. Technological progress and innovations over the last decades have not only brought this vision to life in professional environments where people can now work with devices [42], they have also enabled this in space, such as for astronauts with free-flying robots [7]. In such a setting, the physicality [31, 47] between human and autonomous devices can be defined as Unbound, as the person and device are located beyond the Earth's surface. We propose that the term applies to a broader range of interactions, such as when a robot and diver work together underwater [35]. Such interactions are also possible on Earth when the device is not bound to a surface, as in human-drone interaction, where the person is on the Earth's surface and the machine is flying [26]. Figure 1 presents examples of such interfaces.
Designing interaction between human and machine is particularly challenging in Unbound contexts, as interaction techniques rely primarily on a shared plane of reference, which is unavailable in such contexts. For example, prior research showed that the positioning (distance and angle) between human and machine is essential to the interaction design [22]. Yet, this research does not consider the possibility of movement across all three dimensions of space and the constant changes that may occur in such conditions, nor does it account for differences in friction or gravity. Similarly, taxonomies of interaction with mobile systems, built around the concepts of space and location, did not consider the full degrees of freedom between human and machine (e.g., [16]). Yet, these notions are fundamental in Unbound contexts: for example, the positioning of sensors on the machine and the position of the human relative to the machine will affect the resolution and quality of the resulting interaction, which will simply fail if the user cannot be detected [8]. Similarly, a person will only see visual feedback on a machine if it is within their visual field; otherwise, the interaction breaks down [30]. In space or underwater, the visual field will often be limited (e.g., by a helmet or goggles), and interaction techniques therefore need to take this into account. Another example is Fitts' Law [19], which has been widely used and adapted for pointing in Human-Machine Interfaces (HMIs) [40, 23], and which relies on the rules of Earth physics. As such, Fitts' Law cannot be directly applied to Unbound HMIs where a person wants to point and the device must make sense of it, due to issues such as ambiguity, latency, and inertia. This lack of a shared frame of reference in Unbound contexts underscores that the standard interaction paradigms established over the last 50 years of research in Human-Computer Interaction (HCI) do not hold beyond the Earth's surface.
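To ground this point, consider the Shannon formulation of Fitts' Law commonly used in HCI [40], restated below; MT is the movement time to acquire a target of width W at distance D, and a and b are empirical constants fitted from user data:

```latex
% Shannon formulation of Fitts' Law (a minimal restatement, following MacKenzie [40]):
%   MT : movement time,  D : distance to target,  W : target width,
%   a, b : empirical constants fitted per device, user population, and setting.
MT = a + b \log_2\!\left(\frac{D}{W} + 1\right)
```

The constants a and b silently absorb environment-specific dynamics such as gravity, friction, and postural stability; values fitted on the ground therefore cannot be assumed to predict pointing performance for a free-floating user, whose arm movements also set their own body in motion.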
2 From Traditional HCI to Interaction in Weightless Environments
Human-machine interfaces have traditionally been built to support users interacting with technologies on the Earth's surface. The first user interfaces provided users with input devices such as a mouse and keyboard, and with output via visual Graphical User Interfaces (GUIs) displayed on a screen [52]. Over the past decades, much research has been conducted and novel multimodal input and output techniques have been proposed, from touchscreens and voice user interfaces for input, to visual, auditory, haptic, or even olfactory cues for output. Yet, for all of these interfaces, the interaction happens when the user and the device are within reach of each other, with this reach varying by modality: a person needs to be within arm's reach to interact with a touchscreen but can be further away from the device in the case of voice input. Several research works have investigated these notions of interaction distances between user and device (e.g., [3, 22, 16]). These models have enabled researchers to go beyond the close interaction that interfaces are traditionally designed for and to investigate transitional stages where people become aware of the device, and vice versa [3, 60, 59], or prepare for the interaction (e.g., adapting to the user [48]). Such research has been fundamental to the field of Human-Robot Interaction (HRI) in particular, where researchers have developed robots capable of understanding people's needs [56].
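As a toy illustration of this modality-dependent reach, the following sketch checks which input modalities remain feasible at a given user-device distance; the thresholds are hypothetical placeholders, not values from the works cited above:

```python
# Illustrative sketch of modality-dependent interaction reach.
# All thresholds are hypothetical placeholders for illustration.

MODALITY_REACH_M = {
    "touch": 0.7,     # roughly arm's reach
    "gesture": 5.0,   # bounded by the device's camera range
    "voice": 10.0,    # bounded by microphone pickup and ambient noise
}

def feasible_modalities(distance_m: float) -> list[str]:
    """Return the input modalities usable at a given user-device distance."""
    return [m for m, reach in MODALITY_REACH_M.items() if distance_m <= reach]

print(feasible_modalities(0.5))  # ['touch', 'gesture', 'voice']
print(feasible_modalities(3.0))  # ['gesture', 'voice']
```

In an Unbound setting, a single scalar distance is no longer sufficient: relative orientation, drift, and the surrounding medium (air, water, or a suited vacuum) all modulate each modality's effective reach.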
In particular, the literature has identified a plethora of criteria that affect the user's experience, such as the appearance of the device [33] or the distance between the device and the person [58]. Focusing on the latter, we find that interfaces are traditionally designed for people to interact at a certain distance from the device, using specific modalities (e.g., touch or voice), and enabling people to point or refer to specific locations that the device can make sense of and compute within its own frame of reference. Moreover, research in robotics has shown over the years the importance of proxemics in human-robot interaction [61], demonstrating that when a robot gets too close to a person's body, they can experience discomfort and even withdraw [41, 53, 13, 30]. This body of work on interpersonal distancing is grounded in the notion of proxemics [24]. Preliminary work in human-drone interaction analyzed such proxemics and showed that the added degree of freedom indeed affects the characteristics of comfortable interaction [62, 18]. As such, a major challenge in Unbound contexts will be to understand how 3D positioning affects human-machine interactions and which mechanisms can be used to mediate interaction distances. It is important to highlight that many additional constraints will affect users in space, e.g., whether or not they are located inside a vessel. Here, we propose to focus entirely on aspects linked to the physics of the environment.
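To sketch what generalizing proxemics to 3D might involve, the fragment below computes the relative pose between a floating user and a free-flying device and tests for intrusion into a spherical, rather than planar, personal zone. All names, the gaze model, and the zone radius are illustrative assumptions, not results from the studies cited above:

```python
import numpy as np

# Illustrative sketch: generalizing proxemic zones to 3D for an Unbound HMI.
# Positions/orientations are assumed to come from some tracking system;
# the zone radius is a hypothetical placeholder, not an empirical value.

PERSONAL_ZONE_M = 1.2  # radius of a spherical "personal space" around the user

def relative_pose(user_pos, user_gaze, device_pos):
    """Distance to the device and the angle (radians) between the user's
    gaze direction and the direction toward the device."""
    offset = np.asarray(device_pos, float) - np.asarray(user_pos, float)
    distance = np.linalg.norm(offset)
    gaze = np.asarray(user_gaze, float)
    gaze /= np.linalg.norm(gaze)
    cos_angle = np.clip(offset @ gaze / distance, -1.0, 1.0)
    return distance, np.arccos(cos_angle)

def intrudes_personal_space(user_pos, device_pos):
    """On the ground, proxemic zones are often modeled as planar rings; in
    weightlessness a device can approach from any direction, so the zone
    becomes a full sphere."""
    offset = np.asarray(device_pos, float) - np.asarray(user_pos, float)
    return np.linalg.norm(offset) < PERSONAL_ZONE_M

d, ang = relative_pose([0, 0, 0], [1, 0, 0], [0.5, 0.0, 1.0])
print(f"distance={d:.2f} m, angle={np.degrees(ang):.0f} deg")  # 1.12 m, 63 deg
print(intrudes_personal_space([0, 0, 0], [0.5, 0.0, 1.0]))     # True
```

The key design shift is that the device may approach from directions outside the user's limited visual field, so intrusion handling would need to combine distance checks with visibility checks [30].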
As Unbound interfaces become a reality, we find that existing interaction metaphors, concepts, and paradigms are not adapted to the added degrees of freedom. As such, there is a need to entirely rethink how people interact with such devices, which, for example, could be positioned anywhere in space relative to the user. Additional degrees of freedom not only complicate our interaction models but also highlight some of their limitations. The difficulties in creating Unbound interfaces are not merely a matter of adapting findings from ground robotics or from the autonomous vehicle literature, because there are fundamental gaps in the scientific and technological knowledge required to develop such systems. For instance, current interfaces are largely conditioned on their context of use and lack ecological validity for machines that will be deployed beyond the Earth's surface, such as in space. We furthermore need novel methodologies and metrics to define successful interactions. In summary, while interaction techniques have been developed to support HMIs on the Earth's surface, it is not clear how current technologies and methodologies will support Unbound interactions.
3 Existing Unbound HMIs
Prior work researching interfaces where the machine is Unbound is primarily found in the Human-Drone Interaction (HDI) literature. In collocated HDI, many interaction techniques have been researched. Input from the user to the drone can use a plethora of modalities, from voice [46] and gestures [9, 43] to gaze [34] and touch [1, 38] (see the surveys [57, 26]). Diverse modalities have also been proposed to output information from a drone to a user in collocated settings. One of the prominent modalities is visual: LEDs have been used to convey a drone's intent [55, 21]; a screen to convey information [50] or even the drone's emotional state [25]; a projector to display a UI [5, 10]; a beam of light to indicate a point of interest [36]; and the drone's movement to convey intent and affect [51, 54, 11] or to indicate the way [12]. Another modality is audio, where Lieser et al. [37] proposed the use of vocalics, for instance to attract a person's attention.
The research on interfaces for which the human is Unbound is, however, in its infancy. Prior works have explored HMIs between autonomous underwater robots and divers, but these primarily focus on technical solutions (e.g., computer vision, algorithmic, and machine learning contributions) to make the interaction work [14, 15, 32]. We further find discussions of an underwater humanoid robot interacting with a diver [35] or serving as a proxy for interaction between a diver and a remote operator [4]. However, these works did not focus on the interfaces themselves or on whether they were suitable for such environments. We also find devices such as in-cabin flying robots aboard the space station [17] (e.g., CIMON [2], Astrobee [7], BIT [39]), which have been designed with multimodal input and output capabilities to communicate directly or indirectly with the astronaut crew (e.g., to convey state and intentions). Some of the challenges for HRI in space have already been identified [20, 44, 45], including discussions of the challenges of using input devices that are Unbound in micro-gravity environments [6]. Furthermore, recent work proposed technical solutions adapted to the use of eXtended Reality (XR) in space [49], highlighting the limitations of current hardware for such deployments. All of these exemplify the emergence of the field and the growing interest in interaction in Unbound contexts.
4 Need for Theoretical Modeling
We propose that, beyond technical solutions, theoretical models are needed to support the design and evaluation of Unbound interfaces. One theoretical framing that could be adopted is the concept of joint cognitive systems, which focuses on the cognitive aspects of human-machine interaction. The initial term was Cognitive Systems Engineering (CSE) [27], whose central tenet is that a human-machine system needs to be conceived, designed, analyzed, and evaluated as a cognitive system. The configuration or organization of the human and machine components is a critical determinant of the outcome or output of the system as a whole. One of the fundamentals of CSE is to explore how humans and devices can be described as joint cognitive systems (JCS), and how this extends the scope from the interaction between human and machine to how humans and technology can effectively work together [28]. JCS thus does not focus on the interaction between human and machine, but instead on the external function, i.e., the result of their joint activity. One key element of such a theoretical model will be to establish what "joint cognition" is between devices and users, with a focus not only on human factors [29] but also on traditional HCI. Such factors could include perception, affordance, coordination, trust, and resilience in support of establishing a joint cognitive system. Empirical research will be paramount in establishing theoretical models, and as such, we envision running simulations as well as user studies in real (e.g., underwater, analog missions) or simulated environments (e.g., parabolic flights). A next step will then be to provide design principles that take into account how the rules of physics apply differently depending on the environment of both machine and human.
5 Future Work and Conclusion
Much research is needed to provide one or several theoretical models that can support Unbound interactions, such as interactions happening between human and machine in space. It will require both theory and empirical research to identify the various parameters at play, as well as a complete rethinking of interactions beyond the physical constraints of the Earth's surface. This position paper conceptualizes various types of interfaces and interactions taking place beyond the Earth's surface, identifies existing literature that falls within this concept, and proposes potential directions for modeling Unbound interactions.
References
- [1] Parastoo Abtahi, David Y. Zhao, Jane L. E, and James A. Landay. Drone Near Me: Exploring Touch-Based Human-Drone Interaction. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies, 1(3):34, September 2017. doi:10.1145/3130899.
- [2] Airbus. "Hello, I am CIMON*!", February 2018. URL: https://www.airbus.com/newsroom/press-releases/en/2018/02/hello--i-am-cimon-.html.
- [3] Steve Benford and Lennart Fahlén. A spatial model of interaction in large virtual environments. In Proceedings of the Third European Conference on Computer-Supported Cooperative Work (ECSCW '93), Milan, Italy, pages 109–124. Springer, 1993.
- [4] Gerald Brantner and Oussama Khatib. Controlling Ocean One. In Marco Hutter and Roland Siegwart, editors, Field and Service Robotics, pages 3–17. Springer, Cham, 2018. doi:10.1007/978-3-319-67361-5_1.
- [5] Anke M. Brock, Julia Chatain, Michelle Park, Tommy Fang, Martin Hachet, James A. Landay, and Jessica R. Cauchard. FlyMap: Interacting with Maps Projected from a Drone. In Proceedings of the 7th ACM International Symposium on Pervasive Displays - PerDis ’18, pages 1–9, New York, New York, USA, 2018. ACM Press. doi:10.1145/3205873.3205877.
- [6] Damien Brun, Caglar Genc, and Jonna Häkkilä. Concepting personal input devices for micro-gravity. In SpaceCHI 2021, 2021.
- [7] Maria G Bualat, Trey Smith, Terrence W Fong, Ernest E Smith, and D W Wheeler. Astrobee: A New Tool for ISS Operations. In 2018 SpaceOps Conferences, pages 1–11, Marseille, France, May 2018. doi:10.2514/6.2018-2517.
- [8] Jennifer Carlson and Robin R. Murphy. How UGVs physically fail in the field. IEEE Transactions on Robotics, 21(3):423–437, June 2005. doi:10.1109/TRO.2004.838027.
- [9] Jessica R. Cauchard, Jane L. E, Kevin Y. Zhai, and James A. Landay. Drone & Me: An Exploration Into Natural Human-Drone Interaction. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing - UbiComp ’15, pages 361–365, New York, New York, USA, 2015. Association for Computing Machinery. doi:10.1145/2750858.2805823.
- [10] Jessica R. Cauchard, Alex Tamkin, Cheng Yao Wang, Luke Vink, Michelle Park, Tommy Fang, and James A. Landay. Drone.io: A Gestural and Visual Interface for Human-Drone Interaction. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 153–162. IEEE, March 2019. doi:10.1109/HRI.2019.8673011.
- [11] Jessica R. Cauchard, Kevin Y. Zhai, Marco Spadafora, and James A. Landay. Emotion Encoding in Human-Drone Interaction. In Proceedings of the 2016 ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 263–270, 2016. doi:10.1109/HRI.2016.7451761.
- [12] Ashley Colley, Lasse Virtanen, Pascal Knierim, and Jonna Häkkilä. Investigating Drone Motion as Pedestrian Guidance. In Proceedings of the 16th International Conference on Mobile and Ubiquitous Multimedia, 2017. doi:10.1145/3152832.3152837.
- [13] Martin Cooney, Francesco Zanlungo, Shuichi Nishio, and Hiroshi Ishiguro. Designing a Flying Humanoid Robot (FHR): Effects of Flight on Interactive Communication. In 2012 IEEE RO-MAN: The 21st IEEE International Symposium on Robot and Human Interactive Communication, pages 364–371, 2012. doi:10.1109/ROMAN.2012.6343780.
- [14] Karin De Langis and Junaed Sattar. Realtime Multi-Diver Tracking and Re-identification for Underwater Human-Robot Collaboration. In Proceedings - IEEE International Conference on Robotics and Automation, pages 11140–11146. Institute of Electrical and Electronics Engineers Inc., May 2020. doi:10.1109/ICRA40945.2020.9197308.
- [15] Kevin J. DeMarco, Michael E. West, and Ayanna M. Howard. Sonar-based detection and tracking of a diver for underwater human-robot interaction scenarios. In Proceedings - 2013 IEEE International Conference on Systems, Man, and Cybernetics, SMC 2013, pages 2378–2383, 2013. doi:10.1109/SMC.2013.406.
- [16] Alan Dix, Tom Rodden, Nigel Davies, Jonathan Trevor, Adrian Friday, and Kevin Palfreyman. Exploiting Space and Location as a Design Framework for Interactive Mobile Systems. ACM Transactions on Computer-Human Interaction, 7(3):285–321, September 2000. doi:10.1145/355324.355325.
- [17] Gregory A. Dorais and Yuri Gawdiak. The personal satellite assistant: An internal spacecraft autonomous mobile monitor. In IEEE Aerospace Conference Proceedings, pages 333–348, 2003. doi:10.1109/AERO.2003.1235064.
- [18] Brittany A. Duncan and Robin R. Murphy. Effects of Speed, Cyclicity, and Dimensionality on Distancing, Time, and Preference in Human-Aerial Vehicle Interactions. ACM Transactions on Interactive Intelligent Systems, 7(3):1–27, September 2017. doi:10.1145/2983927.
- [19] Paul M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381–391, June 1954. doi:10.1037/h0055392.
- [20] Terrence Fong and Illah Nourbakhsh. Interaction challenges in human-robot space exploration. Interactions, 12(2):42–45, March 2005. doi:10.1145/1052438.1052462.
- [21] Eyal Ginosar and Jessica R. Cauchard. At first light: Expressive lights in support of drone-initiated communication. In Proc. CHI ’23, New York, NY, USA, 2023. ACM. doi:10.1145/3544548.3581062.
- [22] Saul Greenberg, Kasper Hornbæk, Aaron Quigley, Harald Reiterer, and Roman Rädle. Proxemics in Human-Computer Interaction (Dagstuhl Seminar 13452). Dagstuhl Reports, 3(11):29–57, 2014. doi:10.4230/DAGREP.3.11.29.
- [23] Yves Guiard and Michel Beaudouin-Lafon. Fitts’ law 50 years later: applications and contributions from human–computer interaction. International Journal of Human-Computer Studies, 61(6):747–750, December 2004. doi:10.1016/j.ijhcs.2004.09.003.
- [24] Edward Twitchell Hall. The Hidden Dimension. Doubleday, Garden City, NY, 1966. URL: https://psycnet.apa.org/record/2003-00029-000.
- [25] Viviane Herdel, Anastasia Kuzminykh, Andrea Hildebrandt, and Jessica R. Cauchard. Drone in love: Emotional perception of facial expressions on flying robots. In Proc. CHI ’21, New York, NY, USA, 2021. ACM. doi:10.1145/3411764.3445495.
- [26] Viviane Herdel, Lee J. Yamin, and Jessica R. Cauchard. Above and beyond: A scoping review of domains and applications for human-drone interaction. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3491102.3501881.
- [27] Erik Hollnagel and David D Woods. Cognitive systems engineering: New wine in new bottles. International journal of man-machine studies, 18(6):583–600, 1983. doi:10.1016/S0020-7373(83)80034-0.
- [28] Erik Hollnagel and David D Woods. Joint cognitive systems: Foundations of cognitive systems engineering. CRC press, 2005.
- [29] Erik Hollnagel and David D. Woods. Joint Cognitive Systems: Foundations of Cognitive Systems Engineering. CRC Press, December 2019.
- [30] Shanee S. Honig, Tal Oron-Gilad, Hanan Zaichyk, Vardit Sarne-Fleischmann, Samuel Olatunji, and Yael Edan. Toward Socially Aware Person-Following Robots. IEEE Transactions on Cognitive and Developmental Systems, 10(4):936–954, 2018. doi:10.1109/TCDS.2018.2825641.
- [31] Eva Hornecker. The Role of Physicality in Tangible and Embodied Interactions. Interactions, 18(2):19–23, March 2011. doi:10.1145/1925820.1925826.
- [32] Md Jahidul Islam, Marc Ho, and Junaed Sattar. Understanding human motion and gestures for underwater human-robot collaboration. Journal of Field Robotics, 36(5):851–873, August 2019. doi:10.1002/rob.21837.
- [33] Hao Jiang, Siyuan Lin, Veerajagadheswar Prabakaran, Mohan Rajesh Elara, and Lingyun Sun. A survey of users’ expectations towards on-body companion robots. In Proceedings of the 2019 on Designing Interactive Systems Conference, pages 621–632, 2019. doi:10.1145/3322276.3322316.
- [34] Mohamed Khamis, Anna Kienle, Florian Alt, and Andreas Bulling. GazeDrone: Mobile Eye-Based Interaction in Public Space Without Augmenting the User. In Proceedings of the 4th ACM Workshop on Micro Aerial Vehicle Networks, Systems, and Applications - DroNet’18, pages 66–71, New York, New York, USA, 2018. ACM Press. doi:10.1145/3213526.3213539.
- [35] Oussama Khatib, Xiyang Yeh, Gerald Brantner, Brian Soe, Boyeon Kim, Shameek Ganguly, Hannah Stuart, Shiquan Wang, Mark Cutkosky, Aaron Edsinger, Phillip Mullins, Mitchell Barham, Christian R. Voolstra, Khaled Nabil Salama, Michel L’Hour, and Vincent Creuze. Ocean one: A robotic avatar for oceanic discovery. IEEE Robotics and Automation Magazine, 23(4):20–29, December 2016. doi:10.1109/MRA.2016.2613281.
- [36] Moyi Li, Dzmitry Katsiuba, Mateusz Dolata, and Gerhard Schwabe. Firefighters’ perceptions on collaboration and interaction with autonomous drones: Results of a field trial. In Proc. CHI ’24, New York, NY, USA, 2024. ACM. doi:10.1145/3613904.3642061.
- [37] Marc Lieser and Ulrich Schwanecke. Vocalics in human-drone interaction. In IEEE RO-MAN, pages 2226–2232, 2024. doi:10.1109/RO-MAN60168.2024.10731428.
- [38] Marc Lieser, Ulrich Schwanecke, and Jörg Berdux. Tactile human-quadrotor interaction: Metrodrone. In Proc. TEI ’21, New York, NY, USA, 2021. ACM. doi:10.1145/3430524.3440649.
- [39] Yunqi Liu, Long Li, Marco Ceccarelli, Hui Li, Qiang Huang, and Xiang Wang. Design and Testing of BIT Flying Robot. In CISM International Centre for Mechanical Sciences, Courses and Lectures, volume 601, pages 68–75. Springer Science and Business Media Deutschland GmbH, September 2021. doi:10.1007/978-3-030-58380-4_9.
- [40] I. Scott MacKenzie. Fitts' Law as a Research and Design Tool in Human-Computer Interaction. Human–Computer Interaction, 7(1):91–139, March 1992. doi:10.1207/s15327051hci0701_3.
- [41] Jonathan Mumm and Bilge Mutlu. Human-Robot Proxemics: Physical and Psychological Distancing in Human-Robot Interaction. In HRI 2011 - Proceedings of the 6th ACM/IEEE International Conference on Human-Robot Interaction, pages 331–338, 2011. doi:10.1145/1957656.1957786.
- [42] Bilge Mutlu and Jodi Forlizzi. Robots in organizations: The role of workflow, social, and environmental factors in human-robot interaction. In HRI 2008 - Proceedings of the 3rd ACM/IEEE International Conference on Human-Robot Interaction: Living with Robots, pages 287–294, 2008. doi:10.1145/1349822.1349860.
- [43] Wai Shan Ng and Ehud Sharlin. Collocated Interaction with Flying Robots. In 2011 RO-MAN, pages 143–149. IEEE, July 2011. doi:10.1109/ROMAN.2011.6005280.
- [44] Pat Pataranutaporn, Valentina Sumini, Ariel Ekblaw, Melodie Yashar, Sandra Häuplik-Meusburger, Susanna Testa, Marianna Obrist, Dorit Donoviel, Joseph Paradiso, and Pattie Maes. Spacechi: Designing human-computer interaction systems for space exploration. In Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems, CHI EA ’21, New York, NY, USA, 2021. Association for Computing Machinery. doi:10.1145/3411763.3441358.
- [45] Pat Pataranutaporn, Valentina Sumini, Melodie Yashar, Susanna Testa, Marianna Obrist, Scott Davidoff, Amber M. Paul, Dorit Donoviel, Jimmy Wu, Sands A Fish, Ariel Ekblaw, Albrecht Schmidt, Joe Paradiso, and Pattie Maes. Spacechi 2.0: Advancing human-computer interaction systems for space exploration. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems, CHI EA ’22, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3491101.3503708.
- [46] Shokoofeh Pourmehr, Valiallah (Mani) Monajjemi, Seyed Abbas Sadat, Fei Zhan, Jens Wawerla, Greg Mori, and Richard Vaughan. You are green: a touch-to-name interaction in an integrated multi-modal multi-robot hri system. In Proceedings of the 2014 ACM/IEEE international conference on Human-robot interaction - HRI ’14, pages 266–267, New York, New York, USA, 2014. ACM Press. doi:10.1145/2559636.2559806.
- [47] Stuart Reeves. Physicality, spatial configuration and computational objects. In First International Workshop on Physicality, 2006.
- [48] Silvia Rossi, Mariacarla Staffa, Luigi Bove, Roberto Capasso, and Giovanni Ercolano. User’s personality and activity influence on hri comfortable distances. In International Conference on Social Robotics, pages 167–177. Springer, 2017. doi:10.1007/978-3-319-70022-9_17.
- [49] Florian Saling, Andrea E.M. Casini, Andreas Treuer, Martial Costantini, Leonie Bensch, Tommy Nilsson, and Lionel Ferra. Testing and validation of innovative extended reality technologies for astronaut training in a partial-gravity parabolic flight campaign. Proceedings of the International Astronautical Congress, IAC, 2:776–784, 2024. doi:10.52202/078364-0088.
- [50] Stefan Schneegass, Florian Alt, Jürgen Scheible, and Albrecht Schmidt. Midair displays: Concept and first experiences with free-floating pervasive displays. In PerDis 2014 - Proceedings: 3rd ACM International Symposium on Pervasive Displays 2014, pages 27–31, New York, New York, USA, 2014. ACM Press. doi:10.1145/2611009.2611013.
- [51] Megha Sharma, Dale Hildebrandt, Gem Newman, James E. Young, and Rasit Eskicioglu. Communicating Affect via Flight Path. Human Robot Interaction 2013, pages 293–300, 2013. doi:10.1109/HRI.2013.6483602.
- [52] Douglas K. Smith and Robert D. Alexander. Fumbling the Future: How Xerox Invented, Then Ignored, the First Personal Computer. William Morrow & Co., Inc., USA, 1988.
- [53] Jessi Stark, Roberta R. C. Mota, and Ehud Sharlin. Personal Space Intrusion in Human-Robot Collaboration. In ACM/IEEE International Conference on Human-Robot Interaction, pages 245–246, New York, NY, USA, March 2018. IEEE Computer Society. doi:10.1145/3173386.3176998.
- [54] Daniel Szafir, Bilge Mutlu, and Terrence Fong. Communication of Intent in Assistive Free Flyers. In Proceedings of the 2014 ACM/IEEE International Conference on Human-Robot Interaction, pages 358–365, 2014. doi:10.1145/2559636.2559672.
- [55] Daniel Szafir, Bilge Mutlu, and Terrence Fong. Communicating Directionality in Flying Robots. In Proceedings of the 10th ACM/IEEE International Conference on Human-Robot Interaction, pages 19–26, 2015. doi:10.1145/2696454.2696475.
- [56] Leila Takayama. Making sense of agentic objects and teleoperation: In-the-moment and reflective perspectives. In Proceedings of the 4th ACM/IEEE International Conference on Human-Robot Interaction, HRI'09, pages 239–240, 2009. doi:10.1145/1514095.1514155.
- [57] Dante Tezza and Marvin Andujar. The State-of-the-Art of Human-Drone Interaction: A Survey. IEEE Access, 7:167438–167454, 2019. doi:10.1109/ACCESS.2019.2953900.
- [58] Michael L Walters, Kerstin Dautenhahn, Kheng Lee Koay, Christina Kaouri, R te Boekhorst, Chrystopher Nehaniv, Iain Werry, and David Lee. Close encounters: Spatial distances between people and a robot of mechanistic appearance. In 5th IEEE-RAS International Conference on Humanoid Robots, 2005., pages 450–455. IEEE, 2005.
- [59] Michael L Walters, Kerstin Dautenhahn, René Te Boekhorst, Kheng Lee Koay, Dag Sverre Syrdal, and Chrystopher L Nehaniv. An empirical framework for human-robot proxemics. Procs of new frontiers in human-robot interaction, 2009.
- [60] Michael L Walters, Kerstin Dautenhahn, Sarah N Woods, Kheng Lee Koay, R Te Boekhorst, and David Lee. Exploratory studies on social spaces between humans and a mechanical-looking robot. Connection Science, 18(4):429–439, 2006. doi:10.1080/09540090600879513.
- [61] M.L. Walters, K. Dautenhahn, R. Te Boekhorst, K.L. Koay, D.S. Syrdal, and C.L. Nehaniv. An Empirical Framework for Human-Robot Proxemics. Technical report, University of Hertfordshire, 2009.
- [62] Anna Wojciechowska, Jeremy Frey, Sarit Sass, Roy Shafir, and Jessica R. Cauchard. Collocated Human-Drone Interaction: Methodology and Approach Strategy. In 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pages 172–181. IEEE, March 2019. doi:10.1109/HRI.2019.8673127.
