Search Results

Documents authored by Bulling, Andreas


Document
Towards a Better Understanding of Graph Perception in Immersive Environments

Authors: Lin Zhang, Yao Wang, Ying Zhang, Wilhelm Kerle-Malcharek, Karsten Klein, Falk Schreiber, and Andreas Bulling

Published in: LIPIcs, Volume 357, 33rd International Symposium on Graph Drawing and Network Visualization (GD 2025)


Abstract
As Immersive Analytics (IA) increasingly uses Virtual Reality (VR) for stereoscopic 3D (S3D) graph visualisation, it is crucial to understand how users perceive network structures in these immersive environments. However, little is known about how humans read S3D graphs during task solving, and how gaze behaviour indicates task performance. To address this gap, we report a user study in which 18 participants performed three analytical tasks on S3D graph visualisations in a VR environment. Our findings reveal systematic relationships between network structural properties and gaze behaviour. Based on these insights, we contribute a comprehensive eye tracking methodology for analysing human perception in immersive environments and establish eye tracking as a valuable tool for objectively evaluating cognitive load in S3D graph visualisation.

Cite as

Lin Zhang, Yao Wang, Ying Zhang, Wilhelm Kerle-Malcharek, Karsten Klein, Falk Schreiber, and Andreas Bulling. Towards a Better Understanding of Graph Perception in Immersive Environments. In 33rd International Symposium on Graph Drawing and Network Visualization (GD 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 357, pp. 11:1-11:19, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{zhang_et_al:LIPIcs.GD.2025.11,
  author =	{Zhang, Lin and Wang, Yao and Zhang, Ying and Kerle-Malcharek, Wilhelm and Klein, Karsten and Schreiber, Falk and Bulling, Andreas},
  title =	{{Towards a Better Understanding of Graph Perception in Immersive Environments}},
  booktitle =	{33rd International Symposium on Graph Drawing and Network Visualization (GD 2025)},
  pages =	{11:1--11:19},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-403-1},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{357},
  editor =	{Dujmovi\'{c}, Vida and Montecchiani, Fabrizio},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.GD.2025.11},
  URN =		{urn:nbn:de:0030-drops-249976},
  doi =		{10.4230/LIPIcs.GD.2025.11},
  annote =	{Keywords: Stereoscopic 3D, Graph Visualisation, Eye Tracking, Graph Perception}
}
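The BibTeX entry above can be used directly from a LaTeX document. A minimal sketch, assuming the entry has been saved in a local file named references.bib (a hypothetical filename):

```latex
% Minimal citation sketch; "references.bib" is an assumed local filename
% containing the @InProceedings entry above.
\documentclass{article}
\usepackage[backend=biber]{biblatex}
\addbibresource{references.bib}

\begin{document}
Gaze behaviour on stereoscopic 3D graph visualisations was studied by
Zhang et al.~\cite{zhang_et_al:LIPIcs.GD.2025.11}.
\printbibliography
\end{document}
```

The same pattern applies to the Dagstuhl Reports entries below, using their respective citation keys (e.g. borst_et_al:DagRep.12.5.131).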
Document
Anticipatory Human-Machine Interaction (Dagstuhl Seminar 22202)

Authors: Jelmer Borst, Andreas Bulling, Cleotilde Gonzalez, and Nele Russwinkel

Published in: Dagstuhl Reports, Volume 12, Issue 5 (2022)


Abstract
Even after three decades of research on human-machine interaction (HMI), current systems still lack the ability to predict the mental states of their users, i.e., they fail to understand users' intentions, goals, and needs and therefore cannot anticipate their actions. This lack of anticipation drastically restricts their capabilities to interact and collaborate effectively with humans. The goal of this Dagstuhl Seminar was to discuss the scientific foundations of a new generation of human-machine systems that anticipate, and proactively adapt to, human actions by monitoring their attention and behavior and predicting their mental states. Anticipation might be realized by using mental models of tasks, specific situations, and systems to build up expectations about intentions, goals, and mental states that gathered evidence can be tested against. The seminar provided an interdisciplinary forum to discuss this emerging topic by bringing together, for the first time, researchers from a range of fields that are directly relevant but had not previously met on this topic. This includes human-computer interaction, cognitive-inspired AI, machine learning, computational cognitive science, and social and decision sciences. We discussed theoretical foundations, key research challenges and opportunities, new computational methods, and future applications of anticipatory human-machine interaction.

Cite as

Jelmer Borst, Andreas Bulling, Cleotilde Gonzalez, and Nele Russwinkel. Anticipatory Human-Machine Interaction (Dagstuhl Seminar 22202). In Dagstuhl Reports, Volume 12, Issue 5, pp. 131-169, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2022)


BibTeX

@Article{borst_et_al:DagRep.12.5.131,
  author =	{Borst, Jelmer and Bulling, Andreas and Gonzalez, Cleotilde and Russwinkel, Nele},
  title =	{{Anticipatory Human-Machine Interaction (Dagstuhl Seminar 22202)}},
  pages =	{131--169},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2022},
  volume =	{12},
  number =	{5},
  editor =	{Borst, Jelmer and Bulling, Andreas and Gonzalez, Cleotilde and Russwinkel, Nele},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.5.131},
  URN =		{urn:nbn:de:0030-drops-174462},
  doi =		{10.4230/DagRep.12.5.131},
  annote =	{Keywords: Human-Computer Interaction, Anticipation, Collaboration, Collaborative Intelligence, Human-AI Teaming, Multi-Agent Simulation, Artificial Intelligence}
}
Document
Eyewear Computing – Augmenting the Human with Head-mounted Wearable Assistants (Dagstuhl Seminar 16042)

Authors: Andreas Bulling, Ozan Cakmakci, Kai Kunze, and James M. Rehg

Published in: Dagstuhl Reports, Volume 6, Issue 1 (2016)


Abstract
The seminar was composed of workshops and tutorials on head-mounted eye tracking, egocentric vision, optics, and head-mounted displays. The seminar welcomed 30 academic and industry researchers from Europe, the US, and Asia with diverse backgrounds, including wearable and ubiquitous computing, computer vision, developmental psychology, optics, and human-computer interaction. In contrast to several previous Dagstuhl seminars, we used an ignite talk format to reduce the time of talks to one half-day and to leave the rest of the week for hands-on sessions, group work, general discussions, and socialising. The key results of this seminar are 1) the identification of key research challenges and summaries of breakout groups on multimodal eyewear computing, egocentric vision, security and privacy issues, skill augmentation and task guidance, eyewear computing for gaming, as well as prototyping of VR applications, 2) a list of datasets and research tools for eyewear computing, 3) three small-scale datasets recorded during the seminar, 4) an article in ACM Interactions entitled "Eyewear Computers for Human-Computer Interaction", as well as 5) two follow-up workshops on "Egocentric Perception, Interaction, and Computing" at the European Conference on Computer Vision (ECCV) and "Eyewear Computing" at the ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp).

Cite as

Andreas Bulling, Ozan Cakmakci, Kai Kunze, and James M. Rehg. Eyewear Computing – Augmenting the Human with Head-mounted Wearable Assistants (Dagstuhl Seminar 16042). In Dagstuhl Reports, Volume 6, Issue 1, pp. 160-206, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2016)


BibTeX

@Article{bulling_et_al:DagRep.6.1.160,
  author =	{Bulling, Andreas and Cakmakci, Ozan and Kunze, Kai and Rehg, James M.},
  title =	{{Eyewear Computing -- Augmenting the Human with Head-mounted Wearable Assistants (Dagstuhl Seminar 16042)}},
  pages =	{160--206},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2016},
  volume =	{6},
  number =	{1},
  editor =	{Bulling, Andreas and Cakmakci, Ozan and Kunze, Kai and Rehg, James M.},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.6.1.160},
  URN =		{urn:nbn:de:0030-drops-58204},
  doi =		{10.4230/DagRep.6.1.160},
  annote =	{Keywords: Augmented Human, Cognition-Aware Computing, Wearable Computing, Egocentric Vision, Head-Mounted Eye Tracking, Optics, Displays, Human-Computer Interaction}
}