Search Results

Documents authored by Magnor, Marcus A.


Document
Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays (Dagstuhl Seminar 19272)

Authors: Marcus A. Magnor and Alexander Sorkine-Hornung

Published in: Dagstuhl Reports, Volume 9, Issue 6 (2020)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 19272 "Real VR -- Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays". Motivated by the advent of mass-market VR headsets, this Dagstuhl Seminar addresses the scientific and engineering challenges that need to be overcome in order to experience omni-directional video recordings of the real world with the sense of stereoscopic, full-parallax immersion that today's head-mounted displays can provide. Since the times of the Lumière brothers, the way we watch movies hasn't fundamentally changed: whether in movie theaters, on mobile devices, or on TV at home, we still experience movies as outside observers, watching the action through a "peephole" whose size is defined by the angular extent of the screen. As soon as we look away from the screen or turn around, we are immediately reminded that we are only "voyeurs". With modern full-field-of-view, head-mounted and tracked VR displays, this outside-observer paradigm of visual entertainment is quickly giving way to a fully immersive experience: the action fully encompasses the viewer, drawing us in much more than was possible before. For the time being, however, current endeavors towards immersive visual entertainment are based almost entirely on 3D graphics-generated content, limiting application scenarios to purely digital, virtual worlds. The reason is that in order to provide stereo vision and ego-motion parallax, both essential for genuine visual immersion, the scene must be rendered in real time from arbitrary vantage points. While this is easily accomplished with 3D graphics via standard GPU rendering, it is not at all straightforward to do the same from conventional video footage of real-world events.
Another challenge is that consumer-grade VR headsets feature spatial resolutions still considerably below foveal acuity, yielding a pixelated, subpar immersive viewing experience. At the same time, the visual perception characteristics of our fovea differ markedly from those of our peripheral vision (regarding spatial and temporal resolution, color, contrast, clutter disambiguation, etc.). So far, computer graphics research has focused almost entirely on foveal perception, even though our peripheral vision accounts for 99% of our field of view. To optimize the perceived visual quality of head-mounted immersive displays, and to make optimal use of available computational resources, advanced VR rendering algorithms need to account simultaneously for our foveal and peripheral vision characteristics. The aim of the seminar was to collectively fathom what needs to be done to facilitate truly immersive viewing of real-world recordings and how to enhance the immersive viewing experience by taking perceptual aspects into account. The topic touches on research aspects from various fields, ranging from digital imaging, video processing, and computer vision to computer graphics, virtual reality, and visual perception. The seminar brought together scientists, engineers, and practitioners from industry and academia to form a lasting, interdisciplinary research community that set out to jointly address the challenges of Real VR.
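The foveated-rendering idea alluded to in the abstract, spending shading work where the eye can actually resolve detail, can be illustrated with a minimal sketch. The following Python snippet assumes a simple linear acuity-falloff model (acuity halves every `e2` degrees of eccentricity); the constant `e2 = 2.3` and the shading-rate clamp are illustrative assumptions, not values taken from the seminar report:

```python
def relative_acuity(eccentricity_deg, e2=2.3):
    """Approximate relative visual acuity at a given retinal eccentricity
    (degrees from the gaze point), using a common linear falloff model:
    acuity ~ 1 / (1 + e / e2).  e2 is the eccentricity at which acuity
    has halved; 2.3 degrees is an illustrative choice."""
    return 1.0 / (1.0 + eccentricity_deg / e2)

def shading_rate(eccentricity_deg, finest=1, coarsest=16):
    """Map acuity to a coarse per-region shading rate for foveated
    rendering: 1 shade per pixel in the fovea, down to (at most)
    1 shade per 16 pixels in the far periphery."""
    rate = round(1.0 / relative_acuity(eccentricity_deg))
    return max(finest, min(coarsest, rate))

# In the fovea every pixel is shaded; far out, work drops 16-fold.
print(shading_rate(0.0), shading_rate(10.0), shading_rate(60.0))
```

The point of the sketch is only the shape of the trade-off: because peripheral vision covers the vast majority of the field of view but resolves far less detail, even a crude eccentricity-dependent shading rate frees up most of the rendering budget for the fovea.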

Cite as

Marcus A. Magnor and Alexander Sorkine-Hornung. Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays (Dagstuhl Seminar 19272). In Dagstuhl Reports, Volume 9, Issue 6, pp. 143-156, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2019)


BibTeX

@Article{magnor_et_al:DagRep.9.6.143,
  author =	{Magnor, Marcus A. and Sorkine-Hornung, Alexander},
  title =	{{Real VR - Importing the Real World into Immersive VR and Optimizing the Perceptual Experience of Head-Mounted Displays (Dagstuhl Seminar 19272)}},
  pages =	{143--156},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2019},
  volume =	{9},
  number =	{6},
  editor =	{Magnor, Marcus A. and Sorkine-Hornung, Alexander},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.9.6.143},
  URN =		{urn:nbn:de:0030-drops-114915},
  doi =		{10.4230/DagRep.9.6.143},
  annote =	{Keywords: immersive digital reality, perception in vr, real-world virtual reality}
}
Document
Real-World Visual Computing (Dagstuhl Seminar 13431)

Authors: Oliver Grau, Marcus A. Magnor, Olga Sorkine-Hornung, and Christian Theobalt

Published in: Dagstuhl Reports, Volume 3, Issue 10 (2014)


Abstract
Over the last decade, the tremendous increase in the computational power of graphics hardware, in conjunction with equally improved rendering algorithms, has led to the situation today where real-time visual realism is computationally attainable on almost any PC, if only the digital models to be rendered were sufficiently detailed and realistic. With rapidly advancing rendering capabilities, the modeling process has become the limiting factor in realistic computer graphics applications. Following the traditional rendering paradigm, higher visual realism can be attained only by providing more detailed and accurate scene descriptions. However, building realistic digital scene descriptions consisting of 3D geometry and object texture, surface reflectance characteristics and scene illumination, character motion and emotion is a highly labor-intensive, tedious process. The goal of this seminar is to find new ways to overcome the looming stalemate in realistic rendering caused by traditional, time-consuming modeling. One promising alternative consists of creating digital models from real-world examples, if ways can be found to endow reconstructed models with the flexibility customary in computer graphics. The trend towards model capture from real-world examples is bolstered by new sensor technologies becoming available at mass-market prices, such as Microsoft's Kinect and time-of-flight 2D depth imagers, or Lytro's light field camera. Also, the pervasiveness of smartphones containing camera, GPS, and orientation sensors allows for developing new capture paradigms for real-world events based on a swarm of networked smartphones. With the advent of these exciting new acquisition technologies, investigating how to best integrate these novel capture modalities into the digital modeling pipeline, or how to alter traditional modeling to make optimal use of new capture technologies, has become a top priority in visual computing research.
To address these challenges, interdisciplinary approaches are called for that encompass computer graphics, computer vision, and visual media production. The overall goal of the seminar is to form a lasting, interdisciplinary research community which jointly identifies and addresses the challenges in modeling from the real world and determines which research avenues will be the most promising ones to pursue over the course of the next years.

Cite as

Oliver Grau, Marcus A. Magnor, Olga Sorkine-Hornung, and Christian Theobalt. Real-World Visual Computing (Dagstuhl Seminar 13431). In Dagstuhl Reports, Volume 3, Issue 10, pp. 72-91, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2014)


BibTeX

@Article{grau_et_al:DagRep.3.10.72,
  author =	{Grau, Oliver and Magnor, Marcus A. and Sorkine-Hornung, Olga and Theobalt, Christian},
  title =	{{Real-World Visual Computing (Dagstuhl Seminar 13431)}},
  pages =	{72--91},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2014},
  volume =	{3},
  number =	{10},
  editor =	{Grau, Oliver and Magnor, Marcus A. and Sorkine-Hornung, Olga and Theobalt, Christian},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.3.10.72},
  URN =		{urn:nbn:de:0030-drops-44322},
  doi =		{10.4230/DagRep.3.10.72},
  annote =	{Keywords: Image Acquisition, Scene Modeling/Rendering, Image/3D Sensors, Photorealism, Visual Effects, Motion Reconstruction, Animation}
}
Document
10411 Abstracts Collection – Computational Video

Authors: Daniel Cremers, Marcus A. Magnor, and Lihi Zelnik-Manor

Published in: Dagstuhl Seminar Proceedings, Volume 10411, Computational Video (2011)


Abstract
From 10.10.2010 to 15.10.2010, the Dagstuhl Seminar 10411 "Computational Video" was held in Schloss Dagstuhl -- Leibniz Center for Informatics. During the seminar, several participants presented their current research, and ongoing work and open problems were discussed. Abstracts of the presentations given during the seminar as well as abstracts of seminar results and ideas are put together in this paper. The first section describes the seminar topics and goals in general. Links to extended abstracts or full papers are provided, if available.

Cite as

Daniel Cremers, Marcus A. Magnor, and Lihi Zelnik-Manor. 10411 Abstracts Collection – Computational Video. In Computational Video. Dagstuhl Seminar Proceedings, Volume 10411, pp. 1-22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{cremers_et_al:DagSemProc.10411.1,
  author =	{Cremers, Daniel and Magnor, Marcus A. and Zelnik-Manor, Lihi},
  title =	{{10411 Abstracts Collection – Computational Video}},
  booktitle =	{Computational Video},
  pages =	{1--22},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2011},
  volume =	{10411},
  editor =	{Cremers, Daniel and Magnor, Marcus A. and Zelnik-Manor, Lihi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10411.1},
  URN =		{urn:nbn:de:0030-drops-29195},
  doi =		{10.4230/DagSemProc.10411.1},
  annote =	{Keywords: Video Processing, Image Processing, Computer Vision}
}
Document
10411 Executive Summary – Computational Video

Authors: Daniel Cremers, Marcus A. Magnor, and Lihi Zelnik-Manor

Published in: Dagstuhl Seminar Proceedings, Volume 10411, Computational Video (2011)


Abstract
Dagstuhl Seminar 10411 "Computational Video" took place October 10-15, 2010. 43 researchers from North America, Asia, and Europe discussed the state of the art, contemporary challenges, and future research in imaging, processing, analyzing, modeling, and rendering of real-world, dynamic scenes. The seminar was organized into 11 sessions of presentations, discussions, and special-topic meetings. The seminar brought together junior and senior researchers from computer vision, computer graphics, and image communication, both from academia and industry, to address the challenges in computational video. Participants included international experts from Kyoto University, Stanford University, University of British Columbia, University of New Mexico, University of Toronto, MIT, Hebrew University of Jerusalem, Technion - Haifa, ETH Zürich, Heriot-Watt University - Edinburgh, University of Surrey, and University College London, as well as professionals from Adobe Systems, BBC Research & Development, Disney Research, and Microsoft Research.

Cite as

Daniel Cremers, Marcus A. Magnor, and Lihi Zelnik-Manor. 10411 Executive Summary – Computational Video. In Computational Video. Dagstuhl Seminar Proceedings, Volume 10411, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2011)


BibTeX

@InProceedings{cremers_et_al:DagSemProc.10411.2,
  author =	{Cremers, Daniel and Magnor, Marcus A. and Zelnik-Manor, Lihi},
  title =	{{10411 Executive Summary – Computational Video}},
  booktitle =	{Computational Video},
  series =	{Dagstuhl Seminar Proceedings (DagSemProc)},
  ISSN =	{1862-4405},
  year =	{2011},
  volume =	{10411},
  editor =	{Cremers, Daniel and Magnor, Marcus A. and Zelnik-Manor, Lihi},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagSemProc.10411.2},
  URN =		{urn:nbn:de:0030-drops-29208},
  doi =		{10.4230/DagSemProc.10411.2},
  annote =	{Keywords: Video Processing, Image Processing, Computer Vision}
}