Generative models that enable the synthesis of realistic 3D models have been of interest in computer vision and graphics for over two decades. While traditional methods use morphable models for this task, more recent works have adopted powerful tools from the 2D image domain, such as generative adversarial networks, neural fields, and diffusion models, and have achieved impressive results. The question of which tools are most suitable for applications such as reconstructing 3D geometry from partial data and creating digital 3D content remains open. This report documents the program and outcomes of Dagstuhl Seminar 25202, titled "Generative Models for 3D Vision". This meeting of 25 researchers covered a variety of topics, including generative models and priors for 2D tasks, medical applications, and digital representations of humans, as well as how to evaluate and benchmark different methods. We summarise the discussions, presentations, and results of this seminar.
@Article{neschen_et_al:DagRep.15.5.96,
author = {Neschen, Laura and Egger, Bernhard and Kortylewski, Adam and Smith, William and Wuhrer, Stefanie},
title = {{Generative Models for 3D Vision (Dagstuhl Seminar 25202)}},
pages = {96--113},
journal = {Dagstuhl Reports},
ISSN = {2192-5283},
year = {2025},
volume = {15},
number = {5},
editor = {Neschen, Laura and Egger, Bernhard and Kortylewski, Adam and Smith, William and Wuhrer, Stefanie},
publisher = {Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
address = {Dagstuhl, Germany},
URL = {https://drops.dagstuhl.de/entities/document/10.4230/DagRep.15.5.96},
URN = {urn:nbn:de:0030-drops-252774},
doi = {10.4230/DagRep.15.5.96},
annote = {Keywords: 3D Computer Vision, Computer Graphics, Generative Models, Implicit Representations, Neural Rendering, Statistical Modelling}
}