DagRep.13.2.47.pdf
Deep generative models, such as variational autoencoders, generative adversarial networks, normalizing flows, and diffusion probabilistic models, have attracted considerable recent interest. However, we believe that several challenges hinder their wider adoption: (C1) the difficulty of objectively evaluating the generated data; (C2) the challenge of designing scalable architectures for fast likelihood evaluation or sampling; and (C3) the challenge of finding reproducible, interpretable, and semantically meaningful latent representations. In this Dagstuhl Seminar, we discussed these open problems in the context of real-world applications of deep generative models, including (A1) generative modeling of scientific data, (A2) neural data compression, and (A3) out-of-distribution detection. By examining challenges C1-C3 in the concrete contexts A1-A3, we worked towards identifying commonly occurring problems and ways to overcome them. We therefore expect many research collaborations to arise from this seminar and the ideas discussed here to form the foundation for fruitful avenues of future research. In the remainder of this report, we summarize the main results of the seminar and then give an overview of the contributed talks and working group discussions.