Search Results

Documents authored by Oelke, Daniela


Interactive Visualization for Fostering Trust in ML (Dagstuhl Seminar 22351)

Authors: Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke

Published in: Dagstuhl Reports, Volume 12, Issue 8 (2023)


Abstract
The use of artificial intelligence continues to impact a broad variety of domains, application areas, and people. However, interpretability, understandability, responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in such systems - are still largely missing. The purpose of this seminar is to understand how these components factor into a holistic view of trust. Further, this seminar seeks to identify design guidelines and best practices for how to build interactive visualization systems that calibrate trust.

Cite as

Polo Chau, Alex Endert, Daniel A. Keim, and Daniela Oelke. Interactive Visualization for Fostering Trust in ML (Dagstuhl Seminar 22351). In Dagstuhl Reports, Volume 12, Issue 8, pp. 103-116, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)



@Article{chau_et_al:DagRep.12.8.103,
  author =	{Chau, Polo and Endert, Alex and Keim, Daniel A. and Oelke, Daniela},
  title =	{{Interactive Visualization for Fostering Trust in ML (Dagstuhl Seminar 22351)}},
  pages =	{103--116},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2023},
  volume =	{12},
  number =	{8},
  editor =	{Chau, Polo and Endert, Alex and Keim, Daniel A. and Oelke, Daniela},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.12.8.103},
  URN =		{urn:nbn:de:0030-drops-177161},
  doi =		{10.4230/DagRep.12.8.103},
  annote =	{Keywords: accountability, artificial intelligence, explainability, fairness, interactive visualization, machine learning, responsibility, trust, understandability}
}
Interactive Visualization for Fostering Trust in AI (Dagstuhl Seminar 20382)

Authors: Daniela Oelke, Daniel A. Keim, Polo Chau, and Alex Endert

Published in: Dagstuhl Reports, Volume 10, Issue 4 (2021)


Abstract
Artificial intelligence (AI), and in particular machine learning algorithms, is of increasing importance in many application areas, but interpretability and understandability, as well as responsibility, accountability, and fairness of the algorithms' results - all crucial for increasing humans' trust in such systems - are still largely missing. Big industrial players, including Google, Microsoft, and Apple, have become aware of this gap and recently published their own guidelines for the use of AI in order to promote fairness, trust, interpretability, and other goals. Interactive visualization is one of the technologies that may help to increase trust in AI systems. During the seminar, we discussed the requirements for trustworthy AI systems as well as the technological possibilities provided by interactive visualizations to increase human trust in AI.

Cite as

Daniela Oelke, Daniel A. Keim, Polo Chau, and Alex Endert. Interactive Visualization for Fostering Trust in AI (Dagstuhl Seminar 20382). In Dagstuhl Reports, Volume 10, Issue 4, pp. 37-42, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2021)



@Article{oelke_et_al:DagRep.10.4.37,
  author =	{Oelke, Daniela and Keim, Daniel A. and Chau, Polo and Endert, Alex},
  title =	{{Interactive Visualization for Fostering Trust in AI (Dagstuhl Seminar 20382)}},
  pages =	{37--42},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2021},
  volume =	{10},
  number =	{4},
  editor =	{Oelke, Daniela and Keim, Daniel A. and Chau, Polo and Endert, Alex},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.10.4.37},
  URN =		{urn:nbn:de:0030-drops-137360},
  doi =		{10.4230/DagRep.10.4.37},
  annote =	{Keywords: accountability, artificial intelligence, explainability, fairness, interactive visualization, machine learning, responsibility, trust, understandability}
}
Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable (Dagstuhl Seminar 19452)

Authors: Enrico Bertini, Peer-Timo Bremer, Daniela Oelke, and Jayaraman Thiagarajan

Published in: Dagstuhl Reports, Volume 9, Issue 11 (2020)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 19452 "Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable".

Cite as

Enrico Bertini, Peer-Timo Bremer, Daniela Oelke, and Jayaraman Thiagarajan. Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable (Dagstuhl Seminar 19452). In Dagstuhl Reports, Volume 9, Issue 11, pp. 24-33, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@Article{bertini_et_al:DagRep.9.11.24,
  author =	{Bertini, Enrico and Bremer, Peer-Timo and Oelke, Daniela and Thiagarajan, Jayaraman},
  title =	{{Machine Learning Meets Visualization to Make Artificial Intelligence Interpretable (Dagstuhl Seminar 19452)}},
  pages =	{24--33},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2020},
  volume =	{9},
  number =	{11},
  editor =	{Bertini, Enrico and Bremer, Peer-Timo and Oelke, Daniela and Thiagarajan, Jayaraman},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.9.11.24},
  URN =		{urn:nbn:de:0030-drops-119820},
  doi =		{10.4230/DagRep.9.11.24},
  annote =	{Keywords: Visualization, Machine Learning, Interpretability}
}