Explainable AI for Sequential Decision Making (Dagstuhl Seminar 24372)

Authors Hendrik Baier, Mark T. Keane, Sarath Sreedharan, Silvia Tulli, Abhinav Verma, Stylianos Loukas Vasileiou and all authors of the abstracts in this report



File
  • DagRep.14.9.67.pdf
  • Filesize: 2.63 MB
  • 37 pages

Author Details

Hendrik Baier
  • Eindhoven University of Technology, NL
Mark T. Keane
  • University College Dublin, IE
Sarath Sreedharan
  • Colorado State University - Fort Collins, US
Silvia Tulli
  • Sorbonne Université - Paris, FR
Abhinav Verma
  • Pennsylvania State University - University Park, US
Stylianos Loukas Vasileiou
  • Washington University - St. Louis, US
and all authors of the abstracts in this report

Cite As

Hendrik Baier, Mark T. Keane, Sarath Sreedharan, Silvia Tulli, Abhinav Verma, and Stylianos Loukas Vasileiou. Explainable AI for Sequential Decision Making (Dagstuhl Seminar 24372). In Dagstuhl Reports, Volume 14, Issue 9, pp. 67-103, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025) https://doi.org/10.4230/DagRep.14.9.67

Abstract

As AI applications become ubiquitous in our lives, the research area of explainable AI (XAI) has developed rapidly, with goals such as enabling transparency, enhancing collaboration, and increasing trust in AI. To date, however, XAI has focused largely on explaining the input-output mappings of "black box" models like neural networks, which have been seen as the central problem for the explainability of AI systems. The challenge of explaining intelligent behavior that extends over time, such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AIs that can beat us in Go, but can they teach us how to play?
This Dagstuhl Seminar brought together academic researchers and industry experts from communities such as reinforcement learning, planning, game AI, robotics, and cognitive science to discuss their work on explainability in sequential decision-making contexts. The seminar aimed to build a shared understanding of the field and to develop a common roadmap for advancing it. This report documents the program and its results.

Subject Classification

ACM Subject Classification
  • Computing methodologies → Artificial intelligence
  • Human-centered computing
Keywords
  • Explainable artificial intelligence
  • explainable agents
  • sequential decision making
  • planning
  • learning
