As AI applications become ubiquitous in our lives, the research area of explainable AI (XAI) has developed rapidly, with goals such as enabling transparency, enhancing collaboration, and increasing trust in AI. To date, however, XAI has largely focused on explaining the input-output mappings of "black box" models like neural networks, which have been seen as the central obstacle to the explainability of AI systems. The challenge of explaining intelligent behavior that extends over time, such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AIs that can beat us at Go, but can they teach us how to play? This Dagstuhl Seminar brought together academic researchers and industry experts from communities such as reinforcement learning, planning, game AI, robotics, and cognitive science to discuss their work on explainability in sequential decision-making contexts. The seminar aimed to build a shared understanding of the field and to develop a common roadmap for moving it forward. This report documents the program and its results.