Model Learning for Improved Trustworthiness in Autonomous Systems (Dagstuhl Seminar 23492)

Authors Ellen Enkel, Nils Jansen, Mohammad Reza Mousavi, Kristin Yvonne Rozier and all authors of the abstracts in this report



File

DagRep.13.12.24.pdf
  • Filesize: 1.88 MB
  • 24 pages

Author Details

Ellen Enkel
  • Universität Duisburg-Essen, DE
Nils Jansen
  • Ruhr-Universität Bochum, DE
Mohammad Reza Mousavi
  • King’s College London, GB
Kristin Yvonne Rozier
  • Iowa State University - Ames, US
and all authors of the abstracts in this report

Cite As

Ellen Enkel, Nils Jansen, Mohammad Reza Mousavi, and Kristin Yvonne Rozier. Model Learning for Improved Trustworthiness in Autonomous Systems (Dagstuhl Seminar 23492). In Dagstuhl Reports, Volume 13, Issue 12, pp. 24-47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)
https://doi.org/10.4230/DagRep.13.12.24

Abstract

The term "model" has different meanings in different communities, e.g., in psychology, computer science, and human-computer interaction, among others. Well-defined models and specifications are the bottleneck of rigorous analysis techniques in practice: they are often non-existent or outdated. The constructed models capture various aspects of system behaviour, which are inherently heterogeneous in contemporary autonomous systems. Once these models are in place, they can be used to address further challenges concerning autonomous systems, such as validation and verification, transparency and trust, and explanation. The seminar brought together leading experts from a diverse range of disciplines, such as artificial intelligence, formal methods, psychology, software and systems engineering, and human-computer interaction, as well as others dealing with autonomous systems. The goal was to consolidate these understandings of models in order to address three grand challenges in trustworthiness and trust: (1) understanding and analysing the dynamic relationship between trustworthiness and trust, (2) understanding mental models and trust, and (3) rigorous and model-based measures for trustworthiness and calibrated trust.

Subject Classification

ACM Subject Classification
  • General and reference → Reliability
  • General and reference → Validation
  • General and reference → Verification
  • Computing methodologies → Artificial intelligence
  • Applied computing → Psychology
Keywords
  • artificial intelligence
  • automata learning
  • autonomous systems
  • cyber-physical systems
  • formal methods
  • machine learning
  • safety
  • safety-critical systems
  • self-adaptive systems
  • software evolution
  • technology acceptance
  • trust
