Search Results

Documents authored by Jansen, Nils


Document
Model Learning for Improved Trustworthiness in Autonomous Systems (Dagstuhl Seminar 23492)

Authors: Ellen Enkel, Nils Jansen, Mohammad Reza Mousavi, and Kristin Yvonne Rozier

Published in: Dagstuhl Reports, Volume 13, Issue 12 (2024)


Abstract
The term "model" has different meanings in different communities, e.g., in psychology, computer science, and human-computer interaction, among others. Well-defined models and specifications are the bottleneck of rigorous analysis techniques in practice: they are often non-existent or outdated. The constructed models capture various aspects of system behaviour, which are inherently heterogeneous in contemporary autonomous systems. Once these models are in place, they can be used to address further challenges concerning autonomous systems, such as validation and verification, transparency and trust, and explanation. The seminar brought together leading experts from a diverse range of disciplines, such as artificial intelligence, formal methods, psychology, software and systems engineering, and human-computer interaction, as well as others dealing with autonomous systems. The goal was to consolidate this understanding of models in order to address three grand challenges in trustworthiness and trust: (1) understanding and analysing the dynamic relationship between trustworthiness and trust, (2) understanding mental models and trust, and (3) rigorous and model-based measures for trustworthiness and calibrated trust.

Cite as

Ellen Enkel, Nils Jansen, Mohammad Reza Mousavi, and Kristin Yvonne Rozier. Model Learning for Improved Trustworthiness in Autonomous Systems (Dagstuhl Seminar 23492). In Dagstuhl Reports, Volume 13, Issue 12, pp. 24-47, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2024)



@Article{enkel_et_al:DagRep.13.12.24,
  author =	{Enkel, Ellen and Jansen, Nils and Mousavi, Mohammad Reza and Rozier, Kristin Yvonne},
  title =	{{Model Learning for Improved Trustworthiness in Autonomous Systems (Dagstuhl Seminar 23492)}},
  pages =	{24--47},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2024},
  volume =	{13},
  number =	{12},
  editor =	{Enkel, Ellen and Jansen, Nils and Mousavi, Mohammad Reza and Rozier, Kristin Yvonne},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.13.12.24},
  URN =		{urn:nbn:de:0030-drops-198543},
  doi =		{10.4230/DagRep.13.12.24},
  annote =	{Keywords: artificial intelligence, automata learning, autonomous systems, cyber-physical systems, formal methods, machine learning, safety, safety-critical systems, self-adaptive systems, software evolution, technology acceptance, trust}
}
Document
Safe Reinforcement Learning Using Probabilistic Shields (Invited Paper)

Authors: Nils Jansen, Bettina Könighofer, Sebastian Junges, Alex Serban, and Roderick Bloem

Published in: LIPIcs, Volume 171, 31st International Conference on Concurrency Theory (CONCUR 2020)


Abstract
This paper concerns the efficient construction of a safety shield for reinforcement learning. We specifically target scenarios that incorporate uncertainty and use Markov decision processes (MDPs) as the underlying model to capture such problems. Reinforcement learning (RL) is a machine learning technique that can determine near-optimal policies in MDPs that may be unknown before exploration. However, during exploration, RL is prone to induce behavior that is undesirable or not allowed in safety- or mission-critical contexts. We introduce the concept of a probabilistic shield that enables RL decision-making to adhere to safety constraints with high probability. We employ formal verification to efficiently compute the probabilities of critical decisions within a safety-relevant fragment of the MDP. These results help to realize a shield that, when applied to an RL algorithm, restricts the agent from taking unsafe actions while optimizing the performance objective. We discuss tradeoffs between sufficient progress in the exploration of the environment and ensuring safety. In our experiments, we demonstrate on the arcade game PAC-MAN and on a case study involving service robots that learning efficiency increases, as learning requires orders of magnitude fewer episodes.
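To make the shielding idea above concrete, the following minimal Python sketch shows relative-threshold action filtering. It is an illustration under stated assumptions, not the paper's implementation: the dictionary safety_value mapping each (state, action) pair to a precomputed probability of remaining safe, and the factor lam, are both hypothetical stand-ins for values that, in the paper, would come from probabilistic model checking of the safety-relevant MDP fragment.

import random

def shielded_actions(state, actions, safety_value, lam=0.9):
    """Return the actions the shield permits in `state`: those whose
    safety probability is within a factor `lam` of the best available."""
    best = max(safety_value[(state, a)] for a in actions)
    return [a for a in actions if safety_value[(state, a)] >= lam * best]

def shielded_step(agent_policy, state, actions, safety_value, lam=0.9):
    """Let the agent choose only among shielded actions."""
    allowed = shielded_actions(state, actions, safety_value, lam)
    # Fall back to the full action set if the shield blocks everything,
    # so the agent always has a move (a design choice of this sketch).
    return agent_policy(state, allowed or actions)

# Toy usage: in state "s0", "right" is blocked because its safety
# probability 0.40 falls below 0.9 * 0.99.
vals = {("s0", "left"): 0.99, ("s0", "right"): 0.40}
policy = lambda s, acts: random.choice(acts)
print(shielded_step(policy, "s0", ["left", "right"], vals))  # prints "left"

The relative threshold, rather than an absolute one, reflects the tradeoff the abstract mentions: it blocks only actions that are markedly less safe than the best alternative in the current state, so exploration can still make sufficient progress.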

Cite as

Nils Jansen, Bettina Könighofer, Sebastian Junges, Alex Serban, and Roderick Bloem. Safe Reinforcement Learning Using Probabilistic Shields (Invited Paper). In 31st International Conference on Concurrency Theory (CONCUR 2020). Leibniz International Proceedings in Informatics (LIPIcs), Volume 171, pp. 3:1-3:16, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2020)



@InProceedings{jansen_et_al:LIPIcs.CONCUR.2020.3,
  author =	{Jansen, Nils and K\"{o}nighofer, Bettina and Junges, Sebastian and Serban, Alex and Bloem, Roderick},
  title =	{{Safe Reinforcement Learning Using Probabilistic Shields}},
  booktitle =	{31st International Conference on Concurrency Theory (CONCUR 2020)},
  pages =	{3:1--3:16},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-160-3},
  ISSN =	{1868-8969},
  year =	{2020},
  volume =	{171},
  editor =	{Konnov, Igor and Kov\'{a}cs, Laura},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2020.3},
  URN =		{urn:nbn:de:0030-drops-128155},
  doi =		{10.4230/LIPIcs.CONCUR.2020.3},
  annote =	{Keywords: Safe Reinforcement Learning, Formal Verification, Safe Exploration, Model Checking, Markov Decision Process}
}
Document
Machine Learning and Model Checking Join Forces (Dagstuhl Seminar 18121)

Authors: Nils Jansen, Joost-Pieter Katoen, Pushmeet Kohli, and Jan Kretinsky

Published in: Dagstuhl Reports, Volume 8, Issue 3 (2018)


Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 18121 "Machine Learning and Model Checking Join Forces". The seminar brought together researchers working in the fields of machine learning and model checking. It helped to identify new research topics on the one hand and to make progress on current problems in both fields on the other.

Cite as

Nils Jansen, Joost-Pieter Katoen, Pushmeet Kohli, and Jan Kretinsky. Machine Learning and Model Checking Join Forces (Dagstuhl Seminar 18121). In Dagstuhl Reports, Volume 8, Issue 3, pp. 74-93, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2018)



@Article{jansen_et_al:DagRep.8.3.74,
  author =	{Jansen, Nils and Katoen, Joost-Pieter and Kohli, Pushmeet and Kretinsky, Jan},
  title =	{{Machine Learning and Model Checking Join Forces (Dagstuhl Seminar 18121)}},
  pages =	{74--93},
  journal =	{Dagstuhl Reports},
  ISSN =	{2192-5283},
  year =	{2018},
  volume =	{8},
  number =	{3},
  editor =	{Jansen, Nils and Katoen, Joost-Pieter and Kohli, Pushmeet and Kretinsky, Jan},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/DagRep.8.3.74},
  URN =		{urn:nbn:de:0030-drops-92988},
  doi =		{10.4230/DagRep.8.3.74},
  annote =	{Keywords: artificial intelligence, cyber-physical systems, formal methods, formal verification, logics, machine learning, model checking, quantitative verification, safety-critical systems}
}