Search Results

Documents authored by Henry, Léo


Document
Compositional Active Learning of Synchronizing Systems Through Automated Alphabet Refinement

Authors: Léo Henry, Mohammad Reza Mousavi, Thomas Neele, and Matteo Sammartino

Published in: LIPIcs, Volume 348, 36th International Conference on Concurrency Theory (CONCUR 2025)


Abstract
Active automata learning infers automaton models of systems from behavioral observations, a technique successfully applied to a wide range of domains. Compositional approaches for concurrent systems have recently emerged. We take a significant step beyond available results, including those by the authors, and develop a general technique for compositional learning of a synchronizing parallel system with an unknown decomposition. Our approach automatically refines the global alphabet into component alphabets while learning the component models. We develop a theoretical treatment of distributions of alphabets, i.e., sets of possibly overlapping component alphabets. We characterize counterexamples that reveal inconsistencies with global observations, and show how to systematically update the distribution to restore consistency. We present a compositional learning algorithm implementing these ideas, in which learning counterexamples precisely correspond to distribution counterexamples under well-defined conditions. We provide an implementation, called CoalA, using the state-of-the-art active learning library LearnLib. Our experiments on more than 630 subject systems show that CoalA delivers improvements of up to five orders of magnitude in membership queries and, in systems with significant concurrency, also achieves better scalability in the number of equivalence queries.
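To make the abstract's central notions concrete, here is a minimal Python sketch of how a global trace projects onto a distribution of component alphabets and how a disagreement with a global observation surfaces as a counterexample that forces refinement. All names (project, composed_accepts, find_distribution_counterexample) and the toy components are illustrative assumptions, not CoalA's code; CoalA itself is built on LearnLib in Java.

# Illustrative sketch only; hypothetical names, not CoalA's implementation.

def project(trace, alphabet):
    """Project a global trace onto one component's alphabet."""
    return tuple(a for a in trace if a in alphabet)

def composed_accepts(components, trace):
    """Synchronizing parallel composition: the composed system exhibits a
    global trace iff every component exhibits its projection of it."""
    return all(accepts(project(trace, sigma)) for sigma, accepts in components)

def find_distribution_counterexample(components, observations):
    """A global observation (trace, verdict) that disagrees with the
    composition reveals an inconsistency with the current distribution."""
    for trace, observed in observations:
        if composed_accepts(components, trace) != observed:
            return trace
    return None

# Toy distribution: two components over overlapping alphabets {a, s} and
# {b, s}, synchronizing on the shared action s.
comp1 = ({"a", "s"}, lambda w: w.count("a") <= 1)  # toy hypothesis
comp2 = ({"b", "s"}, lambda w: True)               # accepts everything
components = [comp1, comp2]

# Global observations: (trace, did the real system exhibit it?)
observations = [(("a", "b", "s"), True),
                (("a", "a", "s"), True)]  # the real system allows this...

print(find_distribution_counterexample(components, observations))
# ...so ('a', 'a', 's') is returned: the learner must either correct the
# component hypothesis or refine the distribution (e.g., move actions
# between component alphabets) to restore consistency.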

Cite as

Léo Henry, Mohammad Reza Mousavi, Thomas Neele, and Matteo Sammartino. Compositional Active Learning of Synchronizing Systems Through Automated Alphabet Refinement. In 36th International Conference on Concurrency Theory (CONCUR 2025). Leibniz International Proceedings in Informatics (LIPIcs), Volume 348, pp. 20:1-20:22, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2025)


BibTeX

@InProceedings{henry_et_al:LIPIcs.CONCUR.2025.20,
  author =	{Henry, L\'{e}o and Mousavi, Mohammad Reza and Neele, Thomas and Sammartino, Matteo},
  title =	{{Compositional Active Learning of Synchronizing Systems Through Automated Alphabet Refinement}},
  booktitle =	{36th International Conference on Concurrency Theory (CONCUR 2025)},
  pages =	{20:1--20:22},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-389-8},
  ISSN =	{1868-8969},
  year =	{2025},
  volume =	{348},
  editor =	{Bouyer, Patricia and van de Pol, Jaco},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.CONCUR.2025.20},
  URN =		{urn:nbn:de:0030-drops-239700},
  doi =		{10.4230/LIPIcs.CONCUR.2025.20},
  annote =	{Keywords: Active learning, Compositional methods, Concurrency theory, Labelled transition systems, Formal methods}
}
Document
Reinforcement Planning for Effective ε-Optimal Policies in Dense Time with Discontinuities

Authors: Léo Henry, Blaise Genest, and Alexandre Drewery

Published in: LIPIcs, Volume 284, 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)


Abstract
Recently, the model of (Decision) Stochastic Timed Automata (DSTA) has been proposed to model cyber-physical systems that combine dense time (the physical part) with discrete actions and discontinuities such as timeouts (the cyber part). The state-of-the-art results on controlling DSTAs are, however, not ideal: in the infinite-horizon case, optimal controllers do not exist, while for time-bounded behaviors we do not know how to build such controllers, even ε-optimal ones. In this paper, we develop a theory of Reinforcement Planning for DSTAs with discounted infinite-horizon objectives. We show that optimal controllers do exist in general. Further, for DSTAs with one clock (which already generalize continuous-time MDPs with, e.g., timeouts), we provide an effective procedure to compute ε-optimal controllers. Notably, we do not rely on discretizing the time space, but instead use symbolic representations. An evaluation on a DSTA shows that this method can be more efficient. Finally, we show by means of a counterexample that this is the furthest the construction can go: it cannot be extended to DSTAs with two or more clocks.
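For intuition on the discounted ε-optimality objective: in the standard finite-MDP setting, iterating the Bellman backup until successive value functions differ (in sup-norm) by less than ε(1-γ)/(2γ) guarantees the greedy policy is ε-optimal. The paper's contribution is to perform such reasoning symbolically over dense time for 1-clock DSTAs rather than on a discretized state space; the Python sketch below is only the plain finite-MDP version of that stopping rule, with made-up numbers, and is not the paper's symbolic procedure.

# Illustrative discounted value iteration with the standard eps-optimality
# stopping rule. Not the paper's symbolic 1-clock DSTA algorithm.

def value_iteration(states, actions, P, R, gamma, eps):
    """P[s][a] is a list of (prob, next_state); R[s][a] is the reward.
    Stop when the sup-norm change drops below eps*(1-gamma)/(2*gamma):
    the greedy policy is then eps-optimal (standard MDP result)."""
    V = {s: 0.0 for s in states}
    threshold = eps * (1 - gamma) / (2 * gamma)
    while True:
        V_new = {s: max(R[s][a] + gamma * sum(p * V[t] for p, t in P[s][a])
                        for a in actions[s])
                 for s in states}
        delta = max(abs(V_new[s] - V[s]) for s in states)
        V = V_new
        if delta < threshold:
            break
    # Extract the greedy (eps-optimal) policy from the converged values.
    policy = {s: max(actions[s],
                     key=lambda a: R[s][a]
                     + gamma * sum(p * V[t] for p, t in P[s][a]))
              for s in states}
    return V, policy

# Tiny two-state example with hypothetical numbers: a 'timeout' action that
# self-loops at a cost, versus a 'retry' action that likely succeeds.
states = ["wait", "done"]
actions = {"wait": ["retry", "timeout"], "done": ["stay"]}
P = {"wait": {"retry": [(0.9, "done"), (0.1, "wait")],
              "timeout": [(1.0, "wait")]},
     "done": {"stay": [(1.0, "done")]}}
R = {"wait": {"retry": 0.0, "timeout": -1.0}, "done": {"stay": 1.0}}

V, policy = value_iteration(states, actions, P, R, gamma=0.9, eps=0.01)
print(policy)  # expected: 'retry' from 'wait'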

Cite as

Léo Henry, Blaise Genest, and Alexandre Drewery. Reinforcement Planning for Effective ε-Optimal Policies in Dense Time with Discontinuities. In 43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023). Leibniz International Proceedings in Informatics (LIPIcs), Volume 284, pp. 13:1-13:18, Schloss Dagstuhl – Leibniz-Zentrum für Informatik (2023)


BibTeX

@InProceedings{henry_et_al:LIPIcs.FSTTCS.2023.13,
  author =	{Henry, L\'{e}o and Genest, Blaise and Drewery, Alexandre},
  title =	{{Reinforcement Planning for Effective \epsilon-Optimal Policies in Dense Time with Discontinuities}},
  booktitle =	{43rd IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science (FSTTCS 2023)},
  pages =	{13:1--13:18},
  series =	{Leibniz International Proceedings in Informatics (LIPIcs)},
  ISBN =	{978-3-95977-304-1},
  ISSN =	{1868-8969},
  year =	{2023},
  volume =	{284},
  editor =	{Bouyer, Patricia and Srinivasan, Srikanth},
  publisher =	{Schloss Dagstuhl -- Leibniz-Zentrum f{\"u}r Informatik},
  address =	{Dagstuhl, Germany},
  URL =		{https://drops.dagstuhl.de/entities/document/10.4230/LIPIcs.FSTTCS.2023.13},
  URN =		{urn:nbn:de:0030-drops-193866},
  doi =		{10.4230/LIPIcs.FSTTCS.2023.13},
  annote =	{Keywords: reinforcement planning, timed automata, planning}
}