
Compositional Active Learning of Synchronizing Systems Through Automated Alphabet Refinement

Léo Henry, Royal Holloway, University of London, UK; Mohammad Reza Mousavi, King’s College London, UK; Thomas Neele, Eindhoven University of Technology, The Netherlands; Matteo Sammartino, Royal Holloway, University of London, UK
Abstract

Active automata learning infers automaton models of systems from behavioral observations, a technique successfully applied to a wide range of domains. Compositional approaches for concurrent systems have recently emerged. We take a significant step beyond available results, including those by the authors, and develop a general technique for compositional learning of a synchronizing parallel system with an unknown decomposition. Our approach automatically refines the global alphabet into component alphabets while learning the component models. We develop a theoretical treatment of distributions of alphabets, i.e., sets of possibly overlapping component alphabets. We characterize counter-examples that reveal inconsistencies with global observations, and show how to systematically update the distribution to restore consistency. We present a compositional learning algorithm implementing these ideas, where learning counter-examples precisely correspond to distribution counter-examples under well-defined conditions. We provide an implementation, called CoalA, using the state-of-the-art active learning library LearnLib. Our experiments show that on more than 630 subject systems, CoalA delivers improvements of up to five orders of magnitude in membership queries and, in systems with significant concurrency, also achieves better scalability in the number of equivalence queries.

Keywords and phrases:
Active learning, Compositional methods, Concurrency theory, Labelled transition systems, Formal methods
Funding:
Léo Henry: EPSRC project Verification of Hardware Concurrency via Model Learning (CLeVer) – EP/S028641/1.
Mohammad Reza Mousavi: UKRI Trustworthy Autonomous Systems Node in Verifiability – EP/V026801/2; EPSRC project Verified Simulation for Large Quantum Systems (VSL-Q) – EP/Y005244/1; EPSRC project Robust and Reliable Quantum Computing (RoaRQ), Investigation 009 Model-based monitoring and calibration of quantum computations (ModeMCQ) – EP/W032635/1; ITEA/InnovateUK projects GENIUS (600642) and GreenCode (600643).
Thomas Neele: NWO grant VI.Veni.232.224.
Matteo Sammartino: EPSRC project Verification of Hardware Concurrency via Model Learning (CLeVer) – EP/S028641/1.
Copyright and License:
© Léo Henry, Mohammad Reza Mousavi, Thomas Neele, and Matteo Sammartino; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Active learning
Supplementary Material:
Software  (Source Code): https://doi.org/10.5281/zenodo.15170685
Acknowledgements:
We thank the reviewers for their thorough comments and suggestions, and the authors of [27] for their feedback on an earlier version of this paper.
Editors:
Patricia Bouyer and Jaco van de Pol

1 Introduction

Automata learning [16] has been successfully applied to learn widely-used protocols such as TCP [12], SSH [13], and QUIC [11], as well as CPU caching policies [38], and to find faults in their actual black-box implementations. Moreover, it has been applied to a wide range of applications such as bank cards [1] and biometric passports [2, 28]. Accessible expositions of the success stories in this field are already available [20, 36]. However, it is well known that state-of-the-art automata learning algorithms do not scale to systems with more than a few hundred input and output symbols in their alphabets [36].

Scalability to systems with large input alphabets is required for many real-life applications. This requirement has inspired recent attempts [25, 31, 27] to develop compositional approaches to automata learning that address the scalability issues. Some past compositional approaches [31] relied on a priori knowledge of the system decomposition, while others considered non-synchronizing state machines [25] or a restricted subset of synchronizing ones [27] to help learn the decomposition. These assumptions need to be relaxed to enable dealing with legacy and black-box systems and with synchronizing components. In particular, in the presence of an ever-increasing body of legacy automatically-generated code, architectural discovery is a significant challenge [23, 15, 32]: refactoring and rejuvenating legacy systems has been posed as a major application of automata learning [33].

In this paper, we take a significant step beyond the available results and develop a compositional automata learning approach that does not assume any prior knowledge of the decomposition of the alphabet and allows for an arbitrary general synchronization scheme, common in the theory of automata and process calculi [19]. To this end, we take inspiration from the realizability problem in concurrency theory and use iterative refinements of the alphabet decomposition (called a distribution [30]) to arrive at a provably sound decomposition while learning the components’ behavior. To our knowledge, this is the first result of its kind and the first extension of realizability into the domain of automata learning.

To summarize, the contributions of our paper are listed below:

  • We develop a novel theory of system decomposition for LTS synchronization that formally characterizes which alphabet decompositions can accurately model observed behaviors, establishing a theoretical foundation for automated component discovery. Proofs of our theorems are given in the appendix.

  • Based on this, we propose a compositional active learning algorithm that dynamically refines component alphabets during the learning process, supporting standard synchronization mechanisms without requiring a priori knowledge of the system’s decomposition.

  • We implemented our approach as the prototype tool CoalA, built on the state-of-the-art LearnLib framework [22], and evaluated it on over 630 systems from three benchmark sets. Compared to a monolithic approach, CoalA achieved substantial reductions in queries, with up to five orders of magnitude fewer membership queries and one order fewer equivalence queries across most of our benchmark systems with parallel components, resulting in better overall scalability. The replication package is available at [18].

2 Related work

Realizability of sequential specifications in terms of parallel components has been a long-standing problem in concurrency theory. In the context of Petri nets, this was pioneered by Ehrenfeucht and Rozenberg [10], followed up by the work of Castellani, Mukund and Thiagarajan [8]. Realizability has been further investigated in other models of concurrency such as team automata [34], session types [5], communicating automata [17] and labelled transition systems (LTSs) [35]. Related to this line of research is the decomposition of LTSs into prime processes [26]. We are inspired by the work of Mukund [30], characterizing the transition systems that can be synthesized into an equivalent parallel system given a decomposition of their alphabet (called a distribution). Mukund explores this characterization for two notions of parallel composition (loosely cooperating and synchronous parallel composition) and three notions of equivalence (isomorphism, language equivalence, and bisimulation). We base our work on Mukund’s results for loosely cooperating systems and language equivalence. We extend them by defining consistency between observations and distributions, and by refining distributions to reinstate consistency.

Our work integrates two recent approaches to compositional learning: we extend the work on learning synchronous parallel compositions of automata [31] by automatically learning the decomposition of the alphabet, through refinement of distributions; moreover, we extend the work on learning interleaving parallel compositions of automata [25] by enabling a generic synchronization scheme among components. In parallel to our work, an alternative proposal [27] has appeared that allows for synchronization among interleaving automata; however, the proposed synchronization scheme assumes that whenever two components are not ready to synchronize, e.g., because they produce different outputs on the same input, a special output is produced (or otherwise, the semantic model is output deterministic). We do not assume any such additional information and use a synchronization scheme widely used in the theory of automata and process calculi [19]. Other contributions related to compositional learning include the active learning of product automata, a variation of Mealy machines whose output is the combination of the outputs of several Mealy machines – as in our case, the component Mealy machines are learned individually [29]; the learning of systems of procedural automata [14], sets of automata that can call each other in a way similar to procedure calls; and the learning of asynchronously-communicating finite state machines via queries in the form of message sequence charts [6], though using a monolithic approach.

3 Preliminaries

We use Σ to denote a finite alphabet of action symbols, and Σ* to denote the set of finite sequences of symbols in Σ, which we call traces; we use ϵ ∈ Σ* to denote the empty trace. Given two traces σ1, σ2 ∈ Σ*, we denote their concatenation by σ1σ2. We refer to the i-th element of σ by σ[i]. The projection σ↾Σ′ of σ on an alphabet Σ′ ⊆ Σ is the sequence of symbols in σ that are also contained in Σ′: ϵ↾Σ′ = ϵ, and (σa)↾Σ′ = (σ↾Σ′)a if a ∈ Σ′ and (σa)↾Σ′ = σ↾Σ′ otherwise. We generalize this notation to sets (and thus languages), such that S↾Σ′ = {σ↾Σ′ ∣ σ ∈ S}. Given a set S, we write |S| for its cardinality. We write 𝗂𝗆𝗀(f) for the image of a function f.
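As a concrete illustration of the projection operator (our own minimal sketch, representing traces as strings of single-character symbols; not taken from the paper’s artifact):

```python
def project(trace, alphabet):
    """Projection of a trace onto a sub-alphabet: keep exactly the
    symbols of the trace that belong to the sub-alphabet."""
    return "".join(a for a in trace if a in alphabet)

def project_lang(lang, alphabet):
    """Lift projection to sets of traces (languages)."""
    return {project(t, alphabet) for t in lang}
```

For example, projecting abdc onto {a, b, c} drops the d and yields abc.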

3.1 Labelled Transition Systems

In this work we represent the state-based behavior of a system as a labelled transition system.

Definition 3.1 (Labelled Transition System).

A labelled transition system (LTS) is a four-tuple T = (S, Σ, →, ŝ), where S is a set of states; Σ is a finite alphabet of actions; → ⊆ S × Σ × S is a transition relation; ŝ ∈ S is an initial state.

We write in infix notation s −a→ t for (s, a, t) ∈ →. We say that an action a is enabled in s, written s −a→, if there is t such that s −a→ t. The transition relation and the notion of enabledness are also extended to traces σ ∈ Σ*, yielding s −σ→ t and s −σ→.

Definition 3.2 (Language of an LTS).

The language of T is the set of traces enabled from the initial state, formally: ℒ(T) = {σ ∈ Σ* ∣ ŝ −σ→}.

Note that languages of LTSs are always prefix-closed, because every prefix of an enabled trace is necessarily enabled. LTSs correspond exactly to prefix-closed automata.

The parallel composition of a finite set of LTSs is a product model representing all possible behaviors when the LTSs synchronize on shared actions. Intuitively, an action a can be performed when all LTSs that have a in their alphabet can perform it in their current state. The other LTSs remain idle during the transition.

Definition 3.3 (Parallel composition).

Given n LTSs Ti = (Si, Σi, →i, ŝi) for 1 ≤ i ≤ n, their parallel composition, denoted T1 ∥ ⋯ ∥ Tn, is an LTS (S1 × ⋯ × Sn, Σ1 ∪ ⋯ ∪ Σn, →, (ŝ1, …, ŝn)), where the transition relation → is given by the following rule:

if si −a→i ti for all i such that a ∈ Σi, and sj = tj for all j such that a ∉ Σj, then (s1, …, sn) −a→ (t1, …, tn).

We say that an action a is local if there is exactly one i such that a ∈ Σi; otherwise, it is called synchronizing. The parallel composition of LTSs thus forces individual LTSs to cooperate on synchronizing actions; local actions can be performed independently. We typically refer to the LTSs that make up a composite LTS as components. For a parallel composition of two LTSs, we use the infix notation T ∥ T′ when convenient. Synchronization of components corresponds to communication between components in the real world.
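The composition rule above can be sketched in a few lines of Python (our own encoding of an LTS as an (alphabet, transitions, initial state) triple, with transitions a set of (state, action, state) triples; only the reachable part of the product is built):

```python
from itertools import product as cartesian

def compose(components):
    """Reachable part of the parallel composition of LTSs: an action
    fires iff every component owning it can fire it; the others idle."""
    sigma = set().union(*(alph for alph, _, _ in components))
    init = tuple(s0 for _, _, s0 in components)
    trans, frontier, seen = set(), [init], {init}
    while frontier:
        state = frontier.pop()
        for a in sigma:
            options = []
            for (alph, delta, _), s in zip(components, state):
                if a in alph:
                    # components owning `a` must all be able to fire it
                    succs = [t for (p, b, t) in delta if p == s and b == a]
                    if not succs:
                        options = None   # some owner blocks `a`
                        break
                    options.append(succs)
                else:
                    options.append([s])  # non-owners stay idle
            if options is None:
                continue
            for target in cartesian(*options):
                trans.add((state, a, target))
                if target not in seen:
                    seen.add(target)
                    frontier.append(target)
    return sigma, trans, init
```

On a small two-component system, one can observe exactly the phenomenon of the running example: a synchronizing action is blocked until all its owners are ready.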

We define the corresponding notion for languages on restricted alphabets.

Definition 3.4 (Parallel composition of languages).

Given n languages and alphabets (ℒi, Σi) such that ℒi ⊆ Σi* for all 1 ≤ i ≤ n, let Σ = Σ1 ∪ ⋯ ∪ Σn. We define (ℒ1, Σ1) ∥ ⋯ ∥ (ℒn, Σn) as

{σ ∈ Σ* ∣ σ↾Σi ∈ ℒi for all 1 ≤ i ≤ n}.
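Membership in a parallel composition of languages thus reduces to checking each projection; a minimal sketch (our own encoding, languages as finite sets of strings):

```python
def in_product(trace, components):
    """A trace is in the composed language iff each of its projections
    lies in the corresponding local language.
    `components` is a list of (language, alphabet) pairs."""
    proj = lambda s, alph: "".join(a for a in s if a in alph)
    return all(proj(trace, alph) in lang for lang, alph in components)
```

For instance, with local languages {ϵ, a, ab} over {a, b} and {ϵ, b, bd} over {b, d}, the trace abd is accepted, while b alone is not.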
Example 3.5 (Running example).

Consider the LTSs T1 and T2 given in Figure 1, with the respective alphabets {a,b,c} and {b,d}. Their parallel composition is depicted at the bottom of Figure 1.

Figure 1: The parallel composition of two LTSs.

Here a, c and d are local actions, whereas b is synchronizing. Note that, although T2 can perform b from its initial state t0, there is no b transition from (s0,t0) in T1 ∥ T2, because b is not enabled in s0. Action b can only be performed in T1 ∥ T2 after T1 does an a or a c and moves to s1, which is captured as the a and c transitions from (s0,t0).

3.2 Active Automata Learning

In active automata learning [4], a Learner infers an automaton model of an unknown language by querying a Teacher, which knows and answers two query types (see Figure 2):

  • Membership queries: is a trace σ in ℒ? The Teacher replies yes/no.

  • Equivalence queries: given a hypothesis model ℋ, is ℒ(ℋ) = ℒ? The Teacher either replies yes or provides a counter-example – a trace that is in one language but not in the other.

Figure 2: Active automata learning for a target language ℒ.

Algorithms based on this framework – Angluin’s 𝙻* being the classical example – converge to a canonical model (e.g., the minimal DFA) of the target language. In practice, the Teacher is realized as an interface to the System Under Learning (SUL): membership queries become tests on the SUL, and equivalence queries are approximated via systematic testing strategies [7, 24].

During learning, the learner gathers observations about the SUL. While these observations are typically organized in a data structure (e.g., a table or a tree), they can be abstractly represented as a partial function mapping traces to their accepted (+) or rejected () status.

Definition 3.6 (Observation function).

An observation function over Σ is a partial function 𝖮𝖻𝗌 : Σ* ⇀ {+, −}.

We write 𝖣𝗈𝗆(𝖮𝖻𝗌) for the domain of 𝖮𝖻𝗌 and only consider observation functions with a finite domain. We sometimes represent an observation function 𝖮𝖻𝗌 as the set of pairs {(σ, 𝖮𝖻𝗌(σ)) ∣ σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌)}.

Definition 3.7 (Observation function/language agreement).

An observation function 𝖮𝖻𝗌 agrees with a language ℒ whenever σ ∈ ℒ ⟺ 𝖮𝖻𝗌(σ) = +, for all σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌).

To compositionally learn a model formulated as a parallel composition of LTSs T1 ∥ ⋯ ∥ Tn, we define how to derive the local observation functions that will be used for the components.

Definition 3.8 (Local observation function).

Given a sub-alphabet Σi ⊆ Σ, the local observation function 𝖮𝖻𝗌↾Σi : Σi* ⇀ {+, −} is defined such that 𝖣𝗈𝗆(𝖮𝖻𝗌↾Σi) = 𝖣𝗈𝗆(𝖮𝖻𝗌)↾Σi and 𝖮𝖻𝗌↾Σi(σ) = ⋁ { 𝖮𝖻𝗌(σ′) ∣ σ′ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌), σ′↾Σi = σ }, for all σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌↾Σi), where the join ⋁ yields + if any disjunct is +.

This definition mimics the behavior of parallel composition: a component Ti accepts σ if and only if there is some σ′ such that σ′↾Σi = σ and T1 ∥ ⋯ ∥ Tn accepts σ′.

Example 3.9.

Consider again the LTSs from Figure 1 and suppose we are given the following observation function for T1 ∥ T2: 𝖮𝖻𝗌: a ↦ +; aa ↦ −; abd ↦ +. The local observation functions we obtain for T1 and T2 are, respectively:

𝖮𝖻𝗌↾{a,b,c}: a ↦ +; aa ↦ −; ab ↦ +    and    𝖮𝖻𝗌↾{b,d}: ϵ ↦ +; bd ↦ +.

The observation 𝖮𝖻𝗌(abd) = + requires both components to cooperate, hence 𝖮𝖻𝗌↾{a,b,c}(ab) = + and 𝖮𝖻𝗌↾{b,d}(bd) = +. We derive 𝖮𝖻𝗌↾{b,d}(ϵ) = + from 𝖮𝖻𝗌(a) ∨ 𝖮𝖻𝗌(aa) = +, since the projection of both these traces to {b,d} is ϵ.
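The derivation of local observation functions can be sketched directly from the definition (our own encoding: traces as strings, verdicts as '+'/'-'); the numbers below reproduce Example 3.9:

```python
def local_obs(obs, sub):
    """Local observation function for sub-alphabet `sub`: a projected
    trace is '+' as soon as some global trace projecting onto it is '+',
    and '-' otherwise."""
    proj = lambda s: "".join(a for a in s if a in sub)
    out = {}
    for trace, verdict in obs.items():
        p = proj(trace)
        out[p] = "+" if verdict == "+" or out.get(p) == "+" else "-"
    return out
```

Note that the empty string stands for ϵ, so the derived entry ϵ ↦ + for {b, d} comes out as `"": "+"`.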

4 Distributions

In this section, we first discuss how we decompose the global alphabet into a distribution, i.e., a set of potentially overlapping local alphabets. We then give some properties of distributions and their relation to observation functions. Based on these, we explain how to extend a distribution to model a given observation function.

4.1 Distributions and Observations

In a model expressed as a parallel composition T1 ∥ ⋯ ∥ Tn, permuting symbols belonging to different local alphabets does not affect membership in the language. For example, in Figure 1, because abcd is in the language, we directly know that abdc is too, as c and d are local to different components. To formalize this, we first formally define distributions as follows.

Definition 4.1 (Distribution).

A distribution of an alphabet Σ is a set Ω = {Σ1, …, Σn} of alphabets such that Σ1 ∪ ⋯ ∪ Σn = Σ.

For the rest of this section, we fix an alphabet Σ, a distribution Ω = {Σ1, …, Σn} of Σ, and an observation function 𝖮𝖻𝗌 over Σ, unless otherwise specified.

For a given distribution Ω, we define below the class of languages, called product languages over Ω, that can be represented over that distribution.

Definition 4.2 (Product language).

ℒ is a product language over Ω, notation Ω ⊨ ℒ, iff there exists a family of languages {ℒi}1≤i≤n, where ℒi ⊆ Σi* for all 1 ≤ i ≤ n, such that ℒ = (ℒ1, Σ1) ∥ ⋯ ∥ (ℒn, Σn).

Example 4.3.

In Example 3.5, it is clear by construction that {{a,b,c},{b,d}} ⊨ ℒ(T1 ∥ T2). However, ℒ(T1 ∥ T2) is not a product language over Ωsingles = {{a},{b},{c},{d}}, because any product language over Ωsingles would allow permuting a and b and would thus fail to capture the fact that b can only come after one a.

We recall the following key lemma for product languages.

Lemma 4.4 ([30], Lemma 5.2).

A language ℒ is a product language over Ω if and only if ℒ = (ℒ↾Σ1, Σ1) ∥ ⋯ ∥ (ℒ↾Σn, Σn).

We can now define product observations over Ω, i.e., an observation that can be generated by a product language over Ω.

Definition 4.5 (Product observation).

𝖮𝖻𝗌 is a product observation over Ω, notation Ω ⊨ 𝖮𝖻𝗌, iff there exists a language ℒ such that Ω ⊨ ℒ and 𝖮𝖻𝗌 agrees with ℒ (Definition 3.7). We conversely say that Ω models 𝖮𝖻𝗌.

While Definition 4.5 does not prescribe how to find such a distribution given an observation function, it can be used to detect precisely when a current distribution is not consistent with observations and must be updated. This results in the following proposition linking local and global observations for a given distribution: an observation is a product observation over a distribution if and only if its projections according to the distribution hold the same information as the observation itself. The proof follows largely from Lemma 4.4.

Proposition 4.6.

Ω ⊨ 𝖮𝖻𝗌 if and only if for all traces σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) it holds that 𝖮𝖻𝗌(σ) = ⋀Σi∈Ω 𝖮𝖻𝗌↾Σi(σ↾Σi).

Example 4.7.

Following from Example 4.3, consider the following observation function based on (T1T2) from Example 3.5:

𝖮𝖻𝗌: ϵ ↦ +; a ↦ +; ab ↦ +; b ↦ −; c ↦ +; d ↦ −.

Using the above proposition, we can verify that Ωsingles = {{a},{b},{c},{d}} ⊭ 𝖮𝖻𝗌. This is because 𝖮𝖻𝗌(b) = −, whereas for all Σi ∈ Ωsingles, 𝖮𝖻𝗌↾Σi(b↾Σi) = +: since ab↾{b} = b, we have 𝖮𝖻𝗌↾{b}(b) = +, and for the other alphabets b↾Σi = ϵ and 𝖮𝖻𝗌↾Σi(ϵ) = +. In contrast, Ω{a,b} = {{a,b},{c},{d}} ⊨ 𝖮𝖻𝗌, since the alphabet {a,b} allows distinguishing the observations b and ab.
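The check of Proposition 4.6 is directly executable; a sketch under our own encoding (a distribution as a list of alphabets), reproducing Example 4.7:

```python
def is_product_obs(obs, distribution):
    """Check Ω ⊨ Obs via the proposition: every observed trace's global
    verdict must equal the conjunction of the local verdicts of its
    projections onto the distribution's alphabets."""
    proj = lambda s, alph: "".join(a for a in s if a in alph)

    def local(alph):
        # local observation function for one alphabet ('+' is sticky)
        out = {}
        for trace, verdict in obs.items():
            p = proj(trace, alph)
            out[p] = "+" if verdict == "+" or out.get(p) == "+" else "-"
        return out

    locs = [(alph, local(alph)) for alph in distribution]
    return all(
        (verdict == "+") == all(lo[proj(t, alph)] == "+" for alph, lo in locs)
        for t, verdict in obs.items())
```

On the observation function of Example 4.7, the singleton distribution fails the check while {{a,b},{c},{d}} passes it.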

In our algorithm, this check on local and global observation functions is used to trigger an update of the current distribution exactly when necessary.

Based on the above proposition, we now define counter-examples to a distribution. By definition of local observations, if 𝖮𝖻𝗌(σ) = +, then 𝖮𝖻𝗌↾Σi(σ↾Σi) = +. Hence, to obtain 𝖮𝖻𝗌(σ) ≠ ⋀Σi∈Ω 𝖮𝖻𝗌↾Σi(σ↾Σi), we must have a globally negative observation σN and a set of globally positive observations whose projections to the local components match the projections of σN, indicating a mismatch between global and local observations.

Definition 4.8 (Counter-example to a distribution).

A counter-example to Ω ⊨ 𝖮𝖻𝗌 is a pair (σN, P) ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) × 𝖣𝗈𝗆(𝖮𝖻𝗌)^Ω with

  • σN a negative observation, i.e., 𝖮𝖻𝗌(σN) = −;

  • P a function that maps each Σi ∈ Ω to a positive observation σΣi, i.e., 𝖮𝖻𝗌(σΣi) = +, such that σN↾Σi = σΣi↾Σi.

We call 𝗂𝗆𝗀(P) the positive image of the counter-example. We write 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) for the set of such counter-examples.

Although these counter-examples are not necessarily related to learning, we use the same terminology as in active learning. This is because the two concepts are directly linked in our case, as will be explained later.

Example 4.9.

Reusing the observation function 𝖮𝖻𝗌 and the singleton distribution Ωsingles defined in Example 4.7, every element of 𝐶𝐸𝐷(Ωsingles, 𝖮𝖻𝗌) has σN = b and P({b}) = ab. For the remaining elements of Ωsingles, there are more choices: {a} can be mapped to either ϵ or c; {c} to ϵ, a, or ab; and {d} to ϵ, a, ab, or c.

Proposition 4.6, specialized to our definition of counter-examples, yields the following corollary, which we will use later to detect that a distribution models the observations.

Corollary 4.10.

Ω ⊨ 𝖮𝖻𝗌 ⟺ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅.
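The set of distribution counter-examples is finite and can be enumerated directly from the definition; a brute-force sketch (our own encoding, not the strategy CoalA uses), which on Example 4.7 yields exactly the choices listed in Example 4.9:

```python
from itertools import product

def distribution_counterexamples(obs, distribution):
    """Enumerate CE_D(Ω, Obs): pair each negative trace with every
    choice of one positive witness per alphabet whose projection
    matches the negative trace's projection."""
    proj = lambda s, alph: "".join(a for a in s if a in alph)
    positives = [t for t, v in obs.items() if v == "+"]
    ces = []
    for neg in (t for t, v in obs.items() if v == "-"):
        per_alph = [[p for p in positives if proj(p, a) == proj(neg, a)]
                    for a in distribution]
        # one witness per alphabet; no witness for some alphabet => no CE
        for choice in product(*per_alph):
            ces.append((neg, dict(zip(map(frozenset, distribution), choice))))
    return ces
```

For the singleton distribution of Example 4.7 this gives 2 · 1 · 3 · 4 = 24 counter-examples, all with σN = b and P({b}) = ab.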

4.2 Resolving a Counter-example

Given a distribution Ω and a fixed observation function 𝖮𝖻𝗌, one key question is how to extend Ω to a new distribution Ω′ modelling 𝖮𝖻𝗌. This is a difficult problem, as new counter-examples can arise when extending a distribution. In this subsection, we explain how to resolve a single counter-example as a first step.

When a counter-example (σN, P) to Ω ⊨ 𝖮𝖻𝗌 exists, it reveals a limitation of the distribution Ω: the projections of σN coincide with projections of elements in 𝗂𝗆𝗀(P), making them indistinguishable under the current components. To resolve such counter-examples, it is thus necessary and sufficient to augment Ω with new components that disrupt this matching. In the following, we fix a counter-example (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) to Ω ⊨ 𝖮𝖻𝗌.

More precisely, for each pair of traces (σN, σ) with σ ∈ 𝗂𝗆𝗀(P), it suffices to identify a discrepancy between them. There are two types of discrepancies: multiplicity discrepancies and order discrepancies. A multiplicity discrepancy is a symbol occurring a different number of times in each trace. For this, given a trace σ, let Σm(σ) denote the multiset of symbols occurring in σ. Note that Σm(σ) = Σm(σ′) if and only if σ is a permutation of σ′. The symmetric difference of multisets A and B is denoted A Δ B.

Definition 4.11 (Multiplicity discrepancy).

Given Σi ∈ Ω, the set of multiplicity discrepancies for Σi is 𝒟mΣi(σN, P) = Σm(σN) Δ Σm(P(Σi)).

We now define an order discrepancy, i.e., a pair of symbols whose relative positions differ between the traces. We do this by considering whether symbols that are not a multiplicity discrepancy, i.e., those appearing the same number of times in both traces, are permuted. We choose the permutation such that the relative order of identical symbols is maintained.

Definition 4.12 (Order discrepancy).

Given Σi ∈ Ω, let θ = Σ ∖ (Σm(σN) Δ Σm(P(Σi))) be the set of symbols on which σN and P(Σi) agree, and define σ′N = σN↾θ. Let π be the unique permutation such that σ′N = π(P(Σi)↾θ) and σ′N[j] = σ′N[k] implies π(j) < π(k), for all j < k. The set of order discrepancies for Σi is then:

𝒟oΣi(σN, P) = { {σ′N[j], σ′N[k]} ∣ k < j and π(k) > π(j) }

Multiplicity and order discrepancies can be found in linear and quadratic time, respectively. Finally, we define the discrepancies for a counter-example as sets that contain at least one discrepancy of either type for each alphabet in Ω.

Definition 4.13 (Discrepancy set).

A set δ ⊆ Σ is a discrepancy for (σN, P) iff for all Σi ∈ Ω, either 𝒟mΣi(σN, P) ∩ δ ≠ ∅ or there is δΣi ∈ 𝒟oΣi(σN, P) such that δΣi ⊆ δ. We write 𝒟(σN, P) for the set of all discrepancies for the counter-example (σN, P).

For a set of counter-examples {ce1, …, cen}, we write 𝒟({ce1, …, cen}) = { {δ1, …, δn} ∣ ∀i. δi ∈ 𝒟(cei) }, representing all possible selections of one discrepancy per counter-example.

Example 4.14 (Multiplicity discrepancy).

Following Example 4.9 with the singleton distribution Ωsingles = {{a},{b},{c},{d}}, consider the counter-example (σN, P) = (b, ({a} ↦ ϵ, {b} ↦ ab, {c} ↦ ϵ, {d} ↦ ϵ)). We find the following multiplicity discrepancies: 𝒟mΣi(σN, P) = {b} for Σi ∈ {{a},{c},{d}}, because b occurs once in σN vs. zero times in P(Σi), and 𝒟m{b}(σN, P) = {a}. Hence, 𝒟(σN, P) includes all subsets of {a,b,c,d} containing {a,b}. For a different counter-example such as (σN, P) = (b, ({a} ↦ c, {b} ↦ ab, {c} ↦ ab, {d} ↦ c)), we obtain 𝒟m{a}(σN, P) = 𝒟m{d}(σN, P) = {b, c} and 𝒟m{b}(σN, P) = 𝒟m{c}(σN, P) = {a}, so 𝒟(σN, P) includes any subset of {a,b,c,d} that contains either {a,b} or {a,c}.

Example 4.15 (Order discrepancy).

Consider a singleton distribution Ωsingles = {{a},{b},{c}} with 𝖮𝖻𝗌(abc) = + and 𝖮𝖻𝗌(bac) = −. This yields the counter-example (σN, P) = (bac, ({a} ↦ abc, {b} ↦ abc, {c} ↦ abc)). For each Σi ∈ Ωsingles, we find 𝒟oΣi(σN, P) = {{a, b}}. Intuitively, these discrepancies reveal that singleton components allow all permutations of a and b, but the observation function forbids some of them. Therefore, 𝒟(σN, P) includes all subsets of {a,b,c} containing {a,b}.
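Both discrepancy notions can be computed directly from their definitions; a sketch in Python (traces as strings, our own encoding), reproducing the values of Examples 4.14 and 4.15:

```python
from collections import Counter

def multiplicity_discrepancy(sigma_n, sigma_p):
    """Symbols occurring a different number of times in the two traces
    (the support of the multiset symmetric difference)."""
    cn, cp = Counter(sigma_n), Counter(sigma_p)
    return {a for a in cn | cp if cn[a] != cp[a]}

def order_discrepancies(sigma_n, sigma_p):
    """Pairs of symbols whose relative order differs between the traces,
    after restricting both to the symbols with equal multiplicity."""
    cn, cp = Counter(sigma_n), Counter(sigma_p)
    theta = {a for a in cn if cn[a] == cp[a]}
    rn = [a for a in sigma_n if a in theta]
    rp = [a for a in sigma_p if a in theta]
    # pi matches the i-th occurrence of each symbol in rn to the i-th
    # occurrence of the same symbol in rp (identical symbols keep order)
    positions, seen, pi = {}, Counter(), []
    for idx, a in enumerate(rp):
        positions.setdefault(a, []).append(idx)
    for a in rn:
        pi.append(positions[a][seen[a]])
        seen[a] += 1
    # inversions of pi are exactly the order discrepancies
    return {frozenset({rn[j], rn[k]})
            for k in range(len(rn)) for j in range(k) if pi[j] > pi[k]}
```

Design note: matching the i-th occurrence of each symbol realizes the requirement that the permutation preserves the relative order of identical symbols.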

We can now state that discrepancies are both sufficient and necessary additions to a distribution in order to eliminate their counter-examples.

Proposition 4.16.

Suppose there exists (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). For each discrepancy δ ∈ 𝒟(σN, P), we have (σN, P) ∉ 𝐶𝐸𝐷(Ω ∪ {δ}, 𝖮𝖻𝗌). Conversely, for any distribution Ω′ of Σ where δ ⊈ Σi for all δ ∈ 𝒟(σN, P) and all Σi ∈ Ω′, (σN, P) is a counter-example to Ω′ ⊨ 𝖮𝖻𝗌.

4.3 Extending a Distribution to Model an Observation Function

Using the previous subsection as a basis, we leverage structural properties of distributions to restrict the possible counter-examples that can appear when updating the distribution. Finally, we devise an iterative process that is guaranteed to converge to a distribution modelling 𝖮𝖻𝗌.

Pre-ordering of Distributions

Distributions can be preordered by their “connecting power”, i.e., by the extent to which they connect symbols together as part of the same alphabets.

Definition 4.17 (Connectivity preorder).

Given two distributions Ω and Ω′ of an alphabet Σ, we say that Ω is less connecting than Ω′ and write Ω ≼ Ω′ when ∀Σi ∈ Ω. ∃Σj ∈ Ω′. Σi ⊆ Σj (equivalently, Ω′ is said to be more connecting than Ω). The relation is strict, written Ω ≺ Ω′, when additionally Ω′ ⋠ Ω. The relation ≼ forms a preorder with finite chains.
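The preorder is a simple coverage check; a minimal sketch (distributions as collections of Python sets, our own encoding):

```python
def less_connecting(omega1, omega2):
    """Ω ≼ Ω′: every alphabet of Ω is included in some alphabet of Ω′."""
    return all(any(s1 <= s2 for s2 in omega2) for s1 in omega1)

def strictly_less_connecting(omega1, omega2):
    """Ω ≺ Ω′: Ω ≼ Ω′ but not Ω′ ≼ Ω."""
    return less_connecting(omega1, omega2) and not less_connecting(omega2, omega1)
```

For example, the singleton distribution is strictly less connecting than {{a,b},{c},{d}}.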

We relate this notion to the sets of counter-examples for a fixed observation function to show that adding connections in a distribution makes the counter-example set progress along a preorder. For this, we first define a notion of inclusion for counter-examples.

Definition 4.18 (Counter-example inclusion).

Consider two distributions Ω and Ω′ of Σ, (σN, P) a counter-example to Ω ⊨ 𝖮𝖻𝗌, and (σ′N, P′) a counter-example to Ω′ ⊨ 𝖮𝖻𝗌. We write (σN, P) ⊑ (σ′N, P′) whenever σN = σ′N and 𝗂𝗆𝗀(P) ⊆ 𝗂𝗆𝗀(P′). The strict inclusion (σN, P) ⊏ (σ′N, P′) holds whenever σN = σ′N and 𝗂𝗆𝗀(P) ⊊ 𝗂𝗆𝗀(P′).

In its simplest form, progress means eliminating counter-examples from the current set of counter-examples 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). However, a counter-example ce might be replaced by new counter-examples ce′ such that ce ⊑ ce′, which emerge when new connections are added to the distribution. Hence, progress means that some counter-examples are either eliminated or replaced by subsuming ones, as depicted in Figure 3.

Figure 3: The different relations between the elements of two sets of counter-examples CE ⊑ CE′.
Definition 4.19 (Counter-example set preordering).

Consider sets of counter-examples CE and CE′. We write CE ⊑ CE′ when ∀ce′ ∈ CE′. ∃ce ∈ CE. ce ⊑ ce′. We write CE ⊏ CE′ when, furthermore, either ce ⊏ ce′ or CE ⊑ CE′ ∖ {ce′}, for some ce ∈ CE and ce′ ∈ CE′.

Example 4.20.

We give a short example of the preorder on sets of counter-examples, inspired by our running example; it will later appear in Example 5.5:

{(b, ({a,b} ↦ cb, {c} ↦ ϵ, {d} ↦ ϵ))} ⊑ {(b, ({a,b} ↦ cb, {b,c} ↦ ab, {d} ↦ ϵ))}.

Notice that the only difference between these two singleton sets is that {c} ↦ ϵ is replaced by {b,c} ↦ ab, such that the positive image of the first counter-example is strictly included in that of the second.

Using the above definitions, we can prove that increasing the connecting power of a distribution ensures that the set of counter-example progresses.

Proposition 4.21.

Consider two distributions Ω and Ω′ of Σ. We have Ω ≼ Ω′ ⟹ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌).

As an immediate consequence, whenever Ω has no counter-examples, any distribution Ω′ that is more connecting than Ω will also have none, i.e., both distributions will model the same observations.

Corollary 4.22.

Let Ω be a distribution of Σ such that Ω ⊨ 𝖮𝖻𝗌. For any distribution Ω′ of Σ such that Ω ≼ Ω′, we have Ω′ ⊨ 𝖮𝖻𝗌.

Fixing the distribution

Using Propositions 4.16 and 4.21, from an initial distribution Ω with Ω ⊭ 𝖮𝖻𝗌 we can create a more connecting one that entails a strict progression in counter-examples.

Corollary 4.23.

Suppose that 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ≠ ∅. For each (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌), we pick a discrepancy δ(σN,P) ∈ 𝒟(σN, P). For any non-empty subset S of 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌), let Ω′ = Ω ∪ {δce ∣ ce ∈ S}. Then Ω ≺ Ω′ and 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ⊏ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌).

 Remark 4.24.

Corollary 4.23 gives us the freedom to select any discrepancy δce, for each counter-example ce. We can select discrepancies that result in a least connecting distribution, which yields a locally optimal greedy strategy for progress. The intuition behind this choice is that it leads to: (1) more components that are individually smaller and easier to learn; and (2) fewer synchronizing actions between components, thus reducing the complexity of coordination among learners (see Section 5.1 for details).

By iteratively applying this Corollary, we can eliminate counter-examples until reaching a distribution that models the observations. This leads to the following convergence result:

Theorem 4.25.

Suppose Ω ⊭ 𝖮𝖻𝗌. The above process converges, after finitely many steps, to a distribution Ω′ ⊨ 𝖮𝖻𝗌 such that Ω ≺ Ω′. When choosing S = 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) at each step, the number of steps is bounded by |𝒫(Σ)|.
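The whole convergence loop can be sketched end-to-end under strong simplifications (our own encoding and greedy choices, not CoalA’s implementation: one counter-example is resolved per round, the discrepancy is built greedily, and the order-discrepancy fallback assumes traces without repeated symbols):

```python
from collections import Counter

def refine(obs, dist):
    """Extend `dist` (a set of frozensets of symbols) until it models the
    observation function `obs` (traces -> '+'/'-'): find a counter-example,
    add a greedily built discrepancy as a new alphabet, canonicalize, repeat."""
    proj = lambda s, alph: "".join(a for a in s if a in alph)

    def find_ce(dist):
        # one distribution counter-example, or None if none exists
        positives = [t for t, v in obs.items() if v == "+"]
        for neg in (t for t, v in obs.items() if v == "-"):
            choice = {}
            for alph in dist:
                match = next((p for p in positives
                              if proj(p, alph) == proj(neg, alph)), None)
                if match is None:
                    break            # this alphabet distinguishes neg
                choice[alph] = match
            else:
                return neg, choice
        return None

    while (ce := find_ce(dist)) is not None:
        neg, choice = ce
        delta = set()
        for pos in choice.values():
            cn, cp = Counter(neg), Counter(pos)
            mult = {a for a in cn | cp if cn[a] != cp[a]}
            if mult:
                if not (mult & delta):      # not yet covered: add a symbol
                    delta.add(min(mult))
            else:
                # permutation case: add one inverted pair of symbols
                delta |= next({x, y} for i, x in enumerate(neg)
                              for y in neg[i + 1:] if pos.index(x) > pos.index(y))
        new = dist | {frozenset(delta)}
        dist = {a for a in new if not any(a < b for b in new)}  # canonical
    return dist
```

On the observation function of Example 4.7, starting from singletons, one round adds {a, b} and the loop terminates with the distribution {{a,b},{c},{d}}.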

Canonical distributions: inducing a partial order

Distributions have few constraints: they only need to span the entire alphabet, which leaves room for redundancies. We propose to remove redundancies without affecting the distribution’s connecting power, by removing alphabets completely contained within another.

Definition 4.26 (Canonical distribution).

Consider a distribution Ω = {Σ1, …, Σn} and let Sub = {Σi ∈ Ω ∣ ∃Σj ∈ Ω. Σi ⊊ Σj}. The associated canonical distribution is Ω ∖ Sub.
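Canonicalization is a one-line filter; a minimal sketch under our own encoding (alphabets as frozensets):

```python
def canonical(omega):
    """Canonical distribution: drop every alphabet strictly contained in
    another one; the connecting power is unchanged."""
    sets = {frozenset(s) for s in omega}
    return {s for s in sets if not any(s < t for t in sets)}
```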

As one would expect, canonicalization collapses the equivalence classes of the preorder (i.e., pairs with Ω ≼ Ω′ and Ω′ ≼ Ω) to create a strict partial order. Canonical distributions minimize the number of alphabets in the distribution while retaining the same connecting power. This means that counter-examples can easily be translated between a distribution and its canonical form, hence the following proposition.

Proposition 4.27.

𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅ ⟺ 𝐶𝐸𝐷(Ω ∖ Sub, 𝖮𝖻𝗌) = ∅

5 Compositional Learning Algorithm

Figure 4: Overview of the algorithm.

In this section, we present our algorithm to compositionally learn an unknown system 𝑆𝑈𝐿 = M1 ∥ M2 ∥ ⋯ ∥ Mn consisting of the parallel composition of n LTSs, given only a Teacher for the whole SUL and knowledge of the global alphabet Σ𝑆𝑈𝐿.

A bird’s eye view of the algorithm is provided in Figure 4. The key idea is to learn each component via a separate learner. Each learner Li poses membership queries independently, which are suitably translated to queries for the global Teacher, until it produces a hypothesis ℋi. Hypotheses returned by local learners are combined to create a global equivalence query. Counter-examples obtained through equivalence queries (σ, b) ∈ Σ𝑆𝑈𝐿* × {−, +} are classified as either global or local. They are global when the updated observations 𝖮𝖻𝗌′ = 𝖮𝖻𝗌 ∪ {(σ, b)} and the current distribution Ω of Σ𝑆𝑈𝐿 are such that Ω ⊭ 𝖮𝖻𝗌′, and local otherwise. Global counter-examples are used to refine the distribution Ω, possibly creating new components/learners. Local counter-examples are used to update the state of local learners.

We briefly recall the local learning procedure, which was introduced in previous work, before moving on to presenting the details of our main algorithm in Section 5.2.

5.1 Local learners

For each alphabet Σi in the distribution Ω, we spawn a learner Li that is tasked with formulating a hypothesis for the corresponding component. A key feature of this learner is the ability to translate local membership queries into global ones for the entire SUL, and to interpret the results of global queries at the local level [31]. The main difficulty is that this translation is not always feasible: when a membership query contains synchronizing actions, cooperation with the other learners is required. This should be done in a way that is consistent with the current distribution, i.e., if Ω ⊨ 𝖮𝖻𝗌, then for any global membership query result (σ, b) ∈ Σ𝑆𝑈𝐿* × {−, +}, we require Ω ⊨ 𝖮𝖻𝗌 ∪ {(σ, b)}. Furthermore, due to the nature of local observation functions, it is possible that at first 𝖮𝖻𝗌↾Σi(σ) = −, and that later this becomes 𝖮𝖻𝗌↾Σi(σ) = + as the set of global observations is extended through counter-examples; the learners must account for this. In summary, local learners should:

  • translate local queries to global ones, preserving Ω ⊧ 𝖮𝖻𝗌. If a local query cannot be translated, the answer is “unknown”;

  • be able to handle “unknown” entries in the learning data structure (e.g., an observation table), ensuring progress even in the presence of incomplete information; and

  • be able to correct negative counter-examples on the basis of a later positive counter-example.

Our previous work [31] shows one way to implement such a learner as an extension of the 𝙻* algorithm [4], in which LTSs are represented as prefix-closed DFAs. However, we remark that the algorithms proposed in the present work are independent of the implementation of these local learners, as long as they satisfy the requirements above. Thus, our 𝙻*-based implementation can be swapped out for other active learning algorithms, such as TTT [21].
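As an illustration of the first requirement, a three-valued query translation can be sketched as follows (a simplified sketch with our own naming, not the interface of [31]): a local trace can be asked to the global Teacher by one learner alone only when none of its actions are shared with another component; otherwise the safe answer is "unknown".

```python
def translate_local_query(local_trace, sigma_i, omega, teacher):
    """Answer a local membership query over sigma_i, or None ('unknown')."""
    # Actions appearing in any other component alphabet are synchronizing.
    shared = set().union(*(s for s in omega if s != sigma_i))
    if any(a in shared for a in local_trace):
        return None              # needs cooperation with the other learners
    return teacher(local_trace)  # purely local actions: ask the global Teacher
```

With Ω = {{a,b}, {b,c}}, the query aa over {a,b} is answered directly by the Teacher, while ab yields "unknown" because b synchronizes with the {b,c} component.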

5.2 Main algorithm

The main algorithm is presented in Algorithm 1. Initially, 𝖮𝖻𝗌 is empty, and the distribution Ω contains a singleton alphabet {a} for each action a ∈ Σ𝑆𝑈𝐿. The algorithm iteratively performs the following steps.

Each learner is run in parallel until it produces a hypothesis. The observations 𝖮𝖻𝗌 are suitably updated to record the interactions with the Teacher. Next, the local hypotheses are composed in parallel to form ℋ = ℋ₁ ∥ … ∥ ℋₖ, which is submitted to the Teacher as an equivalence query 𝑇𝑒𝑎𝑐ℎ𝑒𝑟(ℋ). If the query returns no counter-example, the algorithm returns ℋ and terminates. Otherwise, the returned counter-example (σcex, b) is added to 𝖮𝖻𝗌.

Crucially, when (σcex,b) is a global counter-example, it corresponds exactly to counter-examples to the distribution (Definition 4.8), as shown in the following lemma.

Lemma 5.1.

Given a global counter-example (σ, b), let 𝖮𝖻𝗌′ = 𝖮𝖻𝗌 ∪ {(σ, b)}:

  • if b = −, then there is P ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌)^Ω such that (σ, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′).

  • else, there are σN ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌), S ⊆ Ω and P ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌)^(Ω∖S) such that (σN, P ∪ (Σᵢ ↦ σ)Σᵢ∈S) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′).

Furthermore, all of the elements of 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′) have the above structure.

Therefore, based on this lemma, the distribution is augmented with discrepancies for a chosen subset S of distribution counter-examples, following Corollary 4.23. This process eventually converges to a distribution Ω′ such that Ω′ ⊧ 𝖮𝖻𝗌 (Theorem 4.25). The new distribution is then optimized by making it canonical (Definition 4.26) and, if desired, increasing its connectivity. The optimization step does not affect counter-example-freeness (by Proposition 4.27 and Corollary 4.22) and may be used to reduce synchronizations, which improves performance (see Section 6). New learners are then started over the updated alphabets; in practice, the learners leverage 𝖮𝖻𝗌 to partially initialize their observation tables.
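A single refinement-plus-canonicalization step can be sketched as follows (a hypothetical helper of our own, assuming alphabets are represented as frozensets): the chosen discrepancies are added to the distribution, and subsumed alphabets are dropped to make the result canonical in the sense of Definition 4.26.

```python
def refine(omega, discrepancies):
    """Add discrepancies to the distribution, then keep only maximal alphabets."""
    enlarged = set(omega) | set(discrepancies)
    # Canonicalization: drop any alphabet strictly contained in another one.
    return {s for s in enlarged if not any(s < t for t in enlarged)}
```

For example, refining the singleton distribution {{a},{b},{c},{d}} with the discrepancy {a,b} yields {{a,b},{c},{d}}, matching the first update in Example 5.5.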

If, instead, the counter-example is local, its projections are forwarded to the local learners, and the next iteration starts.
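Putting the pieces together, one run of Algorithm 1 can be summarized schematically as follows:

```
obs ← ∅ ;  Ω ← { {a} ∣ a ∈ Σ_SUL } ;  spawn a learner per alphabet in Ω
loop:
    run each learner until it produces a hypothesis ℋᵢ (updating obs)
    cex ← Teacher(ℋ₁ ∥ … ∥ ℋₖ)                    -- equivalence query
    if cex = none:  return ℋ₁, …, ℋₖ
    obs ← obs ∪ {cex}
    if Ω ⊧̸ obs:                                   -- global counter-example
        augment Ω with discrepancies for a chosen S ⊆ CED(Ω, obs) until Ω ⊧ obs
        make Ω canonical and spawn new learners over the updated alphabets
    else:                                          -- local counter-example
        forward the projections of cex to the local learners
```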

 Remark 5.2.

We leave the selection of counter-example set S as an implementation choice. While S=𝐶𝐸𝐷(Ω,𝖮𝖻𝗌) maximizes counter-example elimination, finding all counter-examples may be expensive. In our implementation, we process just one counter-example at a time, which in practice often yields a valid distribution after a single update step.

Our main theorem for this section states that the algorithm terminates and returns a correct model of the SUL.

Algorithm 1 Main learning algorithm.
Theorem 5.3.

Let 𝑆𝑈𝐿 = M₁ ∥ M₂ ∥ … ∥ Mₙ consist of n parallel LTSs. Algorithm 1 terminates and returns ℋ₁, ℋ₂, …, ℋₖ such that ℋ₁ ∥ ℋ₂ ∥ … ∥ ℋₖ ≈ 𝑆𝑈𝐿.

 Remark 5.4.

We make no claims regarding the number of components returned by the algorithm, nor whether ℋᵢ ≈ Mᵢ for all i. This is because the final distribution may vary depending on counter-example and discrepancy choices. Moreover, different sets of component LTSs can result in the same parallel composition (see [31, Remark 2]). While the composition ℋ₁ ∥ ℋ₂ ∥ … ∥ ℋₖ may not be canonical (i.e., minimal), each ℋᵢ is guaranteed to be a canonical model of the local observation function 𝖮𝖻𝗌↓Σᵢ, as each local learner is a (slightly modified) instance of 𝙻*, for which we have minimality guarantees.

Example 5.5 (Example run).

We give an example run where the target SUL is the model of Figure 1. For the sake of simplicity, we focus on the global counter-examples and the subsequent distribution updates, considering only one distribution counter-example per step. Moreover, we consistently select a smallest discrepancy for each counter-example as our greedy strategy to minimize the connectivity of the resulting distribution.

We start from Ωsingles={{a},{b},{c},{d}}. The local alphabets initially contain only one symbol, so local learners will make membership queries about traces containing exclusively that symbol. This leads to the components depicted below.

The first global counter-example is (ab, +), yielding several counter-examples to Ωsingles ⊧ 𝖮𝖻𝗌, of which we consider (b, ({a} ↦ ϵ, {b} ↦ ab, {c} ↦ ϵ, {d} ↦ ϵ)). The smallest discrepancy for this counter-example is {a,b}. We use it to update the distribution and obtain Ωab = {{a,b},{c},{d}} (we made the distribution canonical (Definition 4.26) and removed the {a} and {b} components), which models the current observations. The new component over {a,b} is then learned locally, producing (the {c} and {d} components are unchanged):

The next global counter-example, (cb, +), leads to the distribution counter-example (b, ({a,b} ↦ cb, {c} ↦ ϵ, {d} ↦ ϵ)). Its smallest discrepancy is {b,c} and the new distribution is Ωab,bc = {{a,b},{b,c},{d}}. Although the counter-example has been handled, Ωab,bc does not model the observations, as 𝐶𝐸𝐷(Ωab,bc, 𝖮𝖻𝗌) contains (b, ({a,b} ↦ cb, {b,c} ↦ ab, {d} ↦ ϵ)). Its smallest discrepancy {a,b,c} gives Ωabc = {{a,b,c},{d}}, which models the observations.

To finish our example, the next global counter-example is (abd, +). The corresponding distribution counter-example is (d, ({a,b,c} ↦ ϵ, {d} ↦ abd)). There are two smallest discrepancies for this counter-example: {a,d} and {b,d}. Selecting {b,d} leads to {{a,b,c},{b,d}}, which models the target language and exactly corresponds to the decomposition of Figure 1. Selecting {a,d} creates unnecessary connections, resulting (after some omitted steps) in either {{a,b,c},{a,d},{b,d}} or {{a,b,c},{a,b,d}} as the final distribution.

Our current implementation selects either discrepancy as both are locally optimal. Finding efficient ways to explore multiple discrepancy choices for globally optimal distributions remains an open challenge for future work.

6 Experiments

We evaluate the effectiveness of our approach in terms of savings in membership and equivalence queries. We did not expect to gain efficiency (absolute execution time) over the mature available tools, because our current implementation of local learners [31] interfaces with an external SAT solver for forming local hypotheses. To start with, we extended the tool Coal [31] into CoalA (for COmpositional Automata Learner with Alphabet refinement) by adding the ability to refine the alphabets based on global counter-examples. The tool is based on LearnLib 0.18.0 [22]. As discussed in Section 5.2, the theory allows optimizing the distribution. Our implementation of this is based on greedily finding a (clique) edge cover in the hypergraph (Σ, Δ), where Δ is the set of all discrepancies found thus far. Since the learners Lᵢ perform better when their alphabet contains more local actions, our algorithm tries to optimize for this. We sometimes also merge components to convert synchronizations into local actions.
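The greedy cover idea can be sketched as follows (a simplified sketch, not CoalA's exact heuristic; function and variable names are ours): every discrepancy, viewed as a hyperedge over Σ, must end up inside some component alphabet, and we greedily reuse or grow overlapping components to cover all of them.

```python
def greedy_cover(discrepancies):
    """Greedily cover each discrepancy (a frozenset of actions) by a component."""
    components = []
    # Process larger discrepancies first so they can absorb smaller ones.
    for delta in sorted(discrepancies, key=len, reverse=True):
        for comp in components:
            if delta <= comp:      # already covered by an existing component
                break
            if delta & comp:       # overlaps: grow this component to cover it
                comp |= delta
                break
        else:
            components.append(set(delta))  # no overlap: open a new component
    return components
```

For example, the discrepancies {a,b} and {a,b,c} are covered by the single component {a,b,c}, while disjoint discrepancies yield separate components.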

To validate our approach, we experiment with learning LTSs obtained from three sources:

  1. 1.

    300 randomly generated LTSs from our previous work [31], varying in structure and size.

  2. 2.

    328 LTSs obtained from Petri nets that are provided on the website of the Hippo tool [40] (https://hippo.uz.zgora.pl/); these are often more sequential in nature than our other models.

  3. 3.

    Two scalable realistic models, namely CloudOpsManagement from the 2019 Model Checking Contest [3] and a producers/consumers model [41, Fig. 8].

Using a machine with four Intel Xeon 6136 processors and 3TB of RAM running Ubuntu 20.04, we apply each of three approaches: (i) our black-box compositional approach (CoalA), (ii) compositional learning with given alphabets (Coal) [31], and (iii) monolithic learning with 𝙻* (as implemented in LearnLib). Coal can be viewed as an idealized (best-case) baseline where the knowledge of the system decomposition is already available. Each run has a time-out of 30 minutes. We record the number of membership and equivalence queries posed to the Teacher, which we assume answers queries in constant time. This eliminates variations in runtime caused by the Teacher, and ensures that local negative counter-examples (see Section 5.1) are always eventually corrected. Resolving this limitation is orthogonal to the current work and left for future research. A complete replication package is available at [18].

Figure 5: Performance of 𝙻* and compositional learning on 300 randomly generated models. Dashed lines indicate time-outs. Results obtained with Coal are in gray.

Random models

Figure 5 shows the results on our random models. The colors indicate various communication structures (see [31] for details). As a reference, the results obtained with Coal are given in gray. We observe that CoalA requires significantly fewer membership queries than 𝙻* (note the logarithmic scale) and is closer to the theoretical optimum of Coal; the results show 5–6 orders of magnitude of improvement on a large number of concurrent systems. The number of equivalence queries required by CoalA is typically slightly higher, but the results on larger instances suggest that CoalA scales better than its monolithic counterpart. The data shows that, also in the case of equivalence queries, it is not uncommon to gain an order of magnitude of saving by using our approach. We note that CoalA timeouts occur not due to a high query count, but because local learners use SAT solving to construct minimal hypotheses from observation tables that contain unknowns. This process can be computationally expensive.

Petri Net models

The results of learning the Hippo models are given in Figure 6. Here we are not able to run Coal, since the component alphabets are not known. CoalA does not perform as well as on the random models; in particular, it requires more equivalence queries than monolithic 𝙻*. This is explained by the fact that these Petri nets contain mostly sequential behavior, i.e., the language roughly has the shape ℒ₁·a·ℒ₂ for some languages ℒ₁ ⊆ Σ₁* and ℒ₂ ⊆ Σ₂*. Even though our learner is able to find the decomposition {Σ₁ ∪ {a}, Σ₂ ∪ {a}}, we do not gain much due to the absence of concurrent behavior. In the Hippo benchmark set, CoalA typically finds between two and nine components.

Figure 6: Performance of 𝙻* and compositional learning on Hippo models. Dashed lines indicate time-outs or out-of-memory.

Realistic models

Finally, Table 1 shows the results of learning the two scalable models, with the relevant parameters indicated. No timeouts are reported in the table because all systems were successfully learned within the time limit. CoalA scales well as the SUL size increases, requiring roughly a constant factor more queries than Coal for both CloudOps and producers/consumers. We remark that practically all runtime of CoalA and Coal is spent in the local learners: they require expensive SAT queries to construct a local hypothesis. Improving the implementation of the local learners would decrease these times significantly; we stress that this is orthogonal to the goal of the current work. Furthermore, in any practical scenario, the processing of queries by the Teacher forms the bottleneck, and 𝙻* would be much slower than CoalA.

Table 1: Performance of CoalA and 𝙻* for realistic composite systems. Reported runtimes are in seconds. The number of refinement iterations is listed under “it.” and the number of components found under “com”.
CoalA Coal 𝙻*
model states time memQ eqQ it. com time memQ eqQ com time memQ eqQ
CloudOps W=1,C=1,N=3 690 225.21 7 686 106 67 9 1.27 880 24 5 3.29 2 740 128 88
CloudOps W=1,C=1,N=4 1 932 235.97 9 521 115 74 10 1.93 923 26 6 29.62 22 252 120 216
CloudOps W=2,C=1,N=3 3 858 232.87 25 968 98 63 8 357.51 9 812 29 5 12.69 12 574 560 99
CloudOps W=2,C=1,N=4 10 824 708.95 33 616 111 72 10 555.82 9 532 30 6 143.28 91 178 900 227
ProdCons K=3,P=2,C=2 1 664 3.84 685 26 23 5 1.40 301 13 5 3.05 2 141 165 43
ProdCons K=5,P=1,C=1 170 1.03 451 22 14 3 0.88 118 11 3 0.40 160 126 30
ProdCons K=5,P=2,C=1 662 1.59 379 24 17 4 0.93 158 13 4 2.79 2 523 625 91
ProdCons K=5,P=2,C=2 2 240 5.53 697 28 25 5 2.35 282 14 5 6.40 6 984 705 93
ProdCons K=5,P=3,C=2 8 750 17.82 1 305 31 30 6 6.33 604 15 6 72.87 60 186 235 187
ProdCons K=5,P=3,C=3 30 344 73.69 2 454 34 37 7 32.59 1 269 17 7 321.54 222 567 729 193
ProdCons K=7,P=2,C=2 2 816 5.26 715 29 25 5 2.20 307 15 5 16.92 15 792 997 135

7 Conclusion

We presented a novel active learning algorithm that automatically discovers component decompositions using only global observations. Unlike previous approaches, our technique handles a general synchronization scheme common in automata theory and process calculi, without any prior knowledge of component structure. We developed a theory of alphabet distributions and their relation to observations, formally characterizing counter-examples that indicate inconsistencies between distributions and observations, and providing systematic methods to resolve them. The algorithm spawns a local learner for each component, dynamically refining the distribution of the alphabet based on counter-examples. Our CoalA implementation dramatically reduces membership queries and achieves better query scalability than monolithic learning on highly concurrent systems. Future work will focus on:

  • developing a theory of counter-example and discrepancy selection to characterize optimal distributions;

  • extending our approach to apartness-based learning [37] and more expressive formalisms such as register automata [9] or timed automata [39];

  • leveraging our compositional techniques to analyze AI-generated code [32]; and

  • improving the efficiency of the local learners  [24].

References

  • [1] Fides Aarts, Joeri de Ruiter, and Erik Poll. Formal models of bank cards for free. In ICST, pages 461–468, 2013. doi:10.1109/ICSTW.2013.60.
  • [2] Fides Aarts, Julien Schmaltz, and Frits W. Vaandrager. Inference and abstraction of the biometric passport. In ISoLA, volume 6415, pages 673–686, 2010. doi:10.1007/978-3-642-16558-0_54.
  • [3] Elvio Amparore et al. Presentation of the 9th Edition of the Model Checking Contest. In TACAS, volume 11429, pages 50–68, 2019. doi:10.1007/978-3-030-17502-3_4.
  • [4] Dana Angluin. Learning regular sets from queries and counterexamples. Information and computation, 75(2):87–106, 1987. doi:10.1016/0890-5401(87)90052-6.
  • [5] Franco Barbanera, Mariangiola Dezani-Ciancaglini, Ivan Lanese, and Emilio Tuosto. Composition and decomposition of multiparty sessions. Journal of Logical and Algebraic Methods in Programming, 119:100620, 2021. doi:10.1016/J.JLAMP.2020.100620.
  • [6] Benedikt Bollig, Joost-Pieter Katoen, Carsten Kern, and Martin Leucker. Learning Communicating Automata from MSCs. IEEE Transactions on Software Engineering, 36(3):390–408, 2010. doi:10.1109/TSE.2009.89.
  • [7] Manfred Broy, Bengt Jonsson, Joost-Pieter Katoen, Martin Leucker, and Alexander Pretschner, editors. Model-Based Testing of Reactive Systems, Advanced Lectures [The volume is the outcome of a research seminar that was held in Schloss Dagstuhl in January 2004], volume 3472, 2005. doi:10.1007/B137241.
  • [8] Ilaria Castellani, Madhavan Mukund, and P. S. Thiagarajan. Synthesizing distributed transition systems from global specification. In FSTTCS, volume 1738, pages 219–231, 1999. doi:10.1007/3-540-46691-6_17.
  • [9] Simon Dierl, Paul Fiterau-Brostean, Falk Howar, Bengt Jonsson, Konstantinos Sagonas, and Fredrik Tåquist. Scalable tree-based register automata learning. In TACAS, volume 14571, pages 87–108, 2024. doi:10.1007/978-3-031-57249-4_5.
  • [10] Andrzej Ehrenfeucht and Grzegorz Rozenberg. Partial (set) 2-structures. part II: state spaces of concurrent systems. Acta Informatica, 27(4):343–368, 1990. doi:10.1007/BF00264612.
  • [11] Tiago Ferreira, Harrison Brewton, Loris D’Antoni, and Alexandra Silva. Prognosis: closed-box analysis of network protocol implementations. In SIGCOMM, pages 762–774, 2021. doi:10.1145/3452296.3472938.
  • [12] Paul Fiterau-Brostean, Ramon Janssen, and Frits W. Vaandrager. Combining Model Learning and Model Checking to Analyze TCP Implementations. In CAV, volume 9780, pages 454–471, 2016. doi:10.1007/978-3-319-41540-6_25.
  • [13] Paul Fiterau-Brostean, Toon Lenaerts, Erik Poll, Joeri de Ruiter, Frits W. Vaandrager, and Patrick Verleg. Model learning and model checking of SSH implementations. In SPIN, pages 142–151, 2017. doi:10.1145/3092282.3092289.
  • [14] Markus Frohme and Bernhard Steffen. Compositional learning of mutually recursive procedural systems. International Journal on Software Tools for Technology Transfer, 23(4):521–543, 2021. doi:10.1007/s10009-021-00634-y.
  • [15] Dominik Fuchß, Haoyu Liu, Tobias Hey, Jan Keim, and Anne Koziolek. Enabling architecture traceability by llm-based architecture component name extraction. In ICSA, pages 1–12, 2025. doi:10.1109/ICSA65012.2025.00011.
  • [16] E. Mark Gold. Complexity of automaton identification from given data. Information and Control, 37(3):302–320, 1978. doi:10.1016/S0019-9958(78)90562-4.
  • [17] Roberto Guanciale and Emilio Tuosto. Realisability of pomsets via communicating automata. In ICE, volume 279, pages 37–51, 2018. doi:10.4204/EPTCS.279.6.
  • [18] Léo Henry, Mohammad Mousavi, Thomas Neele, and Matteo Sammartino. Replication package for the paper “Compositional Active Learning of Synchronous Systems through Automated Alphabet Refinement”, April 2025. doi:10.5281/zenodo.15170685.
  • [19] C. A. R. Hoare. Communicating Sequential Processes. Prentice Hall International, 2004.
  • [20] Falk Howar and Bernhard Steffen. Active automata learning in practice - an annotated bibliography of the years 2011 to 2016. In Machine Learning for Dynamic Software Analysis, volume 11026, pages 123–148, 2018. doi:10.1007/978-3-319-96562-8_5.
  • [21] Malte Isberner, Falk Howar, and Bernhard Steffen. The TTT algorithm: A redundancy-free approach to active automata learning. In RV, volume 8734, pages 307–322, 2014. doi:10.1007/978-3-319-11164-3_26.
  • [22] Malte Isberner, Falk Howar, and Bernhard Steffen. The Open-Source LearnLib: A Framework for Active Automata Learning. In CAV, volume 9206, pages 487–495, 2015. doi:10.1007/978-3-319-21690-4_32.
  • [23] Rainer Koschke. Architecture Reconstruction. Springer-Verlag, 2009. doi:10.1007/978-3-540-95888-8.
  • [24] Loes Kruger, Sebastian Junges, and Jurriaan Rot. Small test suites for active automata learning. In TACAS, volume 14571, pages 109–129, 2024. doi:10.1007/978-3-031-57249-4_6.
  • [25] Faezeh Labbaf, Jan Friso Groote, Hossein Hojjat, and Mohammad Reza Mousavi. Compositional learning for interleaving parallel automata. In FoSSaCS, volume 13992, pages 413–435, 2023. doi:10.1007/978-3-031-30829-1_20.
  • [26] Bas Luttik. Unique parallel decomposition in branching and weak bisimulation semantics. Theoretical Computer Science, 612:29–44, 2016. doi:10.1016/j.tcs.2015.10.013.
  • [27] Mahboubeh Samadi, Aryan Bastany, and Hossein Hojjat. Compositional learning for synchronous parallel automata. In FASE, volume 15693, pages 101–121, 2025. doi:10.1007/978-3-031-90900-9_6.
  • [28] Stefan Marksteiner, Marjan Sirjani, and Mikael Sjödin. Automated passport control: Mining and checking models of machine readable travel documents. In ARES, 2024. doi:10.1145/3664476.3670454.
  • [29] Joshua Moerman. Learning product automata. In ICGI, volume 93, pages 54–66, 2018. URL: http://proceedings.mlr.press/v93/moerman19a.html.
  • [30] Madhavan Mukund. From global specifications to distributed implementations. In Synthesis and Control of Discrete Event Systems, pages 19–35, 2002. doi:10.1007/978-1-4757-6656-1_2.
  • [31] Thomas Neele and Matteo Sammartino. Compositional automata learning of synchronous systems. In FASE, volume 13991, pages 47–66, 2023. doi:10.1007/978-3-031-30826-0_3.
  • [32] Marc North, Amir Atapour-Abarghouei, and Nelly Bencomo. Code gradients: Towards automated traceability of llm-generated code. In RE, pages 321–329, 2024. doi:10.1109/RE59067.2024.00038.
  • [33] Mathijs Schuts, Jozef Hooman, and Frits W. Vaandrager. Refactoring of legacy software using model learning and equivalence checking: An industrial experience report. In IFM, volume 9681, pages 311–325, 2016. doi:10.1007/978-3-319-33693-0_20.
  • [34] Maurice H. ter Beek, Rolf Hennicker, and José Proença. Team automata: Overview and roadmap. In COORDINATION, volume 14676, pages 161–198, 2024. doi:10.1007/978-3-031-62697-5_10.
  • [35] Maurice H. ter Beek, Rolf Hennicker, and José Proença. Overview and roadmap of team automata. CoRR, abs/2501.13589, 2025. doi:10.48550/arXiv.2501.13589.
  • [36] Frits W. Vaandrager. Model learning. Communications of the ACM, 60(2):86–95, 2017. doi:10.1145/2967606.
  • [37] Frits W. Vaandrager, Bharat Garhewal, Jurriaan Rot, and Thorsten Wißmann. A new approach for active automata learning based on apartness. In TACAS, volume 13243, pages 223–243, 2022. doi:10.1007/978-3-030-99524-9_12.
  • [38] Pepe Vila, Pierre Ganty, Marco Guarnieri, and Boris Köpf. CacheQuery: Learning Replacement Policies from Hardware Caches. In PLDI, pages 519–532, 2020. doi:10.1145/3385412.3386008.
  • [39] Masaki Waga. Active learning of deterministic timed automata with myhill-nerode style characterization. In CAV, pages 3–26, 2023. doi:10.1007/978-3-031-37706-8_1.
  • [40] Remigiusz Wiśniewski, Grzegorz Bazydło, Marcin Wojnakowski, and Mateusz Popławski. Hippo-CPS: A Tool for Verification and Analysis of Petri Net-Based Cyber-Physical Systems. In Petri Nets 2023, volume 13929, pages 191–204, 2023. doi:10.1007/978-3-031-33620-1_10.
  • [41] W.M. Zuberek. Petri net models of process synchronization mechanisms. In SMC, volume 1, pages 841–847, 1999. doi:10.1109/ICSMC.1999.814201.

Appendix A Proofs

A.1 Section 4

Proposition 4.6. [Restated, see original statement.]

Ω ⊧ 𝖮𝖻𝗌 if and only if for all traces σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) it holds that 𝖮𝖻𝗌(σ) = ⋀_{Σᵢ∈Ω} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ).

Proof.

We make the proof by double implication.

Suppose Ω ⊧ 𝖮𝖻𝗌. By definition this means that there is a product language ℒ over Ω such that ℒ ⊧ 𝖮𝖻𝗌. Since ℒ is a product language, Lemma 4.4 yields that ℒ = {σ ∣ ∀1 ≤ i ≤ n. σ↓Σᵢ ∈ ℒᵢ} where ℒᵢ = {σ↓Σᵢ ∣ σ ∈ ℒ} for all i.

Now for any σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌), we wish to show that

𝖮𝖻𝗌(σ) = + iff ⋀_{Σᵢ∈Ω} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = +.

We separate the two implications.

If 𝖮𝖻𝗌(σ) = +, then 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = + for all i by definition of local observation functions and we have our result.

If ⋀_{Σᵢ∈Ω} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = +, it means that for any i, 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = +. By definition of local observations, for all i there is σᵢ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) such that 𝖮𝖻𝗌(σᵢ) = + and σ↓Σᵢ = σᵢ↓Σᵢ. As ℒ ⊧ 𝖮𝖻𝗌, σᵢ ∈ ℒ.

This implies that for all i, σᵢ ∈ ℒ and σ↓Σᵢ = σᵢ↓Σᵢ. Hence for all i, σ↓Σᵢ ∈ ℒᵢ by definition of ℒᵢ. It follows by definition of ℒ that σ ∈ ℒ. As ℒ ⊧ 𝖮𝖻𝗌, we have 𝖮𝖻𝗌(σ) = +.


Conversely, suppose that for all σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌), 𝖮𝖻𝗌(σ) = ⋀_{Σᵢ∈Ω} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ).

Consider the language ℒ = {σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) ∣ 𝖮𝖻𝗌(σ) = +} and, for all i, let ℒᵢ = {σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌↓Σᵢ) ∣ 𝖮𝖻𝗌↓Σᵢ(σ) = +}. Clearly, we have that ℒ ⊧ 𝖮𝖻𝗌. Furthermore, by assumption, ℒ = {σ ∣ ∀1 ≤ i ≤ n. σ↓Σᵢ ∈ ℒᵢ}. Hence ℒ is a product language over Ω by definition. Ω ⊧ 𝖮𝖻𝗌 follows.

Corollary 4.10. [Restated, see original statement.]

Ω ⊧ 𝖮𝖻𝗌 ⟺ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅.

Proof.

We make the proof by double implication.

When 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ≠ ∅, there is a counter-example (σN, P) to Ω ⊧ 𝖮𝖻𝗌. In particular, 𝖮𝖻𝗌(σN) = − and, for all Σᵢ ∈ Ω, 𝖮𝖻𝗌↓Σᵢ(σN↓Σᵢ) = 𝖮𝖻𝗌↓Σᵢ(P(Σᵢ)↓Σᵢ) = + by definition of a counter-example and local observation functions. Hence, by Proposition 4.6, Ω ⊧̸ 𝖮𝖻𝗌.

Conversely, when 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅, then for any σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) such that 𝖮𝖻𝗌(σ) = − there is some i such that for any σ′ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) verifying 𝖮𝖻𝗌↓Σᵢ(σ′↓Σᵢ) = 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ), we have 𝖮𝖻𝗌(σ′) = −.

It follows that for any σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) such that 𝖮𝖻𝗌(σ) = −, ⋀_{1≤i≤n} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = −. As, by definition of local observations, for any σ ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌) such that 𝖮𝖻𝗌(σ) = +, ⋀_{1≤i≤n} 𝖮𝖻𝗌↓Σᵢ(σ↓Σᵢ) = +, we have Ω ⊧ 𝖮𝖻𝗌 by Proposition 4.6.

Proposition 4.16. [Restated, see original statement.]

Suppose there exists (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). For each discrepancy δ ∈ 𝒟(σN, P), (σN, P) ∉ 𝐶𝐸𝐷(Ω ∪ {δ}, 𝖮𝖻𝗌). Conversely, for any distribution Ω′ of Σ where δ ⊈ Σᵢ for all δ ∈ 𝒟(σN, P) and all Σᵢ ∈ Ω′, (σN, P) is a counter-example to Ω′ ⊧ 𝖮𝖻𝗌.

Proof.

Consider a discrepancy δ. We first prove that introducing δ into Ω is sufficient to remove the counter-example. As 𝖮𝖻𝗌 is not changed by the change from Ω to Ω ∪ {δ}, we have to prove that there is no σΣᵢ ∈ 𝗂𝗆𝗀(P) such that σN↓δ = σΣᵢ↓δ.

Fix σΣᵢ ∈ 𝗂𝗆𝗀(P).

  • If δ ∈ 𝒟ₘΣᵢ(σN, P), then there is a ∈ δ that occurs with different multiplicities in σN and σΣᵢ. Hence, a also occurs with different multiplicities in σN↓δ and σΣᵢ↓δ. Hence, σN↓δ ≠ σΣᵢ↓δ.

  • Otherwise, following the notation of Definition 4.12, σN = π(σΣᵢ·θ) and for all j < k such that σN[j] = σN[k] it holds that π(j) < π(k). By preserving the order of equal symbols, we maintain that σN[j] = a is the l-th occurrence of a in σN iff (σΣᵢ·θ)[π(j)] = a is the l-th occurrence of a in σΣᵢ·θ.

    Now {a,b} = {σN[j], σN[k]} ⊆ δ by definition of a discrepancy, and {a,b} is created from a so-called inversion in π: a pair (j,k) such that j < k and π(j) > π(k). By our assumption on π, we know that a ≠ b and, furthermore, that in σΣᵢ·θ at least one more copy of b precedes this a (namely the b at position π(k)). As a result of this and the fact that {a,b} ⊆ δ, we have σN↓δ ≠ (σΣᵢ·θ)↓δ. It follows directly that σN↓δ ≠ σΣᵢ↓δ.

We now prove that it is necessary. Consider Ω′ defined as above and fix Σⱼ ∈ Ω′. In the following, we show that there is σ ∈ 𝗂𝗆𝗀(P) such that σN↓Σⱼ = σ↓Σⱼ.

For this, as there is no discrepancy δ ∈ 𝒟(σN, P) that is a subset of Σⱼ, we know that there is at least one Σᵢ ∈ Ω such that no order or multiplicity discrepancy δ ∈ 𝒟Σᵢ(σN, P) is a subset of Σⱼ. Because of this, (Σₘ(σN) Δ Σₘ(P(Σᵢ))) ∩ Σⱼ = ∅. Hence P(Σᵢ)↓Σⱼ and σN↓Σⱼ are equal up to permutation. Consider any permutation π such that σN = π(σΣᵢ) (following again the notation of Definition 4.12). Without loss of generality, we do not permute the positions of equal symbols, which fixes a unique permutation π. For any pair in {{σN[j], σN[k]} ∣ j < k ∧ π(j) > π(k)}, we know (as these are elements of 𝒟ₒΣᵢ(σN, P)) that we do not have {σN[j], σN[k]} ⊆ Σⱼ. It follows that the restriction of π to the indices corresponding to symbols in Σⱼ is non-decreasing and thus the identity. It follows that σN↓Σⱼ = P(Σᵢ)↓Σⱼ.

Proposition 4.21. [Restated, see original statement.]

Consider two distributions Ω and Ω′ of Σ. We have Ω ≼ Ω′ ⟹ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌).

Proof.

We prove that 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). Suppose Ω ≼ Ω′. If 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) = ∅ then we have our result. Else, take (σN, P) ∈ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌).

We know that for each Σᵢ ∈ Ω, there is Σⱼᵢ ∈ Ω′ such that Σᵢ ⊆ Σⱼᵢ. Fix such a Σⱼᵢ. We have that σN↓Σᵢ = P(Σⱼᵢ)↓Σᵢ, as Σᵢ ⊆ Σⱼᵢ and (σN, P) ∈ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌).

By doing this for all Σᵢ ∈ Ω, we get (σN, (P(Σⱼᵢ))Σᵢ∈Ω) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) and by definition (σN, (P(Σⱼᵢ))Σᵢ∈Ω) ≤ (σN, P).

Corollary 4.23. [Restated, see original statement.]

Suppose that 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ≠ ∅. For each (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌), we pick a discrepancy δ(σN,P) ∈ 𝒟(σN, P). For any non-empty subset S of 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌), let Ω′ = Ω ∪ {δce ∣ ce ∈ S}. Then Ω ≺ Ω′ and 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊏ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌).

Proof.

Ω ≼ Ω′ follows directly from the definition of Ω′: all elements of Ω are preserved. Furthermore, for ce ∈ S and for all Σᵢ ∈ Ω we have δce ⊈ Σᵢ, as σN↓δce ≠ σΣᵢ↓δce and σN↓Σᵢ = σΣᵢ↓Σᵢ. Hence Ω′ ⋠ Ω and Ω ≺ Ω′. From Proposition 4.21 we get that 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). Furthermore, we know by Proposition 4.16 that, for a fixed ce ∈ S, 𝐶𝐸𝐷(Ω ∪ {δce}, 𝖮𝖻𝗌) does not contain ce, ensuring that 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω ∪ {δce}, 𝖮𝖻𝗌). As furthermore 𝐶𝐸𝐷(Ω ∪ {δce}, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) by construction, we have that 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊏ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌).

Theorem 4.25. [Restated, see original statement.]

Suppose Ω ⊧̸ 𝖮𝖻𝗌. The above process converges to a distribution Ω′ ⊧ 𝖮𝖻𝗌 such that Ω ≺ Ω′ after finitely many steps. When choosing S = 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) at each step, the number of steps is bounded by |𝒫(Σ)|.

Proof.

In case of convergence, Ω ≺ Ω′ is ensured directly by each step of the process, by Corollary 4.23. We now prove that convergence occurs in a finite number of steps. Each step eliminates all counter-examples that are part of the non-empty set S chosen, as proven in Proposition 4.16 and Corollary 4.23. As 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌), the only possible new counter-examples are ones that strictly contain counter-examples of 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). It follows that iterating this process makes strict progress in the following sense: either counter-examples are eliminated or they are replaced by ones with a strictly larger positive image. Because the positive image size of counter-examples is bounded by the size of the distribution, itself bounded by |𝒫(Σ)|, we know that this converges in a finite number of steps.

If S = 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) at each step, we furthermore have that the size of the smallest positive image of a counter-example increases by at least one at each step, which bounds the number of steps by |𝒫(Σ)|.

Proposition 4.27. [Restated, see original statement.]

𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅ ⟺ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) = ∅.

Proof.

We make the proof for a fixed Σᵢ ∈ Sub, the result following by induction. By definition Ω ≼ Ω′, which entails that 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) ⊑ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌). From there it follows that 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅ ⟹ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) = ∅. To get the other direction, we reason by contrapositive. Suppose 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) ≠ ∅ and consider (σN, P) in it. Then clearly, taking P′ = (P(Σⱼ))ⱼ≠ᵢ, (σN, P′) ∈ 𝐶𝐸𝐷(Ω′, 𝖮𝖻𝗌) and we have our result.

A.2 Section 5

Lemma 5.1. [Restated, see original statement.]

Given a global counter-example (σ, b), let 𝖮𝖻𝗌′ = 𝖮𝖻𝗌 ∪ {(σ, b)}:

  • if b = −, then there is P ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌)^Ω such that (σ, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′).

  • else, there are σN ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌), S ⊆ Ω and P ∈ 𝖣𝗈𝗆(𝖮𝖻𝗌)^(Ω∖S) such that (σN, P ∪ (Σᵢ ↦ σ)Σᵢ∈S) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′).

Furthermore, all of the elements of 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′) have the above structure.

Proof.

We know that Ω ⊧ 𝖮𝖻𝗌 and Ω ⊧̸ 𝖮𝖻𝗌′ since the counter-example is global. From Corollary 4.10 we get that 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌) = ∅ and 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′) ≠ ∅.

Consider (σN, P) ∈ 𝐶𝐸𝐷(Ω, 𝖮𝖻𝗌′). Since it is not a counter-example to Ω ⊧ 𝖮𝖻𝗌, σ must appear in it. If b = −, we must have that σN = σ by definition of counter-examples to a distribution. Similarly, if b = +, then σ ∈ 𝗂𝗆𝗀(P). Observing that 𝖣𝗈𝗆(𝖮𝖻𝗌′) = 𝖣𝗈𝗆(𝖮𝖻𝗌) ∪ {σ}, we obtain our result.

Theorem 5.3. [Restated, see original statement.]

Let 𝑆𝑈𝐿 = M₁ ∥ M₂ ∥ … ∥ Mₙ consist of n parallel LTSs. Algorithm 1 terminates and returns ℋ₁, ℋ₂, …, ℋₖ such that ℋ₁ ∥ ℋ₂ ∥ … ∥ ℋₖ ≈ 𝑆𝑈𝐿.

Proof.

The correctness of the returned hypothesis is guaranteed by the Teacher.

The algorithm’s termination is established by showing that the “while True” loop cannot execute indefinitely. This is because the two types of counter-examples can only occur finitely many times:

  • For local counter-examples, when the distribution Ω is fixed, all the learners eventually converge to a hypothesis after encountering finitely many local counter-examples, as shown in [31, Theorem 2].

  • For global counter-examples, each global counter-example leads to an updated distribution Ω′ such that Ω ≺ Ω′ (Theorem 4.25). This update can only happen finitely many times, as the top element of ≼ is {Σ𝑆𝑈𝐿}.