
The Memory of ω-Regular and BC(𝚺⁰₂) Objectives

Antonio Casares, University of Warsaw, Poland
Pierre Ohlmann, CNRS, Laboratoire d'Informatique et des Systèmes (LIS), Marseille, France
Abstract

In the context of 2-player zero-sum infinite duration games played on (potentially infinite) graphs, the memory of an objective is the smallest integer k such that in any game won by Eve, she has a winning strategy with k states of memory. For ω-regular objectives, checking whether the memory equals a given number k was not known to be decidable. In this work, we focus on objectives in BC(𝚺⁰₂), i.e. those recognised by a (potentially infinite) deterministic parity automaton. We provide a class of automata that recognise objectives with memory k, leading to the following results:

  • for ω-regular objectives, the memory can be computed in NP;

  • given two objectives W1 and W2 in BC(𝚺⁰₂) and assuming W1 is prefix-independent, the memory of W1 ∪ W2 is at most the product of the memories of W1 and W2.

Our results also apply to chromatic memory, the variant where strategies can update their memory state only depending on which colour is seen.

Keywords and phrases:
Infinite duration games, Strategy complexity, Omega-regular
Category:
Track B: Automata, Logic, Semantics, and Theory of Programming
Funding:
Antonio Casares: Supported by the Polish National Science Centre (NCN) grant “Polynomial finite state computation” (2022/46/A/ST6/00072).
Copyright and License:
© Antonio Casares and Pierre Ohlmann; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Logic and verification
Related Version:
Full Version: https://arxiv.org/abs/2502.05840 [13]
Acknowledgements:
We thank Nathan Lhote for his participation in the scientific discussions which led to this paper. We also thank Pierre Vandenhove for pointing us to references concerning lifting results for memory.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

Context: Strategy complexity in infinite duration games

We study infinite duration games on graphs in which two players, called Eve and Adam, interact by moving a token along the edges of a (potentially infinite) edge-coloured directed graph. Each vertex belongs to one player, who chooses where to move next during a play. This interaction goes on for an infinite duration, producing an infinite path in the graph. The winner is determined according to a language of infinite sequences of colours W, called the objective of the game; Eve aims to produce a path coloured by a sequence in W, while Adam tries to prevent this. This model is widely used in verification and synthesis [15].

In order to achieve their goal, players use strategies, which are representations of the course of all possible plays together with instructions on how to act in each scenario. In this work, we are interested in optimal strategies for Eve, that is, strategies that guarantee a victory whenever this is possible. More precisely, we are interested in the complexity of such strategies, or in other words, in the succinctness of the representation of the space of plays.

Positionality.

The simplest strategies are those that assign in advance an outgoing edge to each vertex owned by Eve, and always play along this edge, disregarding all the other features of the play. All the information required to implement such a strategy appears in the game graph itself. Objectives for which such strategies are sufficient to play optimally are called positional (or memoryless). Understanding positionality has been the object of a long line of research. The landmark results of Gimbert and Zielonka [22] and Colcombet and Niwinski [18] gave a good understanding of which objectives are bi-positional, i.e. positional for both players.

More recently, Ohlmann proposed to use universal graphs as a tool for studying positionality (taking Eve’s point of view) [27]. This led to many advances in the study of positionality [2, 28], and most notably, a characterisation of positional ω-regular objectives by Casares and Ohlmann [10], together with a polynomial time decision procedure (and some other important corollaries, more discussion below).

Strategies with memory.

However, in many scenarios, playing optimally requires distinguishing different plays that end in the same vertex. A seminal result of Büchi and Landweber [7] states that in finite games where the objective is an ω-regular language, the winner has a winning strategy that can be implemented by a finite automaton processing the edges of the game; this result was later extended to infinite game graphs by Gurevich and Harrington [23]. Here, the states of the automaton are interpreted as memory states of the strategy, and a natural measure of the complexity of a strategy is the number of such states. More precisely, the memory of an objective W is the minimal k such that whenever Eve wins a game with objective W, she has a winning strategy with k states of memory. For ω-regular objectives, this is always finite [7, 23], while the case of positionality discussed above corresponds to memory k=1.

Characterising the memory of objectives has been a recurrent research subject since the 1990s. Notable results include the characterisation for Muller objectives by Dziembowski, Jurdziński, and Walukiewicz [19], or for closed objectives by Colcombet, Fijalkow and Horn [16]. However, these are all rather restricted classes of ω-regular objectives. The problem of deciding the memory of ω-regular objectives has been raised on many occasions (see for instance [24, Sect. 9.2], [4, Sect. 4], [17, Conclusions], or [3]), but prior to this work, even computing the memory of open ω-regular objectives (sometimes called regular reachability objectives) was not known to be decidable.

Chromatic memory.

In the special case where the automata implementing strategies are only allowed to read the colours on the edges of the game graph, we speak of chromatic memory. In his PhD thesis, Kopczyński showed that, for prefix-independent ω-regular objectives and over finite game graphs, the chromatic memory can be computed in exponential time [24, Theorem 8.14]. Recently, it was shown that computing the chromatic memory of some restricted subclasses of ω-regular objectives is in fact NP-complete: for Muller objectives [8] and for topologically open or closed objectives [3]. However, the chromatic and the unconstrained memory of objectives may differ, even exponentially [8, 9].

Unions of objectives.

The driving question in Kopczyński's PhD thesis [24] is whether prefix-independent positional objectives are closed under union, which has become known as Kopczyński's conjecture. Recently, Kozachinskiy [25] disproved this conjecture, but only for positionality over finite game graphs, and using non-ω-regular objectives. In fact, the conjecture is now known to hold for ω-regular objectives [10] and 𝚺⁰₂ objectives [28]. Casares and Ohlmann proposed a generalisation of this conjecture from positional objectives to objectives requiring memory [12, Conjecture 7.1] (see also [24, Proposition 8.11]):

Conjecture 1 (Generalised Kopczyński’s conjecture).

Let W1, W2 ⊆ Σω be two prefix-independent objectives with memory k1 and k2, respectively. Then W1 ∪ W2 has memory at most k1·k2.

Using the characterisation of [19], it is not hard to verify that the conjecture holds for Muller objectives.

BC(𝚺⁰₂) languages.

The results in this work apply not only to ω-regular languages, but to the broader class of BC(𝚺⁰₂) languages. These are boolean combinations of languages in 𝚺⁰₂ (countable unions of closed languages), or equivalently, languages recognised by deterministic parity automata with potentially infinitely many states [31, Sect. 5]. This class includes typical non-ω-regular examples such as energy or mean-payoff objectives (and therefore also their boolean combinations with ω-regular ones) but also broader classes such as unambiguous ω-Petri nets [20] and deterministic ω-Turing machines (Turing machines with a Muller condition).

Contributions

Our main contribution is a characterisation of BC(𝚺⁰₂) objectives with memory k, stated in Theorem 7. It captures both the memory and the chromatic memory of objectives over infinite game graphs. (In the full version of this paper [13] we show that, for ω-regular languages, this characterisation also captures the memory over finite graphs, and therefore both quantities coincide in the ω-regular case.) The characterisation is based on the notion of k-wise ε-completable automata, which are parity automata with states partitioned into k chains, where each chain is endowed with a tight hierarchical structure encoded in the ε-transitions of the automaton.

From this characterisation, we derive the following corollaries:

  1. Decidability in NP. Given a deterministic parity automaton 𝒜, the memory (resp. chromatic memory) of L(𝒜) can be computed in NP.

  2. Generalised Kopczyński's conjecture. We establish (and strengthen) Conjecture 1 in the case of BC(𝚺⁰₂) objectives: if W1 and W2 are BC(𝚺⁰₂) objectives with memory k1 and k2, and one of them is prefix-increasing, then the memory of W1 ∪ W2 is at most k1·k2.

Main technical tools: Universal graphs and 𝜺-completable automata.

As mentioned above, Ohlmann proposed a characterisation of positionality by means of universal graphs [27]. In 2023, Casares and Ohlmann extended this characterisation to objectives with memory k by considering partially ordered universal graphs [12]. Until now, universal graphs have been mainly used to show that certain objectives have memory k (usually for k=1); this is done by constructing a universal graph for the objective. One technical novelty of this work is to exploit both directions of the characterisation, as we also rely on the existence of universal graphs to obtain decidability results.

Our characterisation is based on the notion of k-wise ε-completable automata, which extends the key notion of [10] from positionality to finite memory.

Comparison with [10].

In 2024, Casares and Ohlmann characterised positional ω-regular objectives [10], establishing decidability of positionality in polynomial time, and settling Kopczyński’s conjecture for ω-regular objectives. Although the current paper generalises the notions and some of the results from [10] to the case of memory, as well as potentially infinite automata, the proof techniques are significantly different: while [10] is based on intricate successive transformations of parity automata, ours is based on an extraction method in the infinite and manipulates ordinal numbers. Though somewhat less elementary, our proof is notably shorter, and probably easier to read.

When instantiated to the case of memory 1, we thus extend Kopczyński's conjecture from ω-regular objectives to positional BC(𝚺⁰₂) objectives. This also gives an easier proof of decidability of positionality in polynomial time for ω-regular objectives (it is explained in [10, Theorem 5.3] how decidability of positionality in polynomial time can be derived, with reasonable effort, from Kopczyński's conjecture). However, some of the results of [10] are not recovered with our methods: the finite-to-infinite and 1-to-2-player lifts, as well as closure of positionality under addition of neutral letters.

2 Preliminaries

We let Σ be a countable alphabet (we restrict our study to countable alphabets: if Σ is uncountable, the topological space Σω is not Polish and the class BC(𝚺⁰₂) is not as well-behaved) and let ε ∉ Σ be a fresh symbol that should be interpreted as a neutral letter. Given a word w ∈ (Σ ∪ {ε})ω we write πΣ(w) for the (finite or infinite) word obtained by removing all ε's from w; we call πΣ(w) the projection of w on Σ. An objective is a set W ⊆ Σω. Given an objective W ⊆ Σω, we let Wε ⊆ (Σ ∪ {ε})ω denote πΣ⁻¹(W).

Throughout the paper, we use von Neumann's notation for ordinals: λ denotes the set of ordinals < λ. This also applies to finite numbers, e.g. k = {0, …, k−1}. As is standard, we also identify cardinals with their initial ordinals.

2.1 Graphs, games and memory

We introduce notions pertaining to games and strategy complexity, as they will be central in the statement of our results. Nevertheless, we note that all our technical proofs will use these definitions through Theorem 5 below, and will not explicitly use games.

Graphs.

A Σ-graph G is given by a set of vertices V(G) and a set of coloured, directed edges E(G) ⊆ V(G) × Σ × V(G). We write v →c v′ for edges (v, c, v′). A path is a sequence of edges with matching endpoints (v0 →c0 v1)(v1 →c1 v2)⋯, which we write as v0 →c0 v1 →c1 ⋯. Paths can be empty, finite, or infinite, and have a label c0c1⋯. Throughout the paper, graphs are implicitly assumed to be without dead-ends: every vertex has an outgoing edge.

We say that a vertex v in a Σ-graph (resp. a (Σ ∪ {ε})-graph) satisfies an objective W ⊆ Σω if the label of any infinite path from v belongs to W (resp. to Wε). A pointed graph is a graph with an identified initial vertex. A pointed graph satisfies an objective W ⊆ Σω if the initial vertex satisfies W; a non-pointed graph satisfies an objective if all its vertices do. An infinite tree is a sinkless pointed graph whose initial vertex is called the root, and with the property that every vertex admits a unique path from the root.

A morphism from a Σ-graph G to a Σ-graph H is a map ϕ : V(G) → V(H) such that for any edge v →c v′ in G, it holds that ϕ(v) →c ϕ(v′) is an edge in H. Morphisms between pointed graphs should moreover send the initial vertex of G to the initial vertex of H. Morphisms need not be injective. We write G → H when there exists a morphism ϕ : G → H.

Games and strategies.

A game is given by a pointed (Σ ∪ {ε})-graph G together with an objective W ⊆ Σω, and a partition V(G) = VEve ⊎ VAdam of the vertex set into the vertices controlled by Eve and those controlled by Adam. A strategy (for Eve) is a pointed graph together with a morphism π towards G, satisfying that for every edge v →c v′ in the game, where v ∈ VAdam, and for all u ∈ π⁻¹(v), there is an edge u →c u′ such that u′ ∈ π⁻¹(v′). (We follow the terminology from [12]; the classical notion of a strategy as a function assigning a move to each finite play can be recovered by considering the graph whose vertices are the finite paths ρ of G, with edges ρ →e ρe.) A strategy is winning if it satisfies the objective W of the game. We say that Eve wins if there exists a winning strategy.

Memory.

A finite-memory strategy is a strategy with vertices V(G) × k and projection π(v, m) = v, with the additional requirement that for every edge (v, m) →ε (v′, m′), it holds that m = m′. (It is common to define a memory structure as an automaton reading the edges of a game graph; this notion can be recovered by taking the set k as the states of the automaton.) We say that k is the memory of the strategy, and the numbers 1, …, k are called memory states. Informally, the requirement above says that when reading an ε-transition in the game, we are not allowed to change the memory state; this is called ε-memory in [12] (to which we refer for more discussion), but since it is the main kind of memory in this paper, we will simply call it the memory.

A finite-memory strategy is called chromatic if there is a map χ : k × (Σ ∪ {ε}) → k such that for every edge (v, m) →c (v′, m′) in the strategy, it holds that m′ = χ(m, c). We say that χ is the chromatic update. Note that necessarily, we have χ(m, ε) = m for every memory state m.

The (chromatic) memory of an objective W is the minimal k such that for every game with objective W, if Eve has a winning strategy, she has a winning (chromatic) strategy with memory k.

2.2 Automata

We write d for the set {0, …, d}. A parity automaton 𝒜 (or just automaton in the following) with index d (an even number) and alphabet Σ is a pointed ((Σ ∪ {ε}) × d)-graph. Vertices are called states, edges are called transitions and written q →c:y q′, where c ∈ Σ ∪ {ε} and y ∈ d. Elements of d are called priorities. Generally, we use the convention that even priorities are denoted by the letter x, whereas y can denote any priority. Transitions of the form q →ε:y q′ are called ε-transitions; note that they also carry priorities.

Infinite paths from the initial state q0 are called runs. A run is accepting if the projection of its label on the second coordinate belongs to

Parityd = {y0y1⋯ ∈ dω | lim inf(yi) is even}. (Note the use of min-parity.)

The language L(𝒜) of 𝒜 is πΣ(L), where L ⊆ (Σ ∪ {ε})ω is the set of projections on the first coordinate of accepting runs. We require that all these projections be infinite words; stated differently, there is no accepting run from q0 labelled by a word in (Σ ∪ {ε})∗εω. An automaton is deterministic if there are no ε-transitions and for any state q ∈ V(𝒜) and any letter a ∈ Σ there is at most one transition of the form q →a:y q′. We say that an automaton is determinisable by pruning if one can make it deterministic by removing some transitions, without modifying its language. A language belongs to BC(𝚺⁰₂) if it is the language of a deterministic automaton, and it is ω-regular if the automaton is moreover finite.

We will often identify pointed graphs with automata of index 0, by labelling all transitions with priority 0. Note that in this case, all runs are accepting. This requires making sure that there is no accepting run labelled by a word from Σ∗εω, which, up to assuming that all vertices are accessible from the initial one, amounts to saying that there is no infinite path of ε-edges. We say that such a graph is well-founded.
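For finite deterministic automata, acceptance of an ultimately periodic word u·vω can be checked directly from the definition: the lim inf of the priorities along the run equals the minimal priority on the cycle the run eventually settles in. The following is a minimal sketch of this check (our own illustration; the encoding of the automaton as a transition dictionary is an assumption, not notation from the paper):

```python
# Deciding whether u·v^ω is accepted by a finite deterministic min-parity
# automaton. delta maps (state, letter) to (next state, priority).
def accepts(delta, q0, u, v):
    q = q0
    for a in u:                      # read the finite prefix u
        q, _ = delta[(q, a)]
    seen, prios = {}, []
    while q not in seen:             # iterate v until the state at the
        seen[q] = len(prios)         # start of a v-block repeats
        for a in v:
            q, y = delta[(q, a)]
            prios.append(y)
    # lim inf of the run's priorities = min priority on the eventual cycle
    return min(prios[seen[q]:]) % 2 == 0
```

For instance, for the one-state automaton with transitions q →a:0 q and q →b:1 q (recognising Inf(a)), the word (ab)ω is accepted while bω is not.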

Blowups and 𝒌-automata.

A k-blowup of an automaton 𝒜 is any automaton ℬ with V(ℬ) = V(𝒜) × {1, …, k}, with initial state in {q0} × k, and such that for each transition q →a:y q′ in 𝒜 and each m ∈ k, there is a transition (q, m) →a:y (q′, m′) in ℬ for some m′. We also allow for extra transitions in ℬ. Note that in this case L(𝒜) ⊆ L(ℬ).

A k-automaton is just an automaton whose states form a subset of Q × k, for some set Q; for instance, k-blowups are k-automata. Equivalently, these are automata with a partition of their states into k subsets. For a state (q, m) of a k-automaton, m is called its memory state. A k-automaton is called chromatic if there is a map χ : {1, …, k} × (Σ ∪ {ε}) → {1, …, k} such that for every transition (q, m) →c:y (q′, m′) it holds that m′ = χ(m, c).

Cascade products.

Let 𝒜 be an automaton with alphabet Σ and index d, and let S be a d-graph. We define their cascade product 𝒜 ⋉ S to be the (Σ ∪ {ε})-graph with vertices V(𝒜) × V(S) and edges

(q, s) →c (q′, s′)  ⟺  ∃y, [q →c:y q′ in 𝒜 and s →y s′ in S].

If S is a pointed graph with initial vertex s0, then 𝒜 ⋉ S is pointed with initial vertex (q0, s0). It is easy to check that we then have the following lemma.

Lemma 2.

Let 𝒜 be an automaton with index d and S be a d-graph satisfying Parityd. Then 𝒜 ⋉ S is well-founded and satisfies L(𝒜).
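For finite objects, the cascade product is a straightforward pairing of transitions along matching priorities. A small sketch (our own encoding; the representation of transitions and edges as tuples is an assumption):

```python
# Cascade product A ⋉ S: pair each transition q -c:y-> q2 of the automaton
# with each edge s -y-> s2 of the d-graph carrying the same priority y.
def cascade(transitions, edges):
    """transitions: set of (q, c, y, q2); edges: set of (s, y, s2).
    Returns the edges (q, s) -c-> (q2, s2) of the product graph."""
    return {((q, s), c, (q2, s2))
            for (q, c, y, q2) in transitions
            for (s, y2, s2) in edges
            if y == y2}              # priorities must match
```

Note that the colour c survives on the product edge while the priority y is consumed by the pairing, matching the definition above.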

2.3 𝜺-completability and universal graphs

We now introduce and discuss the key notion used in our main characterisation, which adapts the notion from [10] from positionality to finite memory.

𝒌-wise 𝜺-completability.

A k-automaton 𝒜 with index d is called k-wise ε-complete if for each even x ∈ d, for each memory state m and each ordered pair of states (q, m), (q′, m):

either (q, m) →ε:x (q′, m)  or  (q′, m) →ε:x+1 (q, m).

Intuitively, having an edge (q, m) →ε:x (q′, m) means that "(q, m) is much better than (q′, m)", as one may freely go from (q, m) to (q′, m) and even see a good priority on the way. Similarly, (q, m) →ε:x+1 (q′, m) means that "(q, m) is not much worse than (q′, m)".

It is also useful to consider how the definition instantiates when applied to both ordered pairs (q, m), (q′, m) and (q′, m), (q, m). Since automata exclude accepting runs which are ultimately comprised of ε-transitions, we cannot have both (q, m) →ε:x (q′, m) and (q′, m) →ε:x (q, m), and therefore ε-completeness rewrites as: for each even x, each memory state m and each unordered pair (q, m), (q′, m) of states,

either (q, m) →ε:x (q′, m) and (q, m) →ε:x+1 (q′, m) (up to exchanging the two states),  or  both (q, m) →ε:x+1 (q′, m) and (q′, m) →ε:x+1 (q, m).

Hence an alternative, maybe more useful view (this is the point of view adopted in [10]) is that, up to applying some adequate closure properties, a k-wise ε-complete automaton is endowed with the following structure: for each even priority x and each memory state m, the states with memory state m are totally preordered by the relation →ε:x+1, and the relation →ε:x is the strict version of this preorder. Moreover, for x′ > x, the x′-preorder is a refinement of the x-preorder.

A k-automaton 𝒜 is called k-wise ε-completable if one may add ε-transitions to it so as to turn it into a k-wise ε-complete automaton 𝒜ε satisfying L(𝒜ε) = L(𝒜). We simply say "ε-complete" (resp. "ε-completable") as a shorthand for "1-wise ε-complete" (resp. "1-wise ε-completable").
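On a finite k-automaton, the ordered-pair condition of the definition can be checked mechanically. The following helper is our own illustration (the grouping of ε-transitions by priority is an assumed representation); it skips the trivial pair of identical states:

```python
# Check k-wise ε-completeness: for each even x and each pair of distinct
# states p = (q, m), r = (q2, m) in the same part, require an ε:x
# transition p -> r or an ε:x+1 transition r -> p. eps[y] is the set of
# ordered pairs of states joined by an ε-transition of priority y.
def is_k_wise_eps_complete(states, eps, d):
    for x in range(0, d + 1, 2):
        for p in states:
            for r in states:
                if p == r or p[1] != r[1]:
                    continue         # distinct states of the same part only
                if (p, r) in eps.get(x, set()):
                    continue
                if (r, p) in eps.get(x + 1, set()):
                    continue
                return False
    return True
```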

Example 3.

Let Σ={a,b,c} and

W = No(b) ∪ (Fin(aa) ∩ Inf(cc)).

We show a 2-wise ε-complete automaton recognising W in Figure 1. By Theorem 7, the memory of W is 2 (and it is easy to see that this bound is tight).

Figure 1: A 2-automaton recognising W = No(b) ∪ (Fin(aa) ∩ Inf(cc)), where p is assumed to have a different memory state than q0, q1 and q2. It is 2-wise ε-completable by adding the indicated ε-transitions. A completion also contains the transitions q2 →ε:0 q0, q2 →ε:1 q0 and q2 →ε:2 q0, omitted for ease of reading. However, it is not chromatic since reading c may or may not switch the memory state.

The following theorem, a key result in [12] where it is called the structuration lemma, will also be crucial to this work. Recall that we see well-founded pointed graphs as automata with only 0-transitions, and we apply the terms k-blowup and ε-completable to them accordingly.

Theorem 4 (Adapted from Lemma 3.4 in [12]).

Let G be a well-founded pointed graph satisfying an objective W which is assumed to have (chromatic) memory k over games of size 2^|G|. There is a (chromatic) k-blowup G′ of G which is well-founded, k-wise ε-complete, and satisfies W.

Universal graphs.

Given an objective W and a cardinal κ, we say that a graph U is (κ, W)-universal if for any infinite tree T of cardinality |V(T)| < κ satisfying W, there is a morphism ϕ : T → U such that ϕ(t0) satisfies W in U, where t0 is the root of T. We may now rephrase the main theorem of [12] in terms of ε-complete universal graphs.

Theorem 5 (Theorem 3.1 in [12]).

Let W be an objective. Then W has (chromatic) memory k if and only if for every cardinal κ there exists a (κ,W)-universal graph which is (chromatic and) k-wise ε-complete.

We now give an explicit definition of a (κ, Parityd)-universal graph Sdκ which is ε-complete. These ideas date back to the works of Streett and Emerson, who coined the name signatures [30], and were made more explicit by Walukiewicz [33]. Vertices are tuples of ordinals < κ, indexed by odd priorities in d and ordered lexicographically (with the smaller indices being the most significant). For a tuple s and an index y, we let s<y be the tuple obtained from s by restricting to coordinates with index < y. Edges are given by

s →y s′  ⟺  { s<y ≥ s′<y and y is even;  or  s≤y > s′≤y and y is odd },

and ε-edges are given by s →ε s′ if and only if s ≥ s′.
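On finite signature tuples, this edge relation can be evaluated directly; note that Python's tuple comparison is lexicographic with the leftmost (most significant) coordinate first, matching the order above. A sketch (our own; we index a signature by the odd priorities 1, 3, …, d−1 in increasing order):

```python
# Edge relation s ->y s' on signature tuples: compare the coordinates with
# index < y (y even, non-strict) or <= y (y odd, strict), lexicographically.
def edge(s, t, y, d):
    odd = list(range(1, d, 2))       # priorities indexing the tuple
    if y % 2 == 0:
        cut = [i for i, p in enumerate(odd) if p < y]
        return tuple(s[i] for i in cut) >= tuple(t[i] for i in cut)
    cut = [i for i, p in enumerate(odd) if p <= y]
    return tuple(s[i] for i in cut) > tuple(t[i] for i in cut)
```

For instance, with d = 4 a signature has coordinates for priorities 1 and 3; the tuple (2, 0) has an edge of priority 1 towards (1, 5), since (2,) > (1,) lexicographically.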

Lemma 6 ([11, Lemma 2.7]).

The graph Sdκ is (κ,Parityd)-universal.

We will work extensively with such tuples, as well as their prefixes and suffixes. For readability, we also use subscripts to indicate which (odd) coordinates are concerned: for instance, s<x will be our notation for tuples of ordinals < κ indexed with odd priorities < x, and similarly for s>x. Concatenation of tuples is written as for words, therefore s<x s>x denotes a tuple indexed by all odd priorities (i.e. a vertex of Sdκ). We treat s<x and s>x as different variables, which we may quantify independently.

3 Characterisation of objectives in BC(𝚺⁰₂) with memory 𝒌

We state our main characterisation theorem and its decidability consequences for ω-regular languages. We assume that the alphabet Σ is countable, therefore automata can also be taken with countable sets of states.

Theorem 7 (Main characterisation).

Let W be a BC(𝚺⁰₂) objective and let k ∈ ℕ. The following are equivalent:

  (i) W has memory k (resp. chromatic memory k) on games of size 2^(2^ℵ0).

  (ii) For any automaton 𝒜 recognising W, there is a (chromatic) k-blowup of 𝒜 which is k-wise ε-complete and recognises W.

  (iii) There is a deterministic (chromatic) k-automaton 𝒜 which is k-wise ε-completable and recognises W. If W is recognised by a deterministic automaton of size n, then 𝒜 can be taken of size kn.

  (iv) For every cardinal κ, there is a (chromatic) (κ, W)-universal graph which is well-founded and k-wise ε-complete.

  (v) W has (chromatic) memory k on arbitrary games.

For ω-regular W given by a deterministic automaton ℬ of size n, this allows us to compute the (chromatic) memory in NP. First, we note that the memory of L(ℬ) is at most n, as the automaton ℬ itself can serve as a (chromatic) memory. Therefore, we can guess k ≤ n, a deterministic k-automaton 𝒜 of size kn and a (chromatic) k-wise ε-completion 𝒜ε, and check whether L(ℬ) ⊆ L(𝒜) and L(𝒜ε) ⊆ L(ℬ), which can be done in polynomial time, since 𝒜 and ℬ are deterministic [14]. Prior to our work, neither computing the memory nor the chromatic memory was known to be decidable (although the chromatic memory over finite graphs can be computed in exponential time [24]).
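An inclusion check between deterministic parity automata can be carried out, for instance, by a product-and-cycle analysis; the following self-contained sketch is our own illustration (not the algorithm of [14]). It exploits that L(B) ⊆ L(A) fails exactly when the product contains a reachable cycle whose minimal B-priority is even and whose minimal A-priority is odd, and that such a cycle exists iff, for some even b and odd a, a strongly connected part of the edges with priorities ≥ (b, a) contains both an edge of B-priority b and one of A-priority a:

```python
# Check L(B) ⊆ L(A) for complete deterministic min-parity automata over
# the same alphabet. delta maps (state, letter) -> (state, priority);
# d bounds the priorities used.
def included(deltaB, qB0, deltaA, qA0, alphabet, d):
    # build the reachable product graph, with a pair of priorities per edge
    edges, todo, seen = [], [(qB0, qA0)], {(qB0, qA0)}
    while todo:
        (qB, qA) = todo.pop()
        for c in alphabet:
            qB2, yB = deltaB[(qB, c)]
            qA2, yA = deltaA[(qA, c)]
            edges.append(((qB, qA), yB, yA, (qB2, qA2)))
            if (qB2, qA2) not in seen:
                seen.add((qB2, qA2))
                todo.append((qB2, qA2))
    for b in range(0, d + 1, 2):            # even candidate for B's lim inf
        for a in range(1, d + 1, 2):        # odd candidate for A's lim inf
            sub = [e for e in edges if e[1] >= b and e[2] >= a]
            for scc in sccs(sub):
                es = [e for e in sub if e[0] in scc and e[3] in scc]
                if any(e[1] == b for e in es) and any(e[2] == a for e in es):
                    return False            # bad cycle found
    return True

def sccs(edges):
    # Kosaraju's algorithm on the graph spanned by `edges`
    succ, pred = {}, {}
    for (u, _, _, v) in edges:
        succ.setdefault(u, []).append(v)
        pred.setdefault(v, []).append(u)
        succ.setdefault(v, []); pred.setdefault(u, [])
    order, visited = [], set()
    for start in succ:
        if start in visited:
            continue
        stack = [(start, iter(succ[start]))]
        visited.add(start)
        while stack:                        # iterative DFS, postorder
            node, it = stack[-1]
            nxt = next(it, None)
            if nxt is None:
                order.append(node); stack.pop()
            elif nxt not in visited:
                visited.add(nxt); stack.append((nxt, iter(succ[nxt])))
    comp, result = {}, []
    for u in reversed(order):               # sweep the transposed graph
        if u in comp:
            continue
        c, todo = set(), [u]
        while todo:
            v = todo.pop()
            if v in comp:
                continue
            comp[v] = len(result); c.add(v)
            todo.extend(w for w in pred[v] if w not in comp)
        result.append(c)
    return result
```

For example, with A recognising Inf(a) over {a, b} (priorities a:0, b:1) and B accepting every word (priority 0 everywhere), L(B) ⊄ L(A) while L(A) ⊆ L(B).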

Corollary 8 (Decidability in NP).

Given an integer k and a deterministic automaton 𝒜, the problem of deciding if L(𝒜) has (chromatic) memory k belongs to NP.

Our main contribution is the implication from (i) to (ii), which is the object of Section 3.1. We proceed in Section 3.2 to show that (ii) implies (iii), which is straightforward. The implication (iii) ⇒ (iv) is adapted from [10] and presented in Section 3.3. Finally, the implication (iv) ⇒ (v) is the result of [12] (Theorem 5), and the remaining implication (v) ⇒ (i) is trivial.

3.1 Existence of 𝒌-wise 𝜺-complete automata: Proof overview

We start with the more challenging and innovative implication: how to obtain a k-wise ε-complete automaton given an objective in BC(𝚺⁰₂) with memory k (that is, (i) ⇒ (ii)). In this section, we give a detailed overview of the proof.

Proposition 9.

Let W be an objective recognised by an automaton 𝒜, and assume that W has (chromatic) memory k on games of size 2^(2^ℵ0). Then there is a (chromatic) k-blowup ℬ of 𝒜 recognising W which is k-wise ε-complete.

We assume that W has memory k on games of size κ; we discuss the chromatic case at the end of the section. Let 𝒜 be an automaton recognising W; we aim to construct a k-blowup ℬ of 𝒜 which is k-wise ε-complete. We let S denote Sdκ, the (κ, Parityd)-universal graph defined above.

We consider the cascade product 𝒜 ⋉ S; this is a (Σ ∪ {ε})-graph which intuitively encodes all possible accepting behaviours in 𝒜. Then we apply the structuration result (Theorem 4) to 𝒜 ⋉ S, which yields a k-blowup G of 𝒜 ⋉ S which is well-founded and k-wise ε-complete (as a graph). Stated differently, up to blowing up the graph 𝒜 ⋉ S into k copies, we have been able to endow it with many ε-transitions, so that over each copy, →ε defines a well-order. Note that the states of G are of the form (q, m, s), with q ∈ V(𝒜), m ∈ k and s ∈ V(S).

The states of ℬ will be V(ℬ) = V(𝒜) × k. The challenge lies in defining the transitions of ℬ, based on those of G.

Given a state (q, m) ∈ V(ℬ) and a transition q →c:y q′ in 𝒜, where c ∈ Σ ∪ {ε}, by applying the definitions we get transitions of the form (q, m, s) →c (q′, m′, s′) in G, for different values of m′, whenever s →y s′ in S. We will therefore define a transition (q, m) →c:y (q′, m′) in ℬ if m′ matches suitably many transitions as above; for now, we postpone the precise definition.

We should then verify that the obtained automaton ℬ:

  • is a k-blowup of 𝒜,

  • is k-wise ε-complete, and

  • recognises W.

The first two items above state that ℬ should have many transitions: at least those inherited from 𝒜, and in addition a number of ε-transitions. This creates a tension with the third item, which states that even with all these added transitions, the automaton should not accept too many words.

Let us focus on the third item for now, which will lead to a correct definition for ℬ. Take an accepting run

(q0, m0) →c0:y0 (q1, m1) →c1:y1 ⋯

in ℬ, where x = lim inf yi is even; for the sake of simplicity, assume that all yi's are ≥ x. We should show that its labelling w = c0c1⋯ belongs to W. To this end, we will decorate the run with labels s0, s1, … ∈ V(S) so that

(q0, m0, s0) →c0 (q1, m1, s1) →c1 ⋯

defines a path in G, which concludes since G satisfies W.

Recall that the elements of S are tuples of ordinals < κ indexed by the d/2 odd priorities. We use s<x (resp. s>x) to refer to a tuple indexed by priorities up to x−1 (resp. from x+1 onwards). We also let xodd = {1, 3, …, x−1}.

To construct the si’s, we fix a well-chosen prefix s<xκxodd which will be constant, and proceed as follows.

  (a) If yi = x, then we set si = s<x s>x, for some s>x which depends only on ci.

  (b) If yi > x, then we set si = s<x s>x, for some s>x which depends on ci as well as si+1.

At this stage the reader may be worried that the backward induction underlying the above definition is not well-founded; however, since the first case occurs infinitely often, the backward induction from the second item is only performed over finite blocks, see also Figure 2.

Figure 2: The run in ℬ (in black), and the order in which the si's are computed (in red).

This leads to the following definition for ℬ, where x is an even priority:

(q, m) →c:x (q′, m′) in ℬ  ⟺  ∃s>x ∀s′>x, (q, m, s<x s>x) →c (q′, m′, s<x s′>x) in G,
(q, m) →c:x+1 (q′, m′) in ℬ  ⟺  ∀s′>x ∃s>x, (q, m, s<x s>x) →c (q′, m′, s<x s′>x) in G.

The first line corresponds to point (a) above, where s>x can be chosen independently of s′>x, whereas the second line corresponds to point (b), since the choice of s>x is conditioned on the value of s′>x. (For priorities > x+1, we may apply either the first or the second line, depending on the parity, to get the required conclusion.)

The remaining issue is that the choice of the fixed common prefix s<x should be made uniformly, regardless of the transition. This is achieved thanks to an adequate extraction lemma (which extends the pigeonhole principle to the case at hand), which finds a large enough subset T of κ^[d]odd such that the transitions (q, m, s) → (q′, m′, s′) are similar for different choices of s, s′ ∈ T. This ensures that s<x can be chosen uniformly.

It remains to verify that ℬ is indeed a k-blowup of 𝒜 and that it is k-wise ε-complete, which will follow easily from the definitions and the ε-completeness of G (this is because, after removing "s<x" from the definition above, the second line resembles the negation of the first).

For the chromatic case, the proof is exactly the same; we should simply check that if G is chromatic (which is guaranteed by Theorem 4), then so is the obtained automaton ℬ.

3.2 Existence of deterministic 𝜺-completable automata

We now prove the implication from (ii) to (iii) in Theorem 7. Take 𝒜 to be a deterministic automaton recognising W, and let ℬε be the k-blowup of 𝒜 obtained from (ii), which is (chromatic) k-wise ε-complete and recognises W. Then let ℬ be obtained from ℬε by only keeping, for each state (q, m) ∈ V(ℬε) and each transition q →a:y q′ in 𝒜, a single transition of the form (q, m) →a:y (q′, m′), chosen arbitrarily. Note that ℬ is a k-blowup of 𝒜, so we have: L(𝒜) ⊆ L(ℬ) ⊆ L(ℬε), and both ends of this chain equal W. We conclude that ℬ is a deterministic (chromatic) k-wise ε-completable automaton recognising W, as required.

3.3 From deterministic 𝜺-completable automata to universal graphs

We now prove the implication from (iii) to (iv) in Theorem 7. This result was already proved in [11, Prop. 5.30] for the case k = 1; extending it to greater values of k presents no difficulty.

Let ℬ be a deterministic (chromatic) k-wise ε-completable automaton recognising W (for the purpose of this proof, history-determinism would be sufficient; we refer to [1] for the definition and context on history-determinism), and let ℬε be an ε-completion of it. Fix a cardinal κ and let S denote Sdκ. Consider the cascade product U = ℬε ⋉ S. We claim that U is (chromatic) k-wise ε-complete and (κ, W)-universal. Universality follows from the facts that ℬ is deterministic and S is (κ, Parityd)-universal.

Claim 10.

The graph U = ℬε ⋉ S is (κ, W)-universal.

Proof.

Take a tree T of size < κ which satisfies W. We should show that T → U. Since ℬ is deterministic, we can define a labelling ρ : V(T) → V(ℬ) by mapping the root t0 to q0, and t to q if the run of ℬ on the finite word labelling the path from t0 to t ends in q. Then, any infinite path from t0 in T is mapped to a run in ℬ that is accepting (since T satisfies W). Therefore the tree T′ obtained by taking T and replacing edge-labels by the priorities appearing in their ρ-images satisfies Parityd and has size < κ, so there is a morphism μ : T′ → S. It is a direct check that (ρ, μ) : V(T) → V(ℬ) × V(S) = V(U) indeed defines a morphism.
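The labelling ρ of this proof is just the run of the deterministic automaton along each branch, and can be computed by a simple traversal; a small sketch for the finite case (our own encoding of trees as child lists, not notation from the paper):

```python
# The labelling ρ : V(T) -> V(B) from the proof: each node is sent to the
# state reached by the deterministic automaton on the word labelling the
# unique path from the root. children maps a node to its (letter, child)
# pairs; delta maps (state, letter) to (state, priority).
def state_labelling(children, root, delta, q0):
    rho, todo = {root: q0}, [root]
    while todo:
        t = todo.pop()
        for (c, child) in children.get(t, []):
            rho[child] = delta[(rho[t], c)][0]   # extend the run by letter c
            todo.append(child)
    return rho
```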

Showing that U is k-wise ε-complete is slightly trickier.

Claim 11.

The graph U = ℬε ⋉ S is well-founded and (chromatic) k-wise ε-complete.

Proof.

Well-foundedness of U follows directly from Lemma 2. Let us write B1, …, Bk for the k parts of ℬε; the k parts of U will be B1 × V(S), …, Bk × V(S). If applicable, chromaticity is a direct check. Let (b,s), (b′,s′) ∈ V(U) be in the same part, i.e. b, b′ ∈ Bi for some i. Let x0 be the minimal even priority such that b →^{ε:x0} b′ in ℬε (if no such priority exists then x0 = d+2). Then let y0 be the minimal odd priority such that s_{y0} > s′_{y0} (as previously, if s = s′ then we let y0 = d+1). We distinguish two cases.

(1) If x0 < y0: then we have b →^{ε:x0} b′ in ℬε and s_{<x0} = s′_{<x0}, which gives s →^{x0} s′ in S. Thus we get (b,s) →^{ε:x0} (b′,s′) in U.

(2) If y0 < x0: then s_{y0} > s′_{y0}, which gives s →^{y0} s′ in S. Since ℬε is k-wise ε-complete and by definition of x0, we also have b →^{ε:y0} b′ in ℬε, therefore (b,s) →^{ε:y0} (b′,s′) in U.

We conclude that either (b,s) →^{ε:x0} (b′,s′) or (b,s) →^{ε:y0} (b′,s′) in U, as required.

4 Union of objectives

In this section, we establish a strong form of the generalised Kopczyński conjecture for BC(𝚺20) objectives. An objective W ⊆ Σ^ω is prefix-increasing (in other papers [26, 27, 12] this notion is called prefix-decreasing, as Eve is seen as a “minimiser” player who aims to minimise some quantity) if for all a ∈ Σ and w ∈ Σ^ω, it holds that if w ∈ W then aw ∈ W. In words, one remains in W when prepending a finite prefix to a word of W. Examples of prefix-increasing objectives include prefix-independent and closed objectives.

Theorem 12.

Let W1, W2 ⊆ Σ^ω be two BC(𝚺20) objectives over the same alphabet, such that W2 is prefix-increasing. Assume that W1 has memory k1 and W2 has memory k2. Then W1 ∪ W2 has memory at most k1k2.

 Remark 13.

The assumption that one of the two objectives is prefix-increasing is indeed required: for instance, if W1 = aa(a+b)^ω and W2 = bb(a+b)^ω, which are positional but not prefix-increasing, then the union (aa+bb)(a+b)^ω is not positional (it has memory 2).
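For this particular objective, membership of an infinite word in W1 = aa(a+b)^ω only depends on its first two letters, so the failure of prefix-increasingness can be checked mechanically. A minimal sketch (the membership test below is ad hoc for this objective, and the helper name is ours):

```python
# Membership in W1 = aa(a+b)^omega, for an infinite word represented by a
# sufficiently long finite prefix: the word is in W1 iff it starts with "aa".
def in_w1(prefix: str) -> bool:
    return prefix.startswith("aa")

# w = aabbb... is in W1, but prepending the single letter "b" leaves W1:
w = "aa" + "b" * 10
assert in_w1(w)
assert not in_w1("b" + w)   # so W1 is not prefix-increasing
```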

 Remark 14.

The bound k1k2 in Theorem 12 is tight: for every k1, k2, there are objectives W1, W2 with memories k1, k2, respectively, such that W1 ∪ W2 has memory exactly k1k2. One such example is as follows: let Σ = {a1, …, a_{k1}, b1, …, b_{k2}}, W1 = {w ∣ w contains at least two different ai infinitely often} and W2 = {w ∣ w contains at least two different bi infinitely often}. We can see that W1, W2 and W1 ∪ W2 have memory, respectively, k1, k2, and k1k2, by building the Zielonka trees of these objectives and applying [19, Thms. 6, 14].

The rest of the section is devoted to the proof of Theorem 12. We give an explicit construction for the union of two parity automata, inspired by the Zielonka tree of the union of two parity conditions, and show that it is (k1k2)-wise ε-completable.

Union of parity languages.

We give an explicit construction of a deterministic parity automaton 𝒯 recognising the union of two parity languages, which may be of independent interest. This corresponds to the automaton given by the Zielonka tree of the union (a reduced and structured version of the LAR).

We let [d1]={0,1,,d1} and [d2]={1,,d2}. Let [d1]odd and [d2]odd denote the restrictions of [d1] and [d2] to odd elements. By a slight abuse of notation, we sometimes treat elements in [d2] as natural numbers (e.g. when comparing them).

Alphabet. The input alphabet is [d1] × [d2], and we write letters as (y,z). The index of 𝒯 is d1 + d2; we use the letter t for its output priorities.

States. States are given by interleavings of two strictly increasing sequences enumerating [d1]odd and [d2]odd, respectively. Any state can be taken as initial. For instance, for d1 = d2 = 6, an example of a state is:

1, 3, 1̄, 5, 3̄, 5̄,

where barred elements are those coming from [d2]odd. We use τ to denote such a sequence, which we index from 1 to (d1+d2)/2, and write τ[i] for its i-th element. For y ∈ [d1], y odd, we let ind_τ(y) = i for the index such that τ[i] = y. For y ≥ 2 even, we let ind_τ(y) be the index i such that τ[i] = y−1. We let ind_τ(0) = 0. We use the same notation for z ∈ [d2], z ≥ 1. For example, in the state above, for the element 2 ∈ [d2] we have ind_τ(2) = 3 (the position of 1̄).

The intuition is that a state stores a local order of importance between input priorities.

Transitions. Let τ be a state and (y,z) an input letter. We define the transition τ →^{(y,z):t} τ′ as follows.

Let i = min{ind_τ(y), ind_τ(z)}. In the following, we assume i = ind_τ(y) (the definition for i = ind_τ(z) is symmetric). We let t = 2i if y is even, and t = 2i−1 if y is odd. If y is even, we let τ′ = τ. If y is odd, let i′ be the smallest index i′ > i such that τ[i′] ∈ [d2], and let τ′ be the sequence obtained by inserting τ[i′] on the left of τ[i] (or τ′ = τ if no such index i′ exists). Formally,

τ′[j] = τ[j] for j < i and i′ < j,   τ′[i] = τ[i′],   and τ′[j] = τ[j−1] for i < j ≤ i′.

For example, for the state above, we have: 1, 3, 1̄, 5, 3̄, 5̄ →^{(3,2):3} 1, 1̄, 3, 5, 3̄, 5̄.
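The transition function of 𝒯 is easy to prototype. The sketch below uses an assumed encoding of states as lists of pairs (p, side), with side 1 for elements of [d1]odd and side 2 for barred elements of [d2]odd; the helper names `ind` and `step` are ours:

```python
# Sketch of the transition function of the automaton T (assumed encoding:
# a state is a list of (priority, side) pairs, side 1 for [d1]_odd, side 2
# for the barred elements of [d2]_odd).
def ind(tau, p, side):
    """1-based index of p in tau if p is odd, of p-1 if p is even; 0 for p=0."""
    if p == 0:
        return 0
    q = p if p % 2 == 1 else p - 1
    return 1 + tau.index((q, side))

def step(tau, y, z):
    """One transition of T on letter (y, z); returns (priority, new_state)."""
    iy, iz = ind(tau, y, 1), ind(tau, z, 2)
    i = min(iy, iz)
    p, side = (y, 1) if i == iy else (z, 2)   # component responsible for i
    if i == 0 or p % 2 == 0:                  # even output: state unchanged
        return 2 * i, tau
    other = 2 if side == 1 else 1
    # smallest 1-based index i' > i whose element belongs to the other sequence
    for ip in range(i, len(tau)):             # 0-based scan over positions > i
        if tau[ip][1] == other:
            new = tau[:i-1] + [tau[ip]] + tau[i-1:ip] + tau[ip+1:]
            return 2 * i - 1, new
    return 2 * i - 1, tau                     # no such index: state unchanged

tau = [(1, 1), (3, 1), (1, 2), (5, 1), (3, 2), (5, 2)]  # the state 1, 3, 1̄, 5, 3̄, 5̄
assert step(tau, 3, 2) == (3, [(1, 1), (1, 2), (3, 1), (5, 1), (3, 2), (5, 2)])
```

The final assertion reproduces the worked example: reading (3,2) from the state above outputs priority 3 and moves 1̄ to the left of 3.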

Lemma 15.

The automaton 𝒯 recognises the language

L = { w ∈ ([d1] × [d2])^ω ∣ π1(w) ∈ Parity_{d1} or π2(w) ∈ Parity_{d2} }.

Proof.

We show that L ⊆ L(𝒯); the other inclusion is similar (and implied by Lemma 19 below). Let w = (y1,z1)(y2,z2)⋯ ∈ L, and assume w.l.o.g. that y1y2⋯ ∈ Parity_{d1}. Let ymin = lim inf_i y_i, which is even, and let n0 be such that for all n ≥ n0 we have ymin ≤ yn. Let

τ0 →^{(y1,z1):t1} τ1 →^{(y2,z2):t2} ⋯

denote the corresponding run in 𝒯. Let in = ind_{τn}(ymin) be the index where ymin − 1 appears in τn. Note that the sequence (in)_{n ≥ n0} is non-increasing. Let n1 be the moment from which this sequence stabilises, i.e., in = i_{n1} for n ≥ n1. By definition of the transitions of 𝒯, for n ≥ n1 all output priorities are ≥ 2i_{n1}, and priority 2i_{n1} is produced every time that a letter (ymin, zn) is read. We conclude that 𝒯 accepts w.
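On ultimately periodic words, membership in L is determined by the minimal priorities occurring on the cycle. A small sanity check, assuming min-even parity semantics as in the rest of the paper (the helper name is ours):

```python
# Membership in L for an ultimately periodic word (y1,z1)(y2,z2)...: only the
# letters repeated on the cycle are seen infinitely often, so the word is in L
# iff the least priority of some component on the cycle is even.
def in_l(cycle):
    ys = [y for (y, z) in cycle]
    zs = [z for (y, z) in cycle]
    return min(ys) % 2 == 0 or min(zs) % 2 == 0

assert in_l([(0, 1)])              # first component accepts
assert in_l([(1, 2), (3, 2)])      # second component accepts
assert not in_l([(1, 1), (3, 5)])  # both components reject
```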

𝟎-freeness of automata for prefix-increasing objectives.

The fact that W2 is prefix-increasing will be used via the following lemma. It recasts the fact that we can add ε:1 transitions everywhere to automata recognising prefix-increasing objectives.

Lemma 16.

Let W be a prefix-increasing objective with memory k. There exists a deterministic k-wise ε-completable automaton 𝒜 recognising W and an ε-completion 𝒜ε of 𝒜 such that 𝒜ε does not have any transition with priority 0.

Proof.

First, take any automaton 𝒜0 recognising W. Then let 𝒜1 be obtained by shifting every priority of 𝒜0 by 2. Clearly 𝒜1 also recognises W and does not have any transition with priority 0. Then apply Proposition 9 to get a k-blowup 𝒜 of 𝒜1 which is k-wise ε-completable, and let 𝒜ε denote the corresponding ε-completion. Since 𝒜 has no transition with priority 0, the only possible such transitions in 𝒜ε are ε-transitions. Then remove all ε-transitions with priority 0 in 𝒜ε, and add ε:1 transitions between all pairs of states in 𝒜ε (in both directions).

Clearly, the obtained automaton 𝒜̃ε is ε-complete and has no transition with priority 0. It remains to prove that it recognises W. Take an accepting run in 𝒜̃ε and observe that the priority 1 is only seen finitely often. Hence, from some moment on, the run coincides with a run in 𝒜ε. We conclude since W is prefix-increasing.
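The first step of the proof, shifting all priorities by 2, does not affect acceptance: it shifts the least priority seen infinitely often by 2, preserving its parity. A quick check on ultimately periodic priority sequences (the helper names are ours):

```python
# Min-even parity acceptance of an ultimately periodic priority sequence:
# only the priorities repeated on the cycle are seen infinitely often.
def parity_accepts(cycle):
    return min(cycle) % 2 == 0

def shift_by_two(cycle):
    return [p + 2 for p in cycle]

for cycle in ([0, 1], [1, 3], [2, 5], [4]):
    assert parity_accepts(cycle) == parity_accepts(shift_by_two(cycle))
```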

Main proof: 𝜺-completion of the product.

We now proceed with the proof of Theorem 12. Using Theorem 7, for l = 1, 2, we take a deterministic kl-wise ε-completable automaton 𝒜l of index dl recognising Wl, together with an ε-completion 𝒜lε. For l = 2, we assume, thanks to Lemma 16, that 𝒜2ε does not have any transition with priority 0.

We consider the product 𝒜 = (𝒜1 × 𝒜2) ⋉ 𝒯, with states V(𝒜1) × V(𝒜2) × V(𝒯) and transitions (q1,q2,τ) →^{a:t} (q1′,q2′,τ′) if q1 →^{a:y} q1′ in 𝒜1, q2 →^{a:z} q2′ in 𝒜2, and τ →^{(y,z):t} τ′ in 𝒯. The correctness of such a construction is folklore: 𝒜1 × 𝒜2 can be seen as a Muller automaton whose acceptance condition is the union of two parity languages, and the composition with 𝒯 yields a correct parity automaton, as 𝒯 recognises the acceptance condition.
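The shape of this product can be sketched as a composition of transition tables, where the output priorities of 𝒜1 and 𝒜2 are fed as a letter to 𝒯. The toy single-state components below are hypothetical and only illustrate the wiring:

```python
# Cascade product step: transition tables map (state, input) -> (state', output).
def cascade_step(d1, d2, dT):
    def step(state, a):
        q1, q2, tau = state
        q1n, y = d1[(q1, a)]          # A1 reads the letter and outputs y
        q2n, z = d2[(q2, a)]          # A2 reads the letter and outputs z
        taun, t = dT[(tau, (y, z))]   # T reads (y, z) and outputs t
        return (q1n, q2n, taun), t
    return step

# Toy single-state components over a one-letter alphabet (illustration only).
d1 = {("p", "a"): ("p", 1)}
d2 = {("r", "a"): ("r", 2)}
dT = {("s", (1, 2)): ("s", 4)}
step = cascade_step(d1, d2, dT)
assert step(("p", "r", "s"), "a") == (("p", "r", "s"), 4)
```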

Claim 17.

The automaton 𝒜 is deterministic and recognises W=W1W2.

Therefore, it only remains to show the following lemma.

Lemma 18.

The automaton 𝒜=(𝒜1×𝒜2)𝒯 is (k1k2)-wise ε-completable.

The ε-completion of 𝒜 will be a variant of a product of the form (𝒜1ε × 𝒜2ε) ⋉ 𝒯¯, where 𝒯¯ is a non-deterministic extension of 𝒯 with more transitions, but which still recognises the same language.

The automaton 𝒯¯. Intuitively, we obtain 𝒯¯ by allowing to reconfigure the elements of index > i in a state τ at the price of an odd priority 2i−1, as well as by allowing to move elements of [d1] to the left. We make this idea precise next.

We order the states of 𝒯 lexicographically, where we assume that y < z for y ∈ [d1] and z ∈ [d2]. Formally, we let τ < τ′ if, at the first position j where τ and τ′ differ, τ[j] ∈ [d1] (and therefore necessarily τ′[j] ∈ [d2]). We let τ[..i] be the prefix of τ up to (and including) τ[i]. We write τ <_i τ′ if τ[..i] < τ′[..i], and τ ≤_i τ′ if τ[..i] ≤ τ′[..i].

Let τ →^{(y,z):t} τ′ be a transition in 𝒯 as above, and let i0 = min{ind_τ(y), ind_τ(z)} be the index determining t (i.e. t ∈ {2i0−1, 2i0}). The automaton 𝒯¯ contains a transition τ →^{(y,z):t′} σ if:

1. t′ = t is odd, and σ ≤_{i0−1} τ; or

2. t′ = t is even, and σ ≤_{i0} τ; or

3. t′ ∈ {2i−1, 2i} for some i ≤ i0 and σ <_i τ. (Note that, if t is odd, this includes all (possibly even) t′ ≤ t+1.)

In words, we are allowed to output a small (i.e. important) priority when following a strict decrease on sufficiently small components of τ. Note that transitions of 𝒯 also belong to 𝒯¯, thanks to rules (1) and (2).
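The orders <_i can be prototyped over an assumed encoding of states as lists of (priority, side) pairs, side 1 for [d1] and side 2 for [d2] (the encoding and the helper name are ours):

```python
# tau <_i sigma: compare the length-i prefixes lexicographically, where at the
# first difference the element coming from [d1] (side 1) is the smaller one.
def lt_i(tau, sigma, i):
    for a, b in zip(tau[:i], sigma[:i]):
        if a != b:
            return a[1] == 1
    return False  # equal prefixes: not strictly smaller

tau = [(1, 1), (3, 1), (1, 2)]    # 1, 3, 1̄ with 1̄ taken from [d2]
sigma = [(1, 1), (1, 2), (3, 1)]  # 1, 1̄, 3
assert lt_i(tau, sigma, 2)        # first difference: 3 from [d1] vs 1̄ from [d2]
assert not lt_i(sigma, tau, 2)
assert not lt_i(tau, sigma, 1)    # length-1 prefixes coincide
```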

Lemma 19.

The automaton 𝒯¯ recognises the same language as 𝒯.

Proof.

It is clear that L(𝒯) ⊆ L(𝒯¯). We show the other inclusion. Consider

τ0 →^{(y1,z1):t1} τ1 →^{(y2,z2):t2} ⋯,  an accepting run in 𝒯¯.

Let tmin = lim inf_n tn, which is even, and let imin = tmin/2 be the index responsible for producing priority tmin. Let n0 be such that tn ≥ tmin for all n ≥ n0. Observe that for all n ≥ n0 we have τ_{n+1} ≤_{imin−1} τ_n, and therefore there is n1 ≥ n0 such that the prefix τ_n[..imin−1] is the same for all n ≥ n1. In fact, the prefix τ_n[..imin] must be constant too, as we can only modify τ_n[imin] using rule (1), and that would output priority tmin − 1. Assume w.l.o.g. that τ_{n1}[imin] = y ∈ [d1]odd. Now, for each n ≥ n1 it must be that yn ≥ y + 1, and it must be that yn = y + 1 each time that priority tmin is produced. Therefore, lim inf_n yn = y + 1 is even, and the input word belongs to L(𝒯).

The ε-completion. We define 𝒜ε as a version of the cascade product of 𝒜1ε × 𝒜2ε with 𝒯, in which ε-transitions are also allowed to use the transitions of 𝒯¯. We let ⊑ denote the preference ordering over priorities, given by 1 ⊑ 3 ⊑ ⋯ ⊑ d−1 ⊑ d ⊑ d−2 ⊑ ⋯ ⊑ 2 ⊑ 0 (d even). The transitions of 𝒜ε are defined as follows:

  • For a ∈ Σ: (q1,q2,τ) →^{a:t} (q1′,q2′,τ′) if this transition appears in (𝒜1 × 𝒜2) ⋉ 𝒯.

  • (q1,q2,τ) →^{ε:t} (q1′,q2′,τ′) if q1 →^{ε:y} q1′ in 𝒜1ε, q2 →^{ε:z} q2′ in 𝒜2ε, and τ →^{(y,z):t′} τ′ in 𝒯¯ for some t′ with t ⊑ t′.

Note that the condition t ⊑ t′ simply allows outputting a less favourable priority, so it does not create extra accepting runs. By definition, 𝒜ε has been obtained by adding ε-transitions to 𝒜. It is a folklore result that composition of non-deterministic automata also preserves the language recognised, so this construction is correct.
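The preference ordering ⊑ can be realised by a rank function, assuming the order 1 ⊑ 3 ⊑ ⋯ ⊑ d−1 ⊑ d ⊑ ⋯ ⊑ 2 ⊑ 0 for d even (the helper name is ours):

```python
# Rank of priority t in the order 1 ⊑ 3 ⊑ ... ⊑ d-1 ⊑ d ⊑ ... ⊑ 2 ⊑ 0 (d even):
# odd priorities in increasing order, then even priorities in decreasing order,
# so that t ⊑ t' iff pref_rank(t, d) <= pref_rank(t', d).
def pref_rank(t, d):
    return t if t % 2 == 1 else 2 * d - t

d = 6
order = sorted(range(d + 1), key=lambda t: pref_rank(t, d))
assert order == [1, 3, 5, 6, 4, 2, 0]
```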

Claim 20.

The automaton 𝒜ε recognises W.

Therefore, it only remains to prove the following lemma.

Lemma 21.

The automaton 𝒜ε is (k1k2)-wise ε-complete.

The proof idea is as follows. The k1 parts of 𝒜1ε and the k2 parts of 𝒜2ε naturally induce a partition of the states of 𝒜ε into k1k2 parts. Then, given two states r = (q1,q2,τ) and r′ = (q1′,q2′,τ′) in the same part of 𝒜ε, we consider the longest common prefix of τ and τ′. We perform a case analysis: depending on the priorities of [d1] or [d2] appearing in this prefix, and on the transitions ql →^{ε:x} ql′ of the automata 𝒜1ε, 𝒜2ε, we find transitions r →^{ε:x} r′ or r →^{ε:x+1} r′ for all even x.

5 Conclusions and open questions

We characterised objectives in BC(𝚺20) with memory (or chromatic memory) k as those recognised by a well-identified class of automata. In particular, this gives the first known characterisation of (chromatic) memory for ω-regular objectives, and proves that it is decidable (in fact even in NP). We also settled (a strengthening of) Kopczyński’s conjecture for BC(𝚺20) objectives. We now discuss some directions for future work.

Memory in finite games.

This paper focuses on games over potentially infinite game graphs. A wide body of literature studies memory over finite game graphs [24, 6]; we believe that, for ω-regular objectives, both notions should coincide, and that our results should characterise the memory of ω-regular objectives over finite games too. More precisely, we believe that in item (i) of Theorem 7, it suffices to assume that W has memory k over games of size at most f(n), for some finite bound f(n), where n is the size of a deterministic automaton representing W.

Question 1 (Version of [32, Conjecture 9.1.2]).

Show that if an ω-regular objective has memory k over finite game graphs, then it has memory k over all game graphs. (This statement is proven in the full version of this paper [13], as mentioned in footnote 1.)

The hypothesis on ω-regularity is necessary, as this statement already fails in the case of positionality and closed objectives [16]. As a follow-up question one could investigate the bound f(n): can it be assumed polynomial?

Exact complexity of computation of memory.

We established that computing the (chromatic) memory of an ω-regular objective is in NP. In fact, computing the chromatic memory is NP-hard already for simple classes of objectives, such as Muller [8] or safety ones [3]. However, no such hardness results are known for non-chromatic memory.

Question 2.

Given a deterministic parity automaton 𝒜 and a number k, can we decide in polynomial time whether the memory of L(𝒜) is at most k?

This question is open already for the simpler case of regular open objectives (that is, those recognised by reachability automata).

Asymmetric 1-to-2-player lifts.

A celebrated result of Gimbert and Zielonka [22] states that if, for an objective W, both players can play optimally using positional strategies in finite games where all vertices belong to one player, then W is bipositional over finite games. This result has been extended in two orthogonal directions: to objectives where both players require finite chromatic memory [6, 5] (a symmetric lift for memory), and to ω-regular objectives where Eve can play positionally in 1-player games [10] (an asymmetric lift for positionality). In this work, we have not provided an asymmetric lift for memory, as in most cases no such result can hold. For BC(𝚺20) objectives, it is known to fail already for positional objectives [21, Section 7]. For non-chromatic memory, it cannot hold for ω-regular objectives either, because of the example described below.

Proposition 22.

Let Σn = {1, …, n}. For every n, the objective

Wn = { w ∈ Σn^ω ∣ w contains two different letters infinitely often }

has memory 2 over games where Eve controls all vertices and memory n over arbitrary games.

Proof sketch.

The fact that Wn has memory n follows from [19].

We sketch the proof that 2 memory states suffice in games G where Eve controls all vertices. For each vertex v ∈ V(G), we let χ1(v) and χ2(v) be the two smallest colours in Σn such that there is a path from v producing them, and we fix two such finite paths πv1, πv2 of minimal length. We define a 2-memory strategy for Eve as follows: when in a vertex v with memory state j ∈ {1,2}, she takes the first edge of πvj; if this edge has colour χj(v), she updates the memory state to the other state j̄. It is an easy check that this strategy ensures Wn.

In his PhD thesis, Vandenhove conjectures that an asymmetric lift for chromatic memory holds for ω-regular objectives [32, Conjecture 9.1.2]. This question remains open.
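The 2-state strategy from the proof sketch of Proposition 22 can be prototyped on Eve-only games. Everything below (the graph encoding, the helper names, the toy game) is our own illustration of the idea, not the paper's formalism:

```python
from collections import deque

# An Eve-only game: edges_from[v] is a list of (target, colour) pairs.
def reachable_colours(edges_from, v):
    seen, cols, stack = {v}, set(), [v]
    while stack:
        u = stack.pop()
        for w, col in edges_from[u]:
            cols.add(col)
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return sorted(cols)

def first_edge_towards(edges_from, v, c):
    # First edge of a shortest path from v containing an edge of colour c.
    first = {v: None}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w, col in edges_from[u]:
            e0 = first[u] if first[u] is not None else (w, col)
            if col == c:
                return e0
            if w not in first:
                first[w] = e0
                queue.append(w)
    return None

def play(edges_from, v0, steps):
    # 2-memory strategy: aim for the j-th smallest reachable colour; flip j
    # whenever the chosen edge produces it (j is 0/1 here rather than 1/2).
    v, j, produced = v0, 0, []
    for _ in range(steps):
        target = reachable_colours(edges_from, v)[j]
        w, col = first_edge_towards(edges_from, v, target)
        produced.append(col)
        if col == target:
            j = 1 - j
        v = w
    return produced

# Toy game: one vertex with two self-loops of colours 1 and 2. A positional
# strategy repeats a single colour forever; the 2-state strategy alternates,
# so two different colours are produced infinitely often.
game = {0: [(0, 1), (0, 2)]}
assert play(game, 0, 6) == [1, 2, 1, 2, 1, 2]
```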

Question 3.

Is there an ω-regular objective with chromatic memory k over games where Eve controls all vertices and chromatic memory k>k over arbitrary games?

Further decidability results for memory.

As mentioned in the introduction, many extensions of ω-automata (including deterministic ω-Turing machines and unambiguous ω-Petri nets [20]) recognise languages that are in BC(𝚺20). We believe that our characterisation may lead to decidability results regarding the memory of objectives represented by these models.

Objectives in 𝚫𝟎𝟑.

Some of the questions answered in this work for BC(𝚺20) objectives remain open in full generality, for instance the generalised Kopczyński conjecture. A reasonable next step would be to consider the class 𝚫30 = 𝚺30 ∩ 𝚷30. Objectives in 𝚫30 are those recognised by max-parity automata using infinitely many priorities [29]. Our methods seem appropriate to tackle this class; however, we have been unable to extend the extraction techniques used in the proof of Section 3.1.

References

  • [1] Udi Boker and Karoliina Lehtinen. When a little nondeterminism goes a long way: An introduction to history-determinism. ACM SIGLOG News, 10(1):24–51, 2023. doi:10.1145/3584676.3584682.
  • [2] Patricia Bouyer, Antonio Casares, Mickael Randour, and Pierre Vandenhove. Half-positional objectives recognized by deterministic Büchi automata. Log. Methods Comput. Sci., 20(3), 2024. doi:10.46298/LMCS-20(3:19)2024.
  • [3] Patricia Bouyer, Nathanaël Fijalkow, Mickael Randour, and Pierre Vandenhove. How to play optimally for regular objectives? In ICALP, volume 261 of LIPIcs, pages 118:1–118:18, 2023. doi:10.4230/LIPIcs.ICALP.2023.118.
  • [4] Patricia Bouyer, Mickael Randour, and Pierre Vandenhove. The true colors of memory: A tour of chromatic-memory strategies in zero-sum games on graphs (invited talk). In FSTTCS, volume 250 of LIPIcs, pages 3:1–3:18, 2022. doi:10.4230/LIPICS.FSTTCS.2022.3.
  • [5] Patricia Bouyer, Mickael Randour, and Pierre Vandenhove. Characterizing omega-regularity through finite-memory determinacy of games on infinite graphs. TheoretiCS, 2, 2023. doi:10.46298/THEORETICS.23.1.
  • [6] Patricia Bouyer, Stéphane Le Roux, Youssouf Oualhadj, Mickael Randour, and Pierre Vandenhove. Games where you can play optimally with arena-independent finite memory. Log. Methods Comput. Sci., 18(1), 2022. doi:10.46298/LMCS-18(1:11)2022.
  • [7] J. Richard Büchi and Lawrence H. Landweber. Solving sequential conditions by finite-state strategies. Transactions of the American Mathematical Society, 138:295–311, 1969. URL: http://www.jstor.org/stable/1994916.
  • [8] Antonio Casares. On the minimisation of transition-based Rabin automata and the chromatic memory requirements of Muller conditions. In CSL, volume 216, pages 12:1–12:17, 2022. doi:10.4230/LIPIcs.CSL.2022.12.
  • [9] Antonio Casares, Thomas Colcombet, and Karoliina Lehtinen. On the size of good-for-games Rabin automata and its link with the memory in Muller games. In ICALP, volume 229, pages 117:1–117:20, 2022. doi:10.4230/LIPIcs.ICALP.2022.117.
  • [10] Antonio Casares and Pierre Ohlmann. Positional ω-regular languages. In LICS, pages 21:1–21:14. ACM, 2024. doi:10.1145/3661814.3662087.
  • [11] Antonio Casares and Pierre Ohlmann. Positional ω-regular languages. CoRR, abs/2401.15384, 2024. doi:10.48550/arXiv.2401.15384.
  • [12] Antonio Casares and Pierre Ohlmann. Characterising memory in infinite games. Logical Methods in Computer Science, Volume 21, Issue 1, March 2025. doi:10.46298/lmcs-21(1:28)2025.
  • [13] Antonio Casares and Pierre Ohlmann. The memory of ω-regular and BC(Σ20) objectives. CoRR, abs/2502.05840, 2025. doi:10.48550/arXiv.2502.05840.
  • [14] Edmund M. Clarke, I. A. Draghicescu, and Robert P. Kurshan. A unified approch for showing language inclusion and equivalence between various types of omega-automata. Inf. Process. Lett., 46(6):301–308, 1993. doi:10.1016/0020-0190(93)90069-L.
  • [15] Edmund M. Clarke, Thomas A. Henzinger, Helmut Veith, and Roderick Bloem, editors. Handbook of Model Checking. Springer, Cham, 2018. doi:10.1007/978-3-319-10575-8_2.
  • [16] Thomas Colcombet, Nathanaël Fijalkow, and Florian Horn. Playing safe. In FSTTCS, volume 29 of LIPIcs, pages 379–390, 2014. doi:10.4230/LIPIcs.FSTTCS.2014.379.
  • [17] Thomas Colcombet, Nathanaël Fijalkow, and Florian Horn. Playing safe, ten years later. CoRR, abs/2212.12024, 2022. doi:10.48550/arXiv.2212.12024.
  • [18] Thomas Colcombet and Damian Niwiński. On the positional determinacy of edge-labeled games. Theor. Comput. Sci., 352(1-3):190–196, 2006. doi:10.1016/j.tcs.2005.10.046.
  • [19] Stefan Dziembowski, Marcin Jurdziński, and Igor Walukiewicz. How much memory is needed to win infinite games? In LICS, pages 99–110, 1997. doi:10.1109/LICS.1997.614939.
  • [20] Olivier Finkel, Michał Skrzypczak, Ryszard Janicki, Slawomir Lasota, and Natalia Sidorova. On the expressive power of non-deterministic and unambiguous petri nets over infinite words. Fundamenta Informaticae, 183(3-4):243–291, 2022. doi:10.3233/FI-2021-2088.
  • [21] Hugo Gimbert and Edon Kelmendi. Submixing and shift-invariant stochastic games, 2022. Version 2. URL: https://arxiv.org/abs/1401.6575v2.
  • [22] Hugo Gimbert and Wieslaw Zielonka. Games where you can play optimally without any memory. In CONCUR, volume 3653, pages 428–442, 2005. doi:10.1007/11539452_33.
  • [23] Yuri Gurevich and Leo Harrington. Trees, automata, and games. In STOC, pages 60–65, 1982. doi:10.1145/800070.802177.
  • [24] Eryk Kopczyński. Half-positional Determinacy of Infinite Games. PhD thesis, University of Warsaw, 2008. URL: https://www.mimuw.edu.pl/~erykk/papers/hpwc.pdf.
  • [25] Alexander Kozachinskiy. Energy games over totally ordered groups. In CSL, volume 288 of LIPIcs, pages 34:1–34:12, 2024. doi:10.4230/LIPICS.CSL.2024.34.
  • [26] Pierre Ohlmann. Monotonic graphs for parity and mean-payoff games. (Graphes monotones pour jeux de parité et à paiement moyen). PhD thesis, Université Paris Cité, France, 2021. URL: https://tel.archives-ouvertes.fr/tel-03771185.
  • [27] Pierre Ohlmann. Characterizing positionality in games of infinite duration over infinite graphs. TheoretiCS, Volume 2, January 2023. doi:10.46298/theoretics.23.3.
  • [28] Pierre Ohlmann and Michal Skrzypczak. Positionality in Σ20 and a completeness result. In STACS, volume 289, pages 54:1–54:18, 2024. doi:10.4230/LIPICS.STACS.2024.54.
  • [29] Michal Skrzypczak. Topological extension of parity automata. Inf. Comput., 228:16–27, 2013. doi:10.1016/J.IC.2013.06.004.
  • [30] Robert S. Streett and E. Allen Emerson. An automata theoretic decision procedure for the propositional mu-calculus. Inf. Comput., 81(3):249–264, 1989. doi:10.1016/0890-5401(89)90031-X.
  • [31] Wolfgang Thomas. Automata on infinite objects. In Handbook of Theoretical Computer Science, Volume B: Formal Models and Semantics, 1991. URL: https://api.semanticscholar.org/CorpusID:33060294.
  • [32] Pierre Vandenhove. Strategy complexity of zero-sum games on graphs. (Complexité des stratégies des jeux sur graphes à somme nulle). PhD thesis, University of Mons, Belgium, 2023. URL: https://tel.archives-ouvertes.fr/tel-04095220.
  • [33] Igor Walukiewicz. Pushdown processes: Games and model checking. In CAV, volume 1102 of Lecture Notes in Computer Science, pages 62–74, 1996. doi:10.1007/3-540-61474-5_58.