Dependences in Strategy Logic

Strategy Logic (SL) is a very expressive temporal logic for specifying and verifying properties of multi-agent systems: in SL, one can quantify over strategies, assign them to agents, and express LTL properties of the resulting plays. Such a powerful framework has two drawbacks: first, model checking SL has non-elementary complexity; second, the exact semantics of SL is rather intricate, and may not correspond to what is expected. In this paper, we focus on strategy dependences in SL, by tracking how existentially-quantified strategies in a formula may (or may not) depend on other strategies selected in the formula, revisiting the approach of [Mogavero et al., Reasoning about strategies: On the model-checking problem, 2014]. We explain why elementary dependences, as defined by Mogavero et al., do not exactly capture the intended concept of behavioral strategies. We address this discrepancy by introducing timeline dependences, and exhibit a large fragment of SL for which model checking can be performed in 2-EXPTIME under this new semantics.

Alternating-time Temporal Logic (ATL) [2] expresses properties of (executions generated by) behaviours of individual components of the system. This can be used to specify that a controller can enforce safety of a whole system, whatever the other components do. This is usually seen as a game where the controller plays against the other components, with the aim of maintaining safety of the global system; ATL can then express the existence of a winning strategy in such a game. ATL has been extensively studied since its introduction, both about its expressiveness and about its verification algorithms [2,20,28].
Adding strategic interactions in temporal logics.
Strategies in ATL are handled in a very limited way, and there are no real strategic interactions in that logic (which, in return, enjoys a polynomial-time model-checking algorithm). Indeed, ATL expresses properties such as "Player A has a strategy to enforce ϕ" (denoted ⟨⟨A⟩⟩ ϕ), where ϕ is a property to be fulfilled along any execution resulting from the selected strategy; in other terms, this existential quantification over strategies of A always implicitly contains a universal quantification over all the strategies of all the other players. This only allows the expression of zero-sum objectives.
Over the last 10 years, various extensions have been defined and studied in order to allow for more strategy interactions [1,11,8,30,39]. Strategy Logic (SL for short) [11,30] is such a powerful approach, in which strategies are first-class objects; formulas can quantify (universally and existentially) over strategies, store those strategies in variables, assign them to players, and express properties of the resulting plays. As a simple example, the existence of a winning strategy for Player A (with objective ϕ_A) against any strategy of Player B would be written as ∃σ_A. ∀σ_B. assign(A ↦ σ_A; B ↦ σ_B). ϕ_A. This precisely corresponds to the formula ⟨⟨A⟩⟩ ϕ_A of ATL (if the game only has two players).
SL can express much more: for example, it can express the existence of a strategy for Player A which allows Player B to satisfy one of two goals ϕ_B or ϕ′_B: we would write ∃σ_A. (∃σ_B. assign(A ↦ σ_A; B ↦ σ_B). ϕ_B) ∧ (∃σ′_B. assign(A ↦ σ_A; B ↦ σ′_B). ϕ′_B). This expresses collaborative properties which are out of reach of ATL: the formula ⟨⟨A⟩⟩(⟨⟨B⟩⟩ ϕ_B ∧ ⟨⟨B⟩⟩ ϕ′_B) in ATL is equivalent to (⟨⟨B⟩⟩ ϕ_B ∧ ⟨⟨B⟩⟩ ϕ′_B), since ⟨⟨B⟩⟩ ϕ_B is understood as the existence of a winning strategy against any strategy of the other player(s).
As a last example, SL can express classical concepts in game theory, such as Nash equilibria with Boolean objectives. This provides an easy way of showing decidability of rational synthesis [18,26,14] or assume-admissible synthesis [7]: for instance, the existence of an admissible strategy for objective ϕ of Player A (i.e., a strategy that is strictly dominated by no other strategy [7]) can be expressed in SL. Such formulas show that complex strategy interactions may be useful for expressing classical properties of multi-player games.
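For concreteness, one standard SL encoding of the existence of a Nash equilibrium for Boolean objectives (ϕ_A)_{A∈Agt} is sketched below (a folklore formula in the SL literature, written with the notations of this paper; the exact shape of the assignment is our own rendering):

```latex
% Existence of a Nash equilibrium: a strategy profile (x_A)_A such that
% no player A can profitably deviate to some strategy y: if A could win
% by deviating, A already wins with the profile.
\exists (x_A)_{A\in\mathrm{Agt}}.\;
  \mathrm{assign}(A \mapsto x_A)_{A\in\mathrm{Agt}}.\;
  \bigwedge_{A\in\mathrm{Agt}}
    \bigl[\bigl(\exists y.\ \mathrm{assign}(A \mapsto y).\ \varphi_A\bigr)
      \rightarrow \varphi_A\bigr]
```

Note how the inner quantification ∃y re-assigns only Player A while all other players keep their strategies, which is exactly the kind of strategy re-use that ATL cannot express.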
This series of examples illustrates how SL is both expressive and convenient, at the expense of a very high complexity: SL model checking has non-elementary complexity (and satisfiability is undecidable, unless the problem is restricted to turn-based game structures) [30,27].
The high expressiveness of this logic, together with the decidability of its model-checking problem, has led to numerous studies around SL, either considering fragments of the logic with more efficient algorithms, or more expressive variants of the logic (e.g. with quantitative aspects), or variations on the notion of strategies (e.g. with limited observation of the game).
On the one hand, limitations have been imposed on strategic interactions in order to get more efficient algorithms [29,32]. A goal is an LTL condition imposed on a strategy profile (built from quantified strategies). The fragment SL[1G] then contains formulas in prenex form with a single goal (and nested combinations thereof); this fragment is very close to ATL [2] in terms of expressiveness, and its model-checking problem is 2-EXPTIME-complete. A BDD-based model-checking algorithm for SL[1G], using a translation to parity games, is implemented in the tool MCMAS [10]. Several other fragments have been considered, e.g. allowing conjunctions (SL[CG]), disjunctions (SL[DG]), or general boolean combinations of goals (SL[BG]); model checking is still in 2-EXPTIME for the first two fragments [32], but it is non-elementary for SL[BG] [5].
On the other hand, various extensions have also been considered, in order to see how far the logic can be extended while preserving decidable model checking. In Graded SL, (existential) strategy quantifiers are decorated with quantitative constraints on the cardinality of the set of strategies satisfying a formula; this can be used e.g. to express uniqueness of Nash equilibria. Model checking is decidable (with non-elementary complexity) for Graded SL [3]. On a different note, Prompt SL extends SL with a parameterized modality F^{≤n} ϕ, which bounds the number of steps within which ϕ has to hold. Similarly, Bounded-Outcome SL adds a bound on the number of outcomes that must satisfy a given path formula. Again, model checking is decidable for those extensions [17].
Finally, SL has also been studied with different notions of strategies. When limiting strategy quantification to memoryless strategies, model checking is PSPACE-complete (as there are exponentially many strategies), but satisfiability is undecidable even for turn-based game structures [27]. Different types of strategies, based on sequences of actions, states or atomic propositions, are also considered in [22], with a focus on bisimulation invariance. When considering partial-observation strategies, model checking is undecidable (as is already the case for ATL [15]); a decidable fragment of SL is identified in [4], with a hierarchical restriction on nested strategy quantifiers. This study of imperfect-information games has been extended with epistemic variants of SL, which allow reasoning about the knowledge of agents. Model checking is undecidable in the general case, but several papers identify specific settings where model checking is decidable [21,9,25].
Understanding SL.
It has been noticed in recent works that the nice expressiveness of SL comes with unexpected phenomena. One such phenomenon is induced by the separation of strategy quantification and strategy assignment: when selecting a strategy to be played later, are the intermediary events part of the memory of that strategy? While both options may make sense depending on the applications, only one of them makes model checking decidable [6].
A second phenomenon, which is the main focus of the present paper, concerns strategy dependences [30]: in a formula such as ∀σ_A. ∃σ_B. ϕ, the existentially-quantified strategy σ_B may depend on the whole strategy σ_A; in other terms, the action returned by strategy σ_B after some finite history ρ may depend on what strategy σ_A would play on any other history ρ′. Again, in some contexts, it may be desirable that the value of strategy σ_B after history ρ can be computed based solely on what has been observed along ρ (see Fig. 2 for an illustration). This approach was initiated in [30,33], conjecturing that large fragments of SL (subsuming ATL*) would have 2-EXPTIME model-checking algorithms with such limited dependences.
Our contributions.
We follow this line of work by performing a more thorough exploration of strategy dependences in (a fragment of) SL. We mainly follow the framework of [33], based on a kind of Skolemization of the formula: for instance, a formula of the form (∀x_i ∃y_i)_i. ϕ is satisfied if there exists a dependence map θ defining each existentially-quantified strategy y_j based on the universally-quantified strategies (x_i)_i. In order to recover the classical semantics of SL, it is only required that the strategy θ((x_i)_i)(y_j) (i.e. the strategy assigned to the existentially-quantified variable y_j by θ((x_i)_i)) only depends on the strategies x_i quantified before y_j. Based on this definition, other constraints can be imposed on dependence maps, in order to refine the dependences of existentially-quantified strategies on universally-quantified ones. Elementary dependences [33] only allow existentially-quantified strategy y_j to depend on the values of (x_i)_{i<j} along the current history. This gives rise to two different semantics in general, but on several fragments of SL (namely SL[1G], SL[CG] and SL[DG]), the classic and elementary semantics would coincide [29,32].
The coincidence actually only holds for SL[1G]. As we explain in this paper, elementary dependences as defined and used in [29,32] do not exactly capture the intuition that strategies should depend on the "behavior [of universal strategies] on the history of interest only" [32]: indeed, they only allow dependences on universally-quantified strategies that appear earlier in the formula, while we claim that the behaviour of all universally-quantified strategies should be considered. We address this discrepancy by introducing another kind of dependences, which we call timeline dependences, and which extend elementary dependences by allowing existentially-quantified strategies to additionally depend on all universally-quantified strategies along strict prefixes of the current history (as illustrated on Fig. 5).
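Written in the style of the elementary condition, the timeline requirement we have in mind reads as follows (a sketch of the formal definition given later in the paper; notations V_∀, V_∃, Pref and θ are those of the dependence-map framework):

```latex
% (T) -- timeline dependences (sketch): theta(w)(x_i) at history rho may use
% (i) all universal strategies on strict prefixes of rho, and
% (ii) earlier-quantified universal strategies at rho itself.
\forall w_1, w_2 \colon V_\forall \to \mathrm{Strat}.\;
\forall x_i \in V_\exists.\; \forall \rho.\\
\Bigl[\,\forall x_j \in V_\forall.\ \forall \rho' \in \mathrm{Pref}(\rho).\
      w_1(x_j)(\rho') = w_2(x_j)(\rho')\Bigr]
\;\wedge\;
\Bigl[\,\forall x_j \in V_\forall,\ j < i.\ w_1(x_j)(\rho) = w_2(x_j)(\rho)\Bigr]\\
\implies \theta(w_1)(x_i)(\rho) = \theta(w_2)(x_i)(\rho)
```

Compared with elementary dependences, the first bracket quantifies over all universal variables (not only the earlier-quantified ones), but only on strict prefixes of the current history.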
We study and compare those three dependences (classic, elementary and timeline), showing that they correspond to three distinct semantics. Because the semantics based on dependence maps is defined in terms of the existence of a witness map, we show that the syntactic negation of a formula may not correspond to its semantic negation: there are cases where both a formula ϕ and its syntactic negation ¬ϕ fail to hold (i.e., none of them has a witness map). This phenomenon is already present, but had not been formally identified, in [30,33]. The main contribution of the present paper is the definition of a large (and, in a sense, maximal) fragment of SL for which syntactic and semantic negations coincide under the timeline semantics. As an (important) side result, we show that model checking this fragment under the timeline semantics is 2-EXPTIME-complete.

Related works.
To the best of our knowledge, strategy dependences have only been considered in a series of recent works by Mogavero et al. [29,32,30,33], both as a way of making the semantics of SL more realistic in certain situations, and as a way of lowering the algorithmic complexity of verification of certain fragments of SL.
The question of the dependence of quantifiers in first-order logic is an old topic: in [23], branching quantifiers are introduced to define how quantified variables may depend on each other. Similarly, Dependence Logic [38] and Independence-Friendly Logic [24] also add such restrictions on dependences of quantified variables on top of first-order logic. While the settings are quite different from ours, the underlying ideas are similar, and in particular share an interpretation in terms of games of imperfect information.

Concurrent game structures
Let AP be a set of atomic propositions, V be a set of variables, and Agt be a set of agents. A concurrent game structure is a tuple G = (Act, Q, ∆, lab) where Act is a finite set of actions, Q is a finite set of states, ∆ : Q × Act^Agt → Q is the transition function, and lab : Q → 2^AP is a labelling function. An element of Act^Agt will be called a move vector. For any q ∈ Q, we let succ(q) be the set {q′ ∈ Q | ∃m ∈ Act^Agt. q′ = ∆(q, m)}. For the sake of simplicity, we assume in the sequel that succ(q) ≠ ∅ for any q ∈ Q. A game G is said to be turn-based whenever for every state q ∈ Q, there is a player own(q) ∈ Agt (named the owner of q) such that for any two move vectors m_1 and m_2 with m_1(own(q)) = m_2(own(q)), it holds ∆(q, m_1) = ∆(q, m_2). Figure 1 displays an example of a (turn-based) game.
Fix a state q ∈ Q. A play in G from q is an infinite sequence π = (q_i)_{i∈N} of states in Q such that q_0 = q and q_i ∈ succ(q_{i−1}) for all i > 0. We write Play_G(q) for the set of plays in G from q. In this and all similar notations, we might omit to mention G when it is clear from the context, and q when we consider the union over all q ∈ Q. A (strict) prefix of a play π is a finite sequence ρ = (q_i)_{0≤i≤L}, for some L ∈ N. We write Pref(π) for the set of strict prefixes of play π. Such finite prefixes are called histories, and we let Hist_G(q) = Pref(Play_G(q)). We extend the notion of strict prefixes and the notation Pref to histories in the natural way, requiring in particular that ρ ∉ Pref(ρ). A (finite) extension of a history ρ is any history ρ′ such that ρ ∈ Pref(ρ′). Let ρ = (q_i)_{i≤L} be a history. We define first(ρ) = q_0 and last(ρ) = q_L. Let ρ′ = (q′_j)_{j≤L′} be a history from last(ρ). The concatenation of ρ and ρ′ is then defined as the path ρ · ρ′ = (q″_k)_{k≤L+L′} such that q″_k = q_k when k ≤ L and q″_k = q′_{k−L} when k ≥ L (notice that we required q′_0 = q_L).
A strategy from q is a mapping δ : Hist_G(q) → Act. We write Strat_G(q) for the set of strategies in G from q. Given a strategy δ ∈ Strat(q) and a history ρ from q, the translation of δ by ρ is the strategy δ^{→ρ} from last(ρ) defined by δ^{→ρ}(ρ′) = δ(ρ · ρ′) for any ρ′ ∈ Hist(last(ρ)). A context (sometimes also called valuation) from q is a partial function χ : V ∪ Agt ⇀ Strat(q). As usual, for any partial function f, we write dom(f) for the domain of f. Let q ∈ Q and χ be a context from q. If Agt ⊆ dom(χ), then χ induces a unique play from q, called its outcome, and defined as out(q, χ) = (q_i)_{i∈N} such that q_0 = q and, for every i ∈ N, q_{i+1} = ∆(q_i, m_i), where m_i(A) = χ(A)((q_j)_{j≤i}) for every A ∈ Agt.
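The definitions above can be made concrete with a small executable sketch. The two-agent game below (agents "A" and "B", states q0, q1, q2) is our own illustrative example, not the game of Fig. 1; `outcome` computes a finite prefix of out(q, χ) for a context assigning a strategy to every agent.

```python
from typing import Callable, Dict, Tuple

# Toy concurrent game structure (illustrative names, not from the paper).
Agt = ("A", "B")

def delta(q: str, m: Dict[str, str]) -> str:
    """Transition function: from q0, matching actions lead to q1,
    mismatching ones to q2; q1 and q2 are sink states."""
    if q == "q0":
        return "q1" if m["A"] == m["B"] else "q2"
    return q

# A strategy maps a history (a tuple of states) to an action.
Strategy = Callable[[Tuple[str, ...]], str]

def outcome(q: str, chi: Dict[str, Strategy], steps: int) -> Tuple[str, ...]:
    """Prefix of out(q, chi): the unique play induced by a context chi
    with Agt included in dom(chi)."""
    play = [q]
    for _ in range(steps):
        hist = tuple(play)
        move = {agent: chi[agent](hist) for agent in Agt}
        play.append(delta(play[-1], move))
    return tuple(play)

# Both players play "a" at the initial history, so the play goes to q1.
chi = {"A": lambda h: "a", "B": lambda h: "a" if len(h) == 1 else "b"}
print(outcome("q0", chi, 3))  # -> ('q0', 'q1', 'q1', 'q1')
```

The game is turn-based in the sense of the definition above exactly when, in every state, at most one agent's component of the move vector influences `delta`; in this toy game, q0 is not turn-based since both components matter.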

Strategy Logic with boolean goals
Strategy Logic (SL for short) was introduced in [11], and further extended and studied in [34,30], as a rich logical formalism for expressing properties of games. SL manipulates strategies as first-order elements, assigns them to players, and expresses LTL properties on the outcomes of the resulting strategic interactions. This results in a very expressive temporal logic, for which satisfiability is undecidable [34,31] and model checking is TOWER-complete [30,5]. In this paper, we focus on a restricted fragment of SL, called SL[BG] (where BG stands for boolean goals [30]); in our variant of this fragment, we do not allow nesting of (closed) subformulas, a restriction we discuss below.

Syntax.
Formulas in SL[BG] are built along the following grammar, where x ranges over V, σ ranges over the set V^Agt of full assignments, and p ranges over AP:
ϕ ::= ∃x. ϕ | ∀x. ϕ | ξ   ξ ::= ¬ξ | ξ ∨ ξ | ξ ∧ ξ | ω   ω ::= assign(σ). ψ   ψ ::= ¬ψ | ψ ∨ ψ | ψ ∧ ψ | X ψ | ψ U ψ | p
A goal is a formula of the form ω in the grammar above; it expresses an LTL property ψ on the outcome of the mapping σ. Formulas in SL[BG] are thus made of an initial block of first-order quantifiers (selecting strategies for variables in V), followed by a boolean combination of such goals.

Free variables.
With any subformula ζ of some formula ϕ ∈ SL[BG], we associate its set of free agents and variables, which we write free(ζ). It contains the agents and variables that have to be associated with a strategy in order to unequivocally evaluate ζ (as will be seen from the definition of the semantics of SL[BG] below). The set free(ζ) is defined inductively; in particular, for strategy assignments: free(assign(σ). ϕ) = (free(ϕ) ∪ σ(Agt ∩ free(ϕ))) \ Agt. Subformula ζ is said to be closed whenever free(ζ) = ∅. We can now comment on our choice of considering the flat fragment of SL[BG]: the full fragment, as defined in [30], allows for nesting closed SL[BG] formulas in place of atomic propositions.
The meaning of such nesting in our setting is ambiguous, because our semantics (in Sections 3 to 5) are defined in terms of the existence of a witness, which does not easily propagate in formulas. In particular, as we explain later in the paper, the semantics of the negation of a formula (there is a witness for ¬ϕ) does not coincide with the negation of the semantics (there is no witness for ϕ); thus substituting a subformula and substituting its negation may return different results.

Semantics.
Fix a state q ∈ Q, and a context χ : V ∪ Agt ⇀ Strat(q). We inductively define the semantics of a subformula α of a formula of SL[BG] at q under context χ, requiring free(α) ⊆ dom(χ). We omit the easy cases of boolean combinations and atomic propositions. Given a mapping σ : Agt → V, the semantics of strategy assignments is defined as follows: G, q |=_χ assign(σ). ψ if, and only if, G, q |=_{χ′} ψ, with χ′ = χ[A ∈ Agt ↦ χ(σ(A))]. Notice that free(ψ) ⊆ dom(χ′) whenever free(α) ⊆ dom(χ), so that our inductive definition is sound.
We now consider path formulas ψ = X ψ_1 and ψ = ψ_1 U ψ_2. Since Agt ⊆ free(ψ) ⊆ dom(χ), the context χ induces a unique outcome out(q, χ) = (q_i)_{i∈N} from q. For n ∈ N, we write out_n(q, χ) = (q_i)_{i≤n}, and define χ_{→n} as the context obtained by shifting all the strategies in the image of χ by out_n(q, χ). Under the same conditions, we also define q_{→n} = last(out_n(q, χ)). We then set: G, q |=_χ X ψ_1 if, and only if, G, q_{→1} |=_{χ_{→1}} ψ_1; and G, q |=_χ ψ_1 U ψ_2 if, and only if, there is n ∈ N such that G, q_{→n} |=_{χ_{→n}} ψ_2 and G, q_{→m} |=_{χ_{→m}} ψ_1 for all 0 ≤ m < n. In the sequel, we use classical shorthands, such as ⊤ for p ∨ ¬p (for any p ∈ AP), F ψ for ⊤ U ψ (eventually ψ), and G ψ for ¬F¬ψ (always ψ). It remains to define the semantics of the strategy quantifiers. This is actually what this paper is all about. We provide here the original semantics, and discuss alternatives in the following sections: G, q |=_χ ∃x. ϕ if, and only if, there exists δ ∈ Strat(q) such that G, q |=_{χ[x↦δ]} ϕ (and dually for ∀x. ϕ).
Example 1. We consider the (turn-based) game G depicted on Fig. 1. We name the players after the shape of the state they control. The SL[BG] formula ϕ to the right of Fig. 1 has four quantified variables and two goals. We show that this formula evaluates to true at q_0: fix a strategy δ_y (to be played by the owner of q_1); because G is turn-based, we identify the actions of the owner of a state with the resulting target state, so that δ_y(q_0 q_1) will be either p_1 or p_2. We then define strategy δ_z (to be played by the owner of q_2) as δ_z(q_0 q_2) = δ_y(q_0 q_1). Then clearly, for any strategy assigned to the remaining player, one of the goals of formula ϕ holds true, so that ϕ itself evaluates to true.

Subclasses of SL[BG].
Because of the high complexity and subtlety of reasoning with SL and SL[BG], several restrictions of SL[BG] have been considered in the literature [29,32,33], obtained by restricting how goals may be combined in the grammar defining the syntax: for instance, SL[CG] only allows conjunctions of goals (ξ ::= ω | ξ ∧ ξ), SL[DG] only disjunctions (ξ ::= ω | ξ ∨ ξ), and SL[1G] allows a single goal. In the sequel, we write a generic SL[BG] formula ϕ as (Q_i x_i)_{1≤i≤l}. ξ(β_j. ψ_j)_{j≤n} where: Q_i ∈ {∃, ∀} for every 1 ≤ i ≤ l; ξ(g_1, ..., g_n) is a boolean combination of its arguments; and, for all 1 ≤ j ≤ n, β_j. ψ_j is a goal: β_j is a full assignment and ψ_j is an LTL formula.

Strategy dependences
We now follow the framework of [30,33] and define the semantics of SL[BG] in terms of dependence maps. This approach provides a fine way of controlling how existentially-quantified strategies depend on other strategies (in a quantifier block).
Using dependence maps, we can limit such dependences.
Dependence maps.

Consider an SL[BG] formula ϕ = (Q_i x_i)_{1≤i≤l}. ξ(β_j. ψ_j)_{j≤n}, and write V_∀ for its set of universally-quantified variables. A (ϕ-)dependence map is a function θ : (V_∀ → Strat) → (V → Strat) such that θ(w)(x_i)(ρ) = w(x_i)(ρ) for any w : V_∀ → Strat, any x_i ∈ V_∀, and any history ρ. In other words, θ(w) extends w to V. This general notion allows any existentially-quantified variable to depend on all universally-quantified ones (dependence on existentially-quantified variables is implicit: all existentially-quantified variables are assigned through a single map, hence they all depend on the others); we add further restrictions later on. Using maps, we may then define new semantics for SL[BG]: generally speaking, formula ϕ = (Q_i x_i)_{1≤i≤l}. ξ(β_j. ϕ_j)_{j≤n} holds true if there exists a ϕ-map θ such that, for any w : V_∀ → Strat, the valuation θ(w) makes ξ(β_j. ϕ_j)_{j≤n} hold true.
Classic maps are dependence maps in which the order of quantification is respected:
(C) for all w_1, w_2 : V_∀ → Strat and any existentially-quantified variable x_i, if w_1(x_j) = w_2(x_j) for all x_j ∈ V_∀ with j < i, then θ(w_1)(x_i) = θ(w_2)(x_i).
In words, if w_1 and w_2 coincide on V_∀ ∩ {x_j | j < i}, then θ(w_1) and θ(w_2) coincide on x_i. Elementary maps [30,29] have to satisfy a more restrictive condition: for those maps, the value of an existentially-quantified strategy at any history ρ may only depend on the value of earlier universally-quantified strategies along ρ. This may be written as:
(E) for all w_1, w_2 : V_∀ → Strat, any existentially-quantified variable x_i, and any history ρ, if w_1(x_j)(ρ′) = w_2(x_j)(ρ′) for all x_j ∈ V_∀ with j < i and all prefixes ρ′ of ρ (including ρ itself), then θ(w_1)(x_i)(ρ) = θ(w_2)(x_i)(ρ).
In this case, for any history ρ, if two valuations w_1 and w_2 of the universally-quantified variables coincide on the variables quantified before x_i all along ρ, then θ(w_1)(x_i) and θ(w_2)(x_i) have to coincide at ρ. The difference between both kinds of dependences is illustrated on Fig. 2: for classic maps, the existentially-quantified strategy x_2 may depend on the whole strategy x_1, while it may only depend on the value of x_1 along the current history for elementary maps. Notice that a map satisfying (E) also satisfies (C). Indeed, consider a map θ satisfying (E), and pick two strategy valuations w_1 and w_2 and an existential variable x_i such that w_1(x_j) = w_2(x_j) for all x_j ∈ V_∀ with j < i. In particular, for those x_j, we have w_1(x_j)(ρ) = w_2(x_j)(ρ) for any history ρ (hence also for any of its prefixes). By (E), it follows θ(w_1)(x_i)(ρ) = θ(w_2)(x_i)(ρ). Since this holds for any history, we have shown θ(w_1)(x_i) = θ(w_2)(x_i), so that θ satisfies (C).
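The contrast between (C) and (E) can be checked mechanically on a finite toy setting. The encoding below is our own (two histories, two actions, one universal variable x1 quantified before one existential variable x2); `classic_map` inspects the whole universal strategy and therefore satisfies (C) but not (E), while `elementary_map` only reads the universal strategy along the current history.

```python
from itertools import product

# Finite toy setting: "q0" is the only strict prefix of "q0.q1".
HISTS = ("q0", "q0.q1")
ACTS = (0, 1)

def prefixes(h):
    # prefixes of a history, including the history itself
    return [p for p in HISTS if h.startswith(p)]

# A strategy is a dict history -> action; enumerate all four of them.
STRATS = [dict(zip(HISTS, acts)) for acts in product(ACTS, repeat=len(HISTS))]

def classic_map(w):
    # x2 plays at each history what w(x1) plays at the *other* history:
    # allowed by (C) (x2 may inspect the whole strategy x1), not by (E).
    other = {HISTS[0]: HISTS[1], HISTS[1]: HISTS[0]}
    return {h: w["x1"][other[h]] for h in HISTS}

def elementary_map(w):
    # x2 copies w(x1) along the current history: satisfies (E).
    return {h: w["x1"][h] for h in HISTS}

def check_C(theta):
    # (C): equal values on earlier universals imply equal strategy for x2.
    # With a single universal x1 quantified before x2, the premise is
    # w1(x1) = w2(x1), so any deterministic theta satisfies (C).
    return all(theta({"x1": s1}) == theta({"x1": s2})
               for s1 in STRATS for s2 in STRATS if s1 == s2)

def check_E(theta):
    # (E): if w1(x1) and w2(x1) agree on all prefixes of h (h included),
    # then theta(w1)(x2) and theta(w2)(x2) must agree at h.
    for s1, s2 in product(STRATS, STRATS):
        for h in HISTS:
            if all(s1[p] == s2[p] for p in prefixes(h)):
                if theta({"x1": s1})[h] != theta({"x1": s2})[h]:
                    return False
    return True

print(check_C(classic_map), check_E(classic_map))        # -> True False
print(check_C(elementary_map), check_E(elementary_map))  # -> True True
```

The failing case for `classic_map` is exactly the situation of Fig. 2: two universal strategies agreeing on the history "q0" but differing on the unrelated history "q0.q1" yield different existential choices at "q0".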

Pick a formula ϕ = (Q_i x_i)_{1≤i≤l}. ξ(β_j. ψ_j)_{j≤n}. We define:
G, q |≡_C ϕ if, and only if, there exists a ϕ-map θ satisfying (C) such that, for every w : V_∀ → Strat(q), it holds G, q |=_{θ(w)} ξ(β_j. ψ_j)_{j≤n}.
As explained above, this actually corresponds to the usual semantics of SL[BG] as given in Section 2 [30, Theorem 4.6]. When G, q |≡_C ϕ, a map θ satisfying the conditions above is called a C-witness of ϕ for G and q. Similarly, we define the elementary semantics [30] as:
G, q |≡_E ϕ if, and only if, there exists a ϕ-map θ satisfying (E) such that, for every w : V_∀ → Strat(q), it holds G, q |=_{θ(w)} ξ(β_j. ψ_j)_{j≤n}.
Again, when such a map exists, it is called an E-witness. Notice that since Property (E) implies Property (C), we have G, q |≡_E ϕ ⇒ G, q |≡_C ϕ for any ϕ ∈ SL[BG]. This corresponds to the intuition that it is harder to satisfy an SL[BG] formula when dependences are more restricted. The contrapositive statement then raises questions about the negation of formulas.
The syntactic vs. semantic negations.
Looking at the definitions of |≡_C and |≡_E, it could a priori be the case that e.g. both G, q |≡_C ϕ and G, q |≡_C ¬ϕ: this only requires the existence of two adequate maps. However, since |≡_C and |= coincide, and since G, q |= ϕ holds if, and only if, G, q |= ¬ϕ does not hold in the classical semantics of SL, we get that G, q |≡_E ϕ implies that G, q |≡_E ¬ϕ does not hold. As we now show, the converse implication holds for SL[1G], but may fail to hold for SL[BG].
Proposition 1. There exist a game G with initial state q_0 and a formula ϕ ∈ SL[BG] such that neither G, q_0 |≡_E ϕ nor G, q_0 |≡_E ¬ϕ holds.
Proof. Consider the formula and the one-player game of Fig. 3. We start by proving that G, q_0 |≡_E ϕ does not hold. For a contradiction, assume that a witness map θ satisfying (E) exists, and pick any valuation w for the universal variable x. First, for the first goal in the conjunction to be fulfilled, the strategy assigned to y must play to q_1 from q_0. We abbreviate this as θ(w)(y)(q_0) = q_1 in the sequel. Now, consider two valuations w_1 and w_2 that coincide along the current history but differ elsewhere. In order to fulfill the second goal under both valuations w_1 and w_2, the map θ would have to return different values for the same history, which contradicts (E). We now prove that G, q_0 |≡_E ¬ϕ does not hold either. Indeed, following the previous discussion, we easily get that G, q_0 |≡_C ϕ. As explained above, this entails that G, q_0 |≡_C ¬ϕ does not hold, hence neither does G, q_0 |≡_E ¬ϕ. The proof above uses only one player and two quantifiers, but a complex combination of goals. The game and formula of Fig. 1 provide an alternative proof, with three players and four quantifiers, but a formula in SL[DG] (which also entails the result for SL[CG]).
Indeed, we already proved (see Example 1) that G, q_0 |≡_C ϕ, by making strategy z play in q_2 in the same direction as what strategy y plays in q_1. Then G, q_0 |≡_E ¬ϕ cannot hold, since this would imply G, q_0 |≡_C ¬ϕ, and both ϕ and ¬ϕ would hold, which is impossible in the classical semantics. Thus G, q_0 |≡_E ¬ϕ does not hold. Now, in the elementary semantics, we require the existence of a dependence map θ, defining in particular θ(w)(z)(q_0 · q_2), and such that θ(w)(z)(q_0 · q_2) = θ(w′)(z)(q_0 · q_2) whenever w(y)(q_0) = w′(y)(q_0). Consider two valuations w and w′ with w(y)(q_0) = w′(y)(q_0) but selecting different moves at q_0 · q_1: for one of them, under the strategies prescribed by θ, both disjuncts in ϕ are false.
It follows that G, q_0 |≡_E ϕ does not hold. We now prove that this phenomenon does not occur in SL[1G]:
Proposition 2. For any game G with initial state q_0 and any formula ϕ ∈ SL[1G], exactly one of G, q_0 |≡_E ϕ and G, q_0 |≡_E ¬ϕ holds.
Notice that this result follows from [30, Corollary 4.21], which states that |≡_C and |≡_E coincide on SL[1G]. However, since it is central to our approach, we develop a (new) full proof of this result.
Proof. We begin with intuitive explanations before giving full details. We encode the satisfaction relation G, q 0 |≡ E ϕ into a two-player turn-based parity game: the first player of the parity game will be in charge of selecting the existentiallyquantified strategies, and her opponent will select the universally-quantified ones. This will be encoded by replacing each state of G with a tree-shaped module as depicted on Fig. 4. Following the strategy assignment of the SL[1G] formula ϕ, the strategies selected by those players will define a unique play, along which the LTL objective has to be fulfilled; this verification is encoded into a (doubly-exponential) parity automaton.
We prove that G, q 0 |≡ E ϕ if, and only if, the first player wins; conversely, G, q 0 |≡ E ϕ if the second player wins. Both claims crucially rely on the existence of memoryless optimal strategies for two-player parity games. Finally, by determinacy of those games, we get the expected result.
Notice that in this construction, Player P_∃ has full observation, hence her moves may depend on all moves of Player P_∀ along the current history. As a result, in our encoding, existentially-quantified strategies may depend on the value of all universally-quantified strategies along the current history; in the example of Fig. 4, this means that the moves selected by Player P_∃ for x_1 may depend on the moves selected by Player P_∀ for x_2 earlier in the game. However, memoryless strategies are sufficient for both players to win parity games; a memoryless strategy for Player P_∃ then precisely corresponds to an elementary dependence map, which proves our result. We now give a full proof following this intuition.
Building a turn-based parity game H from G and ϕ.
For the rest of the proof, we fix a game G and an SL[1G] formula ϕ = (Q_i x_i)_{i≤l}. β. ψ. Each state of G is replaced with a copy of the tree-shaped quantification game depicted on Fig. 4. A quantification game Q_ϕ is formally defined as follows:
- it involves two players, P_∃ and P_∀;
- the set of states is S_ϕ = {m ∈ Act* | 0 ≤ |m| ≤ l}, thereby defining a tree of depth l + 1 with directions Act. A state m in S_ϕ with 0 ≤ |m| < l belongs to Player P_∃ if, and only if, Q_{|m|+1} = ∃.
The empty word ε ∈ S_ϕ is the starting node of the quantification game, and currently has no incoming transitions; states with |m| = l also currently have no outgoing transitions.
A leaf (i.e., a state m with |m| = l) in a quantification game represents a move vector of domain V = {x_i | 1 ≤ i ≤ l}: we identify each leaf m with the corresponding move vector, hence writing m(x_i) for m(i).
We let D be a deterministic parity automaton over 2^AP associated with ψ. We write d_0 for the initial state of D. Using quantification games, we can now define the turn-based parity game H:
- it involves players P_∃ and P_∀;
- for each state q of G and each state d of D, H contains a copy of the quantification game Q_ϕ, which we call the (q, d)-copy. Hence the set of states of H is the product of the state spaces of G, D and Q_ϕ;
- the transitions in H are of two types: internal transitions in each copy of the quantification game are preserved; moreover, consider a state (q, d, m) where |m| = l; this is a leaf in the quantification game. It defines a move vector m_β (assigning to each player A ∈ Agt the action m(β(A))); we then add a transition from (q, d, m) to (q′, d′, ε), where q′ = ∆(q, m_β) and d′ is the state of D reached from d when reading lab(q′). Notice that (q, d, m) then has at most one outgoing transition;
- the priorities are inherited from those in D: state (q, d, m) has the same priority as d.
Correspondence between G and H.
We begin with building a correspondence between the runs and strategies in G and those in H. In a sense, each step of a history in G is split into several steps in H; we thus refine the notion of history in G in order to establish our correspondence.
We can then build a one-to-one application Gp between histories in H and lanes in G: with a history π in H of length a · (l + 1), we associate a lane Gp(π) = ((q_j)_{j≤a}, u, b, t). The resulting function Gp is clearly injective (different histories will correspond to different lanes), but also surjective. To prove the latter statement, we build the inverse function Hp: for a lane ((q_j)_{j≤a}, u, b, t), we set Hp((q_j)_{j≤a}, u, b, t) = π where π is the history in H of length a · (l + 1) visiting the corresponding copies of the quantification game. Because of the coherence condition (1), Hp((q_j)_{j≤a}, u, i, t) is indeed a history in H. From the definitions, one can easily check that Hp(Gp(π)) = π and deduce that Hp is the inverse function of Gp; therefore:
Lemma 1. The application Gp is a bijection between lanes of G and histories in H, and Hp is its inverse function.
Extending the correspondence.
We can use Gp to describe another correspondence G between strategies for P_∃ in H and maps in G. Remember that a map in G is a function θ : (V_∀ → Strat) → (V → Strat) whose value on universally-quantified variables is fixed by its argument, so that we only have to define the map for existentially-quantified variables.
Formally, the application G takes as input a strategy δ for player P_∃ in H, and returns a map in G. It will enjoy the following properties:
- for any finite outcome π of δ in H ending at the root of a quantification game, there exists a function w such that Gp(π) = (ρ, u, 0, t_∅), where ρ is the outcome of G(δ)(w) in G under the assignment defined by β;
- conversely, for any path ρ in G that is an outcome of G(δ)(w) for some w and under the assignment defined by β, the history Hp(ρ, u, 0, t_∅), for u derived from w, is an outcome of δ in H ending in the root of a quantification game.
We fix δ, and for all w, ρ and x_i, we define G(δ)(w)(x_i)(ρ) by a double induction, first on the length of the history ρ in G, and second on the sequence of variables x_i. We prove the properties above alongside the definition.
- Initial step: we begin with the case where ρ is the single state q_0. We proceed by induction on existentially-quantified variables, merging the initialization step with the induction step as they are similar. Consider an existentially-quantified variable x_i, and assume that the values G(δ)(w)(x_j)(q_0) for j < i have been defined in the previous induction steps on variables. We can then create the corresponding lane, and define G(δ)(w)(x_i)(q_0) from the value of δ there. Pick an outcome π of δ in H of length l + 2, and write m for its (l+1)-st state: it defines a valuation for the variables in V, hence defining a move vector m_β under the assignment β in Act^Agt. By construction of H, this outcome ends in the state (q_1, d_1, ε) where q_1 = ∆(q_0, m_β) and d_1 is the successor of the initial state d_0 of D when reading lab(q_1). We now prove that q_0 · q_1 is the outcome of G(δ)(w) for some w. For this, we let w(x_j)(q_0) = m(x_j) for each universally-quantified variable x_j. In the end, under assignment β, G(δ)(w) precisely returns the move vector m_β, hence proving our result.
The proof of the converse statement follows similar arguments: consider the history mapped by Hp to a play ending in (q_1, d_1, ε) and visiting the leaf m defined as m_i = u(x_i, q_0); by construction, this is an outcome of δ in H.
- Induction step: we consider a history ρ in G, assuming we have already defined G(δ)(w)(x_i)(ρ′) for all prefixes ρ′ of ρ, and for all w and all variables x_i. We now define G(δ)(w)(x_i)(ρ), by induction on the list of variables. Again, the initialization step is merged with the induction step as they rely on the same arguments.
Consider an existentially-quantified variable x i and a map w, assuming that the required values have been defined for all prefixes ρ′ of ρ. We can then create the lane lane i,w = (π, u w , i − 1, t i,w ), and finally define G(δ)(w)(x i )(ρ). Using the same arguments as in the initial step, we prove our correspondence between the outcomes of δ in H and the outcomes of G(δ) in G.
Notice that in the construction above, G(δ)(w)(x i )(ρ) may depend on the whole lane lane i,w , so that G(δ) need not be elementary in general. However, in case δ is memoryless, we notice that G(δ)(w)(x i )(ρ) only depends on the value of δ in the last state of the lane lane i,w , hence in particular not on u w . This removes the above dependence, and makes G(δ) elementary.
Finally, notice that we can define a dual correspondence G relating strategies of Player P ∀ and elementary maps in G where existential and universal variables are swapped.
Concluding the proof.
Using G, we prove our final correspondence between H and G:
Lemma 2. Assume that P ∃ is winning in H, and let δ be a positional winning strategy. Then the elementary map G(δ) is a witness that G, q 0 |≡ E ϕ. Similarly, assume that P ∀ is winning in H, and let δ be a positional winning strategy. Then the elementary map G(δ) is a witness that G, q 0 |≡ E ¬ϕ.
Proof. We prove the first point, the second one following similar arguments. Assume that P ∃ is winning in H, and pick a memoryless winning strategy δ. Toward a contradiction, assume further that G(δ) is not a witness of G, q 0 |≡ E ϕ. Then there exists w 0 : V ∀ → (Hist G → Act) such that G, q 0 ⊭ G(δ)(w 0 ) β. ϕ. We use w 0 to build a strategy δ' for Player P ∀ in H: given a history in H ending in some copy of the quantification game, δ' plays the action prescribed by w 0 along the corresponding history in G. Write ν = (q j ) j∈N for the outcome of G(δ)(w 0 ) under strategy assignment β in G. Then, by construction of δ' , the outcome of δ and δ' in H visits the (q j , d j ) j∈N copies of the quantification game, where d j is the state reached by reading (q j' ) j'≤j in the deterministic automaton D. Now, since G, q 0 ⊭ G(δ)(w 0 ) β. ϕ, we get that ν does not satisfy ϕ, and therefore the outcome of δ and δ' in H does not satisfy the parity condition. This contradicts δ being a winning strategy of P ∃ , and proves that G(δ) must be a witness that G, q 0 |≡ E ϕ.
Proposition 2, together with the determinacy of parity games [16,35], immediately implies that at least one of ϕ and ¬ϕ must hold in G for |≡ E . This concludes our proof.
The following two results, already mentioned in [30], immediately follow: the first one uses the fact that G, q 0 |≡ E ϕ implies G, q 0 |≡ C ϕ; the second one uses the two-player game built in the proof.

Following the discussion above, we introduce a new type of dependences between strategies, which we call timeline dependences. They allow strategies to also observe (and depend on) all other universally-quantified strategies on the strict prefix of the current history. For instance, for a block of quantifiers ∀x 1 . ∃x 2 . ∀x 3 , the value of x 2 after history ρ may depend on the value of x 1 on ρ and its prefixes (as for elementary maps), but also on the value of x 3 on the (strict) prefixes of ρ. Such dependences are depicted on Fig. 5. We believe that such dependences are relevant in many situations, especially for reactive synthesis, since in this framework strategies really base their decisions on what they could observe along the current history. Formally, a map θ is a timeline map if the value θ(w)(x i )(ρ) only depends on the values of w on ρ and its prefixes for the universal variables quantified before x i , and on the values of w on the strict prefixes of ρ for the remaining universal variables (Property (T)). Using those maps, we introduce the timeline semantics of SL[BG] : we write G, q |≡ T ϕ when some timeline map witnesses the satisfaction of ϕ. Such a map, if any, is called a T-witness of ϕ for G and q. As in the previous section, it is easily seen that Property (E) implies Property (T), so that an E-witness is also a T-witness, and G, q |≡ E ϕ ⇒ G, q |≡ T ϕ for any formula ϕ ∈ SL[BG] .
Comparison of |≡ E and |≡ T .
As explained at the end of Section 3, the proof of Prop. 2 actually shows the following result: Proposition 3. For any game G with initial state q 0 , and any formula ϕ ∈ SL[1G] , it holds G, q 0 |≡ E ϕ ⇔ G, q 0 |≡ T ϕ.
We now prove that this does not extend to full SL[BG] : we consider the game structure and formula of Fig. 6. We first notice that G, q 0 |≢ E ϕ: indeed, in order to satisfy the first goal under any choice of x A , the strategy for y has to point to p 1 from both q 1 and q 2 ; but then no choice of x B will make the second goal true.
The syntactic vs. semantic negations.
While the two semantics differ, we now prove that the situation w.r.t. the syntactic vs. semantic negations is similar. First, following Prop. 3 and Prop. 2, the two negations coincide on SL[1G] under the timeline semantics. Moreover: Proposition 5. For any formula ϕ in SL[BG] , for any game G and any state q 0 , we have G, q 0 |≡ T ϕ ⇒ G, q 0 |≢ T ¬ϕ.
Remember that the same result for |≡ E was proven easily from the implication G, q 0 |≡ E ϕ ⇒ G, q 0 |≡ C ϕ, and because the two negations coincide for |≡ C . The proof for |≡ T is more involved.
We define χ(x)(ρ) inductively on histories and on the list of quantified variables. When ρ is the empty history q 0 , we consider two cases, depending on whether the variable is universally or existentially quantified. Similarly, when χ(x)(q 0 ) has been defined for all x ∈ {x 1 , ..., x i−1 }, we again consider two cases; in each case, the value of χ(x i )(q 0 ) does not depend on the values of w besides those defined above.

Notice that this indeed enforces that θ(χ |V ∀ ) and θ̄(χ |V ∃ ) coincide with χ on the initial history q 0 .
The induction step is proven similarly: consider a history ρ and a variable x i , assuming that χ has been defined for all variables on all strict prefixes of ρ, and for the variables in {x 1 , ..., x i−1 } on ρ itself. Then the value for the case when x i ∈ V ∀ does not depend on the values of w besides those defined above; the construction for the case when x i ∈ V ∃ is similar.
As in the initial step, it is easy to check that this construction enforces θ(χ |V ∀ ) = θ̄(χ |V ∃ ) = χ, as required.

Proposition 6. There exists a formula ϕ ∈ SL[BG] , a (turn-based) game G and a state q 0 such that G, q 0 |≢ T ϕ and G, q 0 |≢ T ¬ϕ.
Proof. For this proof, we reuse the game and formula of Fig. 3. Since the quantifier part is ∀x. ∃y, the timeline and elementary semantics coincide for this formula. Since G, q 0 |≢ E ϕ, also G, q 0 |≢ T ϕ.
Hence G, q 0 |≢ T ¬ϕ.

5 The fragment SL[EG]

In this section, we focus on the timeline semantics |≡ T . We exhibit a fragment SL[EG] of SL[BG] , containing SL[CG] and SL[DG] , for which the syntactic and semantic negations coincide: Theorem 1. For any game G with initial state q 0 , and any formula ϕ ∈ SL[EG] , it holds G, q 0 |≡ T ϕ ⇔ G, q 0 |≢ T ¬ϕ.
We prove this result in the remainder of this section. We first introduce semi-stable sets, which are the basis of the definition of SL[EG] ; we then prove useful properties of those sets, and finally proceed to the proof of Theorem 1.

Semi-stable sets.
For n ∈ N, we let {0, 1} n be the set of mappings from [1, n] to {0, 1}. We write 0 n (or 0 if the size n is clear) for the function that maps all integers in [1, n] to 0, and 1 n (or 1) for the function that maps [1, n] to 1. For f, g ∈ {0, 1} n , we define ¬f (the pointwise complement of f), f ⊓ g (the pointwise minimum of f and g), and f ⊔ g (the pointwise maximum of f and g). The set {0, 1} n can be seen as the lattice of subsets of [1, n], with the above three operations corresponding to complement, intersection and union, respectively. We then introduce the notion of semi-stable sets, on which the definition of SL[EG] relies: a set F n ⊆ {0, 1} n is semi-stable if for any f and g in F n , it holds that ∀s ∈ {0, 1} n . (f ⊓ s) ⊔ (g ⊓ ¬s) ∈ F n or (g ⊓ s) ⊔ (f ⊓ ¬s) ∈ F n .
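Since semi-stability is a finite condition on subsets of {0, 1} n , it can be checked by brute force. The following Python sketch implements the three lattice operations and the semi-stability test; the function names are ours, and the mixing operation is read as (f ⊓ s) ⊔ (g ⊓ ¬s):

```python
from itertools import product

def meet(f, g):
    # f ⊓ g: pointwise minimum (intersection in the subset lattice)
    return tuple(a & b for a, b in zip(f, g))

def join(f, g):
    # f ⊔ g: pointwise maximum (union)
    return tuple(a | b for a, b in zip(f, g))

def neg(f):
    # ¬f: pointwise complement
    return tuple(1 - a for a in f)

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s): take f on the coordinates selected by s, g elsewhere
    return join(meet(f, s), meet(g, neg(s)))

def is_semi_stable(F, n):
    # F ⊆ {0,1}^n is semi-stable iff for all f, g in F and every s,
    # at least one of the two mixings of f and g along s stays in F
    return all(mix(f, g, s) in F or mix(g, f, s) in F
               for f in F for g in F
               for s in product((0, 1), repeat=n))
```

For instance, the full lattice {0, 1} 2 is semi-stable, while {(0, 0), (1, 1)} is not: mixing its two elements along s = (1, 0) yields (0, 1) and (1, 0), both outside the set.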
We can now define SL[EG] as follows:

SL[EG]
ϕ ::= ∀x.ϕ | ∃x.ϕ | ξ    ξ ::= F n ((ω i ) 1≤i≤n )    ω ::= assign(σ). ψ
with ψ an LTL formula, and where F n ranges over semi-stable subsets of {0, 1} n , for all n ∈ N. The semantics of the operator F n is as expected: the vector of truth values of the goals (ω i ) 1≤i≤n must belong to F n .
Example 4. Consider the following formula, expressing the existence of a Nash equilibrium for two players with respective LTL objectives ψ 1 and ψ 2 : This formula has four goals, and it corresponds to a set F 4 ⊆ {0, 1} 4 .
Proposition 7. Any formula of SL[AG] is a formula of SL[EG] .
Proof. Remember that boolean combinations in SL[AG] follow the grammar ξ ::= ξ ∨ ω | ξ ∧ ω | ω. In terms of subsets of {0, 1} n , this corresponds to considering sets built by adding one goal at a time, either disjunctively or conjunctively: if the set F n ξ is semi-stable, then we can prove that the sets corresponding to ξ ∨ ω and ξ ∧ ω also are. We detail the proof for the second case, the first case being similar.
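The two closure steps used in this proof can be validated exhaustively for small n: starting from any semi-stable subset of {0, 1} 2 , adding a goal disjunctively or conjunctively yields a semi-stable subset of {0, 1} 3 . A Python sketch of this check (helper names are ours, with the (f ⊓ s) ⊔ (g ⊓ ¬s) reading of the mixing operation):

```python
from itertools import product

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s): f where s(i) = 1, g elsewhere
    return tuple(fi if si else gi for fi, gi, si in zip(f, g, s))

def is_semi_stable(F, n):
    return all(mix(f, g, s) in F or mix(g, f, s) in F
               for f in F for g in F
               for s in product((0, 1), repeat=n))

def or_goal(F, n):
    # set of ξ ∨ ω: the first n goals satisfy F, or the new goal holds
    return {f for f in product((0, 1), repeat=n + 1)
            if f[:n] in F or f[n] == 1}

def and_goal(F, n):
    # set of ξ ∧ ω: the first n goals satisfy F, and the new goal holds
    return {f for f in product((0, 1), repeat=n + 1)
            if f[:n] in F and f[n] == 1}

def subsets(n):
    vecs = list(product((0, 1), repeat=n))
    for bits in product((0, 1), repeat=len(vecs)):
        yield {v for v, keep in zip(vecs, bits) if keep}

# Exhaustive check for n = 2: both steps preserve semi-stability.
closure_ok = all(is_semi_stable(or_goal(F, 2), 3)
                 and is_semi_stable(and_goal(F, 2), 3)
                 for F in subsets(2) if is_semi_stable(F, 2))
```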

Properties of semi-stable sets
Before proving our main theorem, we show that semi-stable sets enjoy several nice structural properties. Our first lemma entails that SL[EG] is closed under (syntactic) negation.
Lemma 3. F n is semi-stable if, and only if, its complement is.
Proof. Assume F n is not semi-stable, and pick f and g in F n and s ∈ {0, 1} n such that neither α = (f ⊓ s) ⊔ (g ⊓ ¬s) nor γ = (g ⊓ s) ⊔ (f ⊓ ¬s) is in F n . It cannot be the case that g = f , as this would imply α = f ∈ F n ; hence α ≠ γ. We claim that α and γ are our witnesses for showing that the complement of F n is not semi-stable: both of them belong to the complement of F n , while (α ⊓ s) ⊔ (γ ⊓ ¬s) can be seen to equal f , hence it is not in the complement of F n ; similarly, (γ ⊓ s) ⊔ (α ⊓ ¬s) = g. The converse direction follows by applying the same argument to the complement of F n .

Lemma 4. Let F n be a semi-stable set. For any s ∈ {0, 1} n and any non-empty H n ⊆ F n , there exists f ∈ H n such that (f ⊓ s) ⊔ (g ⊓ ¬s) ∈ F n for every g ∈ H n .

Proof. For a contradiction, assume that there exist s ∈ {0, 1} n and H n ⊆ F n such that, for any f ∈ H n , there is an element g ∈ H n for which (f ⊓ s) ⊔ (g ⊓ ¬s) ∉ F n . Then there must exist a minimal integer 2 ≤ λ ≤ |H n | and λ elements {f i | 1 ≤ i ≤ λ} of H n such that (f i ⊓ s) ⊔ (f i+1 ⊓ ¬s) ∉ F n for all 1 ≤ i < λ, and (f λ ⊓ s) ⊔ (f 1 ⊓ ¬s) ∉ F n . By Lemma 3, the complement of F n is semi-stable. Hence, considering (f λ−1 ⊓ s) ⊔ (f λ ⊓ ¬s) and (f λ ⊓ s) ⊔ (f 1 ⊓ ¬s), one of the following two vectors is not in F n : (f λ−1 ⊓ s) ⊔ (f 1 ⊓ ¬s) and (f λ ⊓ s) ⊔ (f λ ⊓ ¬s). The second expression equals f λ , which is in F n . Hence (f λ−1 ⊓ s) ⊔ (f 1 ⊓ ¬s) is not in F n , contradicting the minimality of λ.
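Lemma 3 can also be confirmed mechanically: for small n, one can enumerate all subsets of {0, 1} n and compare each set with its complement. A Python sketch of this exhaustive check (names are ours, with the mixing operation read as (f ⊓ s) ⊔ (g ⊓ ¬s)):

```python
from itertools import product

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s): f on the coordinates selected by s, g elsewhere
    return tuple(fi if si else gi for fi, gi, si in zip(f, g, s))

def is_semi_stable(F, n):
    return all(mix(f, g, s) in F or mix(g, f, s) in F
               for f in F for g in F
               for s in product((0, 1), repeat=n))

def check_lemma3(n):
    # Lemma 3, checked over every one of the 2^(2^n) subsets of {0,1}^n:
    # F is semi-stable if, and only if, its complement is.
    vecs = list(product((0, 1), repeat=n))
    for bits in product((0, 1), repeat=len(vecs)):
        F = {v for v, keep in zip(vecs, bits) if keep}
        complement = set(vecs) - F
        if is_semi_stable(F, n) != is_semi_stable(complement, n):
            return False
    return True
```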
For two elements f and g of {0, 1} n , we write f ≤ g whenever f (i) = 1 implies g(i) = 1 for all i ∈ [1, n] (this corresponds to set inclusion when seeing {0, 1} n as the lattice of subsets of [1, n]). Given B n ⊆ {0, 1} n , we write ↑B n = {g ∈ {0, 1} n | ∃f ∈ B n . f ≤ g}. A set F n ⊆ {0, 1} n is upward-closed if F n = ↑F n . Notice that being upward-closed and being semi-stable are incomparable (for instance, the set ↑{(0, 0, 1, 1); (1, 1, 0, 0)} is upward-closed but not semi-stable). We now explain how to transform a semi-stable set into an upward-closed one by flipping some of its bits; this will simplify the presentation of the proof of our main theorem. For b ∈ {0, 1} n , we write flip b (f) for the vector that agrees with f on those coordinates where b(i) = 1 and flips the other coordinates (so that flip b (b) = 1), and flip b (F n ) = {flip b (f) | f ∈ F n }.

Lemma 5. If F n is semi-stable, then so is flip b (F n ) for any b ∈ {0, 1} n . Moreover, if F n is semi-stable, there exists g ∈ {0, 1} n such that flip g (F n ) is upward-closed.

Proof. We begin with the first statement. Assume that F n is semi-stable, and take f' = flip b (f) and g' = flip b (g) in flip b (F n ), and s ∈ {0, 1} n . Since flip b acts coordinatewise, flip b ((f ⊓ s) ⊔ (g ⊓ ¬s)) = (f' ⊓ s) ⊔ (g' ⊓ ¬s): on each coordinate i, both sides take the (possibly flipped) value of f if s(i) = 1, and of g otherwise. The same computation applies with f and g swapped. By hypothesis, at least one of (f ⊓ s) ⊔ (g ⊓ ¬s) and (g ⊓ s) ⊔ (f ⊓ ¬s) belongs to F n , so that at least one of (f' ⊓ s) ⊔ (g' ⊓ ¬s) and (g' ⊓ s) ⊔ (f' ⊓ ¬s) belongs to flip b (F n ).
The second statement of Lemma 5 trivially holds for F n = ∅; thus in the following, we assume F n to be non-empty. For 1 ≤ i ≤ n, let s i ∈ {0, 1} n be the vector such that s i (j) = 1 if, and only if, j = i. Applying Lemma 4 (with H n = F n ), we get that for any i, there exists some f i ∈ F n such that for any f ∈ F n , it holds (f i ⊓ s i ) ⊔ (f ⊓ ¬s i ) ∈ F n (6). We fix such a family (f i ) i≤n and define g ∈ {0, 1} n as g = ⊔ 1≤i≤n (f i ⊓ s i ), i.e. g(i) = f i (i) for all 1 ≤ i ≤ n. Starting from any element of F n and applying Equation (6) iteratively for each i, we get that g ∈ F n . Since g ⊓ s i = f i ⊓ s i , we also have (g ⊓ s i ) ⊔ (f ⊓ ¬s i ) ∈ F n for all f ∈ F n (7). Now, assume that flip g (F n ) is not upward-closed: then there exist elements f ∈ F n and h ∉ F n such that flip g (f)(i) = 1 ⇒ flip g (h)(i) = 1 for all i. Starting from f and iteratively applying Equation (7) for those i for which flip g (h)(i) = 1 and flip g (f)(i) = 0, we get that h ∈ F n , a contradiction. Hence flip g (F n ) must be upward-closed.
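The existence claim in the second statement can likewise be tested by brute force: for every non-empty semi-stable subset of {0, 1} 3 , some flip vector b makes the flipped set upward-closed. A Python sketch (names are ours; flip b keeps the coordinates selected by b and flips the others, so that flip b (b) = 1):

```python
from itertools import product

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s): f on the coordinates selected by s, g elsewhere
    return tuple(fi if si else gi for fi, gi, si in zip(f, g, s))

def is_semi_stable(F, n):
    return all(mix(f, g, s) in F or mix(g, f, s) in F
               for f in F for g in F
               for s in product((0, 1), repeat=n))

def flip(b, f):
    # flip_b(f): keep f where b(i) = 1, flip the other coordinates
    return tuple(fi if bi else 1 - fi for fi, bi in zip(f, b))

def is_upward_closed(F, n):
    return all(g in F
               for f in F
               for g in product((0, 1), repeat=n)
               if all(fi <= gi for fi, gi in zip(f, g)))

def flip_to_upward_closed(F, n):
    # Search for a flip vector b making flip_b(F) upward-closed;
    # Lemma 5 predicts one exists whenever F is semi-stable.
    for b in product((0, 1), repeat=n):
        if is_upward_closed({flip(b, f) for f in F}, n):
            return b
    return None

def all_semi_stable(n):
    # Enumerate all non-empty semi-stable subsets of {0,1}^n
    vecs = list(product((0, 1), repeat=n))
    for bits in product((0, 1), repeat=len(vecs)):
        F = {v for v, keep in zip(vecs, bits) if keep}
        if F and is_semi_stable(F, n):
            yield F
```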
Given the set F n , for f, g and s in {0, 1} n , we write f ≼ s g whenever, for every h ∈ {0, 1} n , (f ⊓ s) ⊔ (h ⊓ ¬s) ∈ F n implies (g ⊓ s) ⊔ (h ⊓ ¬s) ∈ F n ; we write f ≡ s g when both f ≼ s g and g ≼ s f hold. For s = (0, 0, 1), we can proceed similarly and get that (·, ·, 0) ≼ s (·, ·, 1). We now prove a technical result over such orders, which will be useful for the proof of Lemma 11. Lemma 7. Given a semi-stable set F n , s 1 , s 2 ∈ {0, 1} n such that s 1 ⊓ s 2 = 0, and f, g ∈ {0, 1} n such that f ≼ s 1 g and f ≼ s 2 g, it holds f ≼ s 1 ⊔ s 2 g.
The following lemma is straightforward: Lemma 8. Assuming F n is upward-closed, for any f , g and s in {0, 1} n , if f ≤ g (i.e. for all i, f (i) = 1 ⇒ g(i) = 1), then f ≼ s g. In particular, 0 is a minimal element for ≼ s , for any s.
Proof. Since f ≤ g, we also have (f ⊓ s) ⊔ (h ⊓ ¬s) ≤ (g ⊓ s) ⊔ (h ⊓ ¬s) for any h ∈ {0, 1} n . Since F n is upward-closed, if (f ⊓ s) ⊔ (h ⊓ ¬s) is in F n , then so is (g ⊓ s) ⊔ (h ⊓ ¬s).
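The quasi-order ≼ s and Lemma 8 can be checked exhaustively as well. The sketch below (names are ours) encodes f ≼ s g as the implication used in the proof above, and verifies the lemma over all upward-closed subsets of {0, 1} n for small n:

```python
from itertools import product

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s)
    return tuple(fi if si else gi for fi, gi, si in zip(f, g, s))

def preceq(F, n, s, f, g):
    # f ≼_s g over F: for every completion h on the ¬s-coordinates,
    # if (f ⊓ s) ⊔ (h ⊓ ¬s) is in F, then so is (g ⊓ s) ⊔ (h ⊓ ¬s)
    return all(mix(g, h, s) in F
               for h in product((0, 1), repeat=n)
               if mix(f, h, s) in F)

def is_upward_closed(F, n):
    return all(g in F for f in F
               for g in product((0, 1), repeat=n)
               if all(fi <= gi for fi, gi in zip(f, g)))

def check_lemma8(n):
    # Lemma 8, over every upward-closed F ⊆ {0,1}^n: f ≤ g implies f ≼_s g
    vecs = list(product((0, 1), repeat=n))
    for bits in product((0, 1), repeat=len(vecs)):
        F = {v for v, keep in zip(vecs, bits) if keep}
        if not is_upward_closed(F, n):
            continue
        for f in vecs:
            for g in vecs:
                if all(fi <= gi for fi, gi in zip(f, g)):
                    if not all(preceq(F, n, s, f, g) for s in vecs):
                        return False
    return True
```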

Sketch of proof of Theorem 1
The proof of Theorem 1 is long and technical. Before giving the full details, we begin with some intuition on how semi-stable sets, and the quasi-orders defined above, are used to prove the result. We first notice that the approach we used in Prop. 2 does not extend in general to formulas with several goals. Consider for instance formula (Q i x i ) i≤l . (β 1 . ϕ 1 ⇔ β 2 . ϕ 2 ): if at some point the two goals give rise to two different outcomes, hence to two different subgames, then the winning objective in one subgame depends on what is achieved in the other subgame.
SL[EG] has been designed to simplify such dependences between different subgames: when two (or more) outcomes are available at a given position, each subgame can be assigned an independent winning objective. This objective can be obtained from the quasi-orders ≼ s associated with the SL[EG] formula being checked. Consider again Example 6: associating the set F 3 with three goals ω 1 , ω 2 and ω 3 (and adequate strategy quantifiers), we get a formula in SL[EG] . Assume that the moves selected by the players give rise to the same transition for ω 1 and ω 2 , and to a different transition for ω 3 ; this gives rise to two subgames. In the subgame reached when following the transition of ω 1 and ω 2 (hence with s = (1, 1, 0)), the optimal way of playing is given by (0, 0, ·) ≼ s (0, 1, ·) ≼ s (1, 0, ·) ≡ s (1, 1, ·), independently of what may happen in the subgame reached by following the transition given by ω 3 ; for instance, it is better to fulfill only ω 1 than to fulfill only ω 2 (i.e. (0, 1, ·) ≼ s (1, 0, ·)), which can be observed on Fig. 7 by the fact that fulfilling ω 1 is enough to make the whole formula hold true. In the subgame corresponding to ω 3 , the optimal way of playing is given by (·, ·, 0) ≼ s' (·, ·, 1), with s' = (0, 0, 1): it is always better to fulfill ω 3 , whatever happens in the other subgame.
Our proof follows the schema depicted on Fig. 8. Building on the idea depicted on Fig. 4, we would like to construct a turn-based parity game encoding the SL[EG] model-checking instance at hand. Strategy quantifiers are encoded with tree-shaped quantification games as in Fig. 4, but now, the leaves of quantification games may give rise to different outcomes, depending on the goal being considered: Fig. 8 depicts the case of a leaf from which the first two goals would go in one direction (to q 1 here) while the third goal follows a different direction (to q 2 ). Notice that from the other leaves, the goals may have been grouped differently (and in particular, they may have all given rise to the same transition). Now, consider the outcome generated by the first two goals: it goes to a subgame starting in state q 1 , and only the first two goals have to be tracked. From our observations above, we can compute an order defining the best way of satisfying the remaining two goals; this does not depend on what happens along the other outcome, generated by the third goal. We can thus consider this subgame alone, and apply the same construction with the remaining goals (using parity automata to keep track of the satisfaction of the LTL formulas in the goals). Since there are finitely many goals, we eventually end up in a situation where there is a single goal, or where the goals always give rise to the same outcomes; then the computation remains in the same subgame, and the situation corresponds to the case of Fig. 4.
We implement these ideas as follows: first, in order to keep track of the truth values of the LTL formulas ψ i of each goal, we define a family of parity automata, one for each subset of goals of the formula under scrutiny. A subgame, as considered above, is characterized by a state q of the original concurrent game, a vector d of states of the parity automata, and a vector s ∈ {0, 1} n defining which goals are still active in that subgame. For each subgame, we can compute, by induction on s, the optimal set of goals that can be fulfilled from that configuration. The optimal strategies of both players in each subgame can be used to define (partial) optimal timeline dependence maps. We can then combine these partial maps together to get optimal dependence maps θ and θ̄; using similar arguments as in the proof of Prop. 5, we get a valuation χ such that θ(χ |V ∀ ) = χ = θ̄(χ |V ∃ ), from which we deduce that exactly one of ϕ and ¬ϕ holds.

Proof of Theorem 1
We can now prove our main theorem, which we first restate: Theorem 1. For any game G with initial state q 0 , and any formula ϕ ∈ SL[EG] , it holds G, q 0 |≡ T ϕ ⇔ G, q 0 |≢ T ¬ϕ.
Proof. Following Lemma 5, we assume for the rest of the proof that the set F n of the SL[EG] formula ϕ is upward-closed (even if it means negating some of the LTL objectives). We also assume it is non-empty, since the result is trivial otherwise.
The proof of Theorem 1 is in three steps:
- we build a family of parity automata expressing the objectives that may have to be fulfilled along outcomes. A configuration of a subgame is then described by a state q of the game, a vector d of states of those parity automata, and a vector s of goals that are still active in the current subgame;
- we characterize the two ways of fulfilling a set of goals: either by fulfilling all goals along the same outcome, or by partitioning them among different branches;
- we encode these two possibilities into 2-player parity games, and inductively compute optimal sets of goals (represented as vectors b q,d,s ∈ {0, 1} n ) that can be achieved from any given configuration. By determinacy of parity games, we derive timeline maps witnessing the fact that b q,d,s can be achieved, and the fact that it is optimal. If b q 0 ,d 0 ,1 ∈ F n , we get a witness map for G, q 0 |≡ T ϕ; otherwise, we get one for G, q 0 |≡ T ¬ϕ.

Automata for conjunctions of goals
We use deterministic parity word automata to keep track of the goals to be satisfied. Since we initially have no clue about which goal(s) will have to be fulfilled along an outcome, we use a (large) set of automata, all running in parallel. For s ∈ {0, 1} n and h ∈ {0, 1} n , we let D s,h be a deterministic parity automaton accepting exactly the words over 2 AP along which the following formula Φ s,h holds:
Φ s,h = ⋁ { ⋀ { ϕ j | (k ⊓ s)(j) = 1 } | k ∈ {0, 1} n with h ≼ s k }
where a conjunction over an empty set (i.e., if (k ⊓ s)(j) = 0 for all j) is true. Notice that in Φ s,h , we should also have imposed ¬ϕ j for those indices j for which (k ⊓ s)(j) = 0. However, using Lemma 8, if h ≼ s k and k ≤ k' , then also h ≼ s k' , so that any conjunction containing more ϕ j 's would also appear in Φ s,h .
Notice that when s = 0, we have h ≼ 0 k for any h and k, so that Φ 0,h is true for any h ∈ {0, 1} n . From now on, we only consider vectors s ∈ {0, 1} n such that |s| = Σ 1≤i≤n s(i) ≥ 1.
As an example, take s ∈ {0, 1} n with |s| = 1, writing j for the index where s(j) = 1. For any h ∈ {0, 1} n , if there is k with h ≼ s k and k(j) = 0 (which in particular is the case when h(j) = 0), then the automaton D s,h is universal; otherwise D s,h accepts the set of words over 2 AP along which ϕ j holds.
We write D = {D s,h | s ∈ {0, 1} n , h ∈ {0, 1} n } for the set of automata defined above. A vector of states of D is a function associating with each automaton D ∈ D one of its states. We write VS for the set of all vectors of states of D. For any vector d ∈ VS and any state q of G, we let succ(d, q) be the vector of states associating with each D ∈ D the successor of state d(D) after reading lab(q); we extend succ to finite paths (q i ) 0≤i≤m in G inductively, letting succ(d, (q i ) 0≤i≤m ) = succ(succ(d, (q i ) 0≤i≤m−1 ), q m ).
An infinite path (q i ) i∈N in G is accepted by an automaton D of D whenever the word (lab(q i )) i∈N is accepted by D. We write L(D) for the set of paths of G accepted by D. Finally, for d ∈ VS, we write L(D d s,h ) for the set of words that are accepted by D s,h starting from the state d(D s,h ) of D s,h . Proposition 8. The following holds for any s ∈ {0, 1} n : L(D s,0 ) contains all words over 2 AP ; if h 1 ≼ s h 2 , then L(D s,h 2 ) ⊆ L(D s,h 1 ); and all the languages L(D 1,h ) for h ∈ F n coincide. Proof. Φ s,0 contains the empty conjunction (for k = 0) as a disjunct. Hence it is equivalent to true. When h 1 ≼ s h 2 , formula Φ s,h 1 contains more disjuncts than Φ s,h 2 , hence the second result. Finally, f ≼ 1 k holds if, and only if, f ∈ F n implies k ∈ F n . Hence if h ∈ F n , we have h ≼ 1 k if, and only if, k ∈ F n , which entails the result.

Two ways of achieving goals
After a given history, a set of goals may be achieved either along a single outcome, in case the assignment of strategies to players gives rise to the same outcomes, or split among different outcomes. We express those two ways of satisfying goals by means of two operators parameterized by the current configuration. The first operator covers the case where the goals currently enabled by s (those goals β i . ϕ i for which s(i) = 1) are all fulfilled along the same outcome. For any d ∈ VS and any two s and h in {0, 1} n , the operator Γ stick d,s,h is defined as follows: given a context χ with V ⊆ dom(χ) and a state q of G, it holds whenever all the goals enabled by s give rise to the same outcome, and this outcome is accepted by D d s,h . In this definition, χ ∘ β j corresponds to the strategy profile to be used for goal β j . ϕ j .
We now consider the case where the active goals are partitioned among different outcomes.
An extended partition of s is a sequence τ = (s κ , q κ , d κ ) 1≤κ≤λ of elements of {0, 1} n × Q × VS, where (s κ ) 1≤κ≤λ is a partition of s, the q κ are states of G, and the d κ are vectors of states of the automata in D.
We write Part(s) for the set of all extended partitions of s. Notice that we only consider non-trivial partitions; in particular, if |s| ≤ 1, then Part(s) = ∅. For any d ∈ VS, any s ∈ {0, 1} n and any set Υ s of extended partitions of s, the operator Γ sep d,s,Υ s states that the goals currently enabled by s all follow a common history ρ ∈ Hist G (q) for a finite number of steps, and then partition themselves according to some partition in Υ s .
Notice that h does not appear explicitly in this definition, but Γ sep d,s,Υ s will depend on h through the choice of Υ s . The operators Γ stick and Γ sep are illustrated on Fig. 9.

Fulfilling optimal sets of goals
We now inductively (on |s|) define new operators Γ d,s,h combining the above two operators Γ stick and Γ sep , and selecting optimal ways of partitioning the goals among the outcomes.
We write b q,d,s for one such value (notice that it need not be unique). By maximality, for any h such that b q,d,s ≺ s h, it holds G, q |≢ T (Q i x i ) 1≤i≤l . Γ d,s,h .
Induction step. We assume that for any d ∈ VS, any h ∈ {0, 1} n and any s ∈ {0, 1} n with |s| ≤ k, we have defined an operator Γ d,s,h , and that for any q ∈ Q, we have fixed an element b q,d,s ∈ {0, 1} n for which G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,b q,d,s , and such that for any h with b q,d,s ≺ s h, it holds G, q |≢ T (Q i x i ) 1≤i≤l . Γ d,s,h .
We then define Γ d,s,h as the disjunction of Γ stick d,s,h and Γ sep d,s,Υ s,h , where Υ s,h contains those extended partitions τ of s for which c s,τ ≽ s h. As previously, we claim that G, q |= χ Γ d,s,0 for any χ such that Agt ⊆ dom(χ). Indeed, for a given χ, if all the outcomes of the goals enabled by s follow the same infinite path, then this path is accepted by D s,0 and G, q |= χ Γ stick d,s,0 ; otherwise, after some common history ρ, the outcomes are partitioned following some extended partition τ 0 , which obviously satisfies 0 ≼ s c s,τ 0 since 0 is a minimal element for ≼ s . Hence in that case G, q |= χ Γ sep d,s,Υ s,0 .
In particular, it follows that G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,0 , and we can fix a maximal element b q,d,s for which G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,b q,d,s , with G, q |≢ T (Q i x i ) 1≤i≤l . Γ d,s,h for any h such that b q,d,s ≺ s h. This concludes the inductive definition of Γ d,s,b q,d,s . We now prove that it satisfies the following lemma: Lemma 9. For any q ∈ Q, any d ∈ VS and any s ∈ {0, 1} n , it holds G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,b q,d,s (9), and G, q |≢ T (Q i x i ) 1≤i≤l . Γ d,s,h for any h with b q,d,s ≺ s h (10). Proof. The first result is a direct consequence of the construction: the values b q,d,s have been selected so that G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,b q,d,s . To prove the second part, we again turn the satisfaction of Γ d,s,h , for h with b q,d,s ≺ s h, into a parity game, as in the proof of Prop. 2. We only sketch the construction here, as it involves the same ingredients.
The parity game is obtained from G by replacing each state by a quantification game. We also introduce two sink states, q even and q odd , which are winning for Player P ∃ and for Player P ∀ respectively. When arriving at a leaf (q, d, m) of the (q, d)-copy of the quantification game, one of the following three transitions is available:
- if there is a state q' such that for all j with s(j) = 1, it holds q' = ∆(q, m β j ) (in other terms, the moves selected in the current quantification game generate the same transition for all the goals enabled by s), then there is a single transition to (q' , d' , ε), where d' = succ(d, q' );
- otherwise, if there is an extended partition τ = (s κ , q κ , d κ ) 1≤κ≤λ of s such that c s,τ ≽ s h and, for all 1 ≤ κ ≤ λ and all j such that s κ (j) = 1, we have ∆(q, m β j ) = q κ and succ(d, q κ ) = d κ , then there is a transition from (q, d, m) to q even ;
- otherwise, there is a transition from (q, d, m) to q odd .
The priorities defining the parity condition are inherited from those in D s,h .
Since G, q |≢ T (Q i x i ) 1≤i≤l . Γ d,s,h , Player P ∃ does not have a winning strategy in this game, and by determinacy Player P ∀ has one. From the winning strategy of Player P ∀ , we obtain a timeline map ϑ̄ q,d,s,h for (Q i x i ) 1≤i≤l witnessing the fact that G, q |≡ T (Q i x i ) 1≤i≤l . ¬Γ d,s,h .
Remark 2. While the definition of Γ d,s,b q,d,s (and in particular of b q,d,s ) is not effective, the parity games defined in the proof above can be used to compute each b q,d,s and Γ d,s,b q,d,s . Indeed, such parity games can be used to decide whether G, q |≡ T (Q i x i ) 1≤i≤l . Γ d,s,h for each h, selecting a maximal value for which the result holds.
Each parity game has doubly-exponential size, with exponentially many priorities; hence it can be solved in 2-EXPTIME. The number of games to solve is also doubly-exponential, so that the whole algorithm runs in 2-EXPTIME.
Applying Lemma 9, we fix a timeline map ϑ q,d,s for (Q i x i ) 1≤i≤l witnessing (9), and, for each h with b q,d,s ≺ s h, a timeline map ϑ̄ q,d,s,h for (Q i x i ) 1≤i≤l witnessing (10).
We now focus on the operator obtained at the end of the induction, when s = 1. Following Prop. 8, L(D 1,f ) does not depend on the exact value of f , as soon as it is in F n . We then let Γ F n be the disjunction of Γ stick d 0 ,1,f and Γ sep d 0 ,1,Υ F n , where f is any element of F n (remember F n is assumed to be non-empty), d 0 is the vector of initial states of the automata in D, and Υ F n = {τ ∈ Part(1) | c 1,τ ∈ F n }. We write ϑ 1 and ϑ̄ 1 for the maps ϑ q 0 ,d 0 ,1 and ϑ̄ q 0 ,d 0 ,1,h for some h ∈ F n , as given by Lemma 9. From the discussion above, ϑ̄ q 0 ,d 0 ,1,h does not depend on the choice of h in F n , and we simply write it ϑ̄ q 0 ,d 0 ,1 . Then: Lemma 10. If b q 0 ,d 0 ,1 ∈ F n , then ϑ 1 witnesses G, q 0 |≡ T (Q i x i ) 1≤i≤l . Γ F n ; otherwise, ϑ̄ q 0 ,d 0 ,1 witnesses G, q 0 |≡ T (Q i x i ) 1≤i≤l . ¬Γ F n . Proof. The first part directly follows from the previous lemma. For the second part, if b q 0 ,d 0 ,1 ∉ F n , then for any f ∈ F n we have f ≻ 1 b q 0 ,d 0 ,1 , so that ϑ̄ q 0 ,d 0 ,1 is a witness that G, q 0 |≡ T (Q i x i ) 1≤i≤l . ¬Γ F n .

Compiling optimal maps
From Lemma 9, we have timeline maps for each q, d and s. We now compile them into two maps θ and θ̄. The construction is inductive, along histories.
The dual map θ̄ is defined in the same way, using the maps ϑ̄ in place of the maps ϑ.
The following result will conclude our proof of Theorem 1.
Induction step. We assume that Proposition 12 holds for any element s ∈ {0, 1} n of size |s| < α. We now consider, for the induction step, an element s ∈ {0, 1} n such that |s| = α and (s, ρ) ∈ R w .
We are now ready to prove the first part of Lemma 11: consider a function w : V ∀ → (Hist G → Act). By Lemma 12 applied to w, s = 1, and ρ = q 0 , we get that b q 0 ,d 0 ,1 ≼ 1 f w . Now, by Lemma 13, b q 0 ,d 0 ,1 ∈ F n ; therefore the element f w , being greater than b q 0 ,d 0 ,1 for ≼ 1 , must also be in F n , which means that G, q 0 |= θ(w) F n ((β j . ϕ j ) 1≤j≤n ).
The second implication of the lemma is proven using similar arguments.
Lemma 11 allows us to conclude that at least one of ϕ and ¬ϕ must hold on G for |≡ T . Proposition 5 implies that at most one can hold. Combining both, we get that exactly one holds.
Remark 3. Notice that we do not get the counterpart of Corollary 1 here; indeed, |≡ T and |≡ C differ over SL[EG] . The proof of Prop. 4 provides a counterexample:
- as shown in the proof of Prop. 4, the game G and formula ϕ ∈ SL[CG] of Fig. 6 are such that G, q 0 |≡ T ϕ;
- under the classical semantics, because of the conjunction of goals, any strategy for y for which the rest of the formula is fulfilled must play differently in states q 1 and q 2 ; on the other hand, in order to fulfill the first conjunct for any strategy x A , the strategy y must play to p 1 from both q 1 and q 2 . Hence no such strategy exists.

Maximality of SL[EG]
Finally, we prove that SL[EG] is, in a sense, maximal for the first property of Theorem 1: Proposition 9. For any non-semi-stable set F n ⊆ {0, 1} n , there exist an SL[BG] formula ϕ built on F n , a game G and a state q 0 such that G, q 0 |≢ T ϕ and G, q 0 |≢ T ¬ϕ.
Proof. We consider again the game G depicted on Fig. 6, with two agents. Let F n be a non-semi-stable set over {0, 1} n . Then there must exist f 1 , f 2 ∈ F n and s ∈ {0, 1} n such that (f 1 ⊓ s) ⊔ (f 2 ⊓ ¬s) ∉ F n and (f 2 ⊓ s) ⊔ (f 1 ⊓ ¬s) ∉ F n . We then let ϕ = ∀y 1 . ∀y 2 . ∀x 1 . ∃x 2 . F n (β 1 . ϕ 1 , . . . , β n . ϕ n ). Formulas ϕ i have been built to satisfy the following property: Lemma 14. Let ρ be a maximal run of G from q 0 , and let k ∈ {1, 2} be such that ρ visits a state labelled with p k . Then for any 1 ≤ i ≤ n, we have ρ |= ϕ i if, and only if, f k (i) = 1.
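The witnesses f 1 , f 2 and s used in this proof can be computed by brute force from any non-semi-stable set. A Python sketch (names are ours), applied to the upward-closed but non-semi-stable set ↑{(0, 0, 1, 1); (1, 1, 0, 0)} mentioned earlier in this section:

```python
from itertools import product

def mix(f, g, s):
    # (f ⊓ s) ⊔ (g ⊓ ¬s): f on the coordinates selected by s, g elsewhere
    return tuple(fi if si else gi for fi, gi, si in zip(f, g, s))

def non_stability_witness(F, n):
    # Return (f1, f2, s) with both mixings outside F, as in the proof of
    # Proposition 9; return None when F is semi-stable.
    for f1 in F:
        for f2 in F:
            for s in product((0, 1), repeat=n):
                if mix(f1, f2, s) not in F and mix(f2, f1, s) not in F:
                    return f1, f2, s
    return None

# The set ↑{(0,0,1,1); (1,1,0,0)} is upward-closed but not semi-stable,
# so a witness must exist:
up_set = {g for g in product((0, 1), repeat=4)
          if all(a <= b for a, b in zip((0, 0, 1, 1), g))
          or all(a <= b for a, b in zip((1, 1, 0, 0), g))}
witness = non_stability_witness(up_set, 4)
```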
Let v and v' be the vectors in {0, 1} n representing the truth values of the goals (β 1 . ϕ 1 , . . . , β n . ϕ n ) under θ(w) and θ(w' ), respectively. Then v and v' are in F n . However:
- if τ 2 (q 0 · q 2 ) = p 1 , then under θ(w' ), for any 1 ≤ i ≤ n: if s(i) = 1, strategies σ 1 and τ 1 are applied, so that the game ends in p 2 ; then v' (i) = 1 if, and only if, f 2 (i) = 1; if s(i) = 0, strategies σ 2 and τ 2 are used, and the game goes to p 1 ; then v' (i) = 1 if, and only if, f 1 (i) = 1. In the end, we have v' = (f 2 ⊓ s) ⊔ (f 1 ⊓ ¬s), which is not in F n ;
- if τ 2 (q 0 · q 2 ) = p 2 , then under θ(w), for any 1 ≤ i ≤ n: if s(i) = 1, strategies σ 1 and τ 1 are applied, so that the game ends in p 1 ; then v(i) = 1 if, and only if, f 1 (i) = 1; if s(i) = 0, strategies σ 2 and τ 2 are used, and the game goes to p 2 ; then v(i) = 1 if, and only if, f 2 (i) = 1. In the end, we have v = (f 1 ⊓ s) ⊔ (f 2 ⊓ ¬s), which also is not in F n .
Both cases lead to a contradiction, so that our hypothesis that G, q 0 |≡ T ϕ can only be wrong. Lemma 16. G, q 0 |≢ T ¬ϕ.
Proof. We use similar arguments as above: we assume G, q 0 |≡ T ¬ϕ, and fix a witnessing timeline map θ for ¬ϕ.
We consider four valuations w 11 , w 12 , w 21 and w 22 for x 2 , such that w jk (x 2 )(q 0 ) = w j'k' (x 2 )(q 0 ) (the exact value is not important), and w jk (x 2 )(q 0 · q 1 ) = p j and w jk (x 2 )(q 0 · q 2 ) = p k . We let σ 1 = θ(w jk )(y 1 ), σ 2 = θ(w jk )(y 2 ) and τ 1 = θ(w jk )(x 1 ). Notice that those strategies do not depend on j and k, since θ is a timeline map for ¬ϕ. We write v jk for the vector representing the truth values of the goals β i . ϕ i under valuation θ(w jk ).
Assume that σ 2 (q 0 ) = q 1 , and that τ 1 (q 0 · σ 1 (q 0 )) = p 1 . Then under w 11 (i.e., when τ 2 (q 0 · q 1 ) = p 1 ), for any 1 ≤ i ≤ n, the outcome of strategy assignment β i from q 0 goes to p 1 . Hence v 11 = f 1 , which is in F n , contradicting the fact that θ witnesses G, q 0 |≡ T ¬ϕ. Similar arguments apply if τ 1 (q 0 · σ 1 (q 0 )) = p 2 , and when σ 2 (q 0 ) = q 2 . Thus our assumption that G, q 0 |≡ T ¬ϕ cannot be correct.

6 Conclusion

In this paper, we have studied various semantics of SL, depending on how the successive strategy quantifiers in an SL formula may depend on each other. Following [30], we defined a natural translation of the elementary semantics of SL[1G] into a two-player turn-based parity game, and introduced a new timeline semantics for SL[BG] that better corresponds to this translation. For this new semantics, we defined a fragment SL[EG] for which the timeline semantics can be model-checked in 2-EXPTIME. Figure 10 represents the relations between those semantics (with implications in grey only valid for SL[1G]), as well as the maximal fragments of SL[BG] for which the semantic and syntactic negations coincide.
While our work clarifies the setting of strategy dependences in SL, these various semantics of SL remain to be fully understood, in particular as to which situations are better suited to which semantics. Of course, studying the decidability and complexity of model checking for the different semantics and fragments of SL[BG] is a natural continuation of this work. Studying quantitative or epistemic extensions of SL[EG] under the timeline semantics is also a natural direction to follow. Finally, since our approach relies on translations to two-player parity games, our model-checking algorithm would be a good candidate for being implemented e.g. in the tool MCMAS.