Complexity of Unordered CNF Games

The classic TQBF problem is to determine who has a winning strategy in a game played on a given conjunctive normal form formula (CNF), where the two players alternate turns picking truth values for the variables in a given order, and the winner is determined by whether the CNF gets satisfied. We study variants of this game in which the variables may be played in any order, and each turn consists of picking a remaining variable and a truth value for it. For the version where the set of variables is partitioned into two halves and each player may only pick variables from his or her half, we prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for unbounded-width CNFs (Schaefer, STOC 1976). For the general unordered version (where each variable can be picked by either player), we also prove that the problem is PSPACE-complete for 5-CNFs and in P for 2-CNFs. Previously, it was known to be PSPACE-complete for 6-CNFs (Ahlroth and Orponen, MFCS 2012) and PSPACE-complete for positive 11-CNFs (Schaefer, STOC 1976).


INTRODUCTION
Conjunctive normal form formulas (CNFs) are among the most prevalent representations of Boolean functions. All sorts of computational problems concerning CNFs, such as satisfying them, minimizing them, learning them, refuting them, fooling them, and playing games on them, play central roles in complexity theory. The CNF format is so prevalent because it can represent all Boolean functions and can do so in a succinct way for many functions of interest. A CNF is a conjunction of clauses, where each clause is a disjunction of literals; a w-CNF has at most w literals per clause. The width w is often the most important parameter governing the complexity of problems concerning CNFs; this is because problems often turn out to be tractable for small width (e.g., satisfiability of 2-CNFs) and intractable for larger width (e.g., satisfiability of 3-CNFs). The following are three classical two-player games played on a CNF φ(x_1, . . . , x_n):

• In the ordered game, player 1 assigns a bit value to x_1, then player 2 assigns x_2, then player 1 assigns x_3, and so on, and the winner is determined by whether φ gets satisfied. Note that the variables must be played in the prescribed order x_1, x_2, x_3, . . .. Deciding who has a winning strategy, better known as TQBF or QSAT, is PSPACE-complete for 3-CNFs [17] and in P for 2-CNFs [3, 8]. Many PSPACE-completeness results have been shown by reducing from the ordered 3-CNF game; classic examples include Generalized Geography [13, 14] and Node Kayles [13, 14].

• In the unordered game, each player is allowed to pick which remaining variable to play next (as well as which bit value to assign it), and again the winner is determined by whether φ gets satisfied. Deciding who has a winning strategy is PSPACE-complete for 6-CNFs [1] and for 11-CNFs with only positive literals [13, 14].
The unordered game on positive CNFs is also known as the maker-breaker game, and a simplified proof of PSPACE-completeness for unbounded-width positive CNFs appears in the work of Byskov [7]. Many PSPACE-completeness results have been proven by reducing from the unordered positive CNF game [2, 4, 7, 9-11, 15, 16, 19, 20]. For the general unordered CNF game, nothing was known for width < 6; in particular, the complexity of the unordered 2-CNF game was not studied in the literature previously. An experimental evaluation of heuristics for the unordered CNF game appears in the work of Zhao and Müller [21].

• In the partitioned game, the set of variables is partitioned into two halves, and each player may only pick variables from his or her half. This is, in a sense, intermediate between ordered and unordered: the ordered game restricts the set of variables available to each player and the order in which they must be played; the unordered game restricts neither; and the partitioned game restricts only the former. Deciding who has a winning strategy was shown to be PSPACE-complete for unbounded-width CNFs in the work of Schaefer [13, 14], where it was explicitly posed as an open problem to show PSPACE-completeness with any constant bound on the width. This game has been used for PSPACE-completeness reductions [5], and a variant with a matching between the two players' variables has also been studied [6]. The partitioned 2-CNF game has not been studied in the literature.
Study of the unordered and partitioned games is motivated by their resemblance to real-world two-player games that also lack a prescribed "order" for possible moves. For example, the game of Hex has an unordered flavor since any cell can potentially be played by either player at any time. In addition, the game of Checkers has a partitioned flavor since for any configuration of pieces, the set of moves one player is allowed to make is disjoint from the set of moves the other player is allowed to make, and each player may pick any of his or her available moves. Hardness results for the unordered and partitioned CNF games may translate via reduction more easily (than the ordered game) to other games of interest.
We prove that the unordered and partitioned games are both PSPACE-complete for 5-CNFs; the former improves the width 6 bound from the work of Ahlroth and Orponen [1], and the latter resolves the 42-year-old open problem from the work of Schaefer [13, 14]. We also prove that the unordered and partitioned games are both in P for 2-CNFs. The complexity for width 3 and 4 remains open. In the following section, we give the precise definitions and theorem statements.

Statement of Results
The unordered CNF game is defined as follows. There are two players, denoted T (for "true") and F (for "false"). The input consists of a CNF φ, a set of variables X = {x_1, . . . , x_n} containing all the variables that appear in φ (and possibly more), and a specification of which player goes first. The players alternate turns, and each turn consists of picking a remaining variable from X and assigning it a value 0 or 1. Once all variables have been assigned, the game ends; T wins if φ is satisfied, and F wins if it is not. We let G (for "game") denote the problem of deciding which player has a winning strategy, given φ, X, and who goes first.
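The definition above can be made concrete with an exhaustive minimax sketch (the encoding is ours, not the paper's: a clause is a list of signed integers, so [1, -2] means x_1 ∨ ¬x_2). It runs in exponential time and is only meant to pin down the rules of G.

```python
def winner(clauses, variables, first):
    """Decide who wins the unordered CNF game G by exhaustive minimax.

    clauses: list of clauses, each a list of signed ints ([1, -2] means
    x1 OR NOT x2).  variables: iterable of positive ints (may include
    variables absent from the clauses).  first: 'T' or 'F'.
    Returns 'T' or 'F'.  Exponential time; illustration only.
    """
    def satisfied(a):
        return all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

    def play(a, remaining, turn):
        if not remaining:
            return 'T' if satisfied(a) else 'F'
        other = 'F' if turn == 'T' else 'T'
        for v in remaining:
            for val in (False, True):
                a[v] = val
                result = play(a, remaining - {v}, other)
                del a[v]
                if result == turn:      # the player to move found a winning move
                    return turn
        return other                    # every move loses for the player to move
    return play({}, frozenset(variables), first)
```

For instance, on (x_1) ∧ (x_2) with T moving first, F wins: whichever unit clause T satisfies, F falsifies the other.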
The partitioned CNF game is similar to the unordered CNF game, except X is partitioned into two halves X_T and X_F, and each player may only pick variables from his or her half. If n is even, we require |X_T| = |X_F|, and if n is odd, we require |X_T| = |X_F| + 1 if T goes first, and |X_F| = |X_T| + 1 if F goes first. We let G% denote the problem of deciding which player has a winning strategy, given φ, the partition X = X_T ∪ X_F, and who goes first.
We let G_w and G%_w denote the restrictions of G and G%, respectively, to instances where φ has width w (i.e., each clause has at most w literals). Now, we state our results as the following theorems. We prove Theorem 1 and Theorem 2 in Section 2 by showing reductions from the PSPACE-complete games G and G%, respectively. For Theorem 3 and Theorem 4, in Section 3 we prove characterizations in terms of the graph representation from the classical 2-SAT algorithm (who has a winning strategy in terms of certain graph properties), and we design linear-time algorithms to check these properties. In the proofs, it is helpful to distinguish four patterns for "who goes first" and "who goes last," so we introduce new subscripts. For a, b ∈ {T, F}, the subscript a···b means player a goes first and player b goes last, a··· means a goes first, and ···b means b goes last. These may be combined with the width w subscript. For example, G%_T···F (which was denoted L%_free(CNF) in the work of Schaefer [13, 14]) corresponds to the partitioned game where T goes first and F goes last (so n = |X| must be even), and G_5,···T corresponds to the unordered game with width 5 where T goes last (so either n is even and F goes first, or n is odd and T goes first).

5-CNF
We prove Theorem 1 in Section 2.1 and Theorem 2 in Section 2.2. We use the ≤ symbol to indicate the existence of a polynomial-time mapping (Karp) reduction from one problem to another.

G_5
In this section, we prove Theorem 1. It is trivial to argue that G_5 ∈ PSPACE. We prove PSPACE-hardness by showing a reduction G_T···F ≤ G_5,T···F in Section 2.1.2. G_T···F is already known to be PSPACE-complete [1, 7, 13, 14]. We will discuss the other three patterns G_F···F, G_T···T, G_F···T in Section 2.1.3. Before the formal proof, we develop the intuition in Section 2.1.1.

Intuition.
In NP-completeness, recall the following simple reduction from SAT with unbounded width to 3-SAT. Suppose a SAT instance is given by φ over set of variables X. If (ℓ_1 ∨ ℓ_2 ∨ ℓ_3 ∨ · · · ∨ ℓ_k) is a clause in φ with width k > 3, then the reduction introduces fresh variables z_1, z_2, . . . , z_{k−1} and generates a chain of clauses in φ′ as follows:

(ℓ_1 ∨ z_1) ∧ (¬z_1 ∨ ℓ_2 ∨ z_2) ∧ (¬z_2 ∨ ℓ_3 ∨ z_3) ∧ · · · ∧ (¬z_{k−2} ∨ ℓ_{k−1} ∨ z_{k−1}) ∧ (¬z_{k−1} ∨ ℓ_k)

Each clause of φ gets a separate set of fresh variables for its chain, and we let Z = {z_1, z_2, . . .} be the set of all fresh variables for all chains. The reduction claims that φ is satisfiable if and only if φ′ is satisfiable. We will make use of the following specific property of the reduction.

Claim 1. For every assignment x to X: φ(x) is satisfied if and only if there exists an assignment z to Z such that φ′(x, z) is satisfied.
Proof. Suppose x satisfies φ. If x satisfies (ℓ_1 ∨ ℓ_2 ∨ ℓ_3 ∨ · · · ∨ ℓ_k) in φ by ℓ_i = 1, then in the corresponding chain of clauses in φ′, the clause having ℓ_i also gets satisfied by ℓ_i = 1, and the rest of the clauses in that chain can get satisfied by assigning all z's on the left side of ℓ_i as 1 and all z's on the right side of ℓ_i as 0.
Now suppose x does not satisfy φ. Then at least one of the clauses of φ has all literals assigned 0. The corresponding chain of clauses in φ′ essentially becomes

(z_1) ∧ (¬z_1 ∨ z_2) ∧ (¬z_2 ∨ z_3) ∧ · · · ∧ (¬z_{k−2} ∨ z_{k−1}) ∧ (¬z_{k−1})

To satisfy the preceding chain, z_1 = 1 and z_{k−1} = 0. It also introduces the following chain of implications: z_1 ⇒ z_2 ⇒ z_3 ⇒ · · · ⇒ z_{k−1}. Following the chain, we get (z_1 ⇒ z_{k−1}) = (1 ⇒ 0). Therefore, we conclude that φ′(x, z) cannot be satisfied for any assignment z.

Now this reduction does not show G_T···F ≤ G_3,T···F, since the games on φ and φ′ are not equivalent. We show a simple example to make our point. Consider the following G_T···F game over variables {x_0, x_1, . . . , x_k}:

(x_0) ∧ (x_1 ∨ x_2 ∨ · · · ∨ x_k)

In the preceding G_T···F game, T has a winning strategy. On the first move, T plays x_0 = 1. Then whatever F plays, T plays one of the k − 1 many unassigned x_i from {x_1, x_2, . . . , x_k} as 1. T wins.
But if we introduce fresh variables {z_1, z_2, z_3, . . .} as in the NP-completeness reduction, then we get a game over variables {x_0, x_1, x_2, . . . , x_k} ∪ {z_1, . . . , z_{k−1}}:

(x_0) ∧ (x_1 ∨ z_1) ∧ (¬z_1 ∨ x_2 ∨ z_2) ∧ · · · ∧ (¬z_{k−2} ∨ x_{k−1} ∨ z_{k−1}) ∧ (¬z_{k−1} ∨ x_k)

In the preceding G_3,T···F game, F has a winning strategy. On the first move, T must play x_0 = 1; otherwise, F wins by x_0 = 0. Then F plays x_1 = 0, and T must reply with z_1 = 1; otherwise, F wins by z_1 = 0. Then F plays x_2 = 0, and T must reply with z_2 = 1; otherwise, F wins by z_2 = 0. The strategy goes on like this until the last clause, and F wins by x_k = 0.
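Claim 1 can be checked mechanically for small parameters. The sketch below (our own encoding, clauses as lists of signed integers; the helper names are hypothetical, not the paper's) builds the chain for a width-5 clause and brute-forces both sides of the equivalence.

```python
from itertools import product

def widen(clause, fresh):
    """Chain of clauses for (l1 v ... v lk), k > 3, with fresh variables
    z1..z_{k-1}:  (l1 v z1) ^ (~z1 v l2 v z2) ^ ... ^ (~z_{k-1} v lk)."""
    k = len(clause)
    chain = [[clause[0], fresh[0]]]
    for i in range(1, k - 1):
        chain.append([-fresh[i - 1], clause[i], fresh[i]])
    chain.append([-fresh[k - 2], clause[k - 1]])
    return chain

def satisfied(clauses, a):
    return all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

# Claim 1, checked exhaustively for the clause (x1 v x2 v x3 v x4 v x5):
orig = [1, 2, 3, 4, 5]
fresh = [6, 7, 8, 9]                    # z1..z4
chain = widen(orig, fresh)
assert max(len(c) for c in chain) <= 3  # the chain is a 3-CNF
for xs in product([False, True], repeat=5):
    x = dict(zip(orig, xs))
    exists_z = any(satisfied(chain, {**x, **dict(zip(fresh, zs))})
                   for zs in product([False, True], repeat=4))
    assert exists_z == satisfied([orig], x)
```

The loop confirms both directions of Claim 1: a satisfied literal lets some z complete the chain, and an all-0 clause makes every z fail.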
The G_3,T···F game is disadvantageous for T compared to the G_T···F game. The disadvantage arises from F having the beginning move in a fresh chain of clauses. Now the intuition is to design a game version of the NP-completeness reduction by fixing the imbalance. We design ψ′ in such a way that the games on φ and ψ′ stay equivalent. To counter the unfairness for T due to the fresh variables {z_1, z_2, z_3, . . .}, we replace z_i by a pair of variables (a_i, b_i), which gives T more opportunities to satisfy the clauses. The construction of a chain of clauses in ψ′ from a clause (ℓ_1 ∨ ℓ_2 ∨ ℓ_3 ∨ · · · ∨ ℓ_k) in φ goes as follows:

(ℓ_1 ∨ a_1 ∨ b_1) ∧ (¬a_1 ∨ ¬b_1 ∨ ℓ_2 ∨ a_2 ∨ b_2) ∧ · · · ∧ (¬a_{k−2} ∨ ¬b_{k−2} ∨ ℓ_{k−1} ∨ a_{k−1} ∨ b_{k−1}) ∧ (¬a_{k−1} ∨ ¬b_{k−1} ∨ ℓ_k)

Let us consider a G_5,T···F game on ψ′. In an optimal gameplay, no player should play a's or b's before playing x's. Intuitively, this is because if F plays any a_i or b_i, then T can reply by making a_i ≠ b_i, and both clauses involving a_i and b_i will be satisfied, which benefits T. If T plays any a_i or b_i, F can reply by making a_i = b_i, which satisfies one clause involving a_i and b_i, but the other clause gets two 0 literals. Since only one of the two clauses gets satisfied by a_i, b_i, T would like to wait for more information before deciding which one to satisfy with a_i, b_i: it depends on whether they are on the right side or left side of a satisfied ℓ_i in a chain, which in turn depends on the assignment x.
Thus, an optimal gameplay consists of two phases. In the first phase, players should play only x's. The second phase begins when all of the x's have been played and someone must start playing a's and b's. Since the number of fresh variables is even (2|Z|) and F plays last, T must be the one to start the second phase, which is essential because if F started the second phase, then T could satisfy all of the clauses regardless of what happened in the first phase.
In the second phase, after T plays any a_i or b_i, it is optimal for F to reply by making a_i = b_i. Assuming this optimal gameplay by F, we can consider a pair (a_i, b_i) as a single variable z_i that can be assigned only by T. Effectively, the second phase just consists of T choosing an assignment z to φ′ from the NP-completeness reduction. Thus, ψ′(x, a, b) is satisfied if and only if φ′(x, z) is satisfied, which by Claim 1 is possible if and only if φ(x) is satisfied, where x is the assignment from the first phase.

Formal Proof.
We show G_T···F ≤ G_5,T···F. Suppose an instance of G_T···F is given by (φ, X), where φ is a CNF with unbounded width over set of variables X. We show how to construct an instance (ψ′, Y) for G_5,T···F, where ψ′ is a 5-CNF over set of variables Y. Suppose (ℓ_1 ∨ ℓ_2 ∨ ℓ_3 ∨ · · · ∨ ℓ_k) is a clause in φ. If k ≤ 3, the same clause remains in ψ′. If k > 3, we show how to construct a chain of clauses in ψ′. We introduce two sets of fresh variables {a_1, a_2, a_3, . . . , a_{k−1}} and {b_1, b_2, b_3, . . . , b_{k−1}} and clauses as follows:

(ℓ_1 ∨ a_1 ∨ b_1) ∧ (¬a_1 ∨ ¬b_1 ∨ ℓ_2 ∨ a_2 ∨ b_2) ∧ · · · ∧ (¬a_{k−2} ∨ ¬b_{k−2} ∨ ℓ_{k−1} ∨ a_{k−1} ∨ b_{k−1}) ∧ (¬a_{k−1} ∨ ¬b_{k−1} ∨ ℓ_k)

Each clause of φ gets separate sets of fresh variables for its chain, and we let A = {a_1, a_2, a_3, . . .} and B = {b_1, b_2, b_3, . . .} be the sets of all fresh variables for all chains. Finally, we get a 5-CNF ψ′ over set of variables Y = X ∪ A ∪ B.
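As a sanity check on the construction, the sketch below (our encoding, clauses as signed-integer lists; function names are hypothetical) builds the width-5 chain and verifies that when F mirrors T, i.e., a_i = b_i = z_i, the chain behaves exactly like the width-3 chain of Claim 1.

```python
from itertools import product

def widen5(clause, avars, bvars):
    """Width-5 game chain for a clause (l1 v ... v lk), k > 3: the fresh
    variable z_i of the 3-SAT chain is replaced by the pair (a_i, b_i),
    with z_i -> (a_i v b_i) and ~z_i -> (~a_i v ~b_i)."""
    k = len(clause)
    chain = [[clause[0], avars[0], bvars[0]]]
    for i in range(1, k - 1):
        chain.append([-avars[i - 1], -bvars[i - 1], clause[i], avars[i], bvars[i]])
    chain.append([-avars[k - 2], -bvars[k - 2], clause[k - 1]])
    return chain

def sat(clauses, a):
    return all(any(a[abs(l)] == (l > 0) for l in c) for c in clauses)

# When F mirrors T's moves (a_i = b_i = z_i), the chain is exactly the
# 3-SAT chain: with every l_j false it is unsatisfiable, and with some
# l_j true a suitable z exists.
orig, A, B = [1, 2, 3, 4], [5, 6, 7], [8, 9, 10]
chain = widen5(orig, A, B)
assert max(len(c) for c in chain) == 5
for xs in product([False, True], repeat=4):
    x = dict(zip(orig, xs))
    exists = any(sat(chain, {**x, **dict(zip(A, zs)), **dict(zip(B, zs))})
                 for zs in product([False, True], repeat=3))
    assert exists == any(xs)
```

Note that without the mirroring the chain is weaker: setting a_i = 1, b_i = 0 for all i satisfies every chain clause regardless of the ℓ's, which is exactly why T benefits when F is the one to touch a pair first.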
We claim that T has a winning strategy in (φ, X) if and only if T has a winning strategy in (ψ′, Y).

Suppose T has a winning strategy in (φ, X). We describe T's winning strategy in (ψ′, Y) as Algorithm 1. To see that the strategy works, note that the winning strategy in (φ, X) ensures that φ(x) is satisfied by the assignment x to X in the first phase, so according to Claim 1, there is an assignment z to Z (the set of fresh variables introduced in the definition of φ′) such that φ′(x, z) is satisfied. T can ensure that for each i, either a_i = z_i or b_i = z_i, and thus ψ′(x, a, b) gets satisfied, since φ′(x, z) is satisfied and each clause of ψ′ is identical to a clause from φ′ but with each z_i replaced with a_i ∨ b_i and ¬z_i replaced with ¬a_i ∨ ¬b_i.

Suppose F has a winning strategy in (φ, X). We describe F's winning strategy in (ψ′, Y) as Algorithm 2. To see that the strategy works, note that the winning strategy in (φ, X) ensures that φ(x) is unsatisfied by the assignment x to X, so according to Claim 1, for all assignments z to Z, φ′(x, z) is unsatisfied. F can ensure that for each i, a_i = b_i; let us call this common value z_i. Thus, ψ′(x, a, b) is unsatisfied, since φ′(x, z) is unsatisfied and ψ′(x, a, b) = φ′(x, z).

ALGORITHM 1: T's winning strategy in (ψ′, Y) when T has a winning strategy in (φ, X)
1 while there is a remaining X-variable do
2   if (first move) or (F played an X-variable in the previous move) then
3     play according to the same winning strategy as in (φ, X)

G_F···F, G_T···T, G_F···T.

Corollary 1. G_5,F···F is PSPACE-complete.

Proof. The reduction is G_T···F ≤ G_F···F ≤ G_5,F···F. First we show G_T···F ≤ G_F···F. Suppose φ = c_1 ∧ c_2 ∧ c_3 ∧ · · · ∧ c_m over set of variables X is an instance of G_T···F. We introduce a fresh variable z and add it to every clause. In the resulting game, F's first move must be z = 0; otherwise, T wins by z = 1 as the first move. Then the rest of the winning strategy for T or F is the same as in (φ, X). This completes the reduction G_T···F ≤ G_F···F. Now the reduction G_F···F ≤ G_5,F···F is identical to Section 2.1.2 except it is F's move first.
To handle the patterns where T moves last, we do not rely on our proof of Theorem 1 but rather derive corollaries of the result from Schaefer [13,14].
Corollary 2. G_11,T···T is PSPACE-complete.

Proof. The reduction is G+_11,T···F ≤ G+_11,T···T ≤ G_11,T···T, where G+_11 is the restriction of G_11 to instances with only positive literals (and G+_11,T···F is known to be PSPACE-complete [13, 14]). Given a positive 11-CNF φ+ over set of variables X, we simply introduce a dummy variable z that does not appear in φ+ and use Y = X ∪ {z}. We claim that T has a winning strategy in G+_11,T···F on (φ+, X) if and only if T has a winning strategy in G+_11,T···T on (φ+, Y). Suppose T has a winning strategy on (φ+, X). We show T's winning strategy on (φ+, Y). T can start with the same strategy as in (φ+, X) and continue as long as F does not play z. If F never plays z, then T plays z at the end and wins as in (φ+, X). If F plays z, then T can respond by playing any remaining variable x_i = 1; then T resumes his strategy from (φ+, X) until that strategy tells him to play x_i. At this time, T again picks any other remaining variable and assigns it 1. Then T again resumes his strategy from (φ+, X). The game goes on like this in phases. At the end, T has played all of the variables he would have played in the (φ+, X) game and possibly one more. Since φ+ is positive, it must still be satisfied when one of the variables is 1 instead of 0.
Corollary 3. G_12,F···T is PSPACE-complete.

Proof. The reduction is G_11,T···T ≤ G_12,F···T (similar to G_T···F ≤ G_F···F in Corollary 1). Introduce a fresh variable z into every clause of φ. Then F must play z = 0 as the first move; otherwise, T wins by z = 1 as the first move. As in Corollary 2, this in fact shows PSPACE-completeness of G+_12,F···T.

G%_5
In this section, we prove Theorem 2. It is trivial to argue that G%_5 ∈ PSPACE. We prove PSPACE-hardness by showing a reduction G%_T···F ≤ G%_5,T···F in Section 2.2.2. G%_T···F is already known to be PSPACE-complete [13, 14]. We will discuss the other three patterns G%_F···F, G%_T···T, G%_F···T in Section 2.2.3. Before the formal proof, we develop the intuition in Section 2.2.1.

Intuition.
This intuition is a continuation of Section 2.1.1. The reduction is the same as the G_T···F ≤ G_5,T···F reduction except that the A-variables are given to T and the B-variables to F. In the general unordered game, if either player plays a_i or b_i, then the other player can immediately play the other one of a_i, b_i in a certain advantageous way. In the partitioned version, they can do the same thing if a_i belongs to T and b_i belongs to F.

Formal Proof.
We show G%_T···F ≤ G%_5,T···F. Suppose an instance of G%_T···F is given by (φ, X_T, X_F), where φ is a CNF with unbounded width over sets of variables X_T and X_F. We show how to construct an instance (ψ′, Y_T, Y_F) for G%_5,T···F, where ψ′ is a 5-CNF over sets of variables Y_T and Y_F. Suppose (ℓ_1 ∨ ℓ_2 ∨ ℓ_3 ∨ · · · ∨ ℓ_k) is a clause in φ. If k ≤ 3, the same clause remains in ψ′. If k > 3, we show how to construct a chain of clauses in ψ′. We introduce two sets of fresh variables {a_1, a_2, a_3, . . . , a_{k−1}} for T and {b_1, b_2, b_3, . . . , b_{k−1}} for F and clauses as follows:

(ℓ_1 ∨ a_1 ∨ b_1) ∧ (¬a_1 ∨ ¬b_1 ∨ ℓ_2 ∨ a_2 ∨ b_2) ∧ · · · ∧ (¬a_{k−2} ∨ ¬b_{k−2} ∨ ℓ_{k−1} ∨ a_{k−1} ∨ b_{k−1}) ∧ (¬a_{k−1} ∨ ¬b_{k−1} ∨ ℓ_k)

Each clause of φ gets separate sets of fresh variables for its chain, and we let A = {a_1, a_2, a_3, . . .} for T and B = {b_1, b_2, b_3, . . .} for F be the sets of all fresh variables for all chains. Finally, we get a 5-CNF ψ′ over sets of variables Y_T = X_T ∪ A and Y_F = X_F ∪ B.
We claim that T has a winning strategy in (φ, X_T, X_F) if and only if T has a winning strategy in (ψ′, Y_T, Y_F).
Suppose T has a winning strategy in (φ, X_T, X_F). We describe T's winning strategy in (ψ′, Y_T, Y_F) as Algorithm 3. To see that the strategy works, note that the winning strategy in (φ, X_T, X_F) ensures that φ(x) is satisfied by the assignment x to X_T ∪ X_F in the first phase, so according to Claim 1, there is an assignment z to Z (the set of fresh variables introduced in the definition of φ′) such that φ′(x, z) is satisfied. T can ensure that for each i, either a_i = z_i or b_i = z_i (since a_i = z_i due to line 8, or a_i ≠ b_i due to line 4 or line 7), and thus ψ′(x, a, b) gets satisfied, since φ′(x, z) is satisfied and each clause of ψ′ is identical to a clause from φ′ but with each z_i replaced with a_i ∨ b_i and ¬z_i replaced with ¬a_i ∨ ¬b_i.

Suppose F has a winning strategy in (φ, X_T, X_F). We describe F's winning strategy in (ψ′, Y_T, Y_F) as Algorithm 4. To see that the strategy works, note that the winning strategy in (φ, X_T, X_F) ensures that φ(x) is unsatisfied by the assignment x to X_T ∪ X_F, so according to Claim 1, for all assignments z to Z, φ′(x, z) is unsatisfied. F can ensure that for each i, a_i = b_i; let us call this common value z_i. Thus, ψ′(x, a, b) is unsatisfied, since φ′(x, z) is unsatisfied and ψ′(x, a, b) = φ′(x, z).

G%_F···F, G%_T···T, G%_F···T.

Corollary 4. G%_5,F···F is PSPACE-complete.

Proof. The reduction is G%_T···F ≤ G%_F···F ≤ G%_5,F···F. First we show G%_T···F ≤ G%_F···F. Suppose (φ, X_T, X_F) is an instance of G%_T···F. We introduce a dummy variable z that does not appear in φ and add it to F's half. The reduction works for the following reason. When F has a winning strategy in (φ, X_T, X_F), F can play z as the first move and then continue the winning strategy as in (φ, X_T, X_F). Conversely, when T has a winning strategy in (φ, X_T, X_F), T can use the same strategy from (φ, X_T, X_F) if F plays z as the starting move. If F plays some x_i instead of playing z at the beginning, then T can ignore F's first move and start playing with the same strategy from (φ, X_T, X_F). The game can continue as usual until F plays z; then T can pretend that F just played x_i and continue the usual strategy from there. At the end, T and F have both played the same assignment as they would have in (φ, X_T, X_F), so T still wins.
This completes the reduction G%_T···F ≤ G%_F···F. Now the reduction G%_F···F ≤ G%_5,F···F is identical to Section 2.2.2, except it is F's move first.
where Y_T = X and Y_F is a new set of fresh variables such that |Y_F| = |X|. F's moves do not matter. If φ is satisfiable, then T can play a satisfying assignment; otherwise, T cannot satisfy φ.

2-CNF
To analyze the complexity of the games G_2 and G%_2, we construct a directed graph g(φ, X) by the classical technique for 2-SAT:

• For each variable x_i ∈ X, form two nodes x_i and ¬x_i. Let ℓ_i refer to either x_i or ¬x_i. (In Section 2, ℓ_i represented an arbitrary literal; in Section 3, ℓ_i always represents either x_i or ¬x_i.)
• For each clause (ℓ_i ∨ ℓ_j), add two directed edges ¬ℓ_i → ℓ_j and ℓ_i ← ¬ℓ_j. In case of a single-variable clause (ℓ_i), consider the clause as (ℓ_i ∨ ℓ_i) and add the one directed edge ¬ℓ_i → ℓ_i.
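The construction above, in code (a sketch with our own hypothetical encoding: a literal node is a signed integer, so the node ¬x_2 is -2, and a clause is a list of signed integers):

```python
def implication_graph(clauses, variables):
    """Build the 2-SAT implication graph.  A node is a signed int: x_i
    is i and its complement is -i.  A clause (li v lj) contributes the
    edges -li -> lj and -lj -> li; a unit clause (li) is read as
    (li v li) and contributes the single edge -li -> li."""
    nodes = {s * x for x in variables for s in (1, -1)}
    edges = set()
    for c in clauses:
        if len(c) == 1:
            edges.add((-c[0], c[0]))
        else:
            li, lj = c
            edges.add((-li, lj))
            edges.add((-lj, li))
    return nodes, edges
```

Note the mirror symmetry built into the construction: whenever (u, v) is an edge, so is (-v, -u), which is the graph-level source of the mirror-path property.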
In our arguments, we write ℓ_i ⇝ ℓ_j to mean there exists a path from node ℓ_i to node ℓ_j. In the graph, every path ℓ_i ⇝ ℓ_j has a mirror path ¬ℓ_j ⇝ ¬ℓ_i. If there exist two paths ℓ_i ⇝ ℓ_j and ℓ_i ⇜ ℓ_j, we express this as ℓ_i ⇋ ℓ_j. We are interested in strongly connected components, which we call strong components for short. We say an edge is incident to a node if the node is an endpoint of the edge (head or tail). We say two nodes are neighbors if there exists an edge between them (in either direction).
The 2-CNF game plays out on this graph as follows. If a variable x_i is assigned a bit value in φ, then in the graph both nodes x_i and ¬x_i are assigned. Conversely, if a player assigns a bit value to a node ℓ_i, then the complement node ¬ℓ_i simultaneously gets assigned the opposite value, and the underlying variable is assigned accordingly: if ℓ_i refers to x_i, then x_i gets the same value as ℓ_i, and similarly if ℓ_i refers to ¬x_i. Thus, we can describe strategies as assigning bit values to nodes in the graph.
In a satisfying assignment for φ, there must not exist any false implication edge (1 → 0) in the graph. In fact, the graph must not have any path 1 ⇝ 0, since such a path would contain at least one 1 → 0 edge. Player F's goal is to create a false implication, and player T will try to make all implications true.
We prove Theorem 3 in Section 3.1 and Theorem 4 in Section 3.2. In terms of the graph representation, linear time means O (n + m), where n = number of nodes and m = number of edges.

G_2
G_2 is the unordered analogue of the 2-TQBF game. We prove Theorem 3 by separately considering the cases G_2,F···F in Section 3.1.1, G_2,F···T in Section 3.1.2, and G_2,T··· in Section 3.1.3. Our algorithm for G_2 is to run either Algorithm 5 or Algorithm 6 or Algorithm 7, depending on the pattern of who goes first and who goes last.

3.1.1 G_2,F···F ∈ Linear Time.

Lemma 1. F has a winning strategy in G_2,F···F if and only if at least one of the following statements holds:
(1) There exists a node ℓ_i such that ¬ℓ_i ⇝ ℓ_i.
(2) There exist a node ℓ_i and two other nodes ℓ_j, ℓ_k such that ℓ_j ⇝ ℓ_i ⇜ ℓ_k.
(3) There exist two nodes ℓ_i and ℓ_j of two distinct variables such that ℓ_i ⇋ ℓ_j.
Proof. Suppose at least one of the statements holds. If statement (1) holds, F can win by ℓ_i = 0 as the very first move. If statement (2) holds but statement (1) does not, there can be two cases:

• In the first case, ℓ_i, ℓ_j, ℓ_k represent three distinct variables. At the beginning, F can play ℓ_i = 0; then whatever T plays, F still has at least one of ℓ_j or ℓ_k to play. F can assign ℓ_j or ℓ_k as 1 and win.

• In the second case, ℓ_i, ℓ_j, ℓ_k do not represent three distinct variables. The only possibility is that ℓ_k is ¬ℓ_j, that is, ℓ_j ⇝ ℓ_i ⇜ ¬ℓ_j (because otherwise ℓ_i would represent the same variable as either ℓ_j or ℓ_k, in which case we would have ¬ℓ_i ⇝ ℓ_i, which is covered by statement (1)). F can play ℓ_i = 0; then whatever the value of ℓ_j, F wins.

ALGORITHM 5: Linear-time algorithm for G_2,F···F
Input: φ, X
Output: which player has a winning strategy

Conversely, suppose none of the statements hold. Then we claim the graph has no two edges that share an endpoint; otherwise, two edges sharing an endpoint would cause statement (2) or statement (3) to be satisfied. We show this by considering all possible ways of two edges sharing an endpoint:

• ℓ_i ↔ ℓ_j: Satisfies statement (3).
Thus, the graph can only have some isolated nodes and isolated edges. Since statement (1) does not hold, there are no edges between complementary nodes. An example of such a graph looks like Figure 1. Conversely, in any such graph (like Figure 1), none of statements (1), (2), and (3) hold. Now, we describe a winning strategy for T on such a graph. If F plays ℓ_i or ℓ_j of any fresh (both endpoints unassigned) edge ℓ_i → ℓ_j, T plays in the same edge by the same bit value for the other node (i.e., making ℓ_i = ℓ_j). Otherwise, T picks any remaining node ℓ_i. If ℓ_i is isolated, T assigns it any arbitrary bit value. If ℓ_i has an incoming edge, T plays ℓ_i = 1. If ℓ_i has an outgoing edge, T plays ℓ_i = 0.
The strategy works, since every edge ℓ_i → ℓ_j will be satisfied by either ℓ_i = ℓ_j or ℓ_i = 0 or ℓ_j = 1.
The characterization of such a graph in the proof of Lemma 1 can be verified in linear time, and that yields a linear-time algorithm for G 2,F···F . Details of the idea have been described as Algorithm 5.
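The characterization can also be read off directly from the clauses. The following is a hypothetical clause-level rendering of the same test, not the paper's Algorithm 5 verbatim: after deduplication and removal of tautologies, T wins exactly when every clause has two distinct variables and no variable is shared between clauses, which is precisely the isolated-nodes-and-isolated-edges condition.

```python
def t_wins_G2_FF(clauses):
    """Clause-level sketch of the Lemma 1 characterization for G_2,F...F
    (hypothetical helper; clauses are lists of signed ints).  T wins iff
    the implication graph is a union of isolated nodes and isolated
    edges with no edge between complementary nodes."""
    seen = set()
    for c in {tuple(sorted(set(c))) for c in clauses}:   # dedupe clauses
        vs = {abs(l) for l in c}
        if len(c) == 2 and len(vs) == 1:
            continue          # tautology (x v ~x): always satisfied
        if len(vs) < 2:
            return False      # unit clause: statement (1) holds, F wins
        if seen & vs:
            return False      # shared variable: statement (2) or (3), F wins
        seen |= vs
    return True
```

For example, (x_1 ∨ x_2) ∧ (x_3 ∨ x_4) is a win for T, while adding any clause that reuses x_2 hands the game to F.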
3.1.2 G_2,F···T ∈ Linear Time.

ALGORITHM 6: Linear-time algorithm for G_2,F···T
Input: φ, X
Output: which player has a winning strategy
or (x_i has at least two neighbors) then output F
4 output T

Lemma 2. F has a winning strategy in G_2,F···T if and only if statement (1) or statement (2) from Lemma 1 holds.

Proof. Suppose one of the statements holds. In Lemma 1, we have already seen that statement (1) and statement (2) allow player F to win at the beginning.
Conversely, suppose none of the statements hold. The graph can have strong components of size 2. Other than that, there are no two edges sharing an endpoint, because statement (2) does not hold. Thus, the graph can only have some isolated nodes, isolated edges, and isolated strong components of size 2. Since statement (1) does not hold, there are no edges between complementary nodes. An example of such a graph looks like Figure 2. Conversely, in any such graph (like Figure 2), neither statement (1) nor statement (2) holds. Now, we describe a winning strategy for T on such a graph. If F plays ℓ_i or ℓ_j of any fresh (both endpoints unassigned) edge ℓ_i → ℓ_j or strong component ℓ_i ↔ ℓ_j, T plays in the same edge or strong component by the same bit value for the other node (i.e., making ℓ_i = ℓ_j). Otherwise, T picks any remaining isolated node and gives it any arbitrary bit value. Since |X| is even, T can always play such a node.
The strategy works, since all of the edges ℓ_i → ℓ_j will be satisfied by ℓ_i = ℓ_j.
The characterization of such a graph in the proof of Lemma 2 can be verified in linear time, and that yields a linear-time algorithm for G 2,F···T . Details of the idea have been described as Algorithm 6.
3.1.3 G_2,T··· ∈ Linear Time. To win G_2,T···, at the beginning T must locate a node ℓ_i such that after playing it, the game is reduced to a G_2,F··· game in which T still has a winning strategy. Thus, T's success depends on finding such a node ℓ_i, and F's success depends on there not existing such a node ℓ_i.

Lemma 3. If T has a winning strategy in G_2,T···, then T has one whose first move is ℓ_i = 1 for some node ℓ_i with no outgoing edge.

Proof. Suppose T has a winning strategy in G_2,T···. Let T's first move in the winning strategy be ℓ_i = 1 (equivalently, ¬ℓ_i = 0). Then ℓ_i must not have any outgoing edge; otherwise, either that edge goes to ¬ℓ_i, or F could play the other endpoint node of that edge as 0 and win.
Conversely, suppose there exists such an ℓ_i. At the beginning, T can play ℓ_i = 1, and all incoming edges to ℓ_i and outgoing edges from ¬ℓ_i get satisfied. Then T can continue the game according to the winning strategy in G_2,F··· for the rest of the graph and win. For example, in Figure 3, T's winning strategy is to play ℓ_i = 1 at the beginning and then continue the winning strategy for G_2,F···.
We define L as the set of all nodes that have no outgoing edges. If |L| = 0, then according to Lemma 3, T has no winning strategy in G_2,T···. If |L| > 0, then the trivial algorithm for G_2,T··· is to check, for each node ℓ_i ∈ L, whether or not after playing ℓ_i = 1 the rest of the graph becomes a winning graph for T in G_2,F···, for instance by running Algorithm 5 or Algorithm 6 O(|L|) times, which is a quadratic-time algorithm. We argue that we can do better than that.
We filter the possibilities in L and show that there are only three cases to consider:

• There exists a node ℓ_i ∈ L such that statement (1) from Lemma 1 and Lemma 2 holds. We consider this case in Claim 2.
• There exists a node ℓ_i ∈ L such that statement (2) from Lemma 1 and Lemma 2 holds. We consider this case in Claim 3.
• There exists no node ℓ_i ∈ L such that statement (1) or statement (2) from Lemma 1 and Lemma 2 holds. We consider this case in Claim 4.
Then in Claim 5 and Claim 6, we analyze the efficiency of this approach.
Claim 2. If there exists ℓ_i ∈ L such that ¬ℓ_i ⇝ ℓ_i and T has a winning strategy in G_2,T···, then T's first move must be ℓ_i = 1.
Proof. Suppose T's first move is not ℓ_i = 1. If T's first move assigns 1 to a node with an outgoing edge, then T loses as in Lemma 3. Otherwise, T's first move must not involve any variable on the path ¬ℓ_i ⇝ ℓ_i (since if it assigns 1 to a node on the path other than ℓ_i, then that node has an outgoing edge, and if it assigns 0 to a node on the path other than ¬ℓ_i, then that node's complement has an outgoing edge). In this case, in the rest of the game T loses by statement (1) from Lemma 1 and Lemma 2.

Claim 3. If there exists ℓ_i ∈ L such that ℓ_j ⇝ ℓ_i ⇜ ℓ_k for two other nodes ℓ_j, ℓ_k and T has a winning strategy in G_2,T···, then T's first move must be ℓ_i = 1 or ℓ_j = 0 or ℓ_k = 0.
Proof. Suppose T's first move is not ℓ_i = 1 or ℓ_j = 0 or ℓ_k = 0. If T's first move assigns 1 to a node with an outgoing edge, then T loses as in Lemma 3. Otherwise, T's first move must not involve any variable on the paths ℓ_j ⇝ ℓ_i ⇜ ℓ_k (since if it assigns 1 to a node on the paths other than ℓ_i, then that node has an outgoing edge, and if it assigns 0 to a node on the paths other than ℓ_j or ℓ_k, then that node's complement has an outgoing edge). In this case, in the rest of the game, T loses by statement (2) from Lemma 1 and Lemma 2.

Claim 4. If there exists no ℓ_i ∈ L such that ¬ℓ_i ⇝ ℓ_i or ℓ_j ⇝ ℓ_i ⇜ ℓ_k for two other nodes ℓ_j, ℓ_k, and T has a winning strategy in G_2,T···, then for all ℓ_i ∈ L, T has a winning strategy in G_2,T··· beginning with ℓ_i = 1.
Proof. For all nodes ℓ_i ∈ L, statement (1) and statement (2) from Lemma 1 and Lemma 2 do not hold. Thus, every node ℓ_i ∈ L is either an isolated single node or has only one isolated incoming edge, from another variable's node outside L. (The argument is similar to the situation when statement (1) and statement (2) do not hold in Lemma 1 and Lemma 2.) If T plays any ℓ_i ∈ L as ℓ_i = 1, then it does not affect whether or not statements (1), (2), and (3) from Lemma 1 and Lemma 2 hold on the rest of the graph. Thus, if T indeed has a winning strategy, then it does not matter which ℓ_i ∈ L is assigned 1 as the first move.
The overall idea is as follows. If we can find an i for which statement (1) or statement (2) from Lemma 1 and Lemma 2 holds, then Claim 2 and Claim 3 narrow down T's first move to O(1) possibilities. If we cannot find such an i, then Claim 4 allows T to play an arbitrary i ∈ L as the first move, since all such first moves are equivalent. We define L* as the set of these O(1) possibilities in L. Then we run Algorithm 5 or Algorithm 6 |L*| = O(1) times.
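Case analyses like the ones above can be sanity-checked against a brute-force solver for the unordered game on small instances. The following sketch is our own illustration, not one of the article's algorithms, and it takes exponential time, so it is only usable on tiny formulas. Clauses are encoded as tuples of nonzero integers, with -v denoting the negation of variable v.

```python
# Exponential-time reference solver for the unordered CNF game (our own
# illustration, not one of the article's algorithms).

def t_wins(clauses, variables, assignment, turn_T):
    """True iff T (who wants the CNF satisfied) wins with optimal play.
    On each turn the player to move picks any unassigned variable and a
    bit value for it; turn_T says whether T moves next."""
    free = [v for v in variables if v not in assignment]
    if not free:
        # Game over: T wins iff every clause has a true literal.
        return all(any(assignment[abs(l)] == (l > 0) for l in c)
                   for c in clauses)
    for v in free:
        for b in (False, True):
            assignment[v] = b
            result = t_wins(clauses, variables, assignment, not turn_T)
            del assignment[v]
            if turn_T and result:
                return True          # T found a winning move
            if not turn_T and not result:
                return False         # F found a winning move
    # No winning move exists for the player to move.
    return not turn_T
```

For example, on the 2-CNF (¬x ∨ y) ∧ (¬y ∨ x) with T moving first, F wins: whichever variable and value T picks, F answers on the other variable with the opposite value, falsifying one of the two implications.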
In the following two claims, we show how to efficiently verify whether there exists such an i for which statement (1) or statement (2) from Lemma 1 and Lemma 2 holds.

Claim 5. There exists a constant-time algorithm for the following task: given i, find two other nodes j, k such that j ⇝ i ⇜ k, or determine that they do not exist.

Proof. It suffices to check three cases: • i has indegree > 1: Then we can find j → i ← k. • i has indegree = 1: There exists j with j → i; then look for k with k → j. • i has indegree = 0: Such j, k do not exist.

Claim 6. There exists a constant-time algorithm for the following task: given i for which there are no j, k as in Claim 5, decide whether there exists a path ī ⇝ i.

Proof. Since j ⇝ i ⇜ k does not hold, i has indegree ≤ 1 and any incoming neighbor of i has indegree 0. It suffices to check two cases: • i has indegree = 1: Then check whether ī → i. • i has indegree = 0: Such a path does not exist.

Now, combining the whole idea from Claim 2 to Claim 6, we can develop an algorithm for G2,T···. Details of the idea have been described as Algorithm 7.
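The constant-time checks in Claims 5 and 6 can be sketched as follows, assuming the graph is stored with predecessor lists. The function and variable names are our own, and literals are encoded as nonzero integers with -i denoting the complement of i.

```python
# Hypothetical sketch of the constant-time checks in Claims 5 and 6.
# preds[v] lists the in-neighbors of literal v; these names are
# illustrative, not from the article.

def neg(v):
    """Complement of a literal encoded as a nonzero integer."""
    return -v

def find_j_k(preds, i):
    """Claim 5: return two other nodes (j, k), each with a path to i,
    or None if they do not exist.  Constant time."""
    p = preds[i]
    if len(p) > 1:                 # indegree > 1: j -> i <- k directly
        return p[0], p[1]
    if len(p) == 1:                # indegree = 1: j -> i; look for k -> j
        j = p[0]
        ks = [u for u in preds[j] if u != i]   # k must be another node
        if ks:
            return j, ks[0]
    return None                    # indegree 0, or no suitable k

def has_path_from_complement(preds, i):
    """Claim 6: given i with no (j, k) as above, decide whether there is
    a path from neg(i) to i.  Here indegree(i) <= 1 and any in-neighbor
    has indegree 0, so such a path exists iff the edge neg(i) -> i does."""
    return len(preds[i]) == 1 and preds[i][0] == neg(i)
```

The point of the indegree case analysis is precisely that these lookups touch only a bounded number of adjacency entries, so each check is O(1).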

G∥2
In this section, we prove Theorem 4 by separately considering the cases G∥2,···F in Section 3.2.1 and G∥2,···T in Section 3.2.2. Our algorithm for G∥2 is to run either Algorithm 8 or Algorithm 9, depending on the pattern of who goes first and who goes last. We let V_T and V_F be the sets of nodes created from X_T and X_F, respectively. In addition, let V = V_T ∪ V_F be the set of all nodes.

Suppose none of the statements of Lemma 4 holds. Then V_T can be partitioned into sets V_{T,0}, V_{T,1}, and V_{T,free} such that every edge is either leaving V_{T,0}, entering V_{T,1}, or between two nodes of V_{T,free}. In general, V_F may have many isolated nodes. A general case of the graph looks like Figure 4. Now we describe a winning strategy for T on such a graph. Whatever F plays, T picks any remaining node to play. If the node is in V_{T,0}, T assigns it 0. If the node is in V_{T,1}, T assigns it 1. If the node is in V_{T,free}, T assigns it according to a satisfying assignment, which exists since statement (1) does not hold.
The strategy works since each edge i → j has either i ∈ V_{T,0}, in which case the edge gets satisfied by i = 0; or j ∈ V_{T,1}, in which case it gets satisfied by j = 1; or i, j ∈ V_{T,free}, in which case it gets satisfied by the satisfying assignment. Now, we develop a linear-time algorithm to check statements (1), (2), (3) in Lemma 4. We start by creating a topologically sorted DAG of strong components for the whole graph; the DAG construction can be done in linear time [18]. We can check statements (1) and (3) by directly inspecting the strong components. To check statement (2), we do dynamic programming over the topological order of the strong components to see whether any strong component containing a node in V_F is reachable from any other such strong component. The idea has been described as Algorithm 8.
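The statement (2) check can be sketched in code as follows. This is our own minimal illustration, not the paper's Algorithm 8: it builds the standard implication graph of a 2-CNF (clause (a ∨ b) yields edges ¬a → b and ¬b → a), computes strong components with Kosaraju's algorithm so that component ids respect the topological order, and then propagates a "reachable from an F-component" flag in that order.

```python
# A minimal self-contained sketch (ours, not the paper's Algorithm 8).
from collections import defaultdict

def implication_graph(clauses):
    """Clause (a or b) yields edges -a -> b and -b -> a; literals are
    nonzero integers, with -v denoting the negation of variable v."""
    g = defaultdict(list)
    nodes = set()
    for a, b in clauses:
        g[-a].append(b)
        g[-b].append(a)
        nodes.update([a, -a, b, -b])
    return g, nodes

def scc_ids(g, nodes):
    """Kosaraju's algorithm.  Returns (comp, count) where comp maps each
    node to a component id and ids increase along the topological order
    of the condensation (every edge goes from a lower to a higher id)."""
    order, seen = [], set()
    for s in nodes:                     # pass 1: record finish order
        if s in seen:
            continue
        seen.add(s)
        stack = [(s, iter(g[s]))]
        while stack:
            v, it = stack[-1]
            advanced = False
            for w in it:
                if w not in seen:
                    seen.add(w)
                    stack.append((w, iter(g[w])))
                    advanced = True
                    break
            if not advanced:
                order.append(v)
                stack.pop()
    rg = defaultdict(list)              # reversed graph
    for v in list(g):
        for w in g[v]:
            rg[w].append(v)
    comp, count = {}, 0
    for s in reversed(order):           # pass 2: decreasing finish time
        if s in comp:
            continue
        comp[s] = count
        stack = [s]
        while stack:
            v = stack.pop()
            for w in rg[v]:
                if w not in comp:
                    comp[w] = count
                    stack.append(w)
        count += 1
    return comp, count

def statement2_holds(clauses, F_vars):
    """Is a strong component containing a V_F node reachable from another
    such component?  (The dynamic program over the topological order.)"""
    g, nodes = implication_graph(clauses)
    comp, count = scc_ids(g, nodes)
    has_F = [False] * count
    for v in nodes:
        if abs(v) in F_vars:
            has_F[comp[v]] = True
    preds = defaultdict(set)            # condensation edges, by target
    for v in list(g):
        for w in g[v]:
            if comp[v] != comp[w]:
                preds[comp[w]].add(comp[v])
    reach = [False] * count             # reachable from an F-component
    for c in range(count):              # predecessors have smaller ids
        reach[c] = any(reach[p] or has_F[p] for p in preds[c])
        if reach[c] and has_F[c]:
            return True
    return False
```

For instance, with clauses (¬x1 ∨ x2) and (¬x2 ∨ x3) and F owning x1 and x3, the component of x1 reaches the component of x3, so the check fires; if F owns only x2, it does not.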
Lemma 5. F has a winning strategy in G∥2,···T if and only if at least one of the following statements holds in the graph g(φ, X): (1) There exists a node i ∈ V such that i ⇝ ī ⇝ i. (2) There exist two nodes i, j ∈ V_F such that i ⇝ j. (3) There exist three nodes i ∈ V_F and j, k ∈ V_T such that j ⇝ i ⇜ k.
Proof. Suppose at least one of the statements holds. In Lemma 4, we have already seen that statement (1) and statement (2) allow player F to win.
If statement (3) holds, F can wait by playing variables other than x_i with arbitrary values until T plays x_j or x_k. Then F can respond with x_i so that the path j ⇝ i or the path k ⇝ i gets falsified, and win. Conversely, suppose none of the statements holds. The graph structure remains the same as we had for G∥2,···F, except it is allowed to have shared strong components of size 2 that form a matching between some nodes of V_T and V_F. Intuitively, F can force T to assign the V_{T,sc} nodes (the V_T nodes of these shared components) as any bit values. The idea has been described as Algorithm 9:

6:  foreach strong component s and node i ∈ s: if (ī ∈ s) or (i ∈ V_F and |s| > 2) then output F
7:  let S_F = set of strong components containing at least one node from V_F
8:  let S_T = set of strong components containing only nodes from V_T
9:  mark all s ∈ S_F as "reachable from S_F"
10: topologically order s_1, s_2, s_3, . . . ∈ S so edges of g* go from lower to higher indices
11: foreach i = 1, 2, 3, . . . , |S| do
12:   if ∃ j < i such that s_j → s_i and s_j is marked then
13:     if s_i ∈ S_T then mark s_i as "reachable from S_F"
14:     else output F
15: output T
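A hypothetical Python transcription of the numbered pseudocode fragment above (lines 6 through 15) looks as follows. We assume the strong components are already given in topological order together with the condensation edges; the function and parameter names are our own.

```python
# Hypothetical transcription (ours) of lines 6-15 of the pseudocode above.
def algorithm9_fragment(components, edges, V_F):
    """components: strong components in topological order, each a list of
    literals (nonzero ints, with -i the complement of i); edges:
    condensation edges as pairs of component indices (low -> high);
    V_F: the set of F's literals."""
    n = len(components)
    # line 6: a component containing both i and its complement, or an
    # F-literal inside a component of size > 2, makes F the winner.
    for s in components:
        for i in s:
            if -i in s or (i in V_F and len(s) > 2):
                return "F"
    in_SF = [any(i in V_F for i in s) for s in components]  # lines 7-8
    marked = list(in_SF)                                    # line 9
    preds = [[] for _ in range(n)]
    for a, b in edges:                   # line 10: order is given as input
        preds[b].append(a)
    for idx in range(n):                                    # lines 11-14
        if any(marked[j] for j in preds[idx]):
            if not in_SF[idx]:
                marked[idx] = True   # an S_T component reachable from S_F
            else:
                return "F"           # an S_F component reachable from S_F
    return "T"                                              # line 15
```

The marking loop is the same dynamic program over the topological order as in the G∥2,···F case, with the extra size-2 exemption from line 6 accounting for the shared matched components.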

CONCLUSION
In this article, we have determined the unordered and partitioned game complexities for 2-CNFs and 5-CNFs, thereby providing new algorithmic techniques for solving games and new starting points for proving hardness of other games. Interestingly, any completeness result for 3-CNFs or 4-CNFs, for either the unordered or the partitioned version, remains open. In this direction, we boldly conjecture that the unordered game on 3-CNFs is tractable. Thus far, we have already proven this conjecture is indeed true for 3-CNFs under a certain restriction, namely that each width-3 clause has a variable that occurs in no other clauses [12]. We have also proven that the unordered 4-CNF game is at least NL-hard. Future work could also explore hardness of approximation for the unordered and partitioned CNF games.