Tight lower bounds for the complexity of multicoloring

In the multicoloring problem, also known as ($a$:$b$) or $b$-fold coloring, we are given a graph $G$ and a set of $a$ colors, and the task is to assign a subset of $b$ colors to each vertex of $G$ so that adjacent vertices receive disjoint color subsets. This natural generalization of the classic coloring problem (the $b=1$ case) is equivalent to finding a homomorphism to the Kneser graph $KG_{a,b}$. It is tightly connected with the fractional chromatic number, and has multiple applications within computer science. We study the complexity of determining whether a graph has an ($a$:$b$)-coloring. As shown by Cygan et al. [SODA 2016], given an arbitrary $n$-vertex graph $G$ and $h$-vertex graph $H$ one cannot determine in time $2^{o(\log h)\cdot n}$ whether $G$ admits a homomorphism to $H$, unless the Exponential Time Hypothesis (ETH) fails. Despite the fact that when $H$ is the Kneser graph $KG_{a,b}$ we have $h=\binom{a}{b}$, Nederlof [2008] showed a $(b+1)^n\cdot n^{O(1)}$-time algorithm for ($a$:$b$)-coloring. Our main result is that this is essentially optimal: there is no algorithm with running time $2^{o(\log b)\cdot n}$ unless the ETH fails. The crucial ingredient in our hardness reduction is the usage of detecting matrices of Lindstr\"om [Canad. Math. Bull., 1965], which is a combinatorial tool that, to the best of our knowledge, has not yet been used for proving complexity lower bounds. As a side result, we also prove that the running time of the algorithms of Abasi et al. [MFCS 2014] and of Gabizon et al. [ESA 2015] for the $r$-monomial detection problem are optimal under ETH.


Introduction
The complexity of determining the chromatic number of a graph is undoubtedly among the most intensively studied computational problems. Countless variants, extensions, and generalizations of graph colorings have been introduced and investigated. Here, we focus on multicolorings, also known as (a:b)-colorings. In this setting, we are given a graph G, a palette of a colors, and a number b ≤ a. An (a:b)-coloring of G is any assignment of b distinct colors to each vertex so that adjacent vertices receive disjoint subsets of colors. The (a:b)-coloring problem asks whether G admits an (a:b)-coloring. Note that for b = 1 we obtain the classic graph coloring problem. The smallest a for which an (a:b)-coloring exists is called the b-fold chromatic number, denoted by χ_b(G).
The motivation behind (a:b)-colorings can be perhaps best explained by showing the connection with the fractional chromatic number. The fractional chromatic number of a graph G, denoted χ_f(G), is the optimum value of the natural LP relaxation of the problem of computing the chromatic number of G, expressed as finding a cover of the vertex set using the minimum possible number of independent sets. It can be easily seen that by relaxing the standard coloring problem by allowing b times more colors while requiring that every vertex receives b colors and adjacent vertices receive disjoint subsets, with increasing b we approximate the fractional chromatic number better and better. Consequently, lim_{b→∞} χ_b(G)/b = χ_f(G). Another interesting connection concerns Kneser graphs. Recall that for positive integers a, b with b < a/2, the Kneser graph KG_{a,b} has all b-element subsets of {1, 2, . . . , a} as vertices, and two subsets are considered adjacent if and only if they are disjoint. For instance, KG_{5,2} is the well-known Petersen graph (see Fig. 1, right). Thus, an (a:b)-coloring of a graph G can be interpreted as a homomorphism from G to the Kneser graph KG_{a,b} (see Fig. 1). Kneser graphs are well studied in the context of graph colorings mostly due to the celebrated result of Lovász [27], who determined their chromatic number, initiating the field of topological combinatorics.
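This homomorphism view is easy to verify computationally on small graphs. Below is a minimal Python sketch (ours, for illustration only, not from the paper) that brute-forces an (a:b)-coloring, i.e., a homomorphism to KG_{a,b}; the 5-cycle is (5:2)-colorable, matching its fractional chromatic number 5/2, but not (4:2)-colorable.

```python
from itertools import combinations, product

def has_ab_coloring(edges, nverts, a, b):
    """Brute force: assign a b-subset of [a] to every vertex so that adjacent
    vertices get disjoint subsets (= a homomorphism to the Kneser graph KG_{a,b})."""
    subsets = list(combinations(range(a), b))
    return any(
        all(set(col[u]).isdisjoint(col[v]) for u, v in edges)
        for col in product(subsets, repeat=nverts))

# The 5-cycle: 3-chromatic, but fractionally 5/2-chromatic.
c5 = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
print(has_ab_coloring(c5, 5, 5, 2))  # True: C_5 maps to the Petersen graph KG_{5,2}
print(has_ab_coloring(c5, 5, 4, 2))  # False: that would mean chi_f(C_5) <= 2
```

This is feasible only for a handful of vertices, since the search space has size binom(a, b)^n.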
Multicolorings and (a:b)-colorings have been studied both from combinatorial [6,11,25] and algorithmic [4,17,18,23,24,28,29,32] points of view. Given that the running times of the known algorithms contain b in the base of the exponent, it is natural to ask whether this appearance of b is necessary, or whether there is an algorithm running in time O⋆(c^n) for some universal constant c independent of b.
Our contribution. We prove that the algorithms for (a:b)-coloring mentioned above are essentially optimal under the Exponential Time Hypothesis. Precisely, we prove the following result.

Theorem 1. If there is an algorithm for (a:b)-coloring that runs in time 2^{o(log b)·n}, then ETH fails. This holds even if the algorithm is only required to work on instances where a = Θ(b² log b) and b = Θ(b(n)) for an arbitrarily chosen polynomial-time computable function b(n) such that b(n) ∈ ω(1) and b(n) ∈ o(log n / log log n).
The statement of Theorem 1 excludes even the existence of an algorithm tailored to some particular magnitudes of b, expressed as a function of n. Admittedly, the range of functions b(n) for which we obtain a lower bound is quite restricted. This is a result of our proof strategy. Namely, we first prove a lower bound for the list variant of the problem, where every vertex is given a list of colors that can be assigned to it (see Section 2 for formal definitions). The list version is reduced to the standard version by introducing a large Kneser graph KG a+b,b ; we need a and b to be really small so that the size of this Kneser graph does not dwarf the size of the rest of the construction. The good news is that for the list version, we obtain a lower bound for a much wider range of functions b(n).

Theorem 2.
If there is an algorithm for List (a:b)-coloring that runs in time 2 o(log b)·n , then ETH fails. This holds even if the algorithm is only required to work on instances where a = Θ(b 2 log b) and b = Θ(b(n)) for an arbitrarily chosen polynomial-time computable function b(n) such that b(n) ∈ ω(1) and b(n) = O(n/ log n).
The crucial ingredient in the proof of Theorem 2 is the usage of d-detecting matrices introduced by Lindström [26]. We choose to work with their combinatorial formulation, hence we shall talk about d-detecting families. Suppose we are given some universe U and there is an unknown function f : U → {0, 1, . . . , d − 1}, for some fixed positive integer d. One may think of U as consisting of coins of unknown weights that are integers between 0 and d − 1. We would like to learn f (the weight of every coin) by asking a small number of queries of the following form: for a subset X ⊆ U, what is ∑_{e∈X} f(e) (the total weight of the coins in X)? A set of queries sufficient for determining all the values of an arbitrary f is called a d-detecting family. Of course f can be learned by asking |U| questions about single coins, but it turns out that significantly fewer questions are needed: there is a d-detecting family of size O(|U|/log |U|), for every fixed d [26]. The logarithmic factor in the denominator will be crucial for deriving our lower bound.
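To make the definition concrete, the following Python sketch (ours, illustrative only) checks the d-detecting property by brute force and recovers the unknown weights from the query answers alone. The 4-query family on 5 coins below is a hand-made example for d = 2, showing that fewer than |U| queries can suffice; it is not Lindström's construction.

```python
from itertools import product

def is_detecting(n, family, d):
    """family is d-detecting iff the map f -> (its query answers) is injective
    over all weight functions f : {0,...,n-1} -> {0,...,d-1}."""
    sigs = {tuple(sum(f[e] for e in S) for S in family)
            for f in product(range(d), repeat=n)}
    return len(sigs) == d ** n

def recover(n, family, d, answers):
    """Learn the hidden weight function from the query answers alone."""
    for f in product(range(d), repeat=n):
        if all(sum(f[e] for e in S) == a for S, a in zip(family, answers)):
            return f

# A hand-made 2-detecting family on 5 coins using only 4 queries:
# three "paired" weighings plus the total weight.
fam = [{3, 4}, {2, 4}, {1, 4}, {0, 1, 2, 3, 4}]
print(is_detecting(5, fam, 2))           # 4 < 5 queries suffice here
print(recover(5, fam, 2, [1, 1, 0, 3]))  # the weights are determined uniquely
```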
Let us now sketch how d-detecting families are used in the proof of Theorem 2. Given an instance ϕ of 3SAT with n variables and O(n) clauses, and a number b ≤ n/ log n, we will construct an instance G of List (a:b)-coloring for some a. This instance will have a positive answer if and only if ϕ is satisfiable, and the constructed graph G will have O(n/ log b) vertices. It can be easily seen that this will yield the promised lower bound.
Partition the clause set C of ϕ into groups C_1, C_2, . . . , C_p, each of size roughly b; thus p = O(n/b). Similarly, partition the variable set V of ϕ into groups V_1, . . . , V_q, each of size roughly log₂ b; thus q = O(n/log b). In the output instance we create one vertex per variable group (hence we have O(n/log b) such vertices) and one block of vertices per clause group, whose size will be determined in a moment. Our construction ensures that the set of colors assigned to a vertex created for a variable group misses one color from some subset of b colors. The choice of the missing color corresponds to one of 2^{log₂ b} = b possible Boolean assignments to the variables of the group. Take any vertex u from a block of vertices created for some clause group C_j. We make it adjacent to the vertices constructed for precisely those variable groups V_i for which some variable in V_i occurs in some clause of C_j. This way, u can only take a subset of the above missing colors corresponding to the chosen assignment on the variables relevant to C_j. By carefully selecting the list of u, and some additional technical gadgeteering, we can express a constraint of the following form: the total number of satisfied literals in some subset of clauses of C_j is exactly some number. Thus, we could verify that every clause of C_j is satisfied by creating a block of |C_j| vertices, each checking one clause. However, the whole graph output by the reduction would then have O(n) vertices, and we would not obtain any non-trivial lower bound. Instead, we create one vertex per question in a d-detecting family on the universe U = C_j, which has size O(b/log b). Then, the total number of vertices in the constructed graph will be O(n/log b), as intended.
Finally, we observe that from our main result one can infer a lower bound for the complexity of the (r, k)-Monomial Testing problem. Recall that in this problem we are given an arithmetic circuit that evaluates a homogeneous polynomial P(x_1, x_2, . . . , x_n) over some field F; here, a polynomial is homogeneous if all its monomials have the same total degree k. The task is to verify whether P has some monomial in which every variable has individual degree not larger than r, for a given parameter r. Abasi et al. [1] gave a randomized algorithm solving this problem in time O⋆(2^{O(k·(log r)/r)}), where k is the degree of the polynomial, assuming that F = GF(p) for a prime p ≤ 2r² + 2r. This algorithm was later derandomized by Gabizon et al. [13] within the same running time, but under the assumption that the circuit is non-cancelling: it has only input, addition, and multiplication gates. Abasi et al. [1] and Gabizon et al. [13] gave a number of applications of low-degree monomial detection to concrete problems. For instance, r-Simple k-Path, the problem of finding a walk of length k that visits every vertex at most r times, can be solved in time O⋆(2^{O(k·(log r)/r)}). However, for r-Simple k-Path, as well as other problems that can be tackled using this technique, the best known lower bounds under ETH exclude only algorithms with running time O⋆(2^{o(k/r)}). Whether the log r factor in the exponent is necessary was left open by Abasi et al. and Gabizon et al. We observe that the List (a:b)-coloring problem can be reduced to (r, k)-Monomial Testing over the field GF(2) in such a way that an O⋆(2^{k·o((log r)/r)})-time algorithm for the latter would imply a 2^{o(log b)·n}-time algorithm for the former, which would contradict ETH. Thus, we show that the known algorithms for (r, k)-Monomial Testing most probably cannot be sped up in general; nevertheless, the question of lower bounds for specific applications remains open.
However, going through List (a:b)-coloring to establish a lower bound for (r, k)-Monomial Testing is actually quite a detour, because the latter problem has a much larger expressive power. Therefore, we also give a more straightforward reduction that starts from a convenient form of Subset Sum; this reduction also proves the lower bound for a wider range of r, expressed as a function of k.
Outline. In Section 2 we set up the notation as well as recall definitions and well-known facts. We also discuss d-detecting families, the main combinatorial tool used in our reduction. In Section 3 we prove the lower bound for the list version of the problem, i.e., Theorem 2. In Section 4 we give a reduction from the list version to the standard version, thereby proving Theorem 1. Section 5 is devoted to deriving lower bounds for low-degree monomial testing.

Preliminaries
Notation. We use standard graph notation, see e.g. [9,10]. All graphs we consider in this paper are simple and undirected. For an integer k, we denote [k] = {0, . . . , k − 1}. By ⊎ we denote the disjoint union, i.e., by A ⊎ B we mean A ∪ B with the indication that A and B are disjoint. If I and J are instances of decision problems P and R, respectively, then we say that I and J are equivalent if either both I and J are YES-instances of respective problems, or both are NO-instances.
Exponential-Time Hypothesis. The Exponential Time Hypothesis (ETH) of Impagliazzo et al. [21] states that there exists a constant c > 0 such that there is no algorithm solving 3-SAT in time O⋆(2^{cn}). In recent years, ETH has become the central conjecture used for proving tight bounds on the complexity of various problems. One of the most important results connected to ETH is the Sparsification Lemma [22], which essentially gives a reduction from an arbitrary instance of k-SAT to an instance where the number of clauses is linear in the number of variables. The following well-known corollary can be derived by combining ETH with the Sparsification Lemma.
Theorem 3 (see e.g. Theorem 14.4 in [9]). Unless ETH fails, there is no algorithm for 3-SAT that runs in time 2^{o(n+m)}, where n, m denote the numbers of variables and clauses, respectively.
We need the following regularization result of Tovey [33]: every 3-SAT formula can be transformed in polynomial time into an equivalent (3,4)-SAT formula, i.e., one in which every clause contains exactly three distinct variables and every variable occurs in at most four clauses, with the numbers of variables and clauses growing only linearly.

As an intermediary step of our reduction, we will use the following generalization of list colorings where the number of demanded colors varies with every vertex. For integers a, b, a graph G with a function L : V(G) → 2^{[a]} and a demand function β : V(G) → {1, . . . , b}, an L-(a:β)-coloring of G is an assignment of exactly β(v) colors from L(v) to each vertex v ∈ V(G), such that adjacent vertices get disjoint color sets. Nonuniform List (a:b)-coloring is then the problem in which, given (G, L, β), we ask if an L-(a:β)-coloring of G exists.
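To make the nonuniform definition concrete, here is a tiny brute-force solver for Nonuniform List (a:b)-coloring (an illustrative Python sketch of ours, usable only on toy instances):

```python
from itertools import combinations, product

def nonuniform_list_coloring(n, edges, lists, beta):
    """Search for an L-(a:beta)-coloring: exactly beta[v] colors from lists[v]
    for each vertex v, with adjacent vertices receiving disjoint color sets."""
    choices = [list(combinations(sorted(lists[v]), beta[v])) for v in range(n)]
    for col in product(*choices):
        if all(set(col[u]).isdisjoint(col[v]) for u, v in edges):
            return col
    return None

# A triangle whose vertices demand 1, 2, and 3 colors respectively.
sol = nonuniform_list_coloring(
    3, [(0, 1), (1, 2), (0, 2)],
    [{0, 1}, {1, 2, 3}, {2, 3, 4, 5}], [1, 2, 3])
print(sol)  # a valid coloring such as ((0,), (1, 2), (3, 4, 5)), or None
```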
d-detecting families. In our reductions the following notion plays a crucial role. For a finite universe U and an integer d ≥ 2, a family F of subsets of U is d-detecting if for any two distinct functions f, g : U → [d] there is a set S ∈ F with ∑_{x∈S} f(x) ≠ ∑_{x∈S} g(x); equivalently, the answers to the queries in F determine f uniquely.

A deterministic construction of sublinear d-detecting families was given by Lindström [26], together with a proof that even the constant factor 2 in the family size cannot be improved: every universe U admits a d-detecting family F of size 2|U|/log_d|U| · (1 + o(1)). Furthermore, F can be constructed in time polynomial in |U|.
Other constructions, generalizations, and discussion of similar results can be found in Grebinski and Kucherov [15], and in Bshouty [3]. Note that the expression ∑_{x∈S} f(x) is just the product of f, viewed as a vector in [d]^{|U|}, with the characteristic vector of S. Hence, instead of subset families, Lindström speaks of detecting vectors, while later works see them as detecting matrices, that is, (0,1)-matrices with these vectors as rows (which define an injection on [d]^{|U|} despite having few rows). Similar definitions appear in the study of query complexity, e.g., in the popular Mastermind game [5].
While the known polynomial-time deterministic constructions of detecting families involve some number theory or Fourier analysis, their existence can be argued by an elementary probabilistic argument. Intuitively, a random subset S ⊆ U distinguishes two distinct functions f, g : U → [d] (i.e., ∑_{x∈S} f(x) ≠ ∑_{x∈S} g(x)) with probability at least 1/2. Indeed, fix an element x where f and g disagree and condition on the choice of S \ {x}: the sums cannot agree both when x is taken into S and when it is not, as the two outcomes differ by f(x) and g(x) respectively. Writing n = |U|, there are at most d^n · d^n function pairs to be distinguished. In any subset of pairs, at least half are distinguished by a random set in expectation, thus at least one such set exists. Repeatedly finding such a set for the still undistinguished pairs, we get log₂(d^n · d^n) = O(n log d) sets that distinguish all functions. More strongly though, when two functions differ on more values, the probability of distinguishing them increases significantly. Hence we need fewer random sets to distinguish all pairs of distant functions. On the other hand, there are few function pairs that are close, so we need few random sets to distinguish them all as well. This allows to show that in fact O(n log d / log n) random sets are enough to form a d-detecting family with positive probability [15].
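The halving argument above can be checked by a small greedy computation: since a random subset distinguishes each undistinguished pair with probability at least 1/2, some subset always halves the number of remaining pairs. A Python sketch of ours (a derandomized toy version of the argument, feasible only for tiny n and d):

```python
from itertools import product

def distinguishes(S, f, g):
    return sum(f[e] for e in S) != sum(g[e] for e in S)

def detecting_family(n, d):
    """Greedy version of the probabilistic argument: a random subset distinguishes
    each pair f != g with probability >= 1/2, so some subset always halves the
    set of undistinguished pairs; picking one repeatedly yields
    log2(d^n * d^n) = O(n log d) sets."""
    pairs = [(f, g) for f in product(range(d), repeat=n)
                    for g in product(range(d), repeat=n) if f < g]
    subsets = [[e for e in range(n) if (mask >> e) & 1] for mask in range(2 ** n)]
    family = []
    while pairs:
        # a halving subset exists by the expectation argument
        S, remaining = min(
            ((S, [p for p in pairs if not distinguishes(S, *p)]) for S in subsets),
            key=lambda t: len(t[1]))
        family.append(S)
        pairs = remaining
    return family

fam = detecting_family(4, 2)
print(len(fam))   # at most log2(2^4 * 2^4) = 8 sets
```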

Hardness of List (a:b)-coloring
In this section we show our main technical contribution: an ETH-based lower bound for List (a:b)-coloring. The key part is reducing an n-variable instance of 3-SAT to an instance of Nonuniform List (a:b)-coloring with only O(n/log b) vertices. Next, it is rather easy to reduce Nonuniform List (a:b)-coloring to List (a:b)-coloring. We proceed with the first, key part.

The nonuniform case
We prove the following theorem in the remainder of this section.
Theorem 8. For any instance φ of (3,4)-SAT with n variables and any integer 2 ≤ b ≤ n/log₂ n, there is an equivalent instance (G, β, L) of Nonuniform List (a:2b)-coloring such that a = O(b² log b), |V(G)| = O(n/log b), and G is 3-colorable. Moreover, the instance (G, β, L) and the 3-coloring of G can be constructed in poly(n) time.
Consider an instance φ of 3-SAT where each variable appears in at most four clauses. Let V be the set of its variables and C be the set of its clauses. Note that |V|/3 ≤ |C| ≤ 4|V|/3. Let a = 12b² · ⌊log₂ b⌋. We shall construct a partition of V into groups V_1, . . . , V_{n_V}, a partition of C into groups C_1, . . . , C_{n_C}, and a function σ : {1, . . . , n_V} → [12b⌊log₂ b⌋], for some integers n_V and n_C, such that the following condition holds:

For any j = 1, . . . , n_C, the variables occurring in clauses of C_j are all different and they all belong to pairwise different variable groups. Moreover, the indices of these groups are mapped to pairwise different values by σ. (⋆)

In other words, any two literals of clauses in C_j have different variables, and if they belong to V_i and V_{i′} respectively, then σ(i) ≠ σ(i′).
Proof. We first group the variables so that the following holds: (P1) the variables occurring in any clause are different and belong to different variable groups. To this end, consider the graph G_1 with variables as vertices and edges between any two variables that occur in a common clause (i.e., the primal graph of φ). Since no clause contains repeated variables, G_1 has no loops. Since every variable of φ occurs in at most four clauses, and each of those clauses contains at most two other variables, the maximum degree of G_1 is at most 8. Hence G_1 can be greedily colored with 9 colors. Then, we refine the partition given by the colors to make every group have size at most ⌊log₂ b⌋, producing in total at most n_V := ⌈|V|/⌊log₂ b⌋⌉ + 9 groups V_1, . . . , V_{n_V}. (P1) holds, because any two variables occurring in a common clause are adjacent in G_1, thus get different colors, and thus are assigned to different groups.
Next, we group clauses in a way such that: (P2) the variables occurring in clauses of a group C_j are all different and belong to different variable groups. For this, consider the graph G_2 with clauses as vertices, and with an edge between two clauses if they contain two different variables from the same variable group. By (P1), G_2 has no loops. Since every clause contains exactly 3 variables, each variable is in a group with at most ⌊log₂ b⌋ − 1 others, and every such variable occurs in at most 4 clauses, the maximum degree of G_2 is at most 12(⌊log₂ b⌋ − 1). We can therefore color G_2 greedily with 12⌊log₂ b⌋ colors. Similarly as before, we partition the clauses into n_C := ⌈|C|/b⌉ + 12⌊log₂ b⌋ monochromatic groups C_1, . . . , C_{n_C} of size at most b each. Then (P2) holds by the construction of the coloring.
Finally, consider a graph G_3 with variable groups as vertices, and with an edge between two variable groups if they contain two different variables occurring in clauses from a common clause group. More precisely, V_i and V_{i′} are adjacent if there are two different variables x ∈ V_i and x′ ∈ V_{i′}, and a clause group C_j with clauses c and c′ (possibly c = c′), such that x occurs in c and x′ occurs in c′. By (P2), G_3 has no loops. Since a variable has at most ⌊log₂ b⌋ − 1 other variables in its group, each of these variables occurs in at most 4 clauses, each of these clauses has at most b − 1 other clauses in its group, and each of these contains exactly 3 variables, the maximum degree of G_3 is at most 4 · (⌊log₂ b⌋ − 1) · (b − 1) · 3. We can therefore color it greedily with 12b⌊log₂ b⌋ colors. Let σ be the resulting coloring. By (P2) and the construction of this coloring, (⋆) holds.
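The grouping steps above use nothing beyond greedy coloring of a bounded-degree graph followed by splitting color classes into small chunks. A Python sketch of ours (illustrative only) of the variable-grouping step:

```python
def greedy_coloring(adj):
    """Greedy coloring; a graph of maximum degree D receives at most D + 1 colors."""
    color = {}
    for v in adj:
        used = {color[u] for u in adj[v] if u in color}
        color[v] = next(c for c in range(len(adj)) if c not in used)
    return color

def refine(color, max_size):
    """Split every color class into chunks of at most max_size elements,
    mimicking the construction of the variable groups (property (P1))."""
    classes = {}
    for v, c in sorted(color.items()):
        classes.setdefault(c, []).append(v)
    return [cls[i:i + max_size]
            for cls in classes.values()
            for i in range(0, len(cls), max_size)]

# Primal graph of a toy formula: variables adjacent iff they share a clause.
clauses = [("x1", "x2", "x3"), ("x1", "x4", "x5"), ("x2", "x4", "x6")]
adj = {v: set() for cl in clauses for v in cl}
for cl in clauses:
    for v in cl:
        adj[v] |= set(cl) - {v}
groups = refine(greedy_coloring(adj), 2)
print(groups)
```

Since each group lies inside one color class of the primal graph, no two variables of a group share a clause, which is exactly (P1).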
The colorings can be found in linear time using standard techniques. Note that we have n_V = ⌈|V|/⌊log₂ b⌋⌉ + 9 = O(|V|/log b). Moreover, since b ≤ n/log₂ n, we have b⌊log₂ b⌋ ≤ b log₂ n ≤ n = O(|C|), and hence n_C = ⌈|C|/b⌉ + 12⌊log₂ b⌋ = O(|C|/b).

Figure 2: (left) The groups of variables and clauses of the formula; literals in C_1 are joined with their variables. Since no variable of V_2 occurs in C_1, we have 2 ∉ I_1; this may allow us to make σ(2) the same number as σ(3), say, reducing the total number a of colors needed. (right) The constructed graph; thick lines represent edges to all vertices corresponding to C_1.
For every 1 ≤ i ≤ n_V, the set V_i of variables admits 2^{|V_i|} ≤ b different assignments. We will therefore say that each assignment on V_i is given by an integer x ∈ [b], for example by interpreting the first |V_i| bits of the binary representation of x as truth values for the variables in V_i. Note that when |V_i| < log₂ b, different integers from [b] may give the same assignment on V_i. For 1 ≤ j ≤ n_C, let I_j ⊆ {1, . . . , n_V} be the set of indices of variable groups that contain some variable occurring in the clauses of C_j. Since every clause contains exactly three literals, property (⋆) implies that |I_j| = 3|C_j| and that σ is injective on I_j. See Figure 2.
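The bit-encoding of group assignments can be written down explicitly (a trivial Python helper of ours; the variable names are illustrative):

```python
def assignment_from_int(x, variables):
    """Interpret the low-order bits of the integer x in [b] as truth values
    for the variables of one group V_i (at most log2(b) many). Distinct
    integers may encode the same assignment when |V_i| < log2(b)."""
    return {v: bool((x >> i) & 1) for i, v in enumerate(variables)}

# With b = 8, the integers 0..7 range over all assignments of a 3-variable group:
print(assignment_from_int(5, ["p", "q", "r"]))  # {'p': True, 'q': False, 'r': True}
```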
For 1 ≤ j ≤ n_C, let {C_{j,1}, . . . , C_{j,n_F}} be a 4-detecting family of subsets of C_j, for some n_F = O(b/log b) (we can assume n_F does not depend on j by adding arbitrary sets when |C_j| < b). For every 1 ≤ k ≤ n_F, let C_{j,n_F+k} = C_j \ C_{j,k}.
We are now ready to build the graph G, the demand function β : V(G) → {1, . . . , 2b}, and the list assignment L as follows.

(1) For 1 ≤ i ≤ n_V, create a vertex v_i. Let β(v_i) = b − 1 and L(v_i) = {b · σ(i) + x | x ∈ [b]}.
(2) For 1 ≤ j ≤ n_C and 1 ≤ k ≤ 2n_F, create a vertex u_{j,k} adjacent to each v_i for i ∈ I_j.
Let β(u_{j,k}) = |C_{j,k}| and L(u_{j,k}) = {b · σ(i) + x | i ∈ I_j, x ∈ [b], and x gives an assignment of V_i that satisfies some clause of C_{j,k}}.
(3) For 1 ≤ j ≤ n_C, create a vertex w_j, adjacent to each v_i for i ∈ I_j and to each u_{j,k} for 1 ≤ k ≤ 2n_F. Let β(w_j) = 2|C_j| and L(w_j) = {b · σ(i) + x | i ∈ I_j, x ∈ [b]}.

Before giving a detailed proof of the correctness, let us describe the reduction in intuitive terms. Note that vertices of type v_i get all but one color from their list; this missing color, say b · σ(i) + x_i, for some x_i ∈ [b], defines an assignment on V_i. For every j = 1, . . . , n_C, the goal of the gadget consisting of w_j and the vertices u_{j,k} is to express the constraint that every clause in C_j has a literal satisfied by this assignment. Since w_j and u_{j,k} are adjacent to all vertices in {v_i | i ∈ I_j}, they may only use the missing colors (of the form b · σ(i) + x_i, where i ∈ I_j). Since |I_j| = 3|C_j|, there are 3|C_j| such colors and 2|C_j| of them go to w_j. This leaves exactly |C_j| colors for the vertices of type u_{j,k}, corresponding to a choice of |C_j| satisfied literals from the 3|C_j| literals in the clauses of C_j. The lists and demands for the vertices u_{j,k} guarantee that exactly |C_{j,k}| chosen satisfied literals occur in clauses of C_{j,k}. The properties of 4-detecting families will ensure that every clause has exactly one chosen, satisfied literal, and hence at least one satisfied literal. We proceed with formal proofs.

Lemma 10. If φ is satisfiable then G is L-(a:β)-colorable.
Proof. Consider a satisfying assignment η for φ. For 1 ≤ i ≤ n_V, let x_i ∈ [2^{|V_i|}] be an integer giving the same assignment on V_i as η. For every clause c of φ, choose one literal of c satisfied by η, and let i_c be the index of the group V_{i_c} containing the variable of this literal. Define a coloring α as follows: α(v_i) = L(v_i) \ {b · σ(i) + x_i} for 1 ≤ i ≤ n_V; α(u_{j,k}) = {b · σ(i_c) + x_{i_c} | c ∈ C_{j,k}} for 1 ≤ j ≤ n_C and 1 ≤ k ≤ 2n_F; and α(w_j) = {b · σ(i) + x_i | i ∈ I_j} \ {b · σ(i_c) + x_{i_c} | c ∈ C_j} for 1 ≤ j ≤ n_C. We claim that α is an L-(a:β)-coloring of G.

Let us first check that every vertex v gets colors from its list L(v) only. This is immediate for vertices v_i and w_j, while for u_{j,k} it follows from the fact that x_{i_c} gives a partial assignment to V_{i_c} that satisfies some clause of C_{j,k}. Now let us check that for every vertex v, the coloring α assigns exactly β(v) colors to v. For α(v_i) this follows from the fact that |L(v_i)| = b and 0 ≤ x_i < 2^{|V_i|} ≤ b. Since by property (⋆) σ is injective on I_j, and thus on {i_c | c ∈ C_{j,k}} ⊆ I_j, we have |α(u_{j,k})| = |C_{j,k}| = β(u_{j,k}). Similarly, since σ is injective on I_j, we have |α(w_j)| = |I_j| − |C_j| = 2|C_j| = β(w_j).

It remains to argue that the sets assigned to any two adjacent vertices are disjoint. There are three types of edges in the graph, namely v_i u_{j,k}, v_i w_j, and w_j u_{j,k}. The disjointness of α(w_j) and α(u_{j,k}) is immediate from the definition of α, since C_{j,k} ⊆ C_j. Fix j = 1, . . . , n_C. Since σ is injective on I_j, for any two different i, i′ ∈ I_j the colors b · σ(i) + x_i and b · σ(i′) + x_{i′} are different, and the only color of this form in L(v_i) is b · σ(i) + x_i, which is exactly the color missing from α(v_i). Since α(u_{j,k}), α(w_j) ⊆ {b · σ(i) + x_i | i ∈ I_j}, it follows that the endpoints of edges of types v_i u_{j,k} and v_i w_j received disjoint sets of colors, concluding the proof.
Lemma 11. If G is L-(a:β)-colorable then φ is satisfiable.

Proof. Assume that G is L-(a:β)-colorable, and let α be the corresponding coloring.
For 1 ≤ i ≤ n_V, we have |L(v_i)| = b and |α(v_i)| = b − 1, so v_i misses exactly one color from its list. Let b · σ(i) + x_i, for some x_i ∈ [b], be the missing color. We want to argue that the assignment x for φ given by x_i on each V_i satisfies φ.
Consider any clause group C_j, for 1 ≤ j ≤ n_C. Every vertex in {w_j} ∪ {u_{j,k} | 1 ≤ k ≤ 2n_F} contains {v_i | i ∈ I_j} in its neighborhood. Therefore, the sets α(u_{j,k}) and α(w_j) are disjoint from α(v_i) for every i ∈ I_j. Since α(v_i) = L(v_i) \ {b · σ(i) + x_i}, we get that α(u_{j,k}) and α(w_j) are contained in the set of missing colors {b · σ(i) + x_i | i ∈ I_j} (corresponding to the chosen assignment). By property (⋆), this set has exactly |I_j| = 3|C_j| different colors. Of these, exactly 2|C_j| are contained in α(w_j). Let the remaining |C_j| colors be {b · σ(i) + x_i | i ∈ J_j}, for some subset J_j ⊆ I_j of |C_j| indices.
Since α(u_{j,k}) is disjoint from α(w_j), we have α(u_{j,k}) ⊆ {b · σ(i) + x_i | i ∈ J_j} for all k. By the definition of I_j, for every i ∈ J_j ⊆ I_j there is a variable in V_i that appears in some clause of C_j. By property (⋆), it can occur in only one such clause; let l_i be the literal of this variable in the clause of C_j where it appears. For every color b · σ(i) + x_i ∈ α(u_{j,k}), by the definition of the lists for u_{j,k} we know that x_i gives a partial assignment to V_i that satisfies some clause of C_{j,k}. This means that x_i makes the literal l_i true and l_i occurs in a clause of C_{j,k}. Therefore, for each k, at least |α(u_{j,k})| = |C_{j,k}| literals from the set {l_i | i ∈ J_j} occur in clauses of C_{j,k} and are made true by the assignment x.
Let f : C_j → {0, 1, 2, 3} be the function assigning to each clause c ∈ C_j the number of literals of c in {l_i | i ∈ J_j}. By the above, ∑_{c∈C_{j,k}} f(c) ≥ |C_{j,k}| for 1 ≤ k ≤ 2n_F. Since each literal in {l_i | i ∈ J_j} belongs to some clause of C_j, we have ∑_{c∈C_j} f(c) = |J_j| = |C_j|. Then, for every 1 ≤ k ≤ n_F,
∑_{c∈C_{j,k}} f(c) = ∑_{c∈C_j} f(c) − ∑_{c∈C_{j,n_F+k}} f(c) ≤ |C_j| − |C_{j,n_F+k}| = |C_{j,k}|,
so in fact ∑_{c∈C_{j,k}} f(c) = |C_{j,k}| for every 1 ≤ k ≤ n_F. The constant function equal to 1 on C_j gives exactly the same sums. Since {C_{j,1}, . . . , C_{j,n_F}} is a 4-detecting family, this implies that f ≡ 1. Thus, for every clause c of C_j we have f(c) = 1, meaning that there is a literal from the set {l_i | i ∈ J_j} in this clause. All these literals are made true by the assignment x, therefore all clauses of C_j are satisfied. Since j = 1, . . . , n_C was arbitrary, this concludes the proof that x is a satisfying assignment for φ.
The construction can clearly be made in polynomial time, and the total number of vertices is n_V + n_C · (2n_F + 1) = O(n/log b) + O(n/b) · O(b/log b) = O(n/log b). Moreover, we get a proper 3-coloring of G by coloring the vertices of type v_i with color 1, the vertices of type u_{j,k} with color 2, and the vertices of type w_j with color 3. By Lemmas 10 and 11, this concludes the proof of Theorem 8.

The uniform case
In this section we reduce the nonuniform case to the uniform one, and state the resulting lower bound on the complexity of List (a:b)-coloring.
Lemma 12. Given an instance (G, L, β) of Nonuniform List (a:2b)-coloring together with a proper 3-coloring c of G, one can construct in polynomial time an equivalent instance (G, L′) of List ((a + 6b):2b)-coloring.

Proof. For each of the three color classes of c we reserve a private pool of 2b fresh colors: let P_t = {a + 2b(t − 1), . . . , a + 2bt − 1} for t = 1, 2, 3. For every vertex v, let F_v consist of the first 2b − β(v) colors of P_{c(v)}; we call these the filling colors of v, and we set L′(v) = L(v) ∪ F_v.

Let α be an L-(a:β)-coloring of G. We define a coloring α′ by α′(v) = α(v) ∪ F_v, so that |α′(v)| = 2b for every vertex v. Since α was a proper L-(a:β)-coloring, adjacent vertices can only share the filling colors. However, the lists of adjacent vertices have disjoint subsets of filling colors, since these vertices are colored differently by c. It follows that α′ is an L′-(a:2b)-coloring of G.
Conversely, let α′ be an L′-(a:2b)-coloring of G. For every vertex v, at most 2b − β(v) colors of α′(v) are filling colors, so α′(v) contains at least β(v) colors from L(v); define α(v) as an arbitrary subset of α′(v) ∩ L(v) of size exactly β(v). It is immediate to check that α is an L-(a:β)-coloring of G.
We are now ready to prove one of our main results.

Proof of Theorem 2. Let b(n) be a function as in the statement. We can assume w.l.o.g. that 2 ≤ b(n) ≤ n/log₂ n, for otherwise we can replace b(n) with the function b′(n) = 2 + ⌊b(n)/c⌋ in the reasoning below, where c is a big enough constant; note that b′(n) = Θ(b(n)). Fix a function f(b) = o(log b) and assume there is an algorithm A for List (a:b)-coloring that runs in time 2^{f(b)·n} whenever b = Θ(b(n)). Consider an instance φ of (3,4)-SAT with n variables. Let b = b(n). By Theorem 8, in poly(n) time we get an equivalent instance (G, β, L) of Nonuniform List (a:2b)-coloring such that a = Θ(b² log b), |V(G)| = O(n/log b), and a 3-coloring of G. Next, by Lemma 12, in poly(n) time we get an equivalent instance (G, L′) of List ((a + 6b):2b)-coloring. Finally, we solve the instance (G, L′) using algorithm A. Since b(n) = ω(1), we have f(b(n)) = o(log b(n)), and A runs in time 2^{o(log b(n))·|V(G)|} = 2^{o(log b(n))·O(n/log b(n))} = 2^{o(n)}. Thus we have solved the instance φ of (3,4)-SAT in time 2^{o(n)}, which contradicts ETH by Theorem 3 combined with Tovey's reduction.

Hardness of (a:b)-coloring

In this section we reduce List (a:b)-coloring to (a:b)-coloring. This is done by adding a Kneser graph, and replacing the lists by edges to appropriate vertices of the Kneser graph. We will need the following well-known property of Kneser graphs (see e.g. Theorem 7.9.1 in the textbook [14]).
Theorem 13. If p > 2q then every homomorphism from KG p,q to KG p,q is an automorphism.
We proceed with the reduction. Let (G, L) be an instance of List (a:b)-coloring with |V(G)| = n and L(u) ⊆ [a] for every u ∈ V(G). Let K be the graph whose vertex set consists of all b-element subsets of [a + b], with two subsets adjacent if and only if they are disjoint. That is, K is isomorphic to the Kneser graph KG_{a+b,b}. Then let V′ = V(G) ⊎ V(K) and E′ = E(G) ∪ E(K) ∪ {uS | u ∈ V(G), S ∈ V(K), S ∩ L(u) = ∅}. The graph G′ = (V′, E′) has n + \binom{a+b}{b} vertices, and the construction can be done in time polynomial in n + \binom{a+b}{b}. Let G′ be our output instance of ((a+b):b)-coloring. We will show that it is equivalent to the instance (G, L) of List (a:b)-coloring.
Let us assume that α is an L-(a:b)-coloring of G. Extend it to α′ by setting α′(u) = α(u) for u ∈ V(G) and α′(S) = S for S ∈ V(K). We claim that α′ is an ((a + b):b)-coloring of G′. Indeed, for every edge uv ∈ E(G) we have α′(u) ∩ α′(v) = α(u) ∩ α(v) = ∅, the edges of K join disjoint sets by definition, and for every edge uS with u ∈ V(G) and S ∈ V(K) we have α′(u) ∩ α′(S) = α(u) ∩ S ⊆ L(u) ∩ S = ∅.

Conversely, assume that α′ is an ((a + b):b)-coloring of G′. Recall that α′ is a homomorphism of G′ to KG_{a+b,b}. Denote φ = α′|_{V(K)}. By Theorem 13, φ is an automorphism of K.
We now prove our main result, Theorem 1.

Low-degree testing
In this section we derive lower bounds for (r, k)-Monomial Testing. In this problem, we are given an arithmetic circuit C over some field F; such a circuit may contain input, constant, addition, negation, multiplication, and inversion gates. One gate is designated to be the output gate, and it computes some polynomial P of the variables x_1, x_2, . . . , x_n that appear in the input gates. We assume that P is a homogeneous polynomial of degree k, i.e., all its monomials have total degree k. The task is to verify whether P contains an r-monomial, i.e., a monomial in which every variable has its individual degree bounded by r, for a given parameter r ≤ k. Abasi et al. [1] gave a very fast randomized algorithm for (r, k)-Monomial Testing.
Theorem 15 (Abasi et al. [1]). Fix integers r, k with 2 ≤ r ≤ k. Let p ≤ 2r² + 2r be a prime, and let g ∈ GF(p)[x_1, . . . , x_n] be a homogeneous polynomial of degree k, computable by a circuit C. Then, there is a randomized algorithm running in time O(r^{2k/r}·|C|·(rn)^{O(1)}) which • with probability at least 1/2 answers YES when g contains an r-monomial, • always answers NO when g contains no r-monomial.
This result was later derandomized by Gabizon et al. [13] under the assumption that the circuit is non-cancelling, that is, it contains only input, addition, and multiplication gates. Many concrete problems like r-Simple k-Path can be reduced to (r, k)-Monomial Testing by encoding the set of candidate objects as monomials of some large polynomial, so that "good" objects correspond to monomials with low individual degrees. As we will see in a moment, this is also the case for List (a:b)-coloring.
Let (G = (V, E), L) be an instance of the List (a:b)-coloring problem and let I be the family of all independent sets of G. We denote n = |V|. Let $C_a(G, L)$ denote the set of all functions $c : V \to 2^{[a]}$ such that for every edge uv ∈ E the sets c(u) and c(v) are disjoint, and for every vertex v we have $c(v) \subseteq L(v)$. Consider the following polynomial in n(a + 1) variables $\{x_v\}_{v \in V}$ and $\{y_{v,j}\}_{v \in V, j \in [a]}$, over GF(2).
Note that every summand in expression (2) has a different set of variables, and therefore corresponds to a monomial (with coefficient 1). The following proposition is then immediate.
Proposition 16. There is a list (a:b)-coloring of graph G iff $p_G$ contains a b-monomial.
Now we show that $p_G$ can be evaluated relatively fast.
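As an aside, the colorings on the left-hand side of Proposition 16 can be enumerated by brute force on toy instances, which is convenient for sanity checks (a throwaway sketch, not one of the algorithms discussed here):

```python
from itertools import combinations, product

def list_ab_colorings(vertices, edges, lists, b):
    """Enumerate all list (a:b)-colorings: each vertex v receives a b-subset
    of its list L(v) (a subset of [a]), and adjacent vertices receive
    disjoint subsets."""
    choices = [list(combinations(sorted(lists[v]), b)) for v in vertices]
    for assignment in product(*choices):
        c = {v: set(sub) for v, sub in zip(vertices, assignment)}
        if all(c[u].isdisjoint(c[w]) for u, w in edges):
            yield c

# A single edge uv with L(u) = L(v) = {1, 2, 3} and b = 1: the classic
# proper coloring of K2 from a palette of 3 colors, with 3 * 2 = 6 solutions.
cols = list(list_ab_colorings(["u", "v"], [("u", "v")],
                              {"u": {1, 2, 3}, "v": {1, 2, 3}}, b=1))
print(len(cols))  # 6
```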
Lemma 17. The polynomial $p_G$ can be evaluated by a circuit of size $2^n \mathrm{poly}(a, n)$, which can be constructed in time $2^n \mathrm{poly}(a, n)$.
Proof. Consider the following polynomial $q_G$: Observe that $p_G$ is obtained from $q_G$ by removing all monomials of degree different from 2bn. Eq. (3) shows that $q_G$ can be evaluated by a circuit $C_q$ of size $|I| \cdot \mathrm{poly}(a, n) \le 2^n \mathrm{poly}(a, n)$, which can be constructed in time $2^n \mathrm{poly}(a, n)$. We obtain from $C_q$ a circuit $C_p$ for $p_G$ by splitting gates according to degrees, in a bottom-up fashion, as follows. Every input gate u of $C_q$ is replaced with a gate $u_1$ in $C_p$. Every addition gate u with inputs x and y in $C_q$ is replaced in $C_p$ by 2an addition gates $u_1, \ldots, u_{2an}$, where $u_i$ has inputs $x_i$ and $y_i$ (whenever $x_i$ and $y_i$ exist). Every multiplication gate u with inputs x and y in $C_q$ is likewise replaced in $C_p$ by 2an addition gates $u_1, \ldots, u_{2an}$. Moreover, for every pair of integers $1 \le r, s \le 2an$ we create a multiplication gate $u_{r,s}$ with inputs $x_r$ and $y_s$ (whenever they exist) and make it an input of the addition gate $u_{r+s}$. It is easy to see that for every gate u of $C_q$ and every i, the gate $u_i$ of $C_p$ evaluates the same polynomial as u, restricted to monomials of total degree exactly i. If o is the output gate of $C_q$, then $o_{2bn}$ is the output gate of $C_p$. Clearly, $|C_p| \le (2an + 1)^2 |C_q|$, and $C_p$ can be constructed from $C_q$ in time $2^n \mathrm{poly}(a, n)$.
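The degree-splitting bookkeeping in the proof above can be sanity-checked with a toy sketch: each gate's value is stored as a list `parts`, where `parts[i]` holds the monomials of total degree exactly i, mirroring the split gates $u_1, u_2, \ldots$ (here the polynomials are expanded explicitly for checking purposes only, which the real construction avoids):

```python
from collections import defaultdict

# Toy check of the gate-splitting idea from Lemma 17, over GF(2).

def _prune(parts):
    # reduce coefficients mod 2 and drop cancelled monomials
    return [{e: c % 2 for e, c in p.items() if c % 2} for p in parts]

def split_var(index, nvars, maxdeg):
    parts = [dict() for _ in range(maxdeg + 1)]
    e = tuple(1 if j == index else 0 for j in range(nvars))
    parts[1][e] = 1  # an input gate is homogeneous of degree 1
    return parts

def split_add(x, y, maxdeg):
    parts = [defaultdict(int) for _ in range(maxdeg + 1)]
    for i in range(maxdeg + 1):
        for src in (x[i], y[i]):
            for e, c in src.items():
                parts[i][e] += c
    return _prune(parts)

def split_mul(x, y, maxdeg):
    # the split gate u_{r+s} sums the products of x_r and y_s, as in the lemma
    parts = [defaultdict(int) for _ in range(maxdeg + 1)]
    for r in range(maxdeg + 1):
        for s in range(maxdeg + 1 - r):
            for e1, c1 in x[r].items():
                for e2, c2 in y[s].items():
                    parts[r + s][tuple(a + b for a, b in zip(e1, e2))] += c1 * c2
    return _prune(parts)

# q = (x0 + x1) * x0 = x0^2 + x0*x1: all of q sits in the degree-2 part.
x0, x1 = split_var(0, 2, 2), split_var(1, 2, 2)
q = split_mul(split_add(x0, x1, 2), x0, 2)
print(q[2])  # {(2, 0): 1, (1, 1): 1}
print(q[1])  # {}
```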
Since $p_G$ is a homogeneous polynomial of degree k = 2bn, by putting r = b we can combine Proposition 16, Theorem 15 and Lemma 17 to get yet another polynomial-space algorithm for List (a:b)-coloring, running in time $b^{O(n)} \cdot \mathrm{poly}(n)$. Similarly, if the running time in Theorem 15 were improved to $2^{o(\log r / r) \cdot k} \cdot |C| \cdot \mathrm{poly}(r, n)$, then we would get an algorithm for List (a:b)-coloring in time $2^{o(\log b) \cdot n} \cdot \mathrm{poly}(n)$, which contradicts ETH by Theorem 2. However, a careful examination shows that this chain of reductions would only yield instances of (r, k)-Monomial Testing with $r = O(k/\log k)$. Hence, this does not exclude the existence of a fast algorithm that works only for large r. Below we show a more direct reduction, which excludes fast algorithms for a wider spectrum of pairs (r, k).
In the Carry-Less Subset Sum problem, we are given n + 1 numbers $s, a_1, \ldots, a_n$, each represented as n decimal digits. For a number x, the j-th decimal digit of x is denoted by $x^{(j)}$. It is assumed that $\sum_{i=1}^{n} a_i^{(j)} < 10$ for every $j = 1, \ldots, n$. The goal is to verify whether there is a sequence of indices $1 \le i_1 < \ldots < i_k \le n$ such that $\sum_{q=1}^{k} a_{i_q} = s$. (Note that by the small-sum assumption, this is equivalent to the statement that $\sum_{q=1}^{k} a_{i_q}^{(j)} = s^{(j)}$ for every $j = 1, \ldots, n$.) The standard NP-hardness reduction from 3-SAT to Subset Sum in fact outputs an instance of Carry-Less Subset Sum of linear size, yielding the following lower bound: unless ETH fails, Carry-Less Subset Sum cannot be solved in time $2^{o(n)}$.
We proceed to reducing Carry-Less Subset Sum to (r, k)-Monomial Testing. Let us choose a parameter $t \in \{1, \ldots, n\}$. We assume w.l.o.g. that n mod t = 0, for otherwise we add $t - (n \bmod t)$ zeroes at the end of every input number. Let q = n/t. For an n-digit decimal number x and every $j = 1, \ldots, t$, let $x^{[j]}$ denote the q-digit number given by the j-th block of q digits of x, i.e., $x^{[j]} = (x^{(jq-1)} \cdots x^{((j-1)q)})_{10}$.
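The digit and block notation can be made concrete with a short sketch (`digit`, `block`, and `is_carry_less` are hypothetical helpers that only illustrate the definitions, with digits 0-indexed from the least significant):

```python
def digit(x, j):
    """The j-th decimal digit x^(j) of x, 0-indexed from the least significant."""
    return (x // 10 ** j) % 10

def block(x, j, q):
    """The q-digit block x^[j], i.e. digits (j-1)q through jq-1 of x."""
    return (x // 10 ** ((j - 1) * q)) % 10 ** q

def is_carry_less(numbers, n):
    """Check the small-sum assumption: in each of the n digit positions, the
    digits of all numbers sum to less than 10, so adding the numbers never
    produces a carry and sums can be verified digit by digit (or block by block)."""
    return all(sum(digit(a, j) for a in numbers) < 10 for j in range(n))

# n = 4 digits, block length q = 2: 1203 splits into blocks 03 and 12.
print(block(1203, 1, 2), block(1203, 2, 2))  # 3 12
print(is_carry_less([1203, 2301, 111], 4))   # True: digit sums are 5, 1, 6, 3
print(is_carry_less([555, 555], 4))          # False: units digits sum to 10
```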
Let $r = 10^q - 1$. Define the following polynomial $q_S$ over GF(2):
Proposition 19. $(s, a_1, \ldots, a_n)$ is a YES-instance of Carry-Less Subset Sum iff $q_S$ contains the monomial $\prod_{j=1}^{t} x_j^r \prod_{i \in S} y_i \prod_{i \notin S} z_i$, for some $S \subseteq \{1, \ldots, n\}$.
Proof. Consider the following polynomial $r_S$ over GF(2): The summands in the expression above have unique sets of $y_i$ variables, so each of them corresponds to a monomial (with coefficient 1). It is clear that those monomials in which, for every j, the degree of $x_j$ is exactly r are in one-to-one correspondence with the solutions of the instance $(s, a_1, \ldots, a_n)$. The claim follows by observing that the polynomials $r_S$ and $q_S$ coincide.
Let $p_S$ denote the polynomial obtained from $q_S$ by filtering out all monomials of degree different from $k = tr + n$.
Proposition 20. $(s, a_1, \ldots, a_n)$ is a YES-instance of Carry-Less Subset Sum iff $p_S$ contains an r-monomial.
Proof. If $(s, a_1, \ldots, a_n)$ is a YES-instance, then by Proposition 19 the polynomial $q_S$ contains the monomial $\prod_{j=1}^{t} x_j^r \prod_{i \in S} y_i \prod_{i \notin S} z_i$ for some $S \subseteq \{1, \ldots, n\}$, which is an r-monomial. This monomial has degree tr + n, so it is contained in $p_S$ as well.
Conversely, assume $p_S$ contains an r-monomial m. Every monomial of $q_S$ (and hence also of $p_S$) contains exactly one of the variables $y_i$ and $z_i$, with degree 1, for every $i = 1, \ldots, n$. This means that the total degree of the $x_j$-type variables in m is tr. Hence, since m is an r-monomial, each $x_j$ has degree exactly r. In other words, m is of the form $\prod_{j=1}^{t} x_j^r \prod_{i \in S} y_i \prod_{i \notin S} z_i$ for some $S \subseteq \{1, \ldots, n\}$. Then $(s, a_1, \ldots, a_n)$ is a YES-instance of Carry-Less Subset Sum by Proposition 19.
Proposition 21. $p_S$ can be evaluated by a circuit of size $O(nt^2 r + n^2 t)$, which can be constructed in time polynomial in n + t + r.
Proof. The polynomial $q_S$ can be evaluated by a circuit of size O(nt). The circuit for $p_S$ is built using the construction from Lemma 17; thus, its size is $O(nt(tr + n)) = O(nt^2 r + n^2 t)$.
We are ready to give our main lower bound for (r, k)-Monomial Testing. We state it in the most general form, which unfortunately is also quite technical. Next, we derive an illustrative corollary that gives a lower bound for r expressed as a function of k.

Theorem 22. If there is an algorithm solving (r, k)-Monomial Testing in time $2^{o(k \log r / r)} \cdot |C|^{O(1)}$, then ETH fails. The statement remains true even if the algorithm works only for instances where $r = 2^{\Theta(n/t(n))}$ and $k = t(n) \cdot 2^{\Theta(n/t(n))}$, for an arbitrarily chosen function $t : \mathbb{N} \to \mathbb{N}$, computable in time $2^{o(n)}$, such that $t(n) = \omega(1)$ and $t(n) \le n$ for every n.
Proof. By Proposition 20, solving Carry-Less Subset Sum is equivalent to detecting an r-monomial in $p_S$, which is a homogeneous polynomial of degree k = tr + n. Let C be the circuit for $p_S$; by Proposition 21 we have $|C| = O(nt^2 r + n^2 t)$. If detecting the r-monomial can be done in time $2^{o(k \log r / r)} \cdot |C|^{O(1)}$, we get an algorithm for Carry-Less Subset Sum running in time $2^{o(k \log r / r)} \cdot (ntr)^{O(1)}$. Recall that $t \le n$ and $r = 10^{n/t} - 1 = 2^{o(n)}$, since $t = t(n) = \omega(1)$; hence $(ntr)^{O(1)} = 2^{o(n)} \mathrm{poly}(n)$. Moreover, $k \log r / r = t \log r + (n/r) \log r = O(n)$, so the total running time is $2^{o(n)}$. The claim follows.
Theorem 23. Let $\sigma \in (0, 1)$ be a fixed real. If there is an algorithm solving (r, k)-Monomial Testing in time $2^{o(k \log r / r)} \cdot |C|^{O(1)}$, even one working only on instances with $r = \Theta(k^\sigma)$, then ETH fails.
Proof. We prove that an algorithm for (r, k)-Monomial Testing with properties as in the statement can be used to derive an algorithm for the same problem with properties as in the statement of Theorem 22, which implies that ETH fails. Take t to be a positive integer not larger than n such that
$\frac{1}{2} \le \frac{10^{n/t} - 1}{\left(t \cdot (10^{n/t} - 1) + n\right)^{\sigma}} \le 2$;  (4)
it can be easily verified that, since σ < 1, such an integer t ≤ n always exists for large enough n. Moreover, we have $t = t(n) = \omega(1)$, and t(n) can be computed in polynomial time by brute force. Hence, t(n) satisfies the properties stated in Theorem 22. Let t = t(n) and q = n/t. Define $r = 10^q - 1$ and $k = tr + n$; then (4) is equivalent to $\frac{1}{2} \le r/k^{\sigma} \le 2$, hence $r = \Theta(k^\sigma)$. Consequently, the assumed algorithm solves (r, k)-Monomial Testing in time $2^{o(k \log r / r)} \cdot |C|^{O(1)}$ on the instances produced in the proof of Theorem 22; however, there we have shown that an algorithm achieving such a running time for this particular choice of parameters implies that ETH fails.
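The existence of a suitable t can also be checked numerically. Below is a small sketch (`find_t` is a hypothetical helper, not from the paper) that searches for the smallest t ≤ n satisfying the inequality used in the proof above:

```python
def find_t(n, sigma):
    """Search for the smallest t <= n with 1/2 <= r / k**sigma <= 2, where
    r = 10**(n // t) - 1 and k = t*r + n (block length q = n // t; the proof
    pads the input so that t divides n, which this sketch ignores)."""
    for t in range(1, n + 1):
        r = 10 ** (n // t) - 1
        k = t * r + n
        if 0.5 <= r / k**sigma <= 2:
            return t, r, k
    return None

t, r, k = find_t(200, sigma=0.5)
print(t, r, k)  # 67 99 6833: indeed 99 is within a factor 2 of 6833**0.5
```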
Note that Theorem 23 in particular implies that (r, k)-Monomial Testing does not admit an algorithm with running time $2^{o(\frac{\log r}{r}) \cdot k} \cdot |C|^{O(1)}$ for any given r.