The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs
Abstract
Is detecting a k-clique in k-partite regular (hyper-)graphs as hard as in the general case? Intuition suggests yes, but proving this – especially for hypergraphs – poses notable challenges. Concretely, we consider a strong notion of regularity in h-uniform hypergraphs, where we essentially require that any subset of at most h - 1 vertices is incident to a uniform number of hyperedges. Such notions are studied intensively in the combinatorial block design literature. We show that any T(n)-time algorithm for detecting k-cliques in such regular hypergraphs transfers to a T(O(n))-time algorithm for the general case, establishing a fine-grained equivalence between the h-uniform hyperclique hypothesis and its natural regular analogue.
Equipped with this regularization result, we then fully resolve the fine-grained complexity of optimizing Boolean constraint satisfaction problems over assignments with k non-zeros. Our characterization depends on the maximum degree d of a constraint function. Specifically, if d ≤ 1, we obtain a linear-time solvable problem, if d = 2, the time complexity is essentially equivalent to k-clique detection, and if d ≥ 3 the problem requires exhaustive-search time under the 3-uniform hyperclique hypothesis. To obtain our hardness results, the regularization result plays a crucial role, enabling a very convenient approach when applied carefully. We believe that our regularization result will find further applications in the future.
Keywords and phrases: fine-grained complexity theory, clique detection in hypergraphs, constraint satisfaction, parameterized algorithms
Category: Track A: Algorithms, Complexity and Games
Funding: Nick Fischer: Partially funded by the Ministry of Education and Science of Bulgaria’s support for INSAIT, Sofia University “St. Kliment Ohridski” as part of the Bulgarian National Roadmap for Research Infrastructure. Parts of this work were done while the author was at Weizmann Institute of Science.
2012 ACM Subject Classification: Theory of computation → Problems, reductions and completeness; Theory of computation → Graph algorithms analysis
Funding: Research supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 462679611.
Editors: Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
1 Introduction
In the design of algorithms and hardness reductions, it is often helpful to assume that the involved instances satisfy certain regularity conditions. To name a concrete example: For his celebrated Set Cover inapproximability result, Feige [19] first argues that the 3SAT-5 problem (the highly regular special case of SAT in which every clause contains precisely 3 variables and each variable occurs in precisely 5 clauses) is hard to approximate, and then gives a reduction from 3SAT-5 to Set Cover. We informally refer to the first reduction as a regularization. Another notable example of a regularization in parameterized complexity is the W[1]-hardness of k-Clique detection in regular graphs. In fact, this is the first (non-trivial) reduction in the textbook by Cygan et al. [17], and has been independently observed for their respective applications in at least three different works [27, 12, 28].
Conversely, obtaining regularization results can also have algorithmic implications. For instance, Zamir [33] devises improved exact algorithms for a wide array of problems including graph coloring on regular graphs (in fact, even for almost-regular graphs). Designing regularizations for these problems would therefore imply breakthroughs for general instances.
In this work we focus on the k-Clique and k-Hyperclique problems from a fine-grained perspective. Our main result is a surprisingly intricate fine-grained regularization reduction, i.e., we prove that both problems are as hard on regular (hyper-)graphs as they are on general (hyper-)graphs. Equipped with this, we are able to resolve the fine-grained complexity of optimizing Boolean CSPs over weight-k solutions; this, in fact, had been the motivating question for this study. Below, we detail these two contributions separately in Sections 1.1 and 1.2.
1.1 Regularizing (Hyper-)clique Detection
Clique Detection.
The k-Clique problem is to decide if a k-partite graph G = (V_1 ∪ ⋯ ∪ V_k, E) contains a k-clique, i.e., nodes v_1 ∈ V_1, …, v_k ∈ V_k such that any pair forms an edge in G. (Often, the problem is stated for general instead of k-partite graphs. However, these formulations are well-known to be equivalent using color-coding techniques, and the restriction to k-partite instances often helps in designing reductions.) While the trivial algorithm takes time O(n^k), Nešetřil and Poljak [29] devised an algorithm running in time O(n^{ωk/3}), where ω < 2.372 denotes the matrix multiplication exponent [3]. (More precisely, this running time bound applies only when k is divisible by 3. For general k, the running time is O(n^{ω(⌈k/3⌉, ⌈(k-1)/3⌉, ⌊k/3⌋)}), where ω(a, b, c) is the exponent of rectangular matrix multiplication, i.e., n^{ω(a,b,c)} is the running time of multiplying an n^a × n^b matrix by an n^b × n^c matrix.) The k-Clique Hypothesis postulates that this algorithm is essentially optimal (see Section 2). As it generalizes the popular Triangle hypothesis (i.e., the special case k = 3, stating that the complexity of Triangle Detection is n^{ω ± o(1)}), this hardness assumption entails fine-grained lower bounds for an extensive list of problems (see e.g. [31]), and also some important lower bounds beyond the Triangle case [1, 14, 9].
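To make the algorithmic idea concrete, the following is a small, unoptimized sketch (our own illustration, not code from the paper) of the Nešetřil–Poljak approach for k divisible by 3: group the vertices into k/3-cliques and reduce to triangle detection, which is then solved via matrix multiplication.

```python
import numpy as np
from itertools import combinations

def has_k_clique(adj, k):
    """Detect a k-clique (k divisible by 3) via the Nešetřil–Poljak reduction:
    build an auxiliary graph whose nodes are the k/3-cliques of G and whose
    edges join two disjoint node sets forming a 2k/3-clique; then G has a
    k-clique iff the auxiliary graph has a triangle."""
    assert k % 3 == 0
    n, t = len(adj), k // 3
    is_clique = lambda S: all(adj[u][v] for u, v in combinations(S, 2))
    nodes = [frozenset(S) for S in combinations(range(n), t) if is_clique(S)]
    A = np.zeros((len(nodes), len(nodes)), dtype=np.int64)
    for i, j in combinations(range(len(nodes)), 2):
        union = nodes[i] | nodes[j]
        if len(union) == 2 * t and is_clique(union):
            A[i, j] = A[j, i] = 1
    # Triangle test by matrix multiplication: trace(A^3) > 0 iff a triangle exists.
    return bool(np.trace(A @ A @ A) > 0)
```

Using fast (rectangular) matrix multiplication for the final triangle test yields the stated running time bound.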
Hyperclique Detection.
The natural generalization to h-uniform hypergraphs, the k-Hyperclique problem, is to decide if a given k-partite h-uniform hypergraph contains a k-hyperclique, i.e., nodes v_1 ∈ V_1, …, v_k ∈ V_k such that all h-tuples among them form a hyperedge in G. In contrast to the k-Clique problem in graphs (which coincides with k-Hyperclique for h = 2), the k-Hyperclique problem for h ≥ 3 is not known to admit any significant improvement over the brute-force search time O(n^k). This led to the (h-Uniform) k-Hyperclique Hypothesis postulating that n^{k - o(k)} time is necessary in h-uniform hypergraphs [25]. Besides being a natural problem in itself, the relevance of the Hyperclique problem is also due to its close connection to the MAX-3-SAT problem [25] and due to a recent surge in strong (often matching) conditional lower bounds based on the Hyperclique conjecture [2, 25, 7, 13, 24, 4, 8, 18, 23, 20, 33].
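For comparison, the brute-force baseline that the hypothesis asserts to be essentially unavoidable for h ≥ 3 can be sketched as follows (illustrative code in the k-partite formulation; the names are ours).

```python
from itertools import combinations, product

def has_hyperclique(parts, edges, h):
    """Brute-force k-hyperclique detection in a k-partite h-uniform hypergraph:
    try every choice of one vertex per part (n^k candidates) and check that
    every h-subset of the candidate is a hyperedge."""
    edges = {frozenset(e) for e in edges}
    for candidate in product(*parts):
        if all(frozenset(S) in edges for S in combinations(candidate, h)):
            return True
    return False
```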
(Hyper-)clique Regularization.
The starting point of this paper is the following question: Do these hypotheses imply the same time bounds for clique detection in regular (hyper)graphs? While it is clear what we mean by a regular graph, it is less clear how to define a regular h-uniform hypergraph. The perhaps simplest definition requires each node to be incident to the same number of hyperedges. However, this notion turns out to be insufficient for our main application. Instead, we consider a significantly stronger variant, in which every subset of h - 1 vertices, each taken from a different part of the k-partition, is incident to the same number of hyperedges (see Section 2 for details). This yields essentially the strongest form of regularity that we may wish for in a hypergraph. The existence of such structures is well-studied in combinatorial block design theory; e.g., k-partite h-uniform hypergraphs that are regular in our sense correspond precisely to group divisible designs with k groups, see [15].
Our first main contribution is to establish the equivalence of the 3-Uniform Hyperclique Hypothesis for general hypergraphs and its special case of regular k-partite hypergraphs in the above strong sense.
Theorem 1.1 (Hyperclique Regularization, Informal).
Let k > h ≥ 2. The k-Hyperclique problem is solvable in time T(n) on k-partite h-uniform hypergraphs if and only if it is solvable in time T(O(n)) on regular k-partite h-uniform hypergraphs (where the constant in the O(n) depends only on h and k).
We are confident that this theorem will be useful in a black-box fashion for future hardness results based on the k-Clique or k-Hyperclique hypotheses. Moreover, we extend this equivalence also to counting k-hypercliques, see Section 3.
Let us discuss Theorem 1.1: While the statement might feel “obviously true”, to our surprise, proving it turns out to be anything but straightforward. Even for graphs (h = 2), we are not aware of a previous proof of this fact. The parameterized reductions in [12, 28, 5, 17] are generally not efficient enough as they reduce arbitrary n-vertex graphs with maximum degree Δ to regular graphs with O(nΔ) vertices. As Δ can be as large as n - 1, this blow-up is inadmissible for our fine-grained perspective (the resulting conditional lower bounds would have the exponent of n halved).
Having said that, it is possible to regularize graphs by only adding O(n) additional nodes. Specifically, one approach is to increase the degree of each node to the maximum degree Δ. To this end we can add dummy nodes and add an appropriate number of edges between the original nodes and the freshly introduced dummy nodes (while ensuring not to create new k-cliques). Afterwards it still remains to regularize the new dummy nodes, which can for instance be solved by balancing the new edges appropriately. The details are tedious and not very insightful.
The real challenge arises for hypergraphs. Recall that the strong regularity guarantee for, say, 3-uniform hypergraphs, is that any pair of vertices (in different vertex parts) must be contained in the same number of hyperedges. Thus, in order to generalize the previous approach we would have to regularize not only the individual dummy nodes, but also all pairs of (original, dummy) and (dummy, dummy) nodes. For this reason we essentially make no progress by adding dummy nodes. Put differently, regularizing graphs is a local problem (in the sense that we could fix the regularity constraints for each node v by adding some dummy nodes only concerned with v), whereas regularizing 3-uniform hypergraphs appears to be a more global problem (forcing us to simultaneously regularize all vertex pairs while not introducing unintended k-cliques).
Due to this obstacle, we adopt a very different and conceptually cleaner approach. Let G be a given k-partite h-uniform hypergraph. The new idea is to construct a regular k-partite h-uniform hypergraph from a graph product operation that we call the signed product G ⊗ T of G with a suitable signed hypergraph T. A signed hypergraph has positive and negative edges E⁺ and E⁻, respectively. We define the resulting graph G ⊗ T as follows: Its vertex set is V(G) × V(T), and (u_1, w_1), …, (u_h, w_h) form a hyperedge in G ⊗ T if and only if
1. {w_1, …, w_h} ∈ E⁺ and {u_1, …, u_h} forms an edge in G, or
2. {w_1, …, w_h} ∈ E⁻ and {u_1, …, u_h} forms a non-edge in G.
We prove (Lemma 3.6) that G ⊗ T yields an equivalent, but regular, instance for k-clique detection if the following conditions hold:
(i) There is some Δ such that for all nodes w_1, …, w_{h-1} (taken from different parts in the k-partition of T), the set {w_1, …, w_{h-1}} is contained in exactly Δ positive and in exactly Δ negative edges of T,
(ii) there is a k-hyperclique in T involving only positive edges, and
(iii) there is no (h+1)-hyperclique in T involving a negative edge.
We call a signed hypergraph satisfying these conditions a template graph. Our task is thus simplified to finding suitable template graphs, which only depend on h and k, not on G.
In a second step, we then show how to construct such template graphs in Lemma 3.10. We employ a construction akin to Cayley graphs over an appropriate group Γ (concretely, we will pick Γ = Z_h^D for an appropriate dimension D). That is, let the nodes of T be k independent copies of Γ, denoted V_1, …, V_k. For each h-tuple of vertices, say, w_1 ∈ V_{i_1}, …, w_h ∈ V_{i_h}, we let {w_1, …, w_h} be a positive edge if and only if w_1 + ⋯ + w_h = a for some fixed group element a, and we let {w_1, …, w_h} be a negative edge if and only if w_1 + ⋯ + w_h = b for some other fixed group element b (where a and b may depend on the chosen parts). The advantage of this construction is that it is immediately clear that the resulting graph satisfies the regularity condition (i): For each choice of w_1, …, w_{h-1} there is a unique choice of w_h such that {w_1, …, w_h} is a positive hyperedge, and the same applies to negative edges and all other combinations of parts. We will then pick the elements a and b in a specific way to guarantee the remaining conditions (ii) and (iii).
1.2 Optimizing Boolean CSPs over Weight- Assignments
The complexity of constraint satisfaction problems (CSPs) has been intensively studied. Starting with Schaefer’s dichotomy theorem for Boolean CSPs [30], which initiated the eventually successful quest of proving the Dichotomy Conjecture [10, 34, 35], classification results for a number of settings have been proven: This includes, among others, approximability characterizations of natural optimization variants [16, 21], as well as parameterized complexity classifications of solving CSPs parameterized by their solution size (aka weight) k, i.e., the number of non-zeros, see [26, 11, 22]. The parameterized classification of Boolean CSPs due to Marx [26] has subsequently been refined to a classification of the fine-grained time complexity, resulting in 4 different regimes [24]: an FPT regime, a subexponential regime, a clique regime, and a brute-force regime.
In this work, we investigate the fine-grained complexity of optimizing Boolean CSPs parameterized by solution size, i.e., optimizing the number of satisfied constraints among assignments of weight k. Formally, let F be a finite family of Boolean constraint functions. The problem MaxW(F) (resp. MinW(F)) asks to determine, given a set of constraints – each formed by applying some f ∈ F to a selection of the Boolean variables x_1, …, x_n – the maximum (resp. minimum) number of constraints satisfied by a Boolean assignment that sets precisely k variables to true (a weight-k assignment). Among others, this class of problems contains the well-studied Densest k-Subgraph and Densest k-Subhypergraph problems (F consists of the binary resp. h-ary AND), the Partial Vertex Cover problem in h-uniform hypergraphs (F consists of the h-ary OR), and the graph problem MaxCut(k, n-k) of maximizing the cut size among cuts with set sizes k and n-k, respectively (F = {XOR}), studied in e.g. [12].
Note that MaxW(F) trivially generalizes the previously studied decision variant, asking whether all constraints are satisfiable by a weight-k assignment, that has been fully classified in [26, 24]. Indeed, for some constraint families F, the optimization variant turns out to be harder: the decision variant can be in FPT [24], while MaxW(F) is W[1]-hard [12]. Thus, a classification result for MaxW(F) must differ from the classification of the decision variant, but how significantly? Do we still obtain 4 regimes of tractability?
We shall prove that the answer is No: for MaxW(F) and MinW(F) we obtain only 3 regimes: (1) a linear-time (rather than FPT) regime, (2) a clique regime, and (3) a brute-force regime. This characterization is governed by a different concept as well: rather than the representation-based notions of [24] (such as Implication-representing families), it is crucial to analyze the maximum degree deg(F) := max_{f ∈ F} deg(f), where deg(f) is defined as the degree of the (unique) multilinear polynomial representing f (see Section 4).
Theorem 1.2 (Informal Version).
Let F be a constraint family and let d := deg(F). Then,
- if d ≤ 1, MaxW(F) and MinW(F) are linear-time solvable,
- if d = 2, the time complexity of MaxW(F) and MinW(F) is n^{ωk/3 ± o(1)}, conditioned on the k-Clique Hypothesis (the precise bound is n^{ω(⌈k/3⌉, ⌈(k-1)/3⌉, ⌊k/3⌋) ± o(1)}, matching the Nešetřil–Poljak running time for k-Clique),
- if d ≥ 3, the time complexity of MaxW(F) and MinW(F) is n^{k ± o(1)}, conditioned on the 3-uniform k-Hyperclique Hypothesis.
We briefly mention some corollaries: Let us call a constraint function f non-reducible if it depends on all of its inputs. We can without loss of generality assume that all f ∈ F are non-reducible (since each f can be replaced by an equivalent non-reducible function). As discussed, e.g., in Williams [32, Section 6.5], non-reducible functions of degree at most 2 must have arity at most 4. Thus, there is only a finite list of non-reducible functions of degree at most 2, and hence, assuming the 3-uniform hyperclique hypothesis, there exists only a finite number of constraint families F for which MaxW(F) admits algorithms beating the brute-force running time n^{k ± o(1)}. Some interesting degree-2 functions (beyond binary arity) include the following (a worked example for 3-NAE follows the list):
- 3-NAE (as well as 3-AllEqual),
- Sort(x_1, x_2, x_3), i.e., the predicate that is true if and only if the bit string x_1 x_2 x_3 is sorted descending or ascending,
- Select(x_1, x_2, x_3) on 3 variables, i.e., the predicate “if x_1 then x_2 else x_3”.
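As a small worked example (our own computation, not taken from the paper), the unique multilinear representation of 3-NAE confirms that it has degree 2, since the cubic terms cancel:

NAE(x_1, x_2, x_3) = 1 - (x_1 x_2 x_3 + (1 - x_1)(1 - x_2)(1 - x_3)) = x_1 + x_2 + x_3 - x_1 x_2 - x_1 x_3 - x_2 x_3.

Similarly, Select(x_1, x_2, x_3) = x_1 x_2 + (1 - x_1) x_3 = x_3 + x_1 x_2 - x_1 x_3, again of degree 2.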
We remark that a conceptually similar classification result has been proven for optimizing first-order graph formulas [6].
Technical Remarks.
The algorithmic part of the above theorem is a straightforward adaptation of a similar reduction of Weighted Degree-2 CSP to k-clique detection given in [32, Section 6.5], see also [25, 6]. Our main technical contribution for Theorem 1.2 is to prove that this algorithmic technique is essentially the best possible, unless the 3-uniform hyperclique hypothesis can be refuted.
For ease of presentation, consider first a constraint family F containing a constraint function f that is symmetric and has deg(f) = 3. Write its characteristic polynomial as c_3 · x_1 x_2 x_3 + c_2 · (x_1 x_2 + x_1 x_3 + x_2 x_3) + c_1 · (x_1 + x_2 + x_3) + c_0 and note that c_3 ≠ 0. The main idea is the following: Given a k-partite regular 3-uniform hypergraph G with parts V_1, …, V_k of size n, we create a Boolean CSP by introducing, for each edge {u, v, w} ∈ E(G), the constraint f(x_u, x_v, x_w). Since G is regular, there is some Δ such that any pair of nodes not from the same part is incident to precisely Δ edges. Thus, a weight-k assignment respecting the k-partition, i.e., the 1-variables contain exactly one variable per part V_i, has an objective value of
c_3 · X + c_2 · Δ · (k choose 2) + c_1 · k · δ + c_0 · |E(G)|,
where X denotes the number of edges induced by the chosen vertices and δ denotes the (uniform) degree of a vertex.
The c_3 · X term is the only term depending on the choice of the k-partite assignment; for c_3 > 0 it is maximized (the case c_3 < 0 can be handled similarly) if and only if the assignment corresponds to a clique in G. Depending on the sign of c_2, k-partite assignments are either automatically favorable over non-partite assignments, or we need to make sure that they are, by carefully introducing dummy constraints. Finally, we show how to handle the case that F contains any (not necessarily symmetric) constraint function with degree at least 3.
1.3 Outlook and Further Applications
We believe that reducing from the regular Hyperclique problem will find further applications. One interesting theme is that by plugging our regularization into known reductions, we can in many cases obtain conditional lower bounds for regular instances. For instance, conditioned on the 3-Uniform k-Hyperclique Hypothesis, Dalirrooyfard and Vassilevska Williams [18] proved a super-linear conditional time lower bound for deciding if a sparse graph contains an induced 4-cycle. Composing this reduction with our regularization reduction (in a white-box manner), one can conclude that the same lower bound applies to induced 4-cycle detection even in regular graphs with the same number of edges.
Regularization for Non-k-Partite Hypergraphs?
An interesting follow-up question to our work is whether we can obtain a similar regularization reduction for the non-k-partite Hyperclique problem. This question is of lesser importance for the design of reductions, as in most (though not all) contexts it is more convenient to work with the k-partite version. Interestingly, already constructing an interesting NO instance to the problem – i.e., a 3-uniform hypergraph that is dense (say, has Ω(n^3) hyperedges), regular (i.e., each pair of distinct vertices appears in the same number of hyperedges) and does not contain a 4-clique – appears very challenging. Even disregarding the 4-clique constraint, constructing (non-k-partite) regular hypergraphs falls into the domain of combinatorial block designs and is known to be notoriously difficult.
2 Preliminaries
We denote by [n] := {1, …, n} for n ∈ ℕ. For an n-element set V and s ≤ n, we denote by (V choose s) the set of all size-s subsets of V. Let S_n denote the set of all permutations of a set on n elements. For a hypergraph G = (V, E), let n := |V|. A simple graph is Δ-regular if every vertex is incident to exactly Δ edges. This notion can be generalized in the following way: For s ≥ 1 and Δ ≥ 0, an h-uniform hypergraph is (s, Δ)-regular if every possible tuple of s pairwise distinct vertices is contained in exactly Δ edges. Using our notation, the notion of Δ-regularity of a graph can thus be viewed as (1, Δ)-regularity of a 2-uniform hypergraph.
We will mostly work with balanced, k-partite h-uniform hypergraphs G = (V_1 ∪ ⋯ ∪ V_k, E) where 1) V_1, …, V_k are pairwise disjoint and |V_1| = ⋯ = |V_k| = n, 2) each edge contains precisely h distinct vertices, and 3) no edge contains two vertices u, v ∈ V_i for any i ∈ [k].
For the purposes of this paper, we will relax the notion of regularity to better fit the k-partite setting. In particular, by slight abuse of notation, we say that a k-partite h-uniform hypergraph is (s, Δ)-regular if for each tuple of s pairwise distinct indices i_1, …, i_s ∈ [k] and for any choice of vertices v_1 ∈ V_{i_1}, …, v_s ∈ V_{i_s}, there are precisely Δ many edges e ∈ E such that {v_1, …, v_s} ⊆ e. We call an h-uniform hypergraph Δ-regular if it is (h-1, Δ)-regular.
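For small instances, the k-partite (s, Δ)-regularity condition above can be checked by brute force; the following snippet (our own illustration, not from the paper) returns the common value Δ if it exists.

```python
from itertools import combinations, product

def regularity(parts, edges, s):
    """Check k-partite (s, Delta)-regularity: every choice of s vertices from s
    distinct parts must be contained in the same number Delta of hyperedges.
    Returns Delta if the hypergraph is (s, Delta)-regular, and None otherwise."""
    edges = [frozenset(e) for e in edges]
    counts = set()
    for idx in combinations(range(len(parts)), s):      # s distinct parts
        for vs in product(*(parts[i] for i in idx)):     # one vertex per chosen part
            counts.add(sum(1 for e in edges if set(vs) <= e))
    return counts.pop() if len(counts) == 1 else None
```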
We state the following simple observation for Δ-regular hypergraphs.
Observation 2.1.
For k > h ≥ 2, let G be a Δ-regular k-partite h-uniform hypergraph. Then for any s ≤ h - 1 there exists Δ_s, such that G is also (s, Δ_s)-regular.
2.1 Hardness Assumptions
The k-clique problem asks, given a graph G = (V, E) with |V| = n, to decide if G contains a clique (set of pairwise adjacent vertices) of size k. For k divisible by 3, k-clique detection can be solved in time O(n^{ωk/3}) [29], where ω < 2.372 [3] is the matrix multiplication exponent. Generalizing k-clique detection to h-uniform hypergraphs asks to determine if there exists a (hyper)clique S ⊆ V of size k such that every h-subset of S is a hyperedge. In contrast to the 2-uniform case, the matrix multiplication approach fails to generalize to h ≥ 3 (see [25] for a discussion) and no known algorithm is significantly faster than O(n^k). Improvements for k-hyperclique would entail faster algorithms for problems that are believed to be hard, such as Max-3-SAT. This gives rise to the following conjecture:
Hypothesis 2.2 (-Uniform -Hyperclique Hypothesis).
Let k > h ≥ 2 and ε > 0.
1. For h = 2, there is no O(n^{(ω/3 - ε)k})-time algorithm detecting a k-clique in a k-partite graph.
2. For h ≥ 3, there is no O(n^{(1 - ε)k})-time algorithm detecting a k-clique in a k-partite h-uniform hypergraph.
3 Making Hypergraphs Regular
In this section we describe in detail our approach to regularize h-uniform hypergraphs while preserving large cliques. More precisely, we prove the following main theorem.
Theorem 3.1.
Let k > h ≥ 2 be constants and let G be an h-uniform hypergraph. Then there exists a k-partite h-uniform hypergraph G' satisfying the following conditions:
1. For any s ≤ h - 1, there exists some Δ_s such that G' is (s, Δ_s)-regular.
2. G' contains a clique of size k if and only if G contains a clique of size k.
3. There exists a computable function f, such that |V(G')| ≤ f(h, k) · |V(G)|.
4. G' can be computed deterministically in time f(h, k) · |V(G)|^h, i.e., in time linear in the size of G'.
Moreover, as a direct consequence of Theorem 3.1, we get the following theorem for free.
Theorem 3.2.
Let k > h ≥ 2. The k-Clique Detection problem is solvable in time T(n) on k-partite h-uniform hypergraphs if and only if it is solvable in time T(g(h, k) · n) on regular k-partite h-uniform hypergraphs (for a non-decreasing time bound T and some computable function g).
In order to prove Theorem 3.1 we come up with an appropriate notion of a hypergraph product, which we call the signed product. It takes an arbitrary hypergraph G and a regular hypergraph T equipped with a sign function and certain properties (we will call such a hypergraph a template hypergraph) and produces the hypergraph G ⊗ T, satisfying the conditions of the theorem. The proof of Theorem 3.1 thus consists of two main parts:
1. Defining the proper notion of the signed product that produces a hypergraph G ⊗ T satisfying the conditions of the theorem.
2. Showing that for any choice of k > h ≥ 2 we can construct a template hypergraph T.
We start by formally defining the notion of a signed hypergraph.
Definition 3.3 (Signed Hypergraph).
Let k > h ≥ 2 be constants. A signed hypergraph is a k-partite, h-uniform hypergraph T = (V, E) equipped with a sign function σ : E → {+, -}.
We say an edge e ∈ E is positive if σ(e) = + and denote by E⁺ the set of all positive edges. We define the set E⁻ of negative edges analogously. Using this notation, we equivalently represent a signed hypergraph as a tuple T = (V, E⁺, E⁻).
With the concept of signed hypergraphs set, we are ready to define the signed product.
Definition 3.4 (Signed Product).
Given an h-uniform hypergraph G = (V_G, E_G) and a signed hypergraph T = (V_T, E⁺, E⁻), the signed product G ⊗ T of G and T is a k-partite, h-uniform hypergraph defined as follows (a small code sketch follows the definition):
- V(G ⊗ T) := V_G × V_T.
- Let {w_1, …, w_h} ∈ E⁺. Then for all u_1, …, u_h ∈ V_G that form an edge in G, let {(u_1, w_1), …, (u_h, w_h)} be an edge in G ⊗ T.
- Let {w_1, …, w_h} ∈ E⁻. Then for all, not necessarily distinct, u_1, …, u_h ∈ V_G that form a non-edge in G, let {(u_1, w_1), …, (u_h, w_h)} form an edge in G ⊗ T.
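The following sketch (our own code and naming conventions, intended only as an illustration of Definition 3.4) builds the edge set of the signed product for small instances; positive template edges are combined with edges of G, negative template edges with non-edges of G (vertex repetitions allowed).

```python
from itertools import product

def signed_product_edges(VG, EG, T_pos, T_neg, h):
    """Edges of the signed product G ⊗ T. VG: vertices of G; EG: h-edges of G
    (as sets); T_pos / T_neg: positive / negative edges of the template T,
    each given as an ordered h-tuple with one vertex per involved part.
    Product vertices are pairs (u, w) with u in V(G) and w in V(T)."""
    EG = {frozenset(e) for e in EG}
    edges = set()
    for us in product(VG, repeat=h):        # ordered h-tuple of G-vertices
        in_G = frozenset(us) in EG          # an edge of G (implies the u_i are distinct)
        for ws in T_pos:
            if in_G:                        # rule 1: positive template edge + edge of G
                edges.add(frozenset(zip(us, ws)))
        for ws in T_neg:
            if not in_G:                    # rule 2: negative template edge + non-edge of G
                edges.add(frozenset(zip(us, ws)))
    return edges
```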
Given a hypergraph G, in order for our product graph to satisfy the properties of Theorem 3.1 it is not sufficient to take just any signed hypergraph T; rather, we want our signed hypergraph to have a certain structure. To this end, we introduce the notion of template hypergraphs.
Definition 3.5 (Template Hypergraph).
For fixed constants k > h ≥ 2, we call a signed hypergraph T = (V, E⁺, E⁻) a template hypergraph if the following properties are satisfied (a brute-force checker for these properties is sketched below):
1. There exists a positive integer Δ, such that the underlying “monochromatic” hypergraphs (V, E⁺) and (V, E⁻) are both Δ-regular.
2. The hypergraph (V, E⁺) contains a clique of size k.
3. No clique of size h + 1 in T contains edges from E⁻.
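For small parameters, the template properties can be verified by brute force. The following sketch (our own illustration) checks conditions 2 and 3; condition 1 can be checked with the regularity snippet from Section 2.

```python
from itertools import combinations

def cliques_of_size(vertices, edges, h, size):
    """All vertex sets of the given size whose h-subsets are all edges."""
    edges = {frozenset(e) for e in edges}
    return [set(S) for S in combinations(vertices, size)
            if all(frozenset(T) in edges for T in combinations(S, h))]

def is_template(parts, E_pos, E_neg, h, k):
    """Brute-force check of conditions 2 and 3 of Definition 3.5
    (exponential in k, purely illustrative)."""
    V = [v for part in parts for v in part]
    # Condition 2: (V, E+) contains a clique of size k.
    has_positive_clique = bool(cliques_of_size(V, E_pos, h, k))
    # Condition 3: no clique of size h+1 (in T) contains an edge from E-.
    neg = {frozenset(e) for e in E_neg}
    all_edges = {frozenset(e) for e in E_pos} | neg
    bad = any(any(frozenset(T) in neg for T in combinations(S, h))
              for S in cliques_of_size(V, all_edges, h, h + 1))
    return has_positive_clique and not bad
```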
Equipped with the definitions above, we are now ready to formally state the main lemma of this section.
Lemma 3.6.
For any fixed constants k, h with k > h ≥ 2, let G be an h-uniform hypergraph and let T be a template hypergraph. Let G ⊗ T be the signed product of G and T. Then G ⊗ T satisfies all the conditions of Theorem 3.1.
Before proving this lemma, however, let us first prove some auxiliary statements. For the rest of this section, let G denote an arbitrary h-uniform hypergraph, T a template hypergraph, and G ⊗ T the signed product of G and T.
Lemma 3.7.
Let u_1, …, u_k be arbitrary pairwise distinct vertices in G and let w_1 ∈ V_1, …, w_k ∈ V_k be vertices of T, one from each part. Let π ∈ S_k be an arbitrary permutation. The following statements are equivalent:
1. The vertices u_1, …, u_k form a clique in G and the vertices w_1, …, w_k form a clique in T.
2. The vertices (u_{π(1)}, w_1), …, (u_{π(k)}, w_k) form a clique in G ⊗ T.
We provide the proof in the full version of the paper. Intuitively, since cliques in T consist of only positive edges, the edges of a clique in G ⊗ T stem from a positive edge of T and an edge contained in G. These edges thus form cliques in the respective graphs.
For a hypergraph H, let #Cl_k(H) denote the number of cliques of size k in H. Following the approach of the proof of the previous lemma, we obtain this corollary.
Corollary 3.8.
Let G, T and G ⊗ T be as above. Then #Cl_k(G ⊗ T) = k! · #Cl_k(G) · #Cl_k(T).
Lemma 3.9.
Let G, T and G ⊗ T be as above. Let Δ be such that both underlying hypergraphs (V, E⁺) and (V, E⁻) are Δ-regular. Then G ⊗ T is a k-partite (Δ · |V(G)|)-regular hypergraph.
We provide the full proof in the full version of the paper.
3.1 Constructing Template Hypergraphs
In this section we show that for any choice of constants k > h ≥ 2 we can efficiently construct a template hypergraph T, which is the missing ingredient in the proof of Theorem 3.1. In particular, we prove the following lemma.
Lemma 3.10.
For any fixed constants k > h ≥ 2, there exists a template hypergraph T.
This template hypergraph can be deterministically constructed in time f(h, k) for some computable function f.
We first give the construction, then prove that it fulfills the properties of a template hypergraph. For the rest of this section let k > h ≥ 2 be constants. Consider the additive group Γ := Z_h^D with D := (k choose h), i.e., the group of all vectors of length D over the group Z_h. We index the dimensions of these vectors by the subsets of [k] of size h. For any set S ∈ ([k] choose h), denote by e_S the vector in Γ that has a 1 in the dimension indexed by S and zeros in all other dimensions, and by 0 we denote the additive identity in Γ (the all-zero vector).
Consider the k-partite signed graph T = (V_1 ∪ ⋯ ∪ V_k, E⁺, E⁻) where each V_i corresponds to a copy of the group Γ and the edges are added as follows. For each S = {i_1, …, i_h} ∈ ([k] choose h) and each choice of vertices w_1 ∈ V_{i_1}, …, w_h ∈ V_{i_h}, we add a positive edge (i.e. {w_1, …, w_h} ∈ E⁺) if the corresponding elements x_1, …, x_h of Γ satisfy the equality x_1 + ⋯ + x_h = 0. Moreover, if the corresponding elements satisfy x_1 + ⋯ + x_h = e_S, we add a negative edge (i.e. {w_1, …, w_h} ∈ E⁻). We proceed to prove that the constructed T has the desired properties. We begin by observing that the number of vertices only depends on h and k.
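The construction can be spelled out in a few lines. The following sketch (our own code, mirroring the description above; the representation of vertices as (part, group element) pairs is ours) enumerates the positive and negative edges.

```python
from itertools import combinations, product

def build_template(h, k):
    """Template hypergraph from Section 3.1: each of the k parts is a copy of
    the group Γ = Z_h^D with D = C(k, h) coordinates indexed by the size-h
    subsets of [k]. An h-tuple with part set S is a positive edge if its group
    elements sum to the all-zero vector, and a negative edge if they sum to e_S."""
    coords = list(combinations(range(k), h))              # dimensions, indexed by size-h subsets
    group = list(product(range(h), repeat=len(coords)))    # elements of Z_h^D
    gsum = lambda xs: tuple(sum(c) % h for c in zip(*xs))  # coordinate-wise sum mod h
    zero = tuple(0 for _ in coords)
    e = {S: tuple(1 if T == S else 0 for T in coords) for S in coords}
    E_pos, E_neg = set(), set()
    for S in coords:                                       # the h parts of the edge
        for xs in product(group, repeat=h):                 # one group element per part
            edge = frozenset((S[i], xs[i]) for i in range(h))  # vertex = (part, group element)
            if gsum(xs) == zero:
                E_pos.add(edge)
            elif gsum(xs) == e[S]:
                E_neg.add(edge)
    return E_pos, E_neg
```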
Observation 3.11.
The signed hypergraph T has k · h^{(k choose h)} vertices; in particular, this number depends only on h and k.
We now state the regularity condition of the underlying hypergraphs (V, E⁺) and (V, E⁻), proven in the full paper.
Lemma 3.12 (P. 1).
Let T be a signed graph as constructed above and let Δ := k - h + 1. Then the underlying hypergraphs (V, E⁺) and (V, E⁻) are both Δ-regular.
It remains to show that the hypergraph (V, E⁺) contains a clique of size k, and that no clique of size h + 1 contains an edge from E⁻. We begin by showing the latter.
Lemma 3.13 (P. 3).
Each clique in T of size h + 1 consists exclusively of positive edges.
Proof.
Let h, k be as above, and assume for contradiction that there exists a clique of size h + 1 consisting of vertices w_0, w_1, …, w_h, such that (w.l.o.g.) the edge {w_1, …, w_h} is negative; let S ∈ ([k] choose h) denote its set of parts. Let x_0, x_1, …, x_h ∈ Γ be the elements that correspond to w_0, …, w_h. By construction of T, this means that x_1, …, x_h satisfy the equation x_1 + ⋯ + x_h = e_S. Moreover, since w_0, …, w_h form a clique, for any other size-h subset I ⊆ {0, 1, …, h} the vertices {w_j : j ∈ I} form a (positive or negative) edge whose set of parts S_I is different from S, and hence the sum of the corresponding elements lies in {0, e_{S_I}}.
As the S-entry of e_{S_I} is 0 for every I ≠ {1, …, h}, if we consider only the entry indexed by S, each such edge yields the equation Σ_{j ∈ I} (x_j)_S = 0. Over all possible values of I, we obtain the following equation system over Z_h:
(x_1)_S + (x_2)_S + ⋯ + (x_h)_S = 1
(x_0)_S + (x_2)_S + ⋯ + (x_h)_S = 0
⋮
(x_0)_S + (x_1)_S + ⋯ + (x_{h-1})_S = 0
Summing up all the rows, we get that h · ((x_0)_S + (x_1)_S + ⋯ + (x_h)_S) = 1. Recalling that h = 0 in Z_h, we get that 0 = 1, which is a contradiction.
We can finally observe that choosing the vertex corresponding to the zero element 0 ∈ Γ in each part yields a clique of size k in (V, E⁺). We state this as a separate observation.
Observation 3.14 (P. 2).
Let w_1 ∈ V_1, …, w_k ∈ V_k be the vertices corresponding to the all-zero vector in their respective parts. Then {w_1, …, w_k} is a clique in the hypergraph (V, E⁺).
Proof of Theorem 3.1. The theorem follows by combining Lemma 3.6 with the template hypergraph construction of Lemma 3.10.
Proof of Theorem 3.2.
Assume that we can decide if a regular k-partite h-uniform hypergraph contains a clique of size k in time T(n), for a computable function T. Then, given an arbitrary h-uniform hypergraph G consisting of n vertices, by Theorem 3.1 we can construct a regular k-partite h-uniform hypergraph G' in linear time, such that G' contains a clique of size k if and only if G contains a clique of size k. Moreover, the number of vertices of G' is bounded by g(h, k) · n for some computable function g. We then run the fast algorithm to detect cliques of size k in G' in time T(g(h, k) · n) and report this as the solution to the original instance. Note that the other direction is trivial. In fact, we can prove an even stronger result, showing that counting cliques of size k in regular hypergraphs is as hard as counting cliques in general hypergraphs.
Theorem 3.15.
For any k > h ≥ 2, there exists an algorithm counting the cliques of size k in k-partite regular h-uniform hypergraphs in time T(n) if and only if there exists an algorithm counting the cliques of size k in general h-uniform hypergraphs in time T(g(h, k) · n) (for some computable function g).
In order to prove this theorem, we extend Observation 3.11 to show that there is in fact a fixed number of cliques in our constructed template graph, depending only on h and k. Recall that by Γ we denote the group Z_h^{(k choose h)}. We prove the next lemma in the full paper.
Lemma 3.16.
Let T be a template hypergraph as constructed above. The number of cliques of size k in T is precisely h^{(k choose h)}.
The proof of Theorem 3.15 now follows directly.
4 Boolean Constraint Optimization Problems
In this section we present the main application of our construction, namely exploiting the hardness of hyperclique detection in regular hypergraphs to show hardness of certain families of Boolean Constraint Optimization Problems. In particular, we obtain a full classification of the fine-grained complexity of this large class of problems. The problems are defined as follows:
Definition 4.1 (Boolean Constraint Optimization Problems (MaxW(F) and MinW(F))).
Let F be a finite Boolean constraint family (i.e. a set of functions f : {0,1}^{a_f} → {0,1}). Given a set C of Boolean constraints on variables x_1, …, x_n, each of the form f(x_{i_1}, …, x_{i_{a_f}}) where f ∈ F, the problem MaxW(F) (resp. MinW(F)) asks for an assignment setting precisely k variables to 1 that maximizes (resp. minimizes) the number of satisfied constraints in C.
It is well known that each Boolean constraint function corresponds to a unique multilinear polynomial (see e.g. [32]). More precisely, for any Boolean constraint function f : {0,1}^a → {0,1}, there exists a unique multilinear polynomial P_f such that f(x) = P_f(x) for any x ∈ {0,1}^a (e.g., OR(x_1, x_2) corresponds to the multilinear polynomial x_1 + x_2 - x_1 · x_2). We call such a polynomial the characteristic polynomial of f. (This characterization is faithful in the sense that if two Boolean functions have the same characteristic polynomial, they represent the same Boolean function.) It allows concepts defined on multilinear polynomials to transfer to the domain of Boolean constraint functions. For a Boolean constraint function f, we say that f has degree d if its characteristic polynomial has degree d. Similarly, the degree of a Boolean constraint family F is the smallest integer d such that every constraint function f ∈ F has degree at most d.
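For concreteness, the characteristic polynomial (and hence the degree) of a given constraint function can be computed by Möbius inversion over subsets of its inputs. The following snippet (our own illustration, not code from the paper) does exactly that and verifies the degree-2 claim for 3-NAE from Section 1.2.

```python
from itertools import combinations

def characteristic_polynomial(f, arity):
    """Coefficients of the unique multilinear polynomial P_f with P_f = f on
    {0,1}^arity, via Moebius inversion: coeff(S) = sum over T ⊆ S of
    (-1)^(|S|-|T|) * f(indicator of T). Returns {frozenset(S): coefficient}."""
    coeffs = {}
    for size in range(arity + 1):
        for S in combinations(range(arity), size):
            c = 0
            for tsize in range(size + 1):
                for T in combinations(S, tsize):
                    point = [1 if i in T else 0 for i in range(arity)]
                    c += (-1) ** (size - tsize) * f(*point)
            if c != 0:
                coeffs[frozenset(S)] = c
    return coeffs

def degree(f, arity):
    """deg(f) = size of the largest monomial with a nonzero coefficient."""
    return max((len(S) for S in characteristic_polynomial(f, arity)), default=0)

nae3 = lambda x, y, z: int(not (x == y == z))   # 3-NAE
assert degree(nae3, 3) == 2                      # its cubic terms cancel
```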
We are now ready to state the main theorem that we prove in this section:
Theorem 4.2 (Hardness of the Boolean Constraint Optimization Problems).
Let F be a Boolean constraint family of degree d.
- If d = 2, then for no ε > 0 does there exist an algorithm solving MaxW(F) (resp. MinW(F)) in time O(n^{(ω/3 - ε)k}), unless the k-Clique Hypothesis fails.
- If d ≥ 3, then for no ε > 0 does there exist an algorithm solving MaxW(F) (resp. MinW(F)) in time O(n^{(1 - ε)k}), unless the 3-uniform k-Hyperclique Hypothesis fails.
We will show in detail the second reduction for families with degree at least 3, reducing from 3-uniform k-Hyperclique detection; the reduction from k-Clique is achieved similarly. By simple modifications to the existing algorithms in the literature (see e.g. [25, 32]), we can complement these hardness results with the corresponding algorithms to obtain a tight classification of the fine-grained complexity of Boolean Constraint Optimization Problems. The details can be found in the full paper.
Theorem 4.3 (Boolean Constraint Optimization Algorithms).
Let F be an arbitrary constraint family and let d be the degree of F.
- If d ≤ 1, we can solve MaxW(F) (MinW(F)) in linear time.
- If d = 2, we can solve MaxW(F) (MinW(F)) in time n^{ωk/3 + o(1)}.
- If d ≥ 3, we can solve MaxW(F) (MinW(F)) in time n^{k + o(1)}.
We prove all the hardness results and the algorithms for the MaxW(F) problem, and remark that there is a canonical reduction (replacing each constraint function by its negation) showing equivalence between MaxW(F) and the corresponding minimization problem.
4.1 Hardness Construction
This section is dedicated to proving Theorem 4.2. But first, we introduce some useful notation and prove some auxiliary results. For the rest of this section, let X be a set of Boolean variables, let F be an arbitrary finite constraint family and let C be the given set of constraints over X, where each constraint is of the type f(x_{i_1}, …, x_{i_a}) for some f ∈ F and some choice of variables. Let π : X → {0, 1} be a function assigning to each variable a Boolean value. By val(π) we denote the number of constraints in C satisfied by π. The following observation plays a crucial role in our proof.
Observation 4.4.
For each constraint c ∈ C, let P_c denote the characteristic polynomial of the corresponding function f ∈ F. For any assignment π, the equality val(π) = Σ_{c ∈ C} P_c(π) holds (where P_c(π) denotes evaluating P_c at the values that π assigns to the variables of c).
With this, it suffices to find an assignment that maximizes the value of the polynomial obtained by summing all the characteristic polynomials of the functions corresponding to the constraints. More precisely, given a set C consisting of m constraints c_1, …, c_m, define the polynomial P_C as
P_C := Σ_{j=1}^{m} P_{c_j},
where P_{c_j} denotes, as above, the characteristic polynomial of the function corresponding to the constraint c_j (applied to the variables of c_j). Using this notation, for any assignment π, as a simple consequence of Observation 4.4, we have that val(π) = P_C(π).
For space reasons, we only discuss the case where f ∈ F is a symmetric ternary Boolean constraint function of degree 3. In the full paper we show that this is without loss of generality: We can symmetrize a constraint by considering all constraints obtained by permuting its variables, and reduce arity and (possibly) degree by reusing variables, i.e. consider constraints of the form f(x_i, x_i, x_j, …). With this assumption, the characteristic polynomial of f is given by
P_f(x_1, x_2, x_3) = c_3 · x_1 x_2 x_3 + c_2 · (x_1 x_2 + x_1 x_3 + x_2 x_3) + c_1 · (x_1 + x_2 + x_3) + c_0   (1)
for some integer coefficients c_0, c_1, c_2, c_3 with c_3 ≠ 0.
We remark that our reduction will depend on the coefficients of this multilinear polynomial. We first consider the coefficient c_3. Let G = (V_1 ∪ ⋯ ∪ V_k, E) be a k-partite Δ-regular 3-uniform hypergraph with |V_1| = ⋯ = |V_k| = n, and without loss of generality assume that V_1 ∪ ⋯ ∪ V_k = {v_1, …, v_{kn}}. Let G̃ = (V_1 ∪ ⋯ ∪ V_k, Ẽ), with the edges defined as follows. If c_3 > 0, let Ẽ := E. Otherwise, if c_3 < 0, take Ẽ as the complement of E respecting the k-partition (i.e. for pairwise distinct i_1, i_2, i_3 ∈ [k] and u ∈ V_{i_1}, v ∈ V_{i_2}, w ∈ V_{i_3}, let {u, v, w} ∈ Ẽ if and only if {u, v, w} ∉ E). We can observe that G̃ is again a k-partite regular 3-uniform hypergraph.
Observation 4.5.
G̃ is a k-partite Δ̃-regular 3-uniform hypergraph, where Δ̃ is as follows:
1. If c_3 > 0, then Δ̃ = Δ.
2. If c_3 < 0, then Δ̃ = (k - 2) · n - Δ.
We say that an assignment π of the variables in X is k-partite if for each vertex part V_i of G̃, π sets precisely one variable corresponding to a vertex of V_i to 1 and all others to 0. We now consider the sign of the coefficient c_2, and make a case distinction based on this value.
Case 1: c_2 > 0.
Construction 4.6.
Let X := {x_v : v ∈ V(G̃)} be the set of variables. Now, for each edge {u, v, w} in G̃, add to C the constraint f(x_u, x_v, x_w). No other constraints are added to C.
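The following sketch (our own illustration) makes Construction 4.6 and the objective function val explicit.

```python
def construction_4_6(G_tilde_edges):
    """Construction 4.6: one Boolean variable x_v per vertex of G~ and one
    constraint f(x_u, x_v, x_w) per hyperedge {u, v, w}; each constraint is
    represented by the triple of vertices whose variables f is applied to."""
    return [tuple(e) for e in G_tilde_edges]

def val(constraints, f, ones):
    """Number of constraints satisfied by the weight-k assignment that sets
    exactly the variables of the vertices in 'ones' to 1."""
    return sum(f(*(int(v in ones) for v in c)) for c in constraints)
```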
Our goal is to prove that if there exists a k-clique in the hypergraph G, then any weight-k assignment that maximizes val sets the variables of the vertices that correspond to such a k-clique to 1 and all other variables to 0. The strategy that we take to prove this is to first prove that any weight-k assignment that maximizes val has to be k-partite. We then prove that among all the k-partite assignments, the one that maximizes the value of val corresponds to a k-clique in G (i.e. either a k-clique or an independent set in G̃, depending on the sign of c_3).
Lemma 4.7.
Let π be any weight-k assignment of the variables in X and let A be the set of vertices that correspond to the variables set to 1 by π. Denote by q_A the number of pairs of vertices of A lying in distinct parts, and let X_A denote the number of edges in the induced subhypergraph G̃[A]. Then we have that
val(π) = c_3 · X_A + c_2 · Δ̃ · q_A + γ,
where γ does not depend on the assignment π.
We sketch the proof here and refer the reader to the full paper for the detailed proof.
Proof (sketch)..
Using Observation 4.4, it holds that val(π) = Σ_{{u,v,w} ∈ Ẽ} P_f(π(x_u), π(x_v), π(x_w)).
Recall that by Equation (1) we have P_f(x_1, x_2, x_3) = c_3 x_1 x_2 x_3 + c_2 (x_1 x_2 + x_1 x_3 + x_2 x_3) + c_1 (x_1 + x_2 + x_3) + c_0. Plugging this in above, we get
val(π) = c_3 Σ_{{u,v,w} ∈ Ẽ} π(x_u)π(x_v)π(x_w) + c_2 Σ_{{u,v,w} ∈ Ẽ} (π(x_u)π(x_v) + π(x_u)π(x_w) + π(x_v)π(x_w)) + c_1 Σ_{{u,v,w} ∈ Ẽ} (π(x_u) + π(x_v) + π(x_w)) + c_0 · |Ẽ|.
Clearly the last term does not depend on π. Since every vertex of G̃ has the same degree, the c_1-term sums the degrees of the k chosen vertices and hence does not depend on π either. We remark that π(x_u)π(x_v)π(x_w) = 1 if and only if u, v, w ∈ A. Since the sum iterates only over the edges in G̃, we can observe that the c_3-term equals c_3 · X_A.
For the coefficient c_2, we recall that G̃ is a regular k-partite hypergraph, hence for any pair of vertices u, v from distinct parts there are Δ̃ many edges containing u and v, while no edge contains two vertices from the same part. As π(x_u)π(x_v) = 1 if and only if u, v ∈ A, the desired equality follows easily.
As an immediate consequence of the last lemma, we obtain the number of constraints satisfied by any k-partite assignment π, just by plugging in q_A = (k choose 2) and spelling out the constant γ.
Corollary 4.8.
Let π be any k-partite assignment of the variables in X, let A and X_A be defined as above, and let v be an arbitrary vertex in G̃. Then
val(π) = c_3 · X_A + c_2 · Δ̃ · (k choose 2) + c_1 · k · deg(v) + c_0 · |Ẽ|.
We now prove that any -partite assignment is favorable over any non-partite assignment.
Lemma 4.9.
Let π_A be a k-partite assignment and let π_B be any non-k-partite assignment of weight k. Let A and B be the sets of vertices that correspond to the variables set to 1 by π_A and π_B, respectively. Then val(π_A) > val(π_B).
Due to space constraints, we only sketch the proof, giving it in the full version of the paper.
Proof (sketch)..
Consider the value val(π_A) - val(π_B). By Lemma 4.7 and Corollary 4.8 we compute:
val(π_A) - val(π_B) = c_2 · Δ̃ · ((k choose 2) - q_B) + c_3 · (X_A - X_B).
Since π_B is not k-partite, we have (k choose 2) - q_B ≥ 1, and using that in our setting the inequality c_2 > 0 holds, the first term is at least c_2 · Δ̃, which grows with n for the regular instances we consider (see Observation 4.5). Noticing further that the second term is bounded in absolute value by |c_3| · (k choose 3), i.e., by a function of k and f only, we obtain val(π_A) > val(π_B).
We are now ready to prove that if we can efficiently solve the constructed instance C, we can also efficiently decide if the given hypergraph G contains a k-clique.
Lemma 4.10.
There exist a weight-k assignment π and an integer ρ, such that val(π) ≥ ρ if and only if G contains a clique of size k. Moreover, ρ can be computed in constant time.
We give a sketch of the proof here and refer the reader to the full version for details.
Proof (sketch)..
Let π be an assignment that maximizes the value val(π). By Lemma 4.9, we can assume that π is k-partite. Then by Corollary 4.8, we have val(π) = c_3 · X_A + γ', where γ' is the value equal for all k-partite assignments (independent of π). We now make a distinction between two cases based on the sign of c_3. If c_3 > 0, set ρ := c_3 · (k choose 3) + γ', otherwise set ρ := γ'. In both cases a simple inspection of our construction shows that a k-partite assignment π satisfies val(π) ≥ ρ if and only if the vertices in A form a clique in G.
Case 2: c_2 < 0.
Construction 4.11.
Let X := {x_v : v ∈ V(G̃)} be the set of variables. For each edge {u, v, w} in G̃, add f(x_u, x_v, x_w) to C. Furthermore, for each i ∈ [k] and each pair of distinct vertices u, v ∈ V_i, we add a suitable number of dummy constraints involving x_u and x_v to C, whose purpose is to penalize assignments that select two vertices from the same part.
Following the general structure of the first case, we first count the number of constraints satisfied by an assignment and then show that any optimal assignment is k-partite.
Lemma 4.12.
Let π be any weight-k assignment and let A be the set of vertices that correspond to the variables set to 1 by π. Denote by q_A the number of pairs of vertices of A lying in distinct parts, and let X_A denote the number of edges in the induced subhypergraph G̃[A]. Then the following equality holds:
val(π) = c_3 · X_A + c_2 · Δ̃ · q_A + D_A + γ,
where D_A denotes the total contribution of the dummy constraints and γ is a value that does not depend on the choice of π.
We remark that the proof follows the same general idea as the proof of Lemma 4.7, but with some more tedious counting arguments. To inspect the full proof, see the full version of the paper.
We can now argue that for any k-partite assignment π_A and any assignment π_B that is not k-partite, the inequality val(π_A) > val(π_B) is satisfied.
Lemma 4.13.
Let π_A be any k-partite assignment of the variables in X and let π_B be any non-k-partite assignment of weight k. Let A and B be the sets of vertices that correspond to the variables set to 1 by π_A and π_B, respectively. Then val(π_A) > val(π_B).
While the proof follows a general approach similar to the proof of Lemma 4.9, we remark that it is significantly more technical and tedious to work out the details, which can be found in the full version of the paper. We now focus on k-partite assignments only, and show that among all k-partite assignments, the one that maximizes the value of val corresponds to a k-clique in G (if it exists).
Lemma 4.14.
There exist a weight-k assignment π and an integer ρ such that val(π) ≥ ρ if and only if G contains a clique of size k. Moreover, ρ can be computed in constant time.
Proof.
Let π be an assignment of weight k that maximizes the value val(π). By the previous lemma, π is k-partite; thus q_A = (k choose 2) and, with Lemma 4.12, val(π) satisfies:
val(π) = c_3 · X_A + c_2 · Δ̃ · (k choose 2) + D_A + γ.
We now observe that the part c_2 · Δ̃ · (k choose 2) + D_A + γ is independent of the choice of the k-partite assignment π. In particular, we can again define the threshold ρ as in the proof of Lemma 4.10, with this constant in place of γ'. The result now follows by precisely the same argument as in the proof of Lemma 4.10.
Case 3: c_2 = 0.
For this case we use one of the previous constructions, depending on the remaining coefficients. In particular, we need to ensure that any optimal assignment is indeed k-partite. We go over the details in the full paper and state the main lemma here:
Lemma 4.15.
There exist an integer ρ and an assignment π of weight k, such that val(π) ≥ ρ if and only if the hypergraph G contains a clique of size k. Moreover, we can compute ρ in constant time.
We can finally prove Theorem 4.2.
Proof of Theorem 4.2.
Assume that there exists a family F with deg(F) ≥ 3 and that there exists an algorithm solving MaxW(F) in time O(n^{(1-ε)k}) for some ε > 0. With our previous observation, detailed in the full paper, we construct a family F' of degree exactly 3 such that MaxW(F') is solvable in time O(n^{(1-ε)k}). Then, given a k-partite 3-uniform hypergraph G, which by Theorem 3.1 we may assume to be regular, we apply the corresponding construction (depending on the sign of c_2) to construct an instance of MaxW(F') that consists of O(n) variables and O(n^3) constraints. Now, by Lemmas 4.10, 4.14 and 4.15, we can decide if G contains a clique of size k by computing an assignment that maximizes the value val in time O(n^{(1-ε)k}), thus refuting the 3-uniform k-Hyperclique Hypothesis.
References
- [1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. If the current clique algorithms are optimal, so is valiant’s parser. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 98–117. IEEE Computer Society, 2015. doi:10.1109/FOCS.2015.16.
- [2] Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof. More consequences of falsifying SETH and the Orthogonal Vectors conjecture. In Proc. 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018), STOC 2018, pages 253–266, New York, NY, USA, 2018. ACM. doi:10.1145/3188745.3188938.
- [3] Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. More asymmetry yields faster matrix multiplication. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 2005–2039. SIAM, 2025. doi:10.1137/1.9781611978322.63.
- [4] Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina. The fine-grained complexity of multi-dimensional ordering properties. In Petr A. Golovach and Meirav Zehavi, editors, 16th International Symposium on Parameterized and Exact Computation, IPEC 2021, September 8-10, 2021, Lisbon, Portugal, volume 214 of LIPIcs, pages 3:1–3:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.IPEC.2021.3.
- [5] Ulrik Brandes, Eugenia Holm, and Andreas Karrenbauer. Cliques in regular graphs and the core-periphery problem in social networks. In T.-H. Hubert Chan, Minming Li, and Lusheng Wang, editors, Combinatorial Optimization and Applications - 10th International Conference, COCOA 2016, Hong Kong, China, December 16-18, 2016, Proceedings, volume 10043 of Lecture Notes in Computer Science, pages 175–186. Springer, 2016. doi:10.1007/978-3-319-48749-6_13.
- [6] Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A structural investigation of the approximability of polynomial-time problems. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 30:1–30:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ICALP.2022.30.
- [7] Karl Bringmann, Nick Fischer, and Marvin Künnemann. A fine-grained analogue of Schaefer’s theorem in P: Dichotomy of ∃^k∀-quantified first-order graph properties. In Amir Shpilka, editor, 34th Computational Complexity Conference, CCC 2019, July 18-20, 2019, New Brunswick, NJ, USA, volume 137 of LIPIcs, pages 31:1–31:27. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.CCC.2019.31.
- [8] Karl Bringmann and Jasper Slusallek. Current algorithms for detecting subgraphs of bounded treewidth are probably optimal. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, July 12-16, 2021, Glasgow, Scotland (Virtual Conference), volume 198 of LIPIcs, pages 40:1–40:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.ICALP.2021.40.
- [9] Karl Bringmann and Philip Wellnitz. Clique-based lower bounds for parsing tree-adjoining grammars. In Juha Kärkkäinen, Jakub Radoszewski, and Wojciech Rytter, editors, 28th Annual Symposium on Combinatorial Pattern Matching, CPM 2017, July 4-6, 2017, Warsaw, Poland, volume 78 of LIPIcs, pages 12:1–12:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPICS.CPM.2017.12.
- [10] Andrei A. Bulatov. A dichotomy theorem for nonuniform CSPs. In Proc. 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2017), pages 319–330, 2017. doi:10.1109/FOCS.2017.37.
- [11] Andrei A. Bulatov and Dániel Marx. Constraint satisfaction parameterized by solution size. SIAM J. Comput., 43(2):573–616, 2014. doi:10.1137/120882160.
- [12] Leizhen Cai. Parameterized complexity of cardinality constrained optimization problems. Comput. J., 51(1):102–121, 2008. doi:10.1093/COMJNL/BXM086.
- [13] Nofar Carmeli, Shai Zeevi, Christoph Berkholz, Benny Kimelfeld, and Nicole Schweikardt. Answering (unions of) conjunctive queries using random access and random-order enumeration. In Dan Suciu, Yufei Tao, and Zhewei Wei, editors, Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2020, Portland, OR, USA, June 14-19, 2020, pages 393–409. ACM, 2020. doi:10.1145/3375395.3387662.
- [14] Yi-Jun Chang. Hardness of RNA folding problem with four symbols. In Roberto Grossi and Moshe Lewenstein, editors, 27th Annual Symposium on Combinatorial Pattern Matching, CPM 2016, June 27-29, 2016, Tel Aviv, Israel, volume 54 of LIPIcs, pages 13:1–13:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.CPM.2016.13.
- [15] Charles J. Colbourn and Jeffrey H. Dinitz, editors. Handbook of Combinatorial Designs. Chapman and Hall/CRC, New York, 2 edition, November 2006. doi:10.1201/9781420010541.
- [16] Nadia Creignou. A dichotomy theorem for maximum generalized satisfiability problems. J. Comput. Syst. Sci., 51(3):511–522, 1995. doi:10.1006/jcss.1995.1087.
- [17] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015. doi:10.1007/978-3-319-21275-3.
- [18] Mina Dalirrooyfard and Virginia Vassilevska Williams. Induced cycles and paths are harder than you think. In 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022, Denver, CO, USA, October 31 - November 3, 2022, pages 531–542. IEEE, 2022. doi:10.1109/FOCS54457.2022.00057.
- [19] Uriel Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4):634–652, 1998. doi:10.1145/285055.285059.
- [20] Egor Gorbachev and Marvin Künnemann. Combinatorial designs meet hypercliques: Higher lower bounds for Klee’s measure problem and related problems in dimensions d ≥ 4. In Erin W. Chambers and Joachim Gudmundsson, editors, 39th International Symposium on Computational Geometry, SoCG 2023, June 12-15, 2023, Dallas, Texas, USA, volume 258 of LIPIcs, pages 36:1–36:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.SOCG.2023.36.
- [21] Sanjeev Khanna, Madhu Sudan, Luca Trevisan, and David P. Williamson. The approximability of constraint satisfaction problems. SIAM J. Comput., 30(6):1863–1920, 2000. doi:10.1137/S0097539799349948.
- [22] Stefan Kratsch, Dániel Marx, and Magnus Wahlström. Parameterized complexity and kernelizability of max ones and exact ones problems. TOCT, 8(1):1:1–1:28, 2016. doi:10.1145/2858787.
- [23] Marvin Künnemann. A tight (non-combinatorial) conditional lower bound for klee’s measure problem in 3d. In 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022, Denver, CO, USA, October 31 - November 3, 2022, pages 555–566. IEEE, 2022. doi:10.1109/FOCS54457.2022.00059.
- [24] Marvin Künnemann and Dániel Marx. Finding small satisfying assignments faster than brute force: A fine-grained perspective into boolean constraint satisfaction. In Shubhangi Saraf, editor, 35th Computational Complexity Conference, CCC 2020, July 28-31, 2020, Saarbrücken, Germany (Virtual Conference), volume 169 of LIPIcs, pages 27:1–27:28. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.CCC.2020.27.
- [25] Andrea Lincoln, Virginia Vassilevska Williams, and R. Ryan Williams. Tight hardness for shortest cycles and paths in sparse graphs. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1236–1252. SIAM, 2018. doi:10.1137/1.9781611975031.80.
- [26] Dániel Marx. Parameterized complexity of constraint satisfaction problems. Computational Complexity, 14(2):153–183, 2005. doi:10.1007/s00037-005-0195-9.
- [27] Dániel Marx. Parameterized graph separation problems. Theor. Comput. Sci., 351(3):394–406, 2006. doi:10.1016/J.TCS.2005.10.007.
- [28] Luke Mathieson and Stefan Szeider. The parameterized complexity of regular subgraph problems and generalizations. In James Harland and Prabhu Manyem, editors, Theory of Computing 2008. Proc. Fourteenth Computing: The Australasian Theory Symposium (CATS 2008), Wollongong, NSW, Australia, January 22-25, 2008. Proceedings, volume 77 of CRPIT, pages 79–86. Australian Computer Society, 2008. URL: http://crpit.scem.westernsydney.edu.au/abstracts/CRPITV77Mathieson.html.
- [29] Jaroslav Nešetřil and Svatopluk Poljak. On the complexity of the subgraph problem. Commentationes Mathematicae Universitatis Carolinae, 026(2):415–419, 1985.
- [30] Thomas J. Schaefer. The complexity of satisfiability problems. In Proceedings of the 10th Annual ACM Symposium on Theory of Computing, May 1-3, 1978, San Diego, California, USA, pages 216–226, 1978. doi:10.1145/800133.804350.
- [31] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the international congress of mathematicians: Rio de janeiro 2018, pages 3447–3487, 2018.
- [32] R. Ryan Williams. Algorithms and resource requirements for fundamental problems. PhD thesis, Carnegie Mellon University, USA, 2007. AAI3274191.
- [33] Or Zamir. Algorithmic applications of hypergraph and partition containers. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 985–998. ACM, 2023. doi:10.1145/3564246.3585163.
- [34] Dmitriy Zhuk. A proof of CSP dichotomy conjecture. In Proc. 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2017), pages 331–342, 2017. doi:10.1109/FOCS.2017.38.
- [35] Dmitriy Zhuk. A proof of the CSP dichotomy conjecture. J. ACM, 67(5):30:1–30:78, 2020. doi:10.1145/3402029.