
The Role of Regularity in (Hyper-)Clique Detection and Implications for Optimizing Boolean CSPs

Nick Fischer, INSAIT, Sofia University “St. Kliment Ohridski”, Bulgaria
Marvin Künnemann, Karlsruhe Institute of Technology, Germany
Mirza Redžić, Karlsruhe Institute of Technology, Germany
Julian Stieß, Karlsruhe Institute of Technology, Germany
Abstract

Is detecting a k-clique in k-partite regular (hyper-)graphs as hard as in the general case? Intuition suggests yes, but proving this – especially for hypergraphs – poses notable challenges. Concretely, we consider a strong notion of regularity in h-uniform hypergraphs, where we essentially require that any subset of at most h−1 vertices is incident to a uniform number of hyperedges. Such notions are studied intensively in the combinatorial block design literature. We show that any f(k)·n^{g(k)}-time algorithm for detecting k-cliques in such graphs transfers to an f′(k)·n^{g(k)}-time algorithm for the general case, establishing a fine-grained equivalence between the h-uniform hyperclique hypothesis and its natural regular analogue.

Equipped with this regularization result, we then fully resolve the fine-grained complexity of optimizing Boolean constraint satisfaction problems over assignments with k non-zeros. Our characterization depends on the maximum degree d of a constraint function. Specifically, if d≤1, we obtain a linear-time solvable problem, if d=2, the time complexity is essentially equivalent to k-clique detection, and if d≥3 the problem requires exhaustive-search time under the 3-uniform hyperclique hypothesis. To obtain our hardness results, the regularization result plays a crucial role, enabling a very convenient approach when applied carefully. We believe that our regularization result will find further applications in the future.

Keywords and phrases:
fine-grained complexity theory, clique detections in hypergraphs, constraint satisfaction, parameterized algorithms
Category:
Track A: Algorithms, Complexity and Games
Funding:
Nick Fischer: Partially funded by the Ministry of Education and Science of Bulgaria’s support for INSAIT, Sofia University “St. Kliment Ohridski” as part of the Bulgarian National Roadmap for Research Infrastructure. Parts of this work were done while the author was at Weizmann Institute of Science.
Copyright and License:
© Nick Fischer, Marvin Künnemann, Mirza Redžić, and Julian Stieß; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Problems, reductions and completeness; Theory of computation → Graph algorithms analysis
Funding:
Research supported by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) – 462679611.
Related Version:
Full Version: https://arxiv.org/abs/2505.17314
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

In the design of algorithms and hardness reductions, it is often helpful to assume that the involved instances satisfy certain regularity conditions. To name a concrete example: For his celebrated Set Cover inapproximability result, Feige [19] first argues that the 3SAT-5 problem (the highly regular special case of SAT in which every clause contains precisely 3 variables and each variable occurs in precisely 5 clauses) is hard to approximate, and then gives a reduction from 3SAT-5 to Set Cover. We informally refer to the first reduction as a regularization. Another notable example of a regularization in parameterized complexity is the W[1]-hardness of k-Clique detection in regular graphs. In fact, this is the first (non-trivial) reduction in the textbook by Cygan et al. [17], and has been independently observed for their respective applications in at least three different works [27, 12, 28].

Conversely, obtaining regularization results can also have algorithmic implications. For instance, Zamir [33] devises improved exact algorithms for a wide array of problems including graph coloring on regular graphs (in fact, even for almost-regular graphs). Designing regularizations for these problems would therefore imply breakthroughs for general instances.

In this work we focus on the k-Clique and k-Hyperclique problems from a fine-grained perspective. Our main result is a surprisingly intricate fine-grained regularization reduction, i.e., we prove that both problems are as hard on regular (hyper-)graphs as they are on general (hyper-)graphs. Equipped with this, we are able to resolve the fine-grained complexity of optimizing Boolean CSPs over weight-k solutions; this, in fact, had been the motivating question for this study. Below, we detail these two contributions separately in Sections 1.1 and 1.2.

1.1 Regularizing (Hyper-)clique Detection

Clique Detection.

The k-Clique problem is to decide if a k-partite graph G=(V_1,…,V_k,E) contains a k-clique, i.e., k nodes v_1∈V_1,…,v_k∈V_k such that any pair (v_i,v_j) forms an edge in G.¹ While the trivial algorithm takes time 𝒪(n^k), Nešetřil and Poljak [29] devised an algorithm running in time 𝒪(n^{(ω/3)k}), where ω<2.3714 denotes the matrix multiplication exponent [3].² The k-Clique Hypothesis postulates that this algorithm is essentially optimal (see Section 2). As it generalizes the popular Triangle hypothesis (i.e., the special case k=3, stating that the complexity of Triangle Detection is n^{ω±o(1)}), this hardness assumption entails fine-grained lower bounds for an extensive list of problems (see e.g. [31]), but also for some important lower bounds beyond the Triangle case [1, 14, 9].

¹ Often, the problem is stated for general instead of k-partite graphs. However, these formulations are well-known to be equivalent using color-coding techniques, and the restriction to k-partite instances often helps in designing reductions.
² More precisely, the running time bound 𝒪(n^{(ω/3)k}) applies only when k is divisible by 3. For general k, the running time is 𝒪(n^{ω(⌈k/3⌉,⌊k/3⌋,⌈(k−1)/3⌉)}), where ω(a,b,c) is the exponent of rectangular matrix multiplication (i.e., 𝒪(n^{ω(a,b,c)}) is the running time of multiplying an n^a×n^b matrix by an n^b×n^c matrix).
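The k=3 case of the matrix-multiplication approach – counting triangles via the trace of A³ – can be sketched in a few lines. This is our own illustration, with a naive cubic matrix multiply standing in for fast matrix multiplication:

```python
def matmul(A, B):
    # naive O(n^3) matrix product; fast matrix multiplication would go here
    n = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def count_triangles(A):
    """Each triangle contributes 6 closed walks of length 3, so the
    number of triangles equals trace(A^3) / 6."""
    A3 = matmul(matmul(A, A), A)
    return sum(A3[i][i] for i in range(len(A))) // 6

# A 4-cycle contains no triangle; adding the chord {0,2} creates two.
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
C4_chord = [[0, 1, 1, 1], [1, 0, 1, 0], [1, 1, 0, 1], [1, 0, 1, 0]]
```

Replacing `matmul` by an 𝒪(n^ω) algorithm yields the n^ω triangle-detection bound referenced above.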

Hyperclique Detection.

The natural generalization to h-uniform hypergraphs, the (k,h)-Hyperclique problem, is to decide if a given k-partite h-uniform hypergraph G=(V_1,…,V_k,E) contains a k-hyperclique, i.e., k nodes v_1∈V_1,…,v_k∈V_k such that every h-subset {v_{i_1},…,v_{i_h}} forms a hyperedge in G. In contrast to the k-Clique problem in graphs (which coincides with (k,2)-Hyperclique), the (k,h)-Hyperclique problem for h≥3 is not known to admit any significant improvement over the brute-force search time 𝒪(n^k). This led to the (h-Uniform) k-Hyperclique Hypothesis postulating that time n^{k−o(1)} is necessary in h-uniform hypergraphs [25]. Besides being a natural problem in itself, the relevance of the Hyperclique problem is also due to its close connection to the MAX-3-SAT problem [25] and due to a recent surge in strong (often matching) conditional lower bounds based on the Hyperclique conjecture [2, 25, 7, 13, 24, 4, 8, 18, 23, 20, 33].

(Hyper-)clique Regularization.

The starting point of this paper is the following question: Do these hypotheses imply the same time bounds for clique detection in regular (hyper-)graphs? While it is clear what we mean by a regular graph, it is less clear how to define a regular h-uniform hypergraph. The perhaps simplest definition requires each node to be incident to the same number r≥1 of hyperedges. However, this notion turns out to be insufficient for our main application. Instead, we consider a significantly stronger variant, in which every subset of fewer than h vertices, each taken from a different part of the k-partition, is incident to the same number r of hyperedges (see Section 2 for details). This yields essentially the strongest form of regularity that we may wish for in a hypergraph. The existence of such structures is well-studied in combinatorial block design theory; e.g., k-partite 3-uniform hypergraphs that are regular in our sense correspond precisely to group divisible designs with k groups, see [15].

Our first main contribution is to establish equivalence of the 3-Uniform Hyperclique Hypothesis for general hypergraphs to its special case of regular k-partite hypergraphs in the above strong sense.

Theorem 1.1 (Hyperclique Regularization, Informal).

Let h≥2. The (k,h)-Hyperclique problem is solvable in time f(k)·n^{g(k)} on k-partite h-uniform hypergraphs if and only if it is solvable in time f′(k)·n^{g(k)} on regular k-partite h-uniform hypergraphs.

We are confident that this theorem will be useful in a black-box fashion for future hardness results based on the k-Clique or k-Hyperclique hypotheses. Moreover, we extend this equivalence also to counting k-hypercliques, see Section 3.

Let us discuss Theorem 1.1: While the statement might feel “obviously true”, to our surprise, proving it turns out to be anything but straightforward. Even for graphs (h=2), we are not aware of a previous proof of this fact. The parameterized reductions in [12, 28, 5, 17] are generally not efficient enough as they reduce arbitrary n-vertex graphs G with maximum degree Δ to regular graphs with Ω(Δn) vertices. As Δ can be as large as Θ(n), this blow-up is inadmissible for our fine-grained perspective (the resulting conditional lower bounds would have the exponent of n halved).

Having said that, it is possible to regularize graphs by only adding 𝒪(n) additional nodes. Specifically, one approach is to increase the degree deg(v) of each node v by Δ−deg(v). To this end we can add 𝒪(n) dummy nodes and add an appropriate number of edges between the original nodes and the freshly introduced dummy nodes (while ensuring not to create new k-cliques). Afterwards it still remains to regularize the new dummy nodes, which can for instance be achieved by balancing the new edges appropriately. The details are tedious and not very insightful.

The real challenge arises for hypergraphs. Recall that the strong regularity guarantee for, say, 3-uniform hypergraphs, is that any pair of vertices (in different vertex parts) must be contained in the same number of hyperedges. Thus, in order to generalize the previous approach we would have to regularize not only the individual dummy nodes, but also all pairs of (original, dummy) and (dummy, dummy) nodes. For this reason we essentially make no progress by adding dummy nodes. Put differently, regularizing graphs is a local problem (in the sense that we could fix the regularity constraints for each node v by adding some dummy nodes only concerned with v), whereas regularizing 3-uniform hypergraphs appears to be a more global problem (forcing us to simultaneously regularize all vertex-pairs while not introducing unintended k-cliques).

Due to this obstacle, we adopt a very different and conceptually cleaner approach. Let G be a given k-partite h-uniform hypergraph. The new idea is to construct a regular k-partite h-uniform hypergraph G′ from a graph product operation that we call the signed product of G with a suitable signed hypergraph T. A signed hypergraph T has positive and negative edges E⁺(T) and E⁻(T), respectively. We define the resulting graph G′ as follows: Its vertex set is V′=V(G)×V(T), and (u_1,x_1),…,(u_h,x_h)∈V′ form a hyperedge in G′ if and only if

  1. {u_1,…,u_h}∈E(G) and {x_1,…,x_h}∈E⁺(T), or

  2. {u_1,…,u_h}∉E(G) and {x_1,…,x_h}∈E⁻(T).

We prove (Lemma 3.6) that G′ yields an equivalent, but regular, instance for k-clique if the following conditions hold:

  (i) There is some r such that for all nodes x_1,…,x_{h−1}∈V(T) (taken from different parts in the k-partition of T), the set {x_1,…,x_{h−1}} is contained in exactly r positive and in exactly r negative edges of T,

  (ii) there is a k-hyperclique in T involving only positive edges, and

  (iii) there is no k-hyperclique in T involving a negative edge.

We call a signed hypergraph satisfying these conditions a template graph. Our task is thus simplified to finding suitable template graphs, which only depend on k and h, not on n.

In a second step, we then show how to construct such template graphs in Lemma 3.10. We employ a construction akin to Cayley graphs for some appropriate group 𝒢 (concretely, we will pick 𝒢=(ℤ/hℤ)^d for some appropriate dimension d=d(k,h)). That is, let the nodes of T be k independent copies of 𝒢 denoted T_1,…,T_k. For each h-tuple of vertices, say, x_1∈T_1,…,x_h∈T_h, we let {x_1,…,x_h} be a positive edge if and only if x_1+⋯+x_h=a⁺ for some fixed group element a⁺∈𝒢, and we let {x_1,…,x_h} be a negative edge if and only if x_1+⋯+x_h=a⁻ for some other fixed group element a⁻∈𝒢. The advantage of this construction is that it is immediately clear that the resulting graph T satisfies the regularity condition (i): For each x_1∈T_1,…,x_{h−1}∈T_{h−1} there is a unique choice x_h:=a⁺−(x_1+⋯+x_{h−1})∈T_h such that {x_1,…,x_h} is a positive hyperedge; the same applies to negative edges and all other combinations of parts T_{i_1},…,T_{i_h}. We will then pick the elements a⁺ and a⁻ in a specific way to guarantee the remaining conditions (ii) and (iii).

Below, in Sections 1.2 and 1.3, we discuss applications of Theorem 1.1.

1.2 Optimizing Boolean CSPs over Weight-k Assignments

The complexity of constraint satisfaction problems (CSPs) has been intensively studied. Starting with Schaefer’s dichotomy theorem for Boolean CSPs [30], which initiated the eventually successful quest of proving the Dichotomy Conjecture [10, 34, 35], classification results for a number of settings have been proven: This includes, among others, approximability characterizations of natural optimization variants [16, 21], as well as parameterized complexity classifications of solving CSPs parameterized by their solution size (aka weight) k, i.e., the number of non-zeros, see [26, 11, 22]. The parameterized classification of Boolean CSPs due to Marx [26] has subsequently been refined to a classification of the fine-grained time complexity f(k)·n^{g(k)}, resulting in 4 different regimes [24]: an FPT regime, a subexponential regime, a clique regime, and a brute-force regime.

In this work, we investigate the fine-grained complexity of optimizing Boolean CSPs parameterized by solution size, i.e., optimizing the number of satisfied constraints among assignments of weight k. Formally, let ℱ be a finite family of Boolean constraint functions. The problem maxCSP_k(ℱ) (minCSP_k(ℱ)) asks to determine, given a set Φ of constraints – each formed by applying some f∈ℱ to a selection of the Boolean variables x_1,…,x_n – the maximum (minimum) number of constraints satisfied by a Boolean assignment that sets precisely k variables to true (a weight-k assignment). Among others, this class of problems contains the well-studied Densest k-Subgraph and Densest k-Subhypergraph problems (ℱ consists of the h-ary AND), Partial Vertex Cover in h-uniform hypergraphs (ℱ consists of the h-ary OR), and the graph problem MaxCut(k) of maximizing the cut size among cuts with set sizes k and n−k, respectively (ℱ={XOR}), studied in e.g. [12].
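As a concrete reference point for this problem definition, a brute-force solver can be sketched in a few lines; the function `max_csp_k` and the encoding of Densest k-Subgraph below are our own illustration, not an algorithm from the paper:

```python
from itertools import combinations

def max_csp_k(n, k, constraints):
    """Brute-force maxCSP_k: 'constraints' is a list of (f, vars) pairs,
    where f is a Boolean function applied to the listed variables.
    Maximizes the number of satisfied constraints over all weight-k
    assignments to x_0, ..., x_{n-1}."""
    best = 0
    for ones in map(set, combinations(range(n), k)):
        sat = sum(1 for f, vs in constraints
                  if f(*[int(v in ones) for v in vs]))
        best = max(best, sat)
    return best

# Densest k-Subgraph as maxCSP_k({AND}): one AND-constraint per edge.
AND = lambda x, y: x and y
edges = [(0, 1), (1, 2), (2, 3)]          # a path on 4 vertices
constraints = [(AND, e) for e in edges]
```

For instance, on the path above a weight-2 assignment can satisfy at most one edge-constraint, and a weight-3 assignment at most two.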

Note that maxCSP_k(ℱ) trivially generalizes the previously studied variant CSP_k(ℱ), asking whether all constraints are satisfiable by a weight-k assignment, which has been fully classified in [26, 24]. Indeed, for some constraint families ℱ, the optimization variant maxCSP_k(ℱ) turns out to be harder than CSP_k(ℱ): CSP_k({XOR}) is in FPT [24], but maxCSP_k({XOR}) is W[1]-hard [12]. Thus, a classification result for maxCSP_k(ℱ) must differ from the classification of CSP_k(ℱ), but how significantly? Do we still obtain 4 regimes of tractability?

We shall prove that the answer is no: for maxCSP_k(ℱ) we obtain only 3 regimes: (1) a linear-time (rather than FPT) regime, (2) a clique regime, and (3) a brute-force regime. This characterization is governed by a different concept as well: rather than the notion of NAND_h-representing and Implication-representing families (see [24]), it is crucial to analyze the maximum degree deg(f) over f∈ℱ, where deg(f) is defined as the degree of the (unique) multilinear polynomial representing f (see Section 4).

Theorem 1.2 (Informal Version).

Let ℱ be a constraint family. Let d:=max_{f∈ℱ} deg(f). Then,

  • If d≤1, maxCSP_k(ℱ) and minCSP_k(ℱ) are linear-time solvable,

  • if d=2, the time complexity for maxCSP_k(ℱ) and minCSP_k(ℱ) is f(k)·n^{ωk/3±O(1)}, conditioned on the k-Clique Hypothesis,³

  • if d≥3, the time complexity for maxCSP_k(ℱ) and minCSP_k(ℱ) is f(k)·n^{k±o(1)}, conditioned on the k-Hyperclique Hypothesis.

³ The precise bound is f(k)·n^{ω(⌈k/3⌉,⌊k/3⌋,⌈(k−1)/3⌉)±o(1)} (matching the Nešetřil–Poljak running time for k-Clique).

We briefly mention some corollaries: Let us call a constraint function non-reducible if it depends on all of its inputs. We can without loss of generality assume that all f∈ℱ are non-reducible (since each f can be replaced by an equivalent non-reducible function). As discussed, e.g., in Williams [32, Section 6.5], non-reducible functions of degree at most 2 must have arity at most 4. Thus, there is only a finite list of non-reducible functions of degree at most 2, and thus, assuming the 3-uniform hyperclique hypothesis, there exists only a finite number of constraint families ℱ for which maxCSP_k(ℱ) admits algorithms beating the brute-force running time f(k)·n^{k±o(1)}. Some interesting degree-2 functions (beyond binary arity) include:

  • 3-NAE (as well as 3-AllEqual),

  • Sort(x_1,x_2,x_3,x_4), i.e., the predicate that is true if and only if the bit string x_1x_2x_3x_4 is sorted descending or ascending,

  • Select(x_1,x_2,x_3) on 3 variables, i.e., the predicate “if x_1 then x_2 else x_3”.
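The degree notion above can be checked mechanically. The following sketch (our own illustration) computes deg(f) of the multilinear representation by Möbius inversion over subsets, and confirms that 3-NAE and Select have degree 2 while the 3-ary AND has degree 3:

```python
from itertools import combinations

def degree(f, arity):
    """Degree of the unique multilinear polynomial representing the Boolean
    function f. The coefficient of the monomial prod_{i in S} x_i is
    c_S = sum_{T subset of S} (-1)^{|S|-|T|} * f(indicator of T)."""
    deg = 0
    for size in range(arity + 1):
        for S in combinations(range(arity), size):
            c = sum((-1) ** (len(S) - len(T)) *
                    f(tuple(int(i in T) for i in range(arity)))
                    for r in range(len(S) + 1) for T in combinations(S, r))
            if c != 0:
                deg = max(deg, len(S))
    return deg

NAE3 = lambda x: int(not (x[0] == x[1] == x[2]))   # 3-NAE
Select = lambda x: x[1] if x[0] else x[2]          # if x1 then x2 else x3
AND3 = lambda x: x[0] & x[1] & x[2]                # 3-ary AND
```

For 3-NAE, for example, the inversion yields f = (x_1+x_2+x_3) − (x_1x_2+x_1x_3+x_2x_3); the cubic coefficient cancels.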

We remark that a conceptually similar classification result has been proven for optimizing first-order graph formulas [6].

Technical Remarks.

The algorithmic part of the above theorem is a straightforward adaptation of a similar reduction of Weighted Degree-2 CSP to k-clique detection given in [32, Section 6.5], see also [25, 6]. Our main technical contribution for Theorem 1.2 is to prove that this algorithmic technique is essentially best possible, unless the 3-uniform hyperclique hypothesis can be refuted.

For ease of presentation, consider first a constraint family ℱ containing a constraint function f:{0,1}³→{0,1} that is symmetric and has deg(f)=3. Let α,β,γ,δ be such that f(x,y,z)=αxyz+β(xy+yz+xz)+γ(x+y+z)+δ and note that α≠0. The main idea is the following: Given a k-partite regular 3-uniform hypergraph G=(V_1∪⋯∪V_k,E) with |V_i|=n for i∈[k], we create a Boolean CSP by introducing, for each {a,b,c}∈E(G), the constraint f(x_a,x_b,x_c). Since G is regular, there is some λ such that any pair of nodes not from the same part is incident to precisely λ edges. Thus, a weight-k assignment respecting the k-partition, i.e., whose 1-variables x_{i_1},…,x_{i_k} satisfy i_j∈V_j, has an objective value of

α·|\binom{\{i_1,…,i_k\}}{3} ∩ E| + β·\binom{k}{2}·λ + γ·k·(k−1)·n·λ/2 + δ·|E|.

The α-term is the only term depending on the choice of the assignment, and the objective is maximized (for α>0; α<0 can be handled similarly) if and only if the assignment corresponds to a clique in G. For β>0, k-partite assignments are favorable over non-partite assignments. For β≤0, we instead need to make sure that k-partite assignments are favorable, by carefully introducing dummy constraints. Finally, we show how to handle maxCSP_k(ℱ) if ℱ contains any (not necessarily symmetric) constraint function with degree at least 3.
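The counting identity above can be sanity-checked numerically. The following sketch is our own, using the complete k-partite 3-uniform hypergraph (which is regular with λ=(k−2)·n) as a stand-in for a regular instance, and comparing the closed form against a direct evaluation of the objective:

```python
from itertools import combinations, product
from math import comb

k, n = 4, 3
alpha, beta, gamma, delta = 2, -1, 3, 5
parts = [[(i, j) for j in range(n)] for i in range(k)]

# Complete k-partite 3-uniform hypergraph: every cross-part pair lies in
# exactly lam = (k-2)*n hyperedges.
edges = [frozenset(t) for P in combinations(range(k), 3)
         for t in product(parts[P[0]], parts[P[1]], parts[P[2]])]
lam = (k - 2) * n

chosen = {parts[i][0] for i in range(k)}   # weight-k, one vertex per part

def f(x, y, z):
    return alpha*x*y*z + beta*(x*y + y*z + x*z) + gamma*(x + y + z) + delta

# Direct evaluation: sum f over all constraints (one per hyperedge).
direct = sum(f(*(int(v in chosen) for v in e)) for e in edges)

# Closed form: alpha * (#edges inside the chosen set) + beta * C(k,2)*lam
#            + gamma * k * (k-1)*n*lam/2 (sum of chosen degrees) + delta*|E|.
inside = sum(1 for e in edges if e <= chosen)
closed = (alpha * inside + beta * comb(k, 2) * lam
          + gamma * k * (k - 1) * n * lam // 2 + delta * len(edges))
```

Here each chosen vertex has degree (k−1)·n·λ/2, since each of its (k−1)·n cross-part partners accounts for λ edges and every edge containing it is counted twice.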

1.3 Outlook and Further Applications

We believe that reducing from the regular Hyperclique problem will find further applications. One interesting theme is that by plugging our regularization into known reductions, we can in many cases obtain conditional lower bounds for regular instances. For instance, conditioned on the 3-Uniform 4-Hyperclique Hypothesis, Dalirrooyfard and Vassilevska Williams [18] proved that deciding if a graph contains an induced 4-cycle requires time n^{2−o(1)} even in graphs with 𝒪(n^{3/2}) edges. Composing this reduction with our regularization reduction (in a white-box manner), one can conclude that induced 4-cycle detection takes time n^{2−o(1)} even in regular graphs with 𝒪(n^{3/2}) edges.

Regularization for Non-k-Partite Hypergraphs?

An interesting follow-up question to our work is whether we can obtain a similar regularization reduction for the non-k-partite Hyperclique problem. This question is of lesser importance for the design of reductions, as in most (though not all) contexts it is more convenient to work with the k-partite version. Interestingly, already constructing an interesting NO instance to the problem – i.e., a 3-uniform hypergraph G=(V,E) that is dense (say, has |E|≥n^{3−o(1)} hyperedges), regular (i.e., each pair of distinct vertices v_1,v_2∈V appears in the same number of hyperedges) and does not contain a 4-clique – appears very challenging. Even disregarding the 4-clique constraint, constructing (non-k-partite) regular hypergraphs falls into the domain of combinatorial block designs and is known to be notoriously difficult.

2 Preliminaries

We write [n]={1,…,n} for n∈ℕ. For an n-element set S and 0≤k≤n, we denote by \binom{S}{k} the set of all size-k subsets L⊆S. Let S_n denote the set of all permutations of a set on n elements. For a hypergraph G=(V,E), let |G|=|V|+|E|. A simple graph is r-regular if every vertex is incident to exactly r edges. This notion can be generalized in the following way: For h∈ℕ and s<h, an h-uniform hypergraph is (s,λ)-regular if every possible s-tuple of pairwise distinct vertices is contained in exactly λ edges. Using our notation, the notion of r-regularity of a graph can thus be viewed as (1,r)-regularity of a 2-uniform hypergraph.

We will mostly work with balanced, k-partite h-uniform hypergraphs G=(V_1∪⋯∪V_k,E) where 1) the V_i are pairwise disjoint and |V_i|=|V_j| for all i≠j∈[k], 2) each edge contains precisely h distinct vertices, and 3) no edge contains two vertices v,v′ such that v,v′∈V_i for any i∈[k].

For the purposes of this paper, we will relax the notion of regularity to better fit the k-partite setting. In particular, by slight abuse of notation, we say that a k-partite h-uniform hypergraph G is (s,λ)-regular if for each tuple of pairwise distinct indices i_1,…,i_s∈[k] and for any choice of vertices v_1∈V_{i_1},…,v_s∈V_{i_s}, there are precisely λ many edges e∈E such that {v_1,…,v_s}⊆e. We call an h-uniform hypergraph G λ-regular if it is (h−1,λ)-regular.

We state the following simple observation for (s,λ)-regular hypergraphs.

Observation 2.1.

For s<h<k, let G be an (s,λ)-regular k-partite h-uniform hypergraph. Then for any s′≤s there exists λ′ such that G is also (s′,λ′)-regular.
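The k-partite (s,λ)-regularity definition, and Observation 2.1, can be illustrated with a small checker (our own sketch, with parts given as lists of vertices and edges as sets):

```python
from itertools import combinations, product

def regularity(parts, edges, s):
    """Return lambda if the k-partite hypergraph is (s, lambda)-regular in
    the sense above (every s-tuple of vertices from s distinct parts lies
    in exactly lambda edges), and None otherwise."""
    counts = set()
    for idx in combinations(range(len(parts)), s):
        for vs in product(*(parts[i] for i in idx)):
            counts.add(sum(1 for e in edges if set(vs) <= e))
    return counts.pop() if len(counts) == 1 else None

# Example: the complete 3-partite 3-uniform hypergraph with n = 2 per part.
# It is (2, n)-regular, and hence (Observation 2.1) also (1, n^2)-regular.
n = 2
parts = [[(i, j) for j in range(n)] for i in range(3)]
edges = [set(t) for t in product(*parts)]
```

In the example, a cross-part pair extends to an edge by any of the n vertices of the third part, and a single vertex lies in n² edges.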

2.1 Hardness Assumptions

The k-Clique problem asks, given a graph G=(V,E) with |V|=n, to decide if G contains a clique (a set of pairwise adjacent vertices) of size k. For k divisible by 3, k-clique detection can be solved in time 𝒪(n^{ωk/3}) [29], where ω<2.372 [3] is the matrix multiplication exponent. Generalizing k-clique detection to h-uniform hypergraphs G=(V,E) asks to determine if there exists a k-(hyper)clique C⊆V of size k such that \binom{C}{h}⊆E. In contrast to the 2-uniform case, the matrix multiplication approach fails to generalize to h≥3 (see [25] for a discussion) and no known algorithm is significantly faster than 𝒪(n^k). Improvements for k-hyperclique would entail faster algorithms for problems that are believed to be hard, such as Max-h-SAT. This gives rise to the following conjecture:

Hypothesis 2.2 (h-Uniform k-Hyperclique Hypothesis).

Let ε>0 and k>h.

  1. For h=2, there is no 𝒪(n^{ωk/3−ε})-time algorithm detecting a k-clique in a k-partite graph.

  2. For h≥3, there is no 𝒪(n^{k−ε})-time algorithm detecting a k-clique in a k-partite h-uniform hypergraph.

3 Making Hypergraphs Regular

In this section we describe in detail our approach to regularizing h-uniform hypergraphs while preserving large cliques. More precisely, we prove the following main theorem.

Theorem 3.1.

Let h<k be constants and let G be an h-uniform hypergraph. Then there exists a k-partite h-uniform hypergraph G′ satisfying the following conditions:

  1. For any 1≤s<h, there exists some λ such that G′ is (s,λ)-regular.

  2. G′ contains a clique of size k if and only if G contains a clique of size k.

  3. There exists a computable function f such that |V(G′)|=f(k)·|V(G)|.

  4. G′ can be computed deterministically in time 𝒪(|G′|).

Moreover, as a direct consequence of Theorem 3.1, we get the following theorem for free.

Theorem 3.2.

Let h≥2. The k-Clique Detection problem is solvable in time f(k)·n^{g(k)} on k-partite h-uniform hypergraphs if and only if it is solvable in time f′(k)·n^{g(k)} on regular k-partite h-uniform hypergraphs (for some computable functions f,f′ and h≤g(k)≤k).

In order to prove Theorem 3.1, we come up with an appropriate notion of hypergraph product that we call the signed product. It takes an arbitrary hypergraph G and a regular hypergraph T(h,k) equipped with a sign function and certain properties (we will call such a hypergraph a template hypergraph) and produces the hypergraph G′ satisfying the conditions of the theorem. The proof of Theorem 3.1 thus consists of two main parts:

  1. Defining the proper notion of the signed product that produces the hypergraph G′ satisfying the conditions of the theorem.

  2. Showing that for any choice of 2≤h<k we can construct a template hypergraph T(h,k).

We start by formally defining the notion of a signed hypergraph.

Definition 3.3 (Signed Hypergraph).

Let h<k be constants. A signed hypergraph is a k-partite, h-uniform hypergraph T=(T_1∪⋯∪T_k,E,σ) equipped with a sign function σ:E→{−1,1}.

We say an edge e is positive if σ(e)=1 and denote by E⁺:={e∈E : σ(e)=1} the set of all positive edges. We define the set E⁻ of negative edges analogously. Using this notation, we equivalently represent a signed hypergraph as a tuple T=(T_1∪⋯∪T_k,E⁺,E⁻).

With the concept of signed hypergraphs set, we are ready to define the signed product.

Definition 3.4 (Signed Product).

Given an h-uniform hypergraph G=(V,E) and a signed hypergraph T=(T_1∪⋯∪T_k,E⁺,E⁻), the signed product of G and T is the k-partite, h-uniform hypergraph G′ defined as follows:

  • V(G′)=V×(T_1∪⋯∪T_k).

  • Let {t_1,…,t_h}∈E⁺. Then for all u_1,…,u_h∈V that form an edge in G, let {(u_1,t_1),…,(u_h,t_h)} be an edge in G′.

  • Let {t_1,…,t_h}∈E⁻. Then for all, not necessarily distinct, u_1,…,u_h∈V that form a non-edge in G, let {(u_1,t_1),…,(u_h,t_h)} form an edge in G′.
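Definition 3.4 can be transcribed almost directly into code. The following is our own sketch, under the assumption that template edges are given as ordered tuples of T-vertices (so that u_i can be paired with t_i):

```python
from itertools import product

def signed_product(V, E, E_pos, E_neg, h):
    """Signed product G' of an h-uniform hypergraph G = (V, E) with a
    signed hypergraph T whose positive/negative edges E_pos/E_neg are
    ordered h-tuples of T-vertices. Vertices of G' are pairs (u, t)."""
    Eset = {frozenset(e) for e in E}
    out = set()
    for ts, positive in [(t, True) for t in E_pos] + [(t, False) for t in E_neg]:
        for us in product(V, repeat=h):
            # positive template edge + edge of G, or negative template
            # edge + (not necessarily distinct) non-edge of G
            is_edge = len(set(us)) == h and frozenset(us) in Eset
            if is_edge == positive:
                out.add(frozenset(zip(us, ts)))
    return out

# Tiny example: G a path 0-1-2 (h = 2), T with one positive edge ('a', 'b').
Gp = signed_product({0, 1, 2}, [{0, 1}, {1, 2}], [('a', 'b')], [], 2)
```

In the example, each of the two edges of G is paired with the single positive template edge in both orientations, yielding 4 product edges.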

Given a hypergraph G, in order for our product graph G to satisfy the properties of Theorem 3.1 it is not sufficient to take just any signed hypergraph T, but we want our signed hypergraph to have a certain structure. To this end, we introduce the notion of template hypergraphs.

Definition 3.5 (Template Hypergraph).

For fixed constants h<k, we call T(h,k)=(T_1∪⋯∪T_k,E⁺,E⁻) a template hypergraph if the following properties are satisfied:

  1. There exists a positive integer λ such that the underlying “monochromatic” hypergraphs T⁺:=(T_1∪⋯∪T_k,E⁺) and T⁻:=(T_1∪⋯∪T_k,E⁻) are both (h−1,λ)-regular.

  2. The hypergraph T⁺ contains a clique of size k.

  3. No clique of size h+1 contains edges from E⁻.

Equipped with the definitions above, we are now ready to formally state the main lemma of this section.

Lemma 3.6.

For any fixed constants k,h with h<k, let G=(V,E) be an h-uniform hypergraph and T(h,k)=(T_1∪⋯∪T_k,E⁺,E⁻) a template hypergraph. Let G′ be the signed product of G and T(h,k). Then G′ satisfies all the conditions of Theorem 3.1.

Before proving this lemma, however, let us first prove some auxiliary statements. For the rest of this section, let G denote an arbitrary h-uniform hypergraph, T(h,k)=(T_1∪⋯∪T_k,E⁺,E⁻) a template hypergraph, and G′ the signed product of G and T(h,k).

Lemma 3.7.

Let (u_1,t_1),…,(u_k,t_k) be k arbitrary pairwise distinct vertices in G′. Let π:[k]→[k] be an arbitrary permutation. The following statements are equivalent:

  1. The vertices u_1,…,u_k form a clique in G and t_1,…,t_k form a clique in T(h,k).

  2. The vertices (u_{π(1)},t_1),…,(u_{π(k)},t_k) form a clique in G′.

We provide the proof in the full version of the paper. Intuitively, since cliques in T(h,k) consist of only positive edges, the edges of a clique in G′ stem from a positive edge of T and an edge contained in G. These edges thus form cliques in the respective graphs.

For a hypergraph X, let C_k(X) denote the number of cliques of size k in X. Following the approach of the proof of the previous lemma, we obtain the following corollary.

Corollary 3.8.

Let T(h,k), G and G′ be as above. Then C_k(G′)=k!·C_k(G)·C_k(T(h,k)).

Lemma 3.9.

Let T(h,k), G and G′ be as above. Let λ be such that both underlying hypergraphs T⁺ and T⁻ are λ-regular. Then G′ is a k-partite (λ·|V(G)|)-regular hypergraph.

We provide the full proof in the full version of the paper.

3.1 Constructing Template Hypergraphs

In this section we show that for any choice of constants h<k we can efficiently construct a template hypergraph T(h,k), which is the missing ingredient in the proof of Theorem 3.1. In particular, we prove the following lemma.

Lemma 3.10.

For any fixed constants h<k, there exists a template hypergraph T(h,k).

This template hypergraph can be deterministically constructed in time 𝒪(f(h,k)) for some computable function f.

We first give the construction and then prove that it fulfills the properties of a template hypergraph. For the rest of this section let h<k be constants. Consider the additive group 𝒢:=(ℤ/hℤ)^{\binom{k}{h}}, i.e., the group of all vectors of length \binom{k}{h} over the group ℤ/hℤ. We index the dimensions of these vectors by subsets of [k] of size h. For any set S∈\binom{[k]}{h}, denote by e_S the vector in 𝒢 that has a 1 in the dimension indexed by S and zeros in all other dimensions, and by 0̄ we denote the additive identity in 𝒢 (the all-zero vector).

Consider the k-partite signed graph T(h,k)=(T_1∪⋯∪T_k,E⁺,E⁻) where each T_i corresponds to a copy of the group 𝒢 and the edges are added as follows. For each S:={i_1,…,i_h}∈\binom{[k]}{h} and each choice of vertices v_1∈T_{i_1},…,v_h∈T_{i_h}, we add a positive edge (i.e., {v_1,…,v_h}∈E⁺) if the corresponding elements x_1,…,x_h of 𝒢 satisfy the equality x_1+⋯+x_h=0̄. Moreover, if the corresponding elements satisfy x_1+⋯+x_h=e_S, we add a negative edge (i.e., {v_1,…,v_h}∈E⁻). We proceed to prove that the constructed T(h,k) has the desired properties. We begin by observing that the number of vertices only depends on k and h.
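For small parameters the construction can be carried out verbatim. The following sketch (our own) builds T(h,k) and, for h=2, k=3, checks the vertex count of Observation 3.11, the λ-regularity of Lemma 3.12 with λ=k−h+1=2, and the all-zero positive clique of Observation 3.14:

```python
from itertools import combinations, product

def build_template(h, k):
    """Template hypergraph T(h,k): each part T_i is a copy of the group
    G = (Z/hZ)^binom(k,h); a vertex is a pair (part index, group element).
    An h-set of vertices across the parts in S is a positive edge if its
    elements sum to 0, and a negative edge if they sum to e_S."""
    dims = list(combinations(range(k), h))   # coordinates ~ h-subsets of [k]
    group = list(product(range(h), repeat=len(dims)))
    zero = tuple(0 for _ in dims)
    E_pos, E_neg = set(), set()
    for S in dims:
        e_S = tuple(int(d == S) for d in dims)
        for xs in product(group, repeat=h):
            total = tuple(sum(c) % h for c in zip(*xs))
            edge = frozenset(zip(S, xs))     # element xs[j] sits in part S[j]
            if total == zero:
                E_pos.add(edge)
            elif total == e_S:
                E_neg.add(edge)
    return group, zero, E_pos, E_neg

group, zero, E_pos, E_neg = build_template(2, 3)
```

For h=2, k=3 the group is (ℤ/2ℤ)³, so each part has 8 vertices; every vertex then lies in exactly 2 positive and 2 negative edges, one per choice of the other part.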

Observation 3.11.

The signed hypergraph T(h,k) has k·h^{\binom{k}{h}} vertices.

We now state the regularity conditions of the underlying hypergraphs T⁺ and T⁻, proven in the full paper.

Lemma 3.12 (P. 1).

Let T(h,k) be a signed graph as constructed above and let λ:=k−h+1. Then the underlying hypergraphs T⁺:=(T_1∪⋯∪T_k,E⁺) and T⁻:=(T_1∪⋯∪T_k,E⁻) are both λ-regular.

It remains to show that the hypergraph T⁺ contains a clique of size k, and that no clique of size h+1 contains an edge from E⁻. We begin by showing the latter.

Lemma 3.13 (P. 3).

Each clique in T(h,k) of size h+1 consists exclusively of positive edges.

Proof.

Let {i_1,…,i_{h+1}}∈\binom{[k]}{h+1}, and assume for contradiction that there exists a clique of size h+1 consisting of vertices v_1∈T_{i_1},…,v_{h+1}∈T_{i_{h+1}} such that (w.l.o.g.) {v_1,…,v_h}∈E⁻. Let S:={i_1,…,i_h} and let x_i be the group element corresponding to v_i. By construction of T(h,k), this means that x_1,…,x_h satisfy the equation x_1+⋯+x_h=e_S. Moreover, since v_1,…,v_{h+1} form a clique, for any subset S_j:={i_1,…,i_{j−1},i_{j+1},…,i_{h+1}} with S_j≠S, we have

x_1+⋯+x_{j−1}+x_{j+1}+⋯+x_{h+1} ∈ {0̄, e_{S_j}}.

As S_j≠S, if we consider only the entry indexed by S, we get x_1[S]+⋯+x_{j−1}[S]+x_{j+1}[S]+⋯+x_{h+1}[S]=0. Over all possible values of j (where j=h+1 yields S_{h+1}=S, whose equation restricted to the entry S has right-hand side 1), we obtain the following equation system.

\begin{bmatrix} 0 & 1 & \cdots & 1 & 1 \\ 1 & 0 & \cdots & 1 & 1 \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ 1 & 1 & \cdots & 0 & 1 \\ 1 & 1 & \cdots & 1 & 0 \end{bmatrix} \begin{bmatrix} x_1[S] \\ \vdots \\ x_{h+1}[S] \end{bmatrix} = \begin{bmatrix} 0 \\ \vdots \\ 0 \\ 1 \end{bmatrix}

Summing up all the rows, we get that h·(x_1[S]+⋯+x_{h+1}[S])=1. Recalling that h=0 in ℤ/hℤ, we get that 0=1, which is a contradiction.

We can finally observe that choosing the vertex v_i∈T_i corresponding to the zero element 0̄∈𝒢 for each part T_i yields a clique of size k in T(h,k). We state this as a separate observation.

Observation 3.14 (P. 2).

Let v_1∈T_1,…,v_k∈T_k be the vertices corresponding to the all-zero vector in their respective parts. Then {v_1,…,v_k} is a clique in the hypergraph T⁺.

This concludes the proof of Lemma 3.10. We are now ready to prove Theorems 3.1 and 3.2.

Proof of Theorem 3.1.

Given an arbitrary $h$-uniform hypergraph $G$, construct the signed hypergraph $T(h,k)$ as above. By Lemma 3.10, this is a template hypergraph. Let $G'$ be the signed product of $G$ and $T(h,k)$. Then, by Lemma 3.6, $G'$ has all the desired properties.

Proof of Theorem 3.2.

Assume that we can decide whether an $(s,\lambda)$-regular $h$-uniform hypergraph contains a clique of size $k$ in time $f(k) \cdot \mathcal{O}(n^{g(k)})$, for computable functions $f,g$ with $g(k) \geq h$. Then, given an arbitrary $h$-uniform hypergraph $G$ on $n$ vertices, by Theorem 3.1 we can construct, in linear time, an $(s,\lambda)$-regular $h$-uniform hypergraph $G'$ that contains a clique of size $k$ if and only if $G$ does. Moreover, the number of vertices of $G'$ is bounded by $f'(k) \cdot n$ for some computable function $f'$. We then run the fast algorithm to detect cliques of size $k$ in time $\mathcal{O}((f'(k) \cdot n)^{g(k)}) = f''(k) \cdot n^{g(k)}$ and report its answer as the solution to the original instance. The other direction is trivial.

In fact, we can prove an even stronger result, showing that counting cliques of size $k$ in regular hypergraphs is as hard as counting cliques in general hypergraphs.

Theorem 3.15.

For any $h \geq 2$, there exists an algorithm counting the cliques of size $k$ in $k$-partite regular $h$-uniform hypergraphs in time $f(k) \cdot n^{g(k)}$ if and only if there exists an algorithm counting the cliques of size $k$ in general $h$-uniform hypergraphs in time $f(k) \cdot n^{g(k)}$.

In order to prove this theorem, we extend Observation 3.11 to show that there is in fact a fixed number of cliques in our constructed template hypergraph, depending only on $h$ and $k$. Recall that by $\mathcal{G}$ we denote the group $(\mathbb{Z}/h\mathbb{Z})^{\binom{k}{h}}$. We prove the next lemma in the full paper.

Lemma 3.16.

Let $T(h,k)$ be a template hypergraph as constructed above. The number of cliques of size $k$ in $T(h,k)$ is precisely $|\mathcal{G}| = h^{\binom{k}{h}}$.
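As a small sanity check of Lemma 3.16 (again our own illustration, not part of the proof), one can enumerate all $k$-cliques of $T^+$ for $h = 2$ and $k = 3$ and compare the count against $|\mathcal{G}| = h^{\binom{k}{h}} = 8$:

```python
from itertools import combinations, product

h, k = 2, 3
coords = list(combinations(range(k), h))             # h-subsets of [k]
group = list(product(range(h), repeat=len(coords)))  # (Z/h)^C(k,h)

def positive(xs):
    # a positive edge: the chosen group elements sum to the zero vector
    return all(sum(c) % h == 0 for c in zip(*xs))

# a k-clique picks one group element per part such that every h-subset
# of the chosen vertices forms a positive edge
num_cliques = sum(
    1 for choice in product(group, repeat=k)
    if all(positive([choice[i] for i in S]) for S in coords)
)
```

Here every clique is forced to use the same group element in all parts, giving exactly $h^{\binom{k}{h}}$ cliques.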

The proof of Theorem 3.15 now follows directly.

4 Boolean Constraint Optimization Problems

In this section we present the main application of our construction, namely exploiting the regularity of hyperclique detection to show hardness of certain families of Boolean Constraint Optimization Problems. In particular, we obtain a full classification of the fine-grained complexity of this large class of problems. The problems are defined as follows:

Definition 4.1 (Boolean Constraint Optimization Problems (maxCSPk/minCSPk)).

Let $\mathcal{F}$ be a finite Boolean constraint family (i.e. a set of functions $\phi : \{0,1\}^r \to \{0,1\}$). Given a set $\Phi$ of $m$ Boolean constraints $C_1,\dots,C_m$ on variables $x_1,\dots,x_n$, each of the form $\phi(x_{i_1},\dots,x_{i_r})$ where $\phi \in \mathcal{F}$, the problem $\mathrm{maxCSP}_k(\mathcal{F})$ (resp. $\mathrm{minCSP}_k(\mathcal{F})$) asks for an assignment $a : \{x_1,\dots,x_n\} \to \{0,1\}$ setting precisely $k$ variables to 1 that maximizes (resp. minimizes) the number of satisfied constraints in $\Phi$.

It is well known that each Boolean constraint function corresponds to a unique multilinear polynomial (see e.g. [32]). More precisely, for any Boolean constraint function $\phi : \{0,1\}^r \to \{0,1\}$, there exists a unique multilinear polynomial $f$ such that $f(x_1,\dots,x_r) = \phi(x_1,\dots,x_r)$ for all $x_1,\dots,x_r \in \{0,1\}$; e.g., $\mathrm{OR}_3(x,y,z)$ corresponds to the multilinear polynomial $f(x,y,z) = x + y + z - xy - xz - yz + xyz$. We call such a polynomial the characteristic polynomial of $\phi$. This characterization is faithful in the sense that if two Boolean functions $\phi, \phi'$ have the same characteristic polynomial, they represent the same Boolean function. It allows concepts defined on multilinear polynomials to transfer to the domain of Boolean constraint functions. For a Boolean constraint function $\phi$, we say that $\phi$ has degree $d$ if its characteristic polynomial has degree $d$. Similarly, the degree of a Boolean constraint family $\mathcal{F}$ is the smallest integer $d$ such that every constraint $\phi \in \mathcal{F}$ has degree at most $d$.
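The characteristic polynomial can be computed explicitly by Möbius inversion over subsets of variables; the following sketch (an illustration of the standard technique, not code from the paper) recovers the $\mathrm{OR}_3$ example above and the degree of a function:

```python
from itertools import combinations

def characteristic_poly(phi, r):
    """Coefficient map T -> c_T of the unique multilinear polynomial agreeing
    with phi on {0,1}^r, via c_T = sum_{U subset of T} (-1)^{|T|-|U|} phi(1_U)."""
    coeff = {}
    for d in range(r + 1):
        for T in combinations(range(r), d):
            c = sum((-1) ** (d - e) * phi(*(1 if i in U else 0 for i in range(r)))
                    for e in range(d + 1) for U in combinations(T, e))
            if c != 0:
                coeff[T] = c
    return coeff

def degree(phi, r):
    # degree = largest monomial with a nonzero coefficient
    return max((len(T) for T in characteristic_poly(phi, r)), default=0)
```

For $\mathrm{OR}_3$ this yields $x + y + z - xy - xz - yz + xyz$, hence degree 3.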

We are now ready to state the main theorem that we prove in this section:

Theorem 4.2 (Hardness of the Boolean Constraint Optimization Problems).

Let $\mathcal{F}$ be a Boolean constraint family of degree $d$.

  • If $d = 2$, then for no $\varepsilon > 0$ does there exist an algorithm solving $\mathrm{maxCSP}_k(\mathcal{F})$ (resp. $\mathrm{minCSP}_k(\mathcal{F})$) in time $\mathcal{O}(n^{k(\omega/3 - \varepsilon)})$, unless the $k$-Clique Hypothesis fails.

  • If $d \geq 3$, then for no $\varepsilon > 0$ does there exist an algorithm solving $\mathrm{maxCSP}_k(\mathcal{F})$ (resp. $\mathrm{minCSP}_k(\mathcal{F})$) in time $\mathcal{O}(n^{k(1 - \varepsilon)})$, unless the 3-uniform $k$-Hyperclique Hypothesis fails.

We will show in detail the second reduction, for families of degree at least 3, reducing from 3-uniform $k$-Hyperclique detection; the reduction from $k$-Clique is achieved similarly. By simple modifications to existing algorithms in the literature (see e.g. [25, 32]), we can complement these hardness results with corresponding algorithms to obtain a tight classification of the fine-grained complexity of Boolean constraint optimization problems. The details can be found in the full paper.

Theorem 4.3 (Boolean Constraint Optimization Algorithms).

Let $\mathcal{F}$ be an arbitrary constraint family and let $d$ be the degree of $\mathcal{F}$.

  • If $d \leq 1$, we can solve $\mathrm{maxCSP}_k(\mathcal{F})$ ($\mathrm{minCSP}_k(\mathcal{F})$) in linear time $\mathcal{O}(m+n)$.

  • If $d = 2$, we can solve $\mathrm{maxCSP}_k(\mathcal{F})$ ($\mathrm{minCSP}_k(\mathcal{F})$) in time $\mathcal{O}(n^{\omega(k/3,\,k/3,\,(k-1)/3)})$.

  • If $d \geq 3$, we can solve $\mathrm{maxCSP}_k(\mathcal{F})$ ($\mathrm{minCSP}_k(\mathcal{F})$) in time $\mathcal{O}(n^k)$.

We prove all the hardness results and the algorithms for the maxCSPk problem, and remark that there is a canonical reduction showing equivalence between maxCSPk and minCSPk.

4.1 Hardness Construction

This section is dedicated to proving Theorem 4.2. We first introduce some useful notation and prove some auxiliary results. For the rest of this section, let $X$ be a set consisting of $n$ variables, let $\mathcal{F}$ be an arbitrary finite constraint family, and let $\Phi$ be the given set of $m$ constraints $C_1,\dots,C_m$ over $X$, where each constraint $C_i$ is of the form $\phi_i(x_{i_1},\dots,x_{i_{r_i}})$ for some $\phi_i \in \mathcal{F}$ and some choice of $x_{i_1},\dots,x_{i_{r_i}} \in X$. Let $a : X \to \{0,1\}$ be a function assigning a Boolean value to each variable. By $\Phi(a)$ we denote the number of constraints in $\Phi$ satisfied by $a$. The following observation plays a crucial role in our proof.

Observation 4.4.

For each constraint $C_i$, let $f_i$ denote the characteristic polynomial of the corresponding function $\phi_i$. For any assignment $a : X \to \{0,1\}$, the equality $\Phi(a) = \sum_{i=1}^{m} f_i(a(x_{i_1}),\dots,a(x_{i_{r_i}}))$ holds.

With this, it suffices to find an assignment that maximizes the value of the polynomial obtained by summing the characteristic polynomials of the functions corresponding to the constraints. More precisely, given a set $\Phi$ consisting of $m$ constraints $C_1,\dots,C_m$, define the polynomial $P_\Phi$ as

$$P_\Phi(x_1,\dots,x_n) = \sum_{i=1}^{m} f_i(x_{i_1},\dots,x_{i_{r_i}}),$$

where $f_i$ denotes, as above, the characteristic polynomial of the function $\phi_i$ corresponding to the constraint $C_i$. Using this notation, for any assignment $a$, as a simple consequence of Observation 4.4 we have $\Phi(a) = P_\Phi(a(x_1),\dots,a(x_n))$.
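Observation 4.4 and the identity $\Phi(a) = P_\Phi(a(x_1),\dots,a(x_n))$ can be checked mechanically on a small hypothetical instance; the sketch below (our own, with $\mathrm{OR}_2$ and $\mathrm{NAND}_2$ constraints chosen arbitrarily) compares the satisfied-constraint count against the summed characteristic polynomials on random assignments:

```python
import random
from itertools import combinations

n = 6
# each constraint: (Boolean function, its characteristic polynomial, variables)
cons = []
for i, j in combinations(range(n), 2):
    cons.append((lambda x, y: int(x or y),        lambda x, y: x + y - x * y, (i, j)))
    cons.append((lambda x, y: int(not (x and y)), lambda x, y: 1 - x * y,     (i, j)))

random.seed(0)
agree = all(
    sum(phi(*(a[v] for v in V)) for phi, f, V in cons)   # Phi(a): satisfied count
    == sum(f(*(a[v] for v in V)) for phi, f, V in cons)  # P_Phi evaluated at a
    for a in ([random.randint(0, 1) for _ in range(n)] for _ in range(50))
)
```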

For space reasons, we only discuss the case where $\phi$ is a symmetric ternary Boolean constraint function of degree 3. In the full paper we show that this is without loss of generality: we can symmetrize a constraint $\phi(x_1,\dots,x_r)$ by considering all constraints obtained by permuting its variables, and reduce arity and (possibly) degree by reusing variables, i.e. consider $\phi'(x_1,x_2,x_3) := \phi(x_1,x_2,x_3,x_1)$. With this assumption, the characteristic polynomial of $\phi$ is given by

$$f(x,y,z) = \alpha \cdot xyz + \beta(xy + xz + yz) + \gamma(x + y + z) + \delta, \quad \text{with } \alpha \neq 0. \tag{1}$$
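For a symmetric ternary $\phi$, the four coefficients of Equation (1) can be read off from just four function values; the following hypothetical helper (our own illustration) inverts the resulting triangular system:

```python
def symmetric_coeffs(phi):
    """Coefficients (alpha, beta, gamma, delta) of Equation (1) for a symmetric
    ternary Boolean function phi, by evaluating at Hamming weights 0..3."""
    delta = phi(0, 0, 0)
    gamma = phi(1, 0, 0) - delta
    beta = phi(1, 1, 0) - 2 * gamma - delta
    alpha = phi(1, 1, 1) - 3 * beta - 3 * gamma - delta
    return alpha, beta, gamma, delta
```

For example, $\mathrm{OR}_3$ gives $(\alpha,\beta,\gamma,\delta) = (1,-1,1,0)$ and ternary parity gives $(4,-2,1,0)$; the case distinction of the reduction below is driven by the signs of $\alpha$ and $\beta$.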

We remark that our reduction will depend on the coefficients of this multilinear polynomial. We first consider the coefficient $\alpha$. Let $G = (V_1,\dots,V_k,E_G)$ be a $k$-partite $(2,\mu)$-regular 3-uniform hypergraph with $|V_1| = \dots = |V_k| = n$, and without loss of generality assume that $\mu = (k-2) \cdot q \cdot n$, where $q \in (0,1)$. Let $G' = (V_1 \cup \dots \cup V_k, E_{G'})$, with the edges defined as follows. If $\alpha > 0$, let $E_{G'} = E_G$. Otherwise, if $\alpha < 0$, take $E_{G'}$ as the complement of $E_G$ respecting the $k$-partition (i.e. for pairwise distinct $i,j,\ell$ and $v_i \in V_i, v_j \in V_j, v_\ell \in V_\ell$, let $\{v_i,v_j,v_\ell\} \in E_{G'}$ if and only if $\{v_i,v_j,v_\ell\} \notin E_G$). We can observe that $G'$ is a $k$-partite $(2,\lambda)$-regular 3-uniform hypergraph with $\lambda = \Omega(n)$.

Observation 4.5.

$G'$ is a $k$-partite $(2,\lambda)$-regular 3-uniform hypergraph, where $\lambda$ is as follows:

  1. If $\alpha > 0$, then $\lambda = (k-2) \cdot q \cdot n$.

  2. If $\alpha < 0$, then $\lambda = (k-2) \cdot (1-q) \cdot n$.

We say that an assignment $a$ of the variables in $X$ is $k$-partite if for each vertex part $V_i$ in $G'$, $a$ sets precisely one variable $v_i \in V_i$ to 1 and all others to 0. We now consider the sign of the coefficient $\beta$, and make a case distinction based on this value.

Case 1: $\beta > 0$.

Construction 4.6.

Let $X = V(G')$ be the set of $kn$ variables. Now for each edge $\{u,v,w\}$ in $G'$, add to $\Phi$ the constraint $\phi(u,v,w)$. No other constraints are added to $\Phi$.

Our goal is to prove that if there exists a $k$-clique in the hypergraph $G$, then any weight-$k$ assignment $a$ that maximizes $\Phi(a)$ sets the variables corresponding to such a $k$-clique to 1 and all others to 0. Our strategy is to first prove that any weight-$k$ assignment maximizing $\Phi(a)$ has to be $k$-partite. We then prove that among all $k$-partite assignments, one maximizing the value $\Phi(a)$ corresponds to a $k$-clique in $G$ (i.e. either a $k$-clique or an independent set in $G'$, depending on the sign of $\alpha$).

Lemma 4.7.

Let $a$ be any weight-$k$ assignment of the variables in $X$ and let $S$ be the set of vertices that correspond to the variables set to 1 by $a$. Denote by $k_i$ the value $|S \cap V_i|$, and let $m(S)$ denote the number of edges in the induced subhypergraph $G'[S]$. Then we have that

$$\Phi(a) = m(S) \cdot \alpha + \sum_{\{i,j\} \in \binom{[k]}{2}} k_i k_j \cdot \lambda \cdot \beta + c,$$

where c does not depend on the assignment a.

We sketch the proof here and refer the reader to the full paper for the detailed proof.

Proof (sketch)..

Using Observation 4.4, it holds that

$$\Phi(a) = \sum_{\{u_1,u_2,u_3\} \in E(G')} f(a(u_1), a(u_2), a(u_3)).$$

Recall that in Equation (1) we have $f(x,y,z) = \alpha \cdot xyz + \beta(xy+xz+yz) + \gamma(x+y+z) + \delta$. Plugging this in above, we get

$$\begin{aligned} \Phi(a) &= \sum_{\{u_1,u_2,u_3\} \in E(G')} \alpha \cdot a(u_1)\,a(u_2)\,a(u_3) \\ &\quad + \sum_{\{u_1,u_2,u_3\} \in E(G')} \beta \left( a(u_1)a(u_2) + a(u_1)a(u_3) + a(u_2)a(u_3) \right) \\ &\quad + \sum_{\{u_1,u_2,u_3\} \in E(G')} \gamma \left( a(u_1) + a(u_2) + a(u_3) \right) \\ &\quad + \sum_{\{u_1,u_2,u_3\} \in E(G')} \delta. \end{aligned}$$

Clearly, the $\delta$-term does not depend on $a$. Since $\deg(u) = \deg(v)$ for all vertices $u,v$ in $G'$ and $a$ has weight $k$, it follows that the $\gamma$-term does not depend on $a$ either. We remark that $a(u_1)\,a(u_2)\,a(u_3) = 1$ if and only if $\{u_1,u_2,u_3\} \subseteq S$. Since the sum iterates only over the edges of $G'$, we can observe that

$$\sum_{\{u_1,u_2,u_3\} \in E(G')} \alpha \cdot a(u_1)\,a(u_2)\,a(u_3) = \sum_{\{u_1,u_2,u_3\} \in E(G'[S])} \alpha = m(S) \cdot \alpha.$$

For the $\beta$-term, we recall that $G'$ is a $(2,\lambda)$-regular $k$-partite hypergraph, hence for any pair of vertices $u_1 \in V_i, u_2 \in V_j$ (for $i \neq j$), there are exactly $\lambda$ edges containing both $u_1$ and $u_2$. As $|S \cap V_i| = k_i$ for every $V_i$, the desired equality follows easily.
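As a numerical sanity check of Lemma 4.7 (our own illustration, not part of the proof), one can take the complete $k$-partite 3-uniform hypergraph, which is $(2,(k-2)n)$-regular, fix arbitrary coefficients $\alpha,\beta,\gamma,\delta$, and confirm that $\Phi(a)$ minus the two assignment-dependent terms is the same constant $c$ across many random weight-$k$ assignments:

```python
import random
from itertools import combinations

k, n = 4, 3
parts = [set(range(p * n, (p + 1) * n)) for p in range(k)]
part_of = {v: p for p, P in enumerate(parts) for v in P}
# complete k-partite 3-uniform hypergraph: (2, lam)-regular with lam = (k-2)*n
edges = [e for e in combinations(range(k * n), 3)
         if len({part_of[v] for v in e}) == 3]
lam = (k - 2) * n

alpha, beta, gamma, delta = 2, -3, 5, 7      # arbitrary choice for the demo

def f(x, y, z):                              # Equation (1)
    return (alpha * x * y * z + beta * (x * y + x * z + y * z)
            + gamma * (x + y + z) + delta)

def residue(S):
    """Phi(a) minus the assignment-dependent terms of Lemma 4.7."""
    ki = [len(S & P) for P in parts]
    m_S = sum(1 for e in edges if set(e) <= S)
    phi_a = sum(f(*(int(v in S) for v in e)) for e in edges)
    return phi_a - m_S * alpha - sum(ki[i] * ki[j] * lam * beta
                                     for i, j in combinations(range(k), 2))

random.seed(1)
residues = {residue(set(random.sample(range(k * n), k))) for _ in range(30)}
```

Here the residue equals $\gamma \cdot k \cdot \deg(u) + \delta \cdot |E(G')|$, independent of the chosen weight-$k$ set.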

As an immediate consequence of the last lemma, we obtain the number of constraints satisfied by any $k$-partite assignment $a$ just by plugging in $k_i = 1$ for each $i \in [k]$.

Corollary 4.8.

Let $a$ be any $k$-partite assignment of the variables in $X$, let $S$ and $m(S)$ be defined as above, and let $u$ be an arbitrary vertex in $G'$. Then

$$\Phi(a) = m(S) \cdot \alpha + \binom{k}{2} \lambda \beta + k \deg(u) \gamma + |E(G')| \delta.$$

We now prove that any $k$-partite assignment is favorable over any non-$k$-partite assignment.

Lemma 4.9.

Let $a$ be a $k$-partite assignment and let $a'$ be any non-$k$-partite assignment of weight $k$. Let $S$ and $S'$ be the sets of vertices that correspond to the variables set to 1 by $a$ and $a'$ respectively. Then $\Phi(a) > \Phi(a')$.

Due to space constraints, we only sketch the proof, giving it in the full version of the paper.

Proof (sketch)..

Consider the value $\Phi(a) - \Phi(a')$. Writing $k_i' := |S' \cap V_i|$, by Lemma 4.7 and Corollary 4.8 we compute:

$$\Phi(a) - \Phi(a') = (m(S) - m(S')) \cdot \alpha + \left( \binom{k}{2} - \sum_{\{i,j\} \in \binom{[k]}{2}} k_i' k_j' \right) \lambda \beta.$$

By inspecting the coefficient of $\beta$, and using that in our setting the inequality $\binom{k}{2} > \sum_{\{i,j\} \in \binom{[k]}{2}} k_i' k_j'$ holds (as $a'$ is not $k$-partite), we observe that the coefficient of $\beta$ is in $\Omega(\lambda) = \Omega(n)$ (by Observation 4.5). Noticing further that the $\alpha$-term is always bounded in absolute value by some function $f(k)$, we obtain:

$$\Phi(a) - \Phi(a') \geq \Omega(n) - f(k) > 0.$$

We are now ready to prove that if we can efficiently solve the constructed $\mathrm{maxCSP}_k(\mathcal{F})$ instance, we can also efficiently decide whether the given hypergraph $G$ contains a $k$-clique.

Lemma 4.10.

There exists an integer $\tau$, computable in constant time, such that $G$ contains a clique of size $k$ if and only if there exists a weight-$k$ assignment $a$ with $\Phi(a) \geq \tau$.

We give a sketch of the proof here and refer the reader to the full version for details.

Proof (sketch)..

Let $a$ be an assignment that maximizes the value $\Phi(a)$. By Lemma 4.9, we can assume that $a$ is $k$-partite. Then by Corollary 4.8, we have $\Phi(a) = m(S) \cdot \alpha + c$, where $c$ is a value equal for all $k$-partite assignments (independent of $a$). We now distinguish two cases based on the sign of $\alpha$. If $\alpha > 0$, set $\tau := \binom{k}{3} \alpha + c$; otherwise set $\tau := c$. In both cases, a simple inspection of our construction shows that a $k$-partite assignment $a$ satisfies $\Phi(a) \geq \tau$ if and only if the vertices in $S$ form a clique in $G$.

Case 2: $\beta < 0$.

Construction 4.11.

Let $X = V(G')$ be the set of $kn$ variables. For each edge $\{u,v,w\}$ in $G'$, add $\phi(u,v,w)$ to $\Phi$. Furthermore, for each $i \in [k]$ and each pair of distinct vertices $u,v \in V_i$, add $\phi(u,v,w)$ to $\Phi$ for every $w \in V(G') \setminus \{u,v\}$.

Following the general structure of the first case, we first count the number of constraints satisfied by some assignment a and then show that any optimal assignment is k-partite.

Lemma 4.12.

Let $a$ be any weight-$k$ assignment and let $S$ be the set of vertices that correspond to the variables set to 1 by $a$. Denote by $k_i$ the value $|S \cap V_i|$, and let $m(S)$ denote the number of edges in the induced subhypergraph $G'[S]$. Then the following equality holds:

$$\begin{aligned} \Phi(a) &= \alpha \left( m(S) + \sum_{i \in [k]} \binom{k_i}{3} + \sum_{\{i,j\} \in \binom{[k]}{2}} \left( \binom{k_i}{2} k_j + \binom{k_j}{2} k_i \right) \right) \\ &\quad + \beta \left( \sum_{\{i,j\} \in \binom{[k]}{2}} k_i k_j (\lambda + 2n - k_i - k_j) + \sum_{i \in [k]} \binom{k_i}{2} (kn - 2) \right) + c, \end{aligned}$$

where c is a value that does not depend on the choice of a.

We remark that the proof follows the same general idea as the proof of Lemma 4.7, but with some more tedious counting arguments. For the full proof, see the full version of the paper.

We can now argue that for any $k$-partite assignment $a$ and any weight-$k$ assignment $a'$ that is not $k$-partite, the inequality $\Phi(a) > \Phi(a')$ is satisfied.

Lemma 4.13.

Let $a$ be any $k$-partite assignment of the variables in $X$ and let $a'$ be any non-$k$-partite assignment of weight $k$. Let $S$ and $S'$ be the sets of vertices that correspond to the variables set to 1 by $a$ and $a'$ respectively. Then $\Phi(a) > \Phi(a')$.

While the proof follows a general approach similar to that of Lemma 4.9, we remark that it is significantly more technical, and the details can be found in the full version of the paper. We now focus on $k$-partite assignments only, and show that among all $k$-partite assignments, one maximizing the value $\Phi(a)$ corresponds to a $k$-clique in $G$ (if one exists).

Lemma 4.14.

There exists an integer $\tau$, computable in constant time, such that $G$ contains a clique of size $k$ if and only if there exists a weight-$k$ assignment $a$ with $\Phi(a) \geq \tau$.

Proof.

Let $a$ be an assignment of weight $k$ that maximizes the value $\Phi(a)$. By Lemma 4.13, $a$ is $k$-partite, thus $k_i = 1$ for all $i \in [k]$, and by Lemma 4.12 it satisfies:

$$\Phi(a) = \alpha \cdot m(S) + \beta \binom{k}{2} (\lambda + 2n - 2) + c.$$

We now observe that the part $d := \beta \binom{k}{2} (\lambda + 2n - 2) + c$ is independent of the choice of the $k$-partite assignment $a$. In particular, $\Phi(a) = \alpha \cdot m(S) + d$. The result now follows by precisely the same argument as in the proof of Lemma 4.10.

Case 3: $\beta = 0$.

For this case we reuse one of the previous constructions, depending on the value of $\alpha$. In particular, we need to ensure that any optimal assignment is indeed $k$-partite. We go over the details in the full paper and state the main lemma here:

Lemma 4.15.

There exists an integer $\tau$, computable in constant time, such that the hypergraph $G$ contains a clique of size $k$ if and only if there exists a weight-$k$ assignment $a$ with $\Phi(a) \geq \tau$.

We can finally prove Theorem 4.2.

Proof of Theorem 4.2.

Assume that there exists a family $\mathcal{F}$ with $\deg(\mathcal{F}) \geq 3$ and an algorithm solving $\mathrm{maxCSP}_k(\mathcal{F})$ in time $\mathcal{O}(n^{k(1-\varepsilon)})$ for some $\varepsilon > 0$. With our previous observation, detailed in the full paper, we construct a family $\mathcal{F}'$ of degree exactly 3 such that $\mathrm{maxCSP}_k(\mathcal{F}')$ is solvable in time $\mathcal{O}(n^{k(1-\varepsilon)})$. Then, given a regular 3-uniform hypergraph $G$, we apply the corresponding construction (depending on the value of $\beta$) to obtain an instance $\Phi$ of $\mathrm{maxCSP}_k(\mathcal{F}')$ that consists of $kn$ variables and $\mathcal{O}(m)$ constraints. Now, by Lemmas 4.10, 4.14 and 4.15, we can decide whether $G$ contains a clique of size $k$ by computing an assignment $a$ that maximizes the value $\Phi(a)$ in time $\mathcal{O}(n^{k(1-\varepsilon)})$, thus refuting the 3-uniform $k$-Hyperclique Hypothesis.

References

  • [1] Amir Abboud, Arturs Backurs, and Virginia Vassilevska Williams. If the current clique algorithms are optimal, so is valiant’s parser. In Venkatesan Guruswami, editor, IEEE 56th Annual Symposium on Foundations of Computer Science, FOCS 2015, Berkeley, CA, USA, 17-20 October, 2015, pages 98–117. IEEE Computer Society, 2015. doi:10.1109/FOCS.2015.16.
  • [2] Amir Abboud, Karl Bringmann, Holger Dell, and Jesper Nederlof. More consequences of falsifying SETH and the Orthogonal Vectors conjecture. In Proc. 50th Annual ACM SIGACT Symposium on Theory of Computing (STOC 2018), STOC 2018, pages 253–266, New York, NY, USA, 2018. ACM. doi:10.1145/3188745.3188938.
  • [3] Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. More asymmetry yields faster matrix multiplication. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 2005–2039. SIAM, 2025. doi:10.1137/1.9781611978322.63.
  • [4] Haozhe An, Mohit Gurumukhani, Russell Impagliazzo, Michael Jaber, Marvin Künnemann, and Maria Paula Parga Nina. The fine-grained complexity of multi-dimensional ordering properties. In Petr A. Golovach and Meirav Zehavi, editors, 16th International Symposium on Parameterized and Exact Computation, IPEC 2021, September 8-10, 2021, Lisbon, Portugal, volume 214 of LIPIcs, pages 3:1–3:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.IPEC.2021.3.
  • [5] Ulrik Brandes, Eugenia Holm, and Andreas Karrenbauer. Cliques in regular graphs and the core-periphery problem in social networks. In T.-H. Hubert Chan, Minming Li, and Lusheng Wang, editors, Combinatorial Optimization and Applications - 10th International Conference, COCOA 2016, Hong Kong, China, December 16-18, 2016, Proceedings, volume 10043 of Lecture Notes in Computer Science, pages 175–186. Springer, 2016. doi:10.1007/978-3-319-48749-6_13.
  • [6] Karl Bringmann, Alejandro Cassis, Nick Fischer, and Marvin Künnemann. A structural investigation of the approximability of polynomial-time problems. In Mikolaj Bojanczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming, ICALP 2022, July 4-8, 2022, Paris, France, volume 229 of LIPIcs, pages 30:1–30:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.ICALP.2022.30.
  • [7] Karl Bringmann, Nick Fischer, and Marvin Künnemann. A fine-grained analogue of schaefer’s theorem in P: dichotomy of existsˆk-forall-quantified first-order graph properties. In Amir Shpilka, editor, 34th Computational Complexity Conference, CCC 2019, July 18-20, 2019, New Brunswick, NJ, USA, volume 137 of LIPIcs, pages 31:1–31:27. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.CCC.2019.31.
  • [8] Karl Bringmann and Jasper Slusallek. Current algorithms for detecting subgraphs of bounded treewidth are probably optimal. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming, ICALP 2021, July 12-16, 2021, Glasgow, Scotland (Virtual Conference), volume 198 of LIPIcs, pages 40:1–40:16. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPICS.ICALP.2021.40.
  • [9] Karl Bringmann and Philip Wellnitz. Clique-based lower bounds for parsing tree-adjoining grammars. In Juha Kärkkäinen, Jakub Radoszewski, and Wojciech Rytter, editors, 28th Annual Symposium on Combinatorial Pattern Matching, CPM 2017, July 4-6, 2017, Warsaw, Poland, volume 78 of LIPIcs, pages 12:1–12:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPICS.CPM.2017.12.
  • [10] Andrei A. Bulatov. A dichotomy theorem for nonuniform CSPs. In Proc. 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2017), pages 319–330, 2017. doi:10.1109/FOCS.2017.37.
  • [11] Andrei A. Bulatov and Dániel Marx. Constraint satisfaction parameterized by solution size. SIAM J. Comput., 43(2):573–616, 2014. doi:10.1137/120882160.
  • [12] Leizhen Cai. Parameterized complexity of cardinality constrained optimization problems. Comput. J., 51(1):102–121, 2008. doi:10.1093/COMJNL/BXM086.
  • [13] Nofar Carmeli, Shai Zeevi, Christoph Berkholz, Benny Kimelfeld, and Nicole Schweikardt. Answering (unions of) conjunctive queries using random access and random-order enumeration. In Dan Suciu, Yufei Tao, and Zhewei Wei, editors, Proceedings of the 39th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, PODS 2020, Portland, OR, USA, June 14-19, 2020, pages 393–409. ACM, 2020. doi:10.1145/3375395.3387662.
  • [14] Yi-Jun Chang. Hardness of RNA folding problem with four symbols. In Roberto Grossi and Moshe Lewenstein, editors, 27th Annual Symposium on Combinatorial Pattern Matching, CPM 2016, June 27-29, 2016, Tel Aviv, Israel, volume 54 of LIPIcs, pages 13:1–13:12. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.CPM.2016.13.
  • [15] Charles J. Colbourn and Jeffrey H. Dinitz, editors. Handbook of Combinatorial Designs. Chapman and Hall/CRC, New York, 2 edition, November 2006. doi:10.1201/9781420010541.
  • [16] Nadia Creignou. A dichotomy theorem for maximum generalized satisfiability problems. J. Comput. Syst. Sci., 51(3):511–522, 1995. doi:10.1006/jcss.1995.1087.
  • [17] Marek Cygan, Fedor V. Fomin, Lukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michal Pilipczuk, and Saket Saurabh. Parameterized Algorithms. Springer, 2015. doi:10.1007/978-3-319-21275-3.
  • [18] Mina Dalirrooyfard and Virginia Vassilevska Williams. Induced cycles and paths are harder than you think. In 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022, Denver, CO, USA, October 31 - November 3, 2022, pages 531–542. IEEE, 2022. doi:10.1109/FOCS54457.2022.00057.
  • [19] Uriel Feige. A threshold of ln n for approximating set cover. J. ACM, 45(4):634–652, 1998. doi:10.1145/285055.285059.
  • [20] Egor Gorbachev and Marvin Künnemann. Combinatorial designs meet hypercliques: Higher lower bounds for Klee's measure problem and related problems in dimensions d ≥ 4. In Erin W. Chambers and Joachim Gudmundsson, editors, 39th International Symposium on Computational Geometry, SoCG 2023, June 12-15, 2023, Dallas, Texas, USA, volume 258 of LIPIcs, pages 36:1–36:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.SOCG.2023.36.
  • [21] Sanjeev Khanna, Madhu Sudan, Luca Trevisan, and David P. Williamson. The approximability of constraint satisfaction problems. SIAM J. Comput., 30(6):1863–1920, 2000. doi:10.1137/S0097539799349948.
  • [22] Stefan Kratsch, Dániel Marx, and Magnus Wahlström. Parameterized complexity and kernelizability of max ones and exact ones problems. TOCT, 8(1):1:1–1:28, 2016. doi:10.1145/2858787.
  • [23] Marvin Künnemann. A tight (non-combinatorial) conditional lower bound for klee’s measure problem in 3d. In 63rd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2022, Denver, CO, USA, October 31 - November 3, 2022, pages 555–566. IEEE, 2022. doi:10.1109/FOCS54457.2022.00059.
  • [24] Marvin Künnemann and Dániel Marx. Finding small satisfying assignments faster than brute force: A fine-grained perspective into boolean constraint satisfaction. In Shubhangi Saraf, editor, 35th Computational Complexity Conference, CCC 2020, July 28-31, 2020, Saarbrücken, Germany (Virtual Conference), volume 169 of LIPIcs, pages 27:1–27:28. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.CCC.2020.27.
  • [25] Andrea Lincoln, Virginia Vassilevska Williams, and R. Ryan Williams. Tight hardness for shortest cycles and paths in sparse graphs. In Artur Czumaj, editor, Proceedings of the Twenty-Ninth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2018, New Orleans, LA, USA, January 7-10, 2018, pages 1236–1252. SIAM, 2018. doi:10.1137/1.9781611975031.80.
  • [26] Dániel Marx. Parameterized complexity of constraint satisfaction problems. Computational Complexity, 14(2):153–183, 2005. doi:10.1007/s00037-005-0195-9.
  • [27] Dániel Marx. Parameterized graph separation problems. Theor. Comput. Sci., 351(3):394–406, 2006. doi:10.1016/J.TCS.2005.10.007.
  • [28] Luke Mathieson and Stefan Szeider. The parameterized complexity of regular subgraph problems and generalizations. In James Harland and Prabhu Manyem, editors, Theory of Computing 2008. Proc. Fourteenth Computing: The Australasian Theory Symposium (CATS 2008), Wollongong, NSW, Australia, January 22-25, 2008. Proceedings, volume 77 of CRPIT, pages 79–86. Australian Computer Society, 2008. URL: http://crpit.scem.westernsydney.edu.au/abstracts/CRPITV77Mathieson.html.
  • [29] Jaroslav Nešetřil and Svatopluk Poljak. On the complexity of the subgraph problem. Commentationes Mathematicae Universitatis Carolinae, 026(2):415–419, 1985.
  • [30] Thomas J. Schaefer. The complexity of satisfiability problems. In Proceedings of the 10th Annual ACM Symposium on Theory of Computing, May 1-3, 1978, San Diego, California, USA, pages 216–226, 1978. doi:10.1145/800133.804350.
  • [31] Virginia Vassilevska Williams. On some fine-grained questions in algorithms and complexity. In Proceedings of the international congress of mathematicians: Rio de janeiro 2018, pages 3447–3487, 2018.
  • [32] R. Ryan Williams. Algorithms and resource requirements for fundamental problems. PhD thesis, Carnegie Mellon University, USA, 2007. AAI3274191.
  • [33] Or Zamir. Algorithmic applications of hypergraph and partition containers. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 985–998. ACM, 2023. doi:10.1145/3564246.3585163.
  • [34] Dmitriy Zhuk. A proof of CSP dichotomy conjecture. In Proc. 58th IEEE Annual Symposium on Foundations of Computer Science (FOCS 2017), pages 331–342, 2017. doi:10.1109/FOCS.2017.38.
  • [35] Dmitriy Zhuk. A proof of the CSP dichotomy conjecture. J. ACM, 67(5):30:1–30:78, 2020. doi:10.1145/3402029.