Faster Exponential Algorithms for Cut Problems via Geometric Data Structures
Abstract
For many hard computational problems, simple algorithms that run in time $O^*(2^n)$ arise, say, from enumerating all subsets of a size-$n$ set. Finding (exponentially) faster algorithms is a natural goal that has driven much of the field of exact exponential algorithms (e.g., see Fomin and Kratsch, 2010). In this paper we obtain algorithms with running time $O(c^n)$ for a constant $c < 2$ on input graphs with $n$ vertices, for the following well-studied problems:
- $d$-Cut: find a proper cut in which no vertex has more than $d$ neighbors on the other side of the cut;
- Internal Partition: find a proper cut in which every vertex has at least as many neighbors on its side of the cut as on the other side; and
- $(\sigma,\rho)$-Domination: given intervals $\sigma, \rho \subseteq \mathbb{N}$, find a subset $S$ of the vertices, so that for every vertex $v \in S$ the number of neighbors of $v$ in $S$ is from $\sigma$, and for every vertex $v \notin S$, the number of neighbors of $v$ in $S$ is from $\rho$.
Our algorithms are exceedingly simple, combining the split and list technique (Horowitz and Sahni, 1974; Williams, 2005) with a tool from computational geometry: orthogonal range searching in the moderate dimensional regime (Chan, 2017). Our technique is applicable to the decision, optimization and counting versions of these problems and easily extends to various generalizations with more fine-grained, vertex-specific constraints, as well as to directed, balanced, and other variants.
Algorithms with running times of the form $O(c^n)$, for $c < 2$, were known for the first problem only for constant $d$, and for the third problem for certain special cases of $\sigma$ and $\rho$; for the second problem we are not aware of such results.
Keywords and phrases: graph algorithms, cuts, exponential time, data structures
2012 ACM Subject Classification: Theory of computation → Design and analysis of algorithms
Editors: Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher: Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
For many hard computational problems, simple algorithms that run in time $O^*(2^n)$ can be obtained, say, by enumerating all subsets of a size-$n$ set. Obtaining speedups of a factor exponential in $n$ is a natural goal that has driven much of the field of exact exponential algorithms, motivating the development of powerful and general algorithmic techniques.
Landmark successes include Independent Set [30], Subset Sum [18], Hamiltonian Cycle [4], Max Cut [33], $k$-CNF SAT [27], to name just a few. While all NP-hard, these problems admit algorithms running in time $O(c^n)$ with $c < 2$, significantly faster than simple enumeration or dynamic programming over subsets.¹ (¹ For the problems we mention, $n$ refers to the most natural problem-specific input parameter, e.g., the number of vertices for graph problems, the number of variables for CNF SAT, or the number of elements for Set Cover or Subset Sum.) We refer to the textbook of Fomin and Kratsch [16] for a broad overview of such results and techniques, as well as to the earlier, influential survey of Woeginger [34].
From a practical point of view, replacing a runtime of $2^n$ by (say) one of the form $c^n$ for some $c$ slightly below $2$ is, of course, rarely consequential, especially if the latter algorithm is more complicated. Yet, such improvements often point at problem-specific algorithmic structure that can be further exploited. Also, the search for algorithms that break such "triviality barriers" has often led to the invention of powerful algorithmic techniques that have found broader applications; examples include branch and reduce, measure and conquer, or split and list.
Yet, the set of techniques known to lead to such speedups is not large and several important problems seem to resist improvements beyond $2^n$ (Graph Coloring, CNF SAT, Set Cover, TSP are prominent examples). The hypothesis that no substantially faster algorithm exists for CNF SAT (the SETH Conjecture [19]) has become a cornerstone of fine-grained complexity theory, and other barriers are similarly conjectured (e.g., the Set Cover Conjecture [10, 21]). There are also many lesser known problems "stuck" at $2^n$, where the existing techniques do not seem applicable. Problems where such a barrier was recently bypassed include, e.g., various scheduling [11, 25], bin packing [24], clustering [1, 13], multicut [23] and multiway cut [8, 35] problems, to mention just a few.
Restricted cut problems.
In this paper, we mainly consider cut problems in graphs, with certain natural restrictions on how vertices may interact with the cut. We focus on decision problems (whether a cut of the required form exists); in all cases, an actual cut can be constructed with minimal overhead.
Our first problem is $d$-Cut: given an undirected graph $G = (V, E)$ and an integer $d \geq 0$, find a bipartition $V = X \cup Y$, where $X, Y$ are nonempty disjoint sets, so that each vertex has at most $d$ neighbors on the other side of the partition. Precisely, denoting by $N(v)$ the (open) set of neighbors of $v$, we require $|N(v) \cap Y| \leq d$ for all $v \in X$ and $|N(v) \cap X| \leq d$ for all $v \in Y$. The value $d$ is arbitrary; in particular, it may depend on $n = |V|$.
The problem can be seen as a natural min-max variant of the famous Min Cut problem. While Min Cut is polynomial-time solvable, $d$-Cut is NP-hard, even for $d = 1$ [9].
The $d$-Cut problem was studied by Gomes and Sau [12] as a natural generalization of the well-known Matching Cut problem (the case $d = 1$). Graphs admitting a matching cut are known as decomposable and have been extensively studied, e.g., see Graham [17], Chvátal [9], and Bonsma [5]. For Matching Cut, Komusiewicz, Kratsch, and Le [20] give an algorithm with runtime $O(c^n)$ for a constant $c < 2$, using a reduction to CNF SAT. For the general $d$-Cut problem, Gomes and Sau obtain a runtime of the form $O(c_d^n)$, where $c_d$ grows with $d$. As such, the base $c_d$ is smaller than $2$ for fixed $d$, but cannot, in general, be bounded away from $2$; the existence of an algorithm with runtime $O(c^n)$ for a constant $c < 2$ independent of $d$ was left open by Gomes and Sau. In fact, the SAT-based algorithm of Komusiewicz et al. also applies to $d$-Cut, resulting in a polynomial number of CNF SAT instances with $n$ variables and clause width growing with $d$. Under the SETH conjecture [19], the running time of this approach thus also cannot be bounded as $O(c^n)$ for a constant $c < 2$ independent of $d$.
We note that the related Max Cut problem can be solved in time $O^*(2^{\omega n/3})$ by reduction to fast matrix multiplication [33], where $\omega$ is the matrix multiplication exponent; this technique, however, seems not to extend to $d$-Cut.
A second general cut problem we consider is $(\sigma,\rho)$-Domination. Here, we seek a bipartition $V = X \cup Y$, where $|N(v) \cap X| \in \sigma$ for all $v \in X$ and $|N(v) \cap X| \in \rho$ for all $v \in Y$. Note that $\sigma$ and $\rho$ are subsets of the set $\mathbb{N} = \{0, 1, 2, \dots\}$.
This formulation, introduced by Telle [31], generalizes many natural structures and problems in graphs, such as Independent Set, Dominating Set, Induced Matching, etc., by setting $\sigma$ and $\rho$ to particular values (we refer to [31, 14] for a list of special cases). While the fully general case seems hopeless, it is perhaps most natural to require $\sigma$ and $\rho$ to be intervals, i.e., sets of the form $\{a, a+1, \dots, b\}$, possibly with $b = \infty$. This already captures most, if not all of the interesting special cases. Notice, however, that the problem does not strictly generalize $d$-Cut; while setting $\rho = [0, d]$ enforces the degree constraint on the vertices of $Y$, the admissible degrees for vertices in $X$ depend on their original degrees in $G$.
The problem was studied from the point of view of exponential algorithmics by Fomin, Golovach, Kratochvíl, Kratsch, and Liedloff [14]. They obtain algorithms with runtime $O(c^n)$, for constants $c < 2$, in certain special cases of $\sigma$ and $\rho$, as well as in some cases where $\sigma$ and $\rho$ are not necessarily intervals, e.g., when they are generated by arithmetic sequences; these latter cases fall outside of our definition. The same authors, in a different paper [15], also consider other special cases. These require some combination of $\sigma$ and $\rho$ being disjoint, finite, or successor-free (i.e., not containing consecutive integers). As such, the conditions are not comparable with our requirement for $\sigma$ and $\rho$ to be intervals; in our case, $\sigma$ and $\rho$ may overlap arbitrarily and may be infinite, but cannot have gaps.
The third problem we consider is Internal Partition. Here, we seek a bipartition $V = X \cup Y$, where $X$ and $Y$ are nonempty and every vertex has at least half of its neighbors on its own side of the partition. Formally, for all $v \in X$ we have $|N(v) \cap X| \geq |N(v) \cap Y|$, and for all $v \in Y$ we have $|N(v) \cap Y| \geq |N(v) \cap X|$.
Such a partition, called an internal partition, arises in many settings (in the literature the terms friendly, cohesive, or satisfactory partition are also used). Determining whether a graph admits an internal partition is NP-hard [3]; this is in contrast to external partitions (where every vertex has at least half of its neighbors on the other side), which always exist. Research has mostly focused on conditions that guarantee the existence of an internal partition, e.g., see Thomassen [32], Stiebitz [29], and the survey of Bazgan, Tuza, and Vanderpooten [3]. A long-standing conjecture is that every sufficiently large $d$-regular graph has an internal partition; e.g., see Ban and Linial [2] and Linial and Louis [22] for recent partial results. We are not aware of algorithms with runtime of the form $O(c^n)$, $c < 2$, for deciding the existence of an internal partition.
All three problems can trivially be solved in time $O^*(2^n)$, by enumerating all bipartitions. Our main result improves this for all three problems.
Theorem 1.
The $d$-Cut, $(\sigma,\rho)$-Domination, and Internal Partition problems can be solved in time $O(c^n)$ for a constant $c < 2$.
While we state our results for the decision problems (does a bipartition with the given property exist?), we can also easily extend them to the optimization versions of the problems (maximize or minimize the size of one side of the cut while satisfying the constraint), and to the counting versions (count the number of cuts with the given property). The optimization view is most natural for $(\sigma,\rho)$-Domination, where special cases include the task of finding e.g., a maximum independent set, a minimum dominating set, etc. (of course, for some of the well-known special cases faster algorithms already exist).
Additional constraints on the cuts (e.g., requiring $X$ to be of a certain size $s$), or making the edges directed (and extending the conditions accordingly to in- or out-neighborhoods) can easily be accommodated, as will be clear later.
Our algorithms also extend naturally to more fine-grained, vertex-specific conditions. For $d$-Cut, this could mean setting an upper bound $d_v$ for each vertex $v$ on the number of neighbors across the cut, and possibly, a vertex-specific lower bound on the same quantity. For $(\sigma,\rho)$-Domination, we could set specific intervals $\sigma_v, \rho_v$ for each vertex $v$; these may even depend on the degree of $v$ in the input, or on other properties of the graph.
Overview of our approach.
In spite of the unwieldy bound on the running time, our approach is conceptually exceedingly simple, and all three problems are tackled in essentially the same way.
Our starting point is the split and list technique originally used by Horowitz and Sahni [18] for Subset Sum: Given a set $S$ of $n$ integers, find a subset of $S$ whose elements add up to a given value $t$.
The technique works by splitting $S$ into halves $S_1$ and $S_2$ of size $n/2$ each. Then, the sets $T_1$, resp., $T_2$ of the possible sums given by subsets of $S_1$, resp., $S_2$ are computed. The problem now amounts to searching, for each $t_1 \in T_1$, for a matching value $t_2 = t - t_1$ in $T_2$. This can be solved in time $O(n)$ per query by binary search if $T_2$ is sorted, which can be achieved by an $O(2^{n/2} \cdot n)$ time preprocessing. An overall runtime of $O^*(2^{n/2})$ results.
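The split and list procedure just sketched can be written out as follows (a minimal illustrative sketch; the function and variable names are ours, not from the original papers):

```python
from bisect import bisect_left
from itertools import combinations

def subset_sum(values, target):
    """Meet-in-the-middle (split and list) decision procedure for Subset Sum.

    Splits the input into two halves, enumerates the subset sums of each
    half, sorts one list, and binary-searches it for each complement.
    Runs in O(2^(n/2) * n) time instead of the naive O(2^n)."""
    half = len(values) // 2
    left, right = values[:half], values[half:]

    def all_sums(items):
        # All 2^|items| subset sums of the given list.
        sums = []
        for r in range(len(items) + 1):
            for combo in combinations(items, r):
                sums.append(sum(combo))
        return sums

    right_sums = sorted(all_sums(right))
    for s in all_sums(left):
        need = target - s
        i = bisect_left(right_sums, need)
        if i < len(right_sums) and right_sums[i] == need:
            return True
    return False
```

For instance, `subset_sum([3, 34, 4, 12, 5, 2], 9)` succeeds via the pair $4 + 5$, with the two halves contributing $4$ and $5$ respectively.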
We can view the sorting/searching part of the algorithm as storing in an efficient comparison-based dictionary (say, a balanced binary search tree). Our extension of the technique amounts to replacing this dictionary with a more sophisticated multidimensional search structure.
In particular, we use the orthogonal range search data structure of Chan [7]. Such a data structure stores a set of vectors from $\mathbb{R}^d$. A dominance range query returns the number (or the list) of stored vectors that are upper bounded by the query vector on every coordinate. Precisely, a stored vector $x$ is reported for query vector $y$ if $x_i \leq y_i$ for all $i \in \{1, \dots, d\}$.
In most applications of range searching, the dimension $d$ is assumed constant; Chan's data structure, however, allows for $d$ that is polylogarithmic in the number of stored vectors. As we only need $d$ logarithmic in the number of stored vectors, we state the result in this form. Note that dominance range queries can be reduced to orthogonal range queries, where each query is specified by an axis-parallel hypercube (box), and vice versa (by doubling the number of dimensions). In our application, we always formulate our queries as dominance range queries.
A small remark on the computational model is in order. While the data structure is defined for vectors in $\mathbb{R}^d$, assuming a real RAM with constant time operations, in our application all vector coordinates are integers of magnitude $O(n)$; our results thus hold in more realistic models such as the word RAM, or even if we account for the number of bit-operations.
The result we rely on takes the following form.
Lemma 2 (Chan [7]).
We can preprocess $N$ vectors from $\mathbb{R}^d$ for $d = O(\log N)$, and store them in a data structure that serves a sequence of $N$ dominance range queries in a total time (including the preprocessing) of $O(N^{2-\delta})$, where $\delta > 0$ depends only on $d / \log N$.
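To make the interface of Lemma 2 concrete, here is a brute-force stand-in with the same query semantics (the class and method names are ours); it answers each query in $O(N \cdot d)$ time, whereas the structure of the lemma achieves a truly subquadratic total over all $N$ queries:

```python
class DominanceIndex:
    """Naive reference implementation of dominance range searching.

    Stores d-dimensional vectors; a query q counts (or detects) the stored
    vectors x with x[i] <= q[i] on every coordinate i.  Chan's structure
    replaces the linear scan below with a k-d-tree-like recursion."""

    def __init__(self, vectors):
        self.vectors = [tuple(v) for v in vectors]

    def count_dominated(self, q):
        # Number of stored vectors upper bounded by q on every coordinate.
        return sum(
            all(xi <= qi for xi, qi in zip(x, q)) for x in self.vectors
        )

    def any_dominated(self, q):
        # True iff some stored vector is dominated by q.
        return any(
            all(xi <= qi for xi, qi in zip(x, q)) for x in self.vectors
        )
```

In our algorithms, only the semantics of `count_dominated` / `any_dominated` matter; any implementation meeting the time bound of Lemma 2 can be plugged in.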
We use this geometric data structure to solve the mentioned graph cut problems similarly to the split and list technique sketched above. We split the set of vertices in two equal parts, and form all possible bipartitions of both parts. Combining two such bipartitions into one requires checking whether they are “compatible”, i.e., whether their union satisfies the degree constraints. This can, in all three cases, be verified with dominance range queries on appropriately preprocessed degree vectors. We precisely state and prove our results in Section 2.
In Section 3 we derive an exact constant for Lemma 2 in our setting; this is not easy to read off directly from previous work, as results are stated in asymptotic notation, where, e.g., the dimension parameter is allowed to be (slightly) superconstant. In our application, as we will show, concrete constant values of the parameters are feasible, which yield the stated runtime.
Chan [7] gives several different implementations that result in a bound of the form given in Lemma 2. We work out the constant in the exponent for the variant that appears the simplest [7, § 2.1]. It is a randomized, “purely combinatorial” implementation, i.e., not using fast matrix multiplication or other algebraic tools. The data structure allows for an online sequence of queries. As in our setting the queries are offline (i.e., known upfront), we could use one of the more efficient offline, as well as deterministic versions from [7], to obtain a similar, possibly better constant; we refrain from this to keep the presentation simple.
While the use of a geometric data structure may seem surprising in a graph theoretic context (its implementation involves an intricate lopsided recursion reminiscent of k-d-trees), Chan's technique did have previous graph applications, although mostly in the polynomial-time regime, e.g., for the all pairs shortest paths problem [7]. For a different flavor of applying orthogonal range searching (in constant dimensions) to graph algorithms, see [6].
2 Results
We use standard graph notation. A bipartition (cut) $(X, Y)$ of an undirected graph $G = (V, E)$ is a partitioning of the set $V$ of vertices, i.e., $X \cup Y = V$ and $X \cap Y = \emptyset$. A proper bipartition is one where both $X$ and $Y$ are nonempty. We denote the open neighborhood of a vertex $v$ by $N(v)$, i.e., $N(v) = \{u \in V \mid \{u, v\} \in E\}$. For convenience, for a set $S \subseteq V$ we denote $N_S(v) = N(v) \cap S$.
2.1 Internal partition
We start with the Internal Partition problem whose solution is the simplest. We first formally define the problem.
Internal partition
Input: An undirected graph $G = (V, E)$.
Output: Is there a proper bipartition $(X, Y)$ of $V$ such that
- $|N(v) \cap X| \geq |N(v) \cap Y|$ for all $v \in X$,
- $|N(v) \cap Y| \geq |N(v) \cap X|$ for all $v \in Y$.
In words, every vertex has at least half of its neighbors in its own part.
Algorithm.
We now describe our algorithm for Internal Partition. Start by choosing an arbitrary subset $A$ of $V$ of size $\lfloor n/2 \rfloor$ and let $B = V \setminus A$. For ease of notation, assume that $n$ is even, so $|A| = |B| = n/2$.
We consider every possible bipartition $(A_1, A_2)$ of $A$ and every possible bipartition $(B_1, B_2)$ of $B$. Our goal is to identify a "matching pair", i.e., a bipartition $(A_1, A_2)$ of $A$ and a bipartition $(B_1, B_2)$ of $B$ so that $(A_1 \cup B_1, A_2 \cup B_2)$ is a valid bipartition of $V$, satisfying the given degree constraints.
To achieve this, we encode every proper bipartition $(A_1, A_2)$ of $A$ as a vector $x$ and every proper bipartition $(B_1, B_2)$ of $B$ as a vector $y$, both of dimension $2n$. Our encoding is meant to ensure that $(A_1 \cup B_1, A_2 \cup B_2)$ is a feasible solution for Internal Partition if and only if $x \leq y$ coordinatewise, i.e., vector $y$ dominates vector $x$. Intuitively, $x$ is a data vector stored in the data structure, and $y$ is a query vector.
We define $x$ as follows; it has two entries $x_v^1, x_v^2$ for every $v \in V$. For all $v \in V$:
- if $v \in A_1$, then $x_v^1 = |N(v) \cap A_2| - |N(v) \cap A_1|$, and $x_v^2 = -M$,
- if $v \in A_2$, then $x_v^2 = |N(v) \cap A_1| - |N(v) \cap A_2|$, and $x_v^1 = -M$,
- if $v \in B$, then $x_v^1 = |N(v) \cap A_2| - |N(v) \cap A_1|$, and $x_v^2 = |N(v) \cap A_1| - |N(v) \cap A_2|$,
where $M$ is a sufficiently large value, say $M = n + 1$.

Intuitively, the entries $x_v^1$ and $x_v^2$ store the "excess degree" of $v$ towards the other part, as contributed by the $A$-side. The two groups of entries correspond to the two inequalities in the definition of the problem. If $v \in A$, then its assignment to either set $A_1$ or $A_2$ is already determined, so only one of the two inequalities is relevant to $v$. Accordingly, we set one of $x_v^1$ and $x_v^2$ to the correct excess degree, while we set the other to the value $-M$, which ensures dominance on that coordinate. If $v \in B$, then its assignment to $B_1$ or $B_2$ is not yet determined, and thus, we compute the excess degrees for both cases.
Symmetrically, for every proper bipartition $(B_1, B_2)$ of $B$, we create a vector $y$ with entries $y_v^1, y_v^2$, as follows. For all $v \in V$:
- if $v \in B_1$, then $y_v^1 = |N(v) \cap B_1| - |N(v) \cap B_2|$, and $y_v^2 = M$,
- if $v \in B_2$, then $y_v^2 = |N(v) \cap B_2| - |N(v) \cap B_1|$, and $y_v^1 = M$,
- if $v \in A$, then $y_v^1 = |N(v) \cap B_1| - |N(v) \cap B_2|$, and $y_v^2 = |N(v) \cap B_2| - |N(v) \cap B_1|$.
The algorithm now stores all vectors $x$ in a data structure and queries, for each vector $y$, whether it dominates any of the stored vectors. If a dominating pair is found, then the output is yes, otherwise no.
Both the number of stored vectors and the number of queries is at most $2^{n/2}$. The construction of the vectors clearly takes time polynomial in $n$ per vector.
Of course, comparing each pair directly would yield no runtime improvement. However, by applying the data structure of Lemma 2, we can solve the problem in time $O(c^n)$ for a constant $c < 2$, since the dimension $2n$ is $O(\log N)$ for the $N \leq 2^{n/2}$ stored vectors.
We required $(A_1, A_2)$ and $(B_1, B_2)$ to be proper bipartitions of $A$ and $B$, to avoid a possible trivial solution where one part is empty. This means that we need to check the cases $A_1 = \emptyset$ or $A_2 = \emptyset$ against all proper bipartitions of $B$, and the cases $B_1 = \emptyset$ or $B_2 = \emptyset$ against all proper bipartitions of $A$. These special cases require time $O^*(2^{n/2})$, which is absorbed in the overall runtime.
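The whole scheme can be sketched as follows; brute-force dominance checks stand in for the data structure of Lemma 2, and the concrete entry values reflect our reading of the encoding above (names are ours):

```python
from itertools import product

BIG = 10**6  # placeholder for the "sufficiently large" value M

def has_internal_partition(adj):
    """Split-and-list sketch for Internal Partition.

    adj: adjacency lists of a graph on vertices 0..n-1.  Enumerates
    bipartitions (A1, A2) of the first half and (B1, B2) of the second,
    encodes them as 2n-dimensional vectors, and succeeds iff some query
    vector y dominates some stored vector x and both combined parts
    A1+B1 and A2+B2 are nonempty."""
    n = len(adj)
    A = list(range(n // 2))
    B = list(range(n // 2, n))

    def excess(v, part1, part2):
        # |N(v) ∩ part1| - |N(v) ∩ part2|
        return sum(u in part1 for u in adj[v]) - sum(u in part2 for u in adj[v])

    def stored_vector(A1, A2):
        x = []
        for v in range(n):
            if v in A1:            # side fixed: only inequality 1 applies
                x += [excess(v, A2, A1), -BIG]
            elif v in A2:          # side fixed: only inequality 2 applies
                x += [-BIG, excess(v, A1, A2)]
            else:                  # v in B: side undetermined, prepare both
                x += [excess(v, A2, A1), excess(v, A1, A2)]
        return x

    def query_vector(B1, B2):
        y = []
        for v in range(n):
            if v in B1:
                y += [excess(v, B1, B2), BIG]
            elif v in B2:
                y += [BIG, excess(v, B2, B1)]
            else:                  # v in A
                y += [excess(v, B1, B2), excess(v, B2, B1)]
        return y

    def bipartitions(S):
        for bits in product([0, 1], repeat=len(S)):
            yield ({v for v, b in zip(S, bits) if b == 0},
                   {v for v, b in zip(S, bits) if b == 1})

    for A1, A2 in bipartitions(A):
        x = stored_vector(A1, A2)
        for B1, B2 in bipartitions(B):
            if not (A1 | B1) or not (A2 | B2):
                continue  # skip improper combined bipartitions
            y = query_vector(B1, B2)
            if all(xi <= yi for xi, yi in zip(x, y)):
                return True
    return False
```

For example, the path on four vertices admits the internal partition $(\{0,1\}, \{2,3\})$, while a triangle admits none (the lone vertex always has both neighbors on the other side).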
Correctness.
We must show that $G$ admits a feasible bipartition if and only if there is some $y$ that dominates some $x$.
Suppose that $(X, Y)$ is a feasible bipartition, and let $A$ be the algorithm's choice, so $A_1 = X \cap A$, $A_2 = Y \cap A$, $B_1 = X \cap B$, and $B_2 = Y \cap B$, and $(A_1, A_2)$ and $(B_1, B_2)$ are among the bipartitions considered. The case when one of $A_1, A_2, B_1, B_2$ is empty is handled separately, and a solution can clearly be found. Otherwise, let $x_v^j$, $y_v^j$, for $v \in V$ and $j \in \{1, 2\}$, denote the entries of $x$ and $y$, as defined above.
If $v \in A_1$ or $v \in B_1$, then $x_v^1 = |N(v) \cap A_2| - |N(v) \cap A_1|$ and $y_v^1 = |N(v) \cap B_1| - |N(v) \cap B_2|$; the inequality $x_v^1 \leq y_v^1$ is equivalent to $|N(v) \cap X| \geq |N(v) \cap Y|$, which holds by feasibility, since $v \in X$. For the second coordinate, if $v \in A_1$, then $x_v^2 = -M$, so $x_v^2 \leq y_v^2$ holds trivially; if $v \in B_1$, then $y_v^2 = M$, with the same effect.

Symmetrically, if $v \in A_2$ or $v \in B_2$, then $x_v^2 = |N(v) \cap A_1| - |N(v) \cap A_2|$ and $y_v^2 = |N(v) \cap B_2| - |N(v) \cap B_1|$; the inequality $x_v^2 \leq y_v^2$ is equivalent to $|N(v) \cap Y| \geq |N(v) \cap X|$, which holds by feasibility, since $v \in Y$. The remaining coordinates are again trivially dominated ($x_v^1 = -M$ if $v \in A_2$, and $y_v^1 = M$ if $v \in B_2$).
Conversely, if $x \leq y$, then from the coordinates $x_v^1 \leq y_v^1$ we obtain $|N(v) \cap X| \geq |N(v) \cap Y|$ for all $v \in X = A_1 \cup B_1$. From $x_v^2 \leq y_v^2$ we obtain $|N(v) \cap Y| \geq |N(v) \cap X|$ for all $v \in Y = A_2 \cup B_2$. This means that $(X, Y)$ is a feasible solution.
2.2 $d$-Cut and $(\sigma,\rho)$-Domination
Let us formally state our other two problems.
$d$-Cut
Input: An undirected graph $G = (V, E)$ and a nonnegative integer $d$.
Output: Is there a proper bipartition $(X, Y)$ of $V$ such that
- $|N(v) \cap Y| \leq d$ for all $v \in X$,
- $|N(v) \cap X| \leq d$ for all $v \in Y$.
In words, every vertex is incident to at most $d$ edges that cross the cut $(X, Y)$.
$(\sigma,\rho)$-Domination
Input: An undirected graph $G = (V, E)$ and intervals $\sigma, \rho \subseteq \mathbb{N}$.
Output: Is there a proper bipartition $(X, Y)$ of $V$ such that
- $|N(v) \cap X| \in \sigma$ for all $v \in X$,
- $|N(v) \cap X| \in \rho$ for all $v \in Y$.
We next introduce a general problem formulation that allows for vertex-specific constraints and that extends both problems.
Interval-Constrained Cut
Input: An undirected graph $G = (V, E)$ and nonnegative integers $a_v$, $b_v$, $c_v$, $d_v$, for all $v \in V$.
Output: Is there a proper bipartition $(X, Y)$ of $V$ such that
- $a_v \leq |N(v) \cap X|$ for all $v \in X$,
- $|N(v) \cap X| \leq b_v$ for all $v \in X$,
- $c_v \leq |N(v) \cap X|$ for all $v \in Y$,
- $|N(v) \cap X| \leq d_v$ for all $v \in Y$.
The problem generalizes $d$-Cut, by setting $b_v = \deg(v)$ and $c_v = 0$, i.e., "don't care", and setting $a_v = \deg(v) - d$ and $d_v = d$, i.e., the required degree constraint (using that $|N(v) \cap Y| = \deg(v) - |N(v) \cap X|$).
The problem also generalizes $(\sigma,\rho)$-Domination, for intervals $\sigma = [\sigma_1, \sigma_2]$ and $\rho = [\rho_1, \rho_2]$, by setting $a_v = \sigma_1$, $b_v = \sigma_2$, $c_v = \rho_1$, and $d_v = \rho_2$, i.e., the required degree constraints (replacing an infinite endpoint by $\deg(v)$, i.e., "don't care").
Algorithm.
We now describe our solution for Interval-Constrained Cut, which is very similar to the earlier one (only slightly more complicated, since the problem definition involves four groups of inequalities instead of two).
Again, we choose an arbitrary subset $A$ of $V$ of size $n/2$ and let $B = V \setminus A$.
We encode every proper bipartition $(A_1, A_2)$ of $A$ as a vector $x$ and every proper bipartition $(B_1, B_2)$ of $B$ as a vector $y$, where both vectors have dimension $4n$. (Bipartitions where one part is empty are handled separately, in the same way as in Section 2.1.) Additionally, we encode the problem constraints as a vector $q$, also of dimension $4n$. The encoding ensures that $y$ dominates the coordinatewise sum $x + q$ if and only if $(A_1 \cup B_1, A_2 \cup B_2)$ is a feasible solution.
The entries of vectors $x$, $y$, $q$ are indexed as $x_v^j$, $y_v^j$, $q_v^j$, for $v \in V$ and $j \in \{1, 2, 3, 4\}$, ordered lexicographically according to the index $(v, j)$.
Intuitively, the entries form four groups, corresponding to the four inequalities of the interval constraints in the problem statement, and the coordinate with index $(v, j)$ verifies whether the $j$-th inequality holds for vertex $v$.
The encoding is as follows. We start with $q$, which is the simplest to state. The entries are, for all $v \in V$: $q_v^1 = a_v$, $q_v^2 = -b_v$, $q_v^3 = c_v$, and $q_v^4 = -d_v$.
We next describe the entries $x_v^j$ of $x$. Recall that $(A_1, A_2)$ is a bipartition of $A$. For all $v \in V$:
- if $v \in A_1$: $x_v^1 = -|N(v) \cap A_1|$, $x_v^2 = |N(v) \cap A_1|$, and $x_v^3 = x_v^4 = -M$,
- if $v \in A_2$: $x_v^1 = x_v^2 = -M$, $x_v^3 = -|N(v) \cap A_1|$, and $x_v^4 = |N(v) \cap A_1|$,
- if $v \in B$: $x_v^1 = x_v^3 = -|N(v) \cap A_1|$, and $x_v^2 = x_v^4 = |N(v) \cap A_1|$,
where $M$ is a sufficiently large value, say $M = 3n$.

The three cases correspond to $v$ being in $A_1$, $A_2$, or $B$, and the index $j$ distinguishes the entries needed for verifying the four inequalities. Notice that if $v \in A_1$ or $v \in A_2$, then the placement of $v$ in $X$ or $Y$ is determined and only two of the four inequalities in the definition are applicable to $v$. For these cases we set $x_v^j$ to the relevant degree of $v$ (with plus sign where we have an upper bound, and minus sign where we have a lower bound). For the inequalities that do not apply to $v$, we set $x_v^j$ to the value $-M$, which ensures that the given coordinate of $x + q$ is dominated, as vertex $v$ trivially satisfies the condition. On the other hand, if $v \in B$, then the placement of $v$ within the cut is not yet determined; in this case we account for all degrees, preparing for both cases.
The encoding of $y$ is symmetric. For all $v \in V$:
- if $v \in B_1$: $y_v^1 = |N(v) \cap B_1|$, $y_v^2 = -|N(v) \cap B_1|$, and $y_v^3 = y_v^4 = M$,
- if $v \in B_2$: $y_v^1 = y_v^2 = M$, $y_v^3 = |N(v) \cap B_1|$, and $y_v^4 = -|N(v) \cap B_1|$,
- if $v \in A$: $y_v^1 = y_v^3 = |N(v) \cap B_1|$, and $y_v^2 = y_v^4 = -|N(v) \cap B_1|$.
We then store all vectors $x + q$ in the data structure and execute dominance queries for all vectors $y$. A running time of $O(c^n)$, for a constant $c < 2$, follows similarly to the previous section, since the dimension $4n$ is $O(\log N)$ for the $N \leq 2^{n/2}$ stored vectors.
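As a sanity reference for the problem definition (under our reading that all four bounds constrain $|N(v) \cap X|$), a brute-force checker can be sketched as follows; the function name and parameter layout are ours:

```python
def interval_constrained_cut_exists(adj, side_x, side_y):
    """Brute-force reference check for Interval-Constrained Cut.

    adj:    adjacency lists of a graph on vertices 0..n-1.
    side_x: side_x[v] = (a_v, b_v), bounds on |N(v) ∩ X| required when v ∈ X.
    side_y: side_y[v] = (c_v, d_v), bounds on |N(v) ∩ X| required when v ∈ Y.
    Tries all proper bipartitions (X, Y); exponentially slower than the
    split-and-list algorithm, but it pins down the problem semantics."""
    n = len(adj)
    for mask in range(1, 2 ** n - 1):  # proper bipartitions only
        ok = True
        for v in range(n):
            k = sum(1 for u in adj[v] if mask >> u & 1)  # |N(v) ∩ X|
            lo, hi = side_x[v] if mask >> v & 1 else side_y[v]
            if not (lo <= k <= hi):
                ok = False
                break
        if ok:
            return True
    return False
```

For instance, a $d$-Cut instance with $d = 1$ (Matching Cut) is expressed by `side_x[v] = (deg(v) - 1, deg(v))` and `side_y[v] = (0, 1)`: the 4-cycle then admits a matching cut, while the complete graph $K_4$ does not.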
Correctness.
We need to verify that there is a feasible solution to the Interval-Constrained Cut problem if and only if there is a vector $y$ that dominates some vector $x + q$.
Suppose first that a feasible solution $(X, Y)$ exists and let $A_1 = X \cap A$, $A_2 = Y \cap A$, $B_1 = X \cap B$, and $B_2 = Y \cap B$. We show that $x + q \leq y$, so the algorithm correctly finds the solution.
We verify this dominance relation on the entries with index $j = 1$; the calculations for the cases $j \in \{2, 3, 4\}$ are entirely analogous and thus omitted.
If $v \in A_2$ or $v \in B_2$, then either $x_v^1 = -M$ or $y_v^1 = M$, and in both cases $x_v^1 + q_v^1 \leq y_v^1$, since $M$ is sufficiently large. Thus, the inequality holds.
If $v \in A_1$ or if $v \in B_1$, then $x_v^1 = -|N(v) \cap A_1|$ and $y_v^1 = |N(v) \cap B_1|$. Thus $x_v^1 + q_v^1 \leq y_v^1$ is equivalent to $a_v \leq |N(v) \cap A_1| + |N(v) \cap B_1|$. Since the right side equals $|N(v) \cap X|$, the inequality follows from the feasibility of $(X, Y)$.
Conversely, suppose that the algorithm identifies vectors $x$ and $y$ for which $x + q \leq y$ holds. In particular, if $v \in A_1 \cup B_1$, then $x_v^1 + q_v^1 \leq y_v^1$, which amounts to $a_v - |N(v) \cap A_1| \leq |N(v) \cap B_1|$, yielding $a_v \leq |N(v) \cap X|$ for $X = A_1 \cup B_1$. This means that the partition $(X, Y)$ with $Y = A_2 \cup B_2$ satisfies the first inequality of the problem statement for all $v \in X$. If $v \in Y$, then the first inequality does not apply to $v$, and thus trivially holds. The remaining inequalities follow analogously from the cases $j \in \{2, 3, 4\}$. Thus, a feasible solution exists.
Remarks.
If only the $d$-Cut problem is being solved, then one can encode the partitions with only two entries per vertex instead of four, similarly to Internal Partition, yielding a (small) saving in runtime. An additional optimization that we omitted is to notice when some partial cuts $(A_1, A_2)$ and $(B_1, B_2)$ already violate the degree upper bounds; in such cases the corresponding vectors need not be stored, resp., queried.
2.3 Extensions
So far we have described how to solve the decision problem, i.e., deciding whether a cut with the given constraints exists. Constructing an actual feasible cut is easy, as the data structure can return for a query vector a single data vector that has led to the positive answer without any overhead. With minor bookkeeping, $(A_1, A_2)$ and $(B_1, B_2)$ are then identified, and the bipartition $(A_1 \cup B_1, A_2 \cup B_2)$ is constructed.
Chan's data structure returns the number of data vectors dominated by the query vector. Adding up all these counts yields the total number of solutions without overcounting (notice that every solution is uniquely identified by its intersections with the partition $(A, B)$ chosen by the algorithm). Note that we must also add the solutions found in the special cases when one of $A_1, A_2, B_1, B_2$ is empty.
Finally, if we only look for solutions of a given size, say $|X| = s$, then we can encode the sizes of the partial cuts ($|A_1|$ and $|B_1|$) and use two additional dimensions to enforce the inequalities $|A_1| + |B_1| \leq s$ and $|A_1| + |B_1| \geq s$. This does not affect the stated runtimes. If we wish to minimize or maximize the size $s$ for which a solution exists, then we can repeat the procedure for all values of $s$ and choose the optimum, with a factor $n$ overhead, which is negligible in our regime.
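The two extra coordinates for a size constraint can be realized as follows (a sketch under our sign conventions, where stored entries must be coordinatewise at most the query entries; helper names are ours):

```python
def augment_stored(x, a1_size):
    """Append (|A1|, -|A1|) to a stored vector."""
    return list(x) + [a1_size, -a1_size]

def augment_query(y, b1_size, s):
    """Append (s - |B1|, -(s - |B1|)) to a query vector.

    Dominance on the two new coordinates means
        |A1| <= s - |B1|   and   -|A1| <= -(s - |B1|),
    which together force |A1| + |B1| == s."""
    return list(y) + [s - b1_size, -(s - b1_size)]

def dominates(x, y):
    # True iff the query y dominates the stored vector x coordinatewise.
    return all(xi <= yi for xi, yi in zip(x, y))
```

Encoding an equality as a pair of opposite-sign coordinates keeps the whole check inside the single dominance query, so no change to the data structure is needed.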
3 Obtaining a base constant
In this section, for concreteness, we derive a constant value for the base of the running time, without attempting to significantly optimize it.
This results from the analysis of the dominance range searching data structure stated in Lemma 2. We refer to the online implementation by Chan described and analyzed in [7, § 2.1]. While the online aspect of the data structure is not strictly needed in our application, the implementation is relatively simple (using a recursive k-d-tree-like approach) and in particular it does not rely on algebraic techniques. While Chan also describes deterministic, offline designs with stronger asymptotic guarantees, they rely on more complicated techniques, and exact constants are more difficult to derive for them (although their running times are also clearly of the form given in Lemma 2).
Chan upper bounds the cost of a query as . The parameter is set as , where is a user-chosen parameter, and is the base logarithm.
Recall that in our application $N \leq 2^{n/2}$ and $d \leq 4n$, so $d = O(\log N)$. Thus, the last factor can be upper bounded by a polynomial in $N$, and thus omitted (polynomial factors are absorbed if we slightly increase the base of the exponential).
As for the remaining parameters, Chan sets in the analysis , where , and finally, .
Thus, (after ignoring a polynomial factor, as mentioned), the cost of queries is at most:
We now plug in and and choose , from which follows.
For the first factor we get an upper bound of and for the second factor an upper bound of , making their product less than .
It remains to bound the preprocessing cost, i.e., the time needed to build the data structure that stores the vectors. This is given by Chan [7, § 2.1] as the number of stored vectors multiplied by a quasi-polynomial factor. While this factor is quasi-polynomial, it can still be absorbed by a slight rounding up of the exponential base; we thus omit this factor as well.
The crucial part is the constant hidden in the term in the exponent. One can observe in the analysis [7, § 2.1] that the term arises as . The -notation is used to suppress constant factors in two steps of the analysis. We will make these explicit, stating the bound as for constants (our notation).
For our earlier choices of , , , the quantity is upper bounded as . It remains to resolve the constants .
One of them, , captures a factor of , absorbed in the -notation in one step of the analysis. For our choice of , this geometric sum can be upper bounded as . The other constant, , arises from upper bounding as . We replace this estimate by the explicit upper bound . This follows from the well-known inequality , and since , the constant suffices for the inequality to hold.
Putting things together, we obtain a runtime of at most , which is within our stated bounds.
4 Conclusion
In this paper we gave an algorithm that solves a family of constrained cut problems, combining the split and list technique with a sophisticated geometric data structure that allows orthogonal range queries on degree vectors. Our method can also solve, with minor adaptation, the following vector-based generalization of Subset Sum in similar running time: given a set of vectors of dimension linear in $n$, find a subset of vectors whose sum falls within a given axis-parallel box.
Our overall approach is conceptually very simple. One may complain that this simplicity is illusory, since the complexity is all hidden inside the data structure; to which we would respond: precisely, that is the point of the method. In all fairness, one could attempt to “unroll” the data structure implementation to see the algorithm in more elementary steps. We instead propose a different question: can we replace the orthogonal range search structure by different kinds of data structures, to extend split and list to further algorithmic problems?
The space usage of Chan's data structure is, with careful optimization, near-linear in the number of stored vectors. In case of Subset Sum, an original space usage of $O^*(2^{n/2})$ has subsequently been reduced [28] to $O^*(2^{n/4})$ (with a recent small further improvement [26]). Can similar space improvements, or a finer time-space tradeoff (again, similarly to Subset Sum) be obtained for the problems considered in this paper?
References
- [1] Enver Aman, Karthik C. S., and Sharath Punna. On connections between k-coloring and euclidean k-means. In Timothy M. Chan, Johannes Fischer, John Iacono, and Grzegorz Herman, editors, 32nd Annual European Symposium on Algorithms, ESA 2024, September 2-4, 2024, Royal Holloway, London, United Kingdom, volume 308 of LIPIcs, pages 9:1–9:18. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.9.
- [2] Amir Ban and Nati Linial. Internal partitions of regular graphs. J. Graph Theory, 83(1):5–18, 2016. doi:10.1002/JGT.21909.
- [3] Cristina Bazgan, Zsolt Tuza, and Daniel Vanderpooten. Satisfactory graph partition, variants, and generalizations. Eur. J. Oper. Res., 206(2):271–280, 2010. doi:10.1016/J.EJOR.2009.10.019.
- [4] Andreas Björklund. Determinant sums for undirected hamiltonicity. SIAM J. Comput., 43(1):280–299, 2014. doi:10.1137/110839229.
- [5] Paul S. Bonsma. The complexity of the matching-cut problem for planar graphs and other graph classes. J. Graph Theory, 62(2):109–126, 2009. doi:10.1002/JGT.20390.
- [6] Sergio Cabello and Christian Knauer. Algorithms for graphs of bounded treewidth via orthogonal range searching. Comput. Geom., 42(9):815–824, 2009. doi:10.1016/J.COMGEO.2009.02.001.
- [7] Timothy M. Chan. Orthogonal range searching in moderate dimensions: k-d trees and range trees strike back. Discrete & Computational Geometry, 61:899–922, 2019. doi:10.1007/S00454-019-00062-5.
- [8] Rajesh Chitnis, Fedor V. Fomin, Daniel Lokshtanov, Pranabendu Misra, M. S. Ramanujan, and Saket Saurabh. Faster exact algorithms for some terminal set problems. J. Comput. Syst. Sci., 88:195–207, 2017. doi:10.1016/J.JCSS.2017.04.003.
- [9] Vasek Chvátal. Recognizing decomposable graphs. J. Graph Theory, 8(1):51–53, 1984. doi:10.1002/JGT.3190080106.
- [10] Marek Cygan, Holger Dell, Daniel Lokshtanov, Dániel Marx, Jesper Nederlof, Yoshio Okamoto, Ramamohan Paturi, Saket Saurabh, and Magnus Wahlström. On problems as hard as CNF-SAT. ACM Trans. Algorithms, 12(3):41:1–41:24, 2016. doi:10.1145/2925416.
- [11] Marek Cygan, Marcin Pilipczuk, Michal Pilipczuk, and Jakub Onufry Wojtaszczyk. Scheduling partially ordered jobs faster than $2^n$. Algorithmica, 68(3):692–714, 2014. doi:10.1007/S00453-012-9694-7.
- [12] Guilherme de C. M. Gomes and Ignasi Sau. Finding cuts of bounded degree: Complexity, FPT and exact algorithms, and kernelization. Algorithmica, 83(6):1677–1706, 2021. doi:10.1007/S00453-021-00798-8.
- [13] Fedor V. Fomin, Petr A. Golovach, Tanmay Inamdar, Nidhi Purohit, and Saket Saurabh. Exact exponential algorithms for clustering problems. In Holger Dell and Jesper Nederlof, editors, 17th International Symposium on Parameterized and Exact Computation, IPEC 2022, September 7-9, 2022, Potsdam, Germany, volume 249 of LIPIcs, pages 13:1–13:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPICS.IPEC.2022.13.
- [14] Fedor V. Fomin, Petr A. Golovach, Jan Kratochvíl, Dieter Kratsch, and Mathieu Liedloff. Sort and search: Exact algorithms for generalized domination. Inf. Process. Lett., 109(14):795–798, 2009. doi:10.1016/J.IPL.2009.03.023.
- [15] Fedor V. Fomin, Petr A. Golovach, Jan Kratochvíl, Dieter Kratsch, and Mathieu Liedloff. Branch and recharge: Exact algorithms for generalized domination. Algorithmica, 61(2):252–273, 2011. doi:10.1007/S00453-010-9418-9.
- [16] Fedor V. Fomin and Dieter Kratsch. Exact Exponential Algorithms. Texts in Theoretical Computer Science. An EATCS Series. Springer, 2010. doi:10.1007/978-3-642-16533-7.
- [17] Ron L. Graham. On primitive graphs and optimal vertex assignments. Annals of the New York Academy of Sciences, 175(1):170–186, 1970.
- [18] Ellis Horowitz and Sartaj Sahni. Computing partitions with applications to the knapsack problem. J. ACM, 21(2):277–292, 1974. doi:10.1145/321812.321823.
- [19] Russell Impagliazzo and Ramamohan Paturi. On the complexity of k-SAT. J. Comput. Syst. Sci., 62(2):367–375, 2001. doi:10.1006/JCSS.2000.1727.
- [20] Christian Komusiewicz, Dieter Kratsch, and Van Bang Le. Matching cut: Kernelization, single-exponential time FPT, and exact exponential algorithms. Discret. Appl. Math., 283:44–58, 2020. doi:10.1016/J.DAM.2019.12.010.
- [21] Robert Krauthgamer and Ohad Trabelsi. The set cover conjecture and subgraph isomorphism with a tree pattern. In Rolf Niedermeier and Christophe Paul, editors, 36th International Symposium on Theoretical Aspects of Computer Science, STACS 2019, March 13-16, 2019, Berlin, Germany, volume 126 of LIPIcs, pages 45:1–45:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPICS.STACS.2019.45.
- [22] Nathan Linial and Sria Louis. Asymptotically almost every 2r-regular graph has an internal partition. Graphs and Combinatorics, 36(1):41–50, 2020. doi:10.1007/S00373-019-02116-0.
- [23] Daniel Lokshtanov, Saket Saurabh, and Ondrej Suchý. Solving multicut faster than 2^n. In Andreas S. Schulz and Dorothea Wagner, editors, Algorithms – ESA 2014 – 22nd Annual European Symposium, Wroclaw, Poland, September 8-10, 2014. Proceedings, volume 8737 of Lecture Notes in Computer Science, pages 666–676. Springer, 2014. doi:10.1007/978-3-662-44777-2_55.
- [24] Jesper Nederlof, Jakub Pawlewicz, Céline M. F. Swennenhuis, and Karol Wegrzycki. A faster exponential time algorithm for bin packing with a constant number of bins via additive combinatorics. SIAM J. Comput., 52(6):1369–1412, 2023. doi:10.1137/22M1478112.
- [25] Jesper Nederlof, Céline M. F. Swennenhuis, and Karol Wegrzycki. A subexponential time algorithm for makespan scheduling of unit jobs with precedence constraints. In Yossi Azar and Debmalya Panigrahi, editors, Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2025, New Orleans, LA, USA, January 12-15, 2025, pages 535–552. SIAM, 2025. doi:10.1137/1.9781611978322.16.
- [26] Jesper Nederlof and Karol Wegrzycki. Improving Schroeppel and Shamir's algorithm for subset sum via orthogonal vectors. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC '21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 1670–1683. ACM, 2021. doi:10.1145/3406325.3451024.
- [27] Uwe Schöning. A probabilistic algorithm for k-SAT and constraint satisfaction problems. In 40th Annual Symposium on Foundations of Computer Science, FOCS '99, 17-18 October, 1999, New York, NY, USA, pages 410–414. IEEE Computer Society, 1999. doi:10.1109/SFFCS.1999.814612.
- [28] Richard Schroeppel and Adi Shamir. A T = O(2^{n/2}), S = O(2^{n/4}) algorithm for certain NP-complete problems. SIAM J. Comput., 10(3):456–464, 1981. doi:10.1137/0210033.
- [29] Michael Stiebitz. Decomposing graphs under degree constraints. J. Graph Theory, 23(3):321–324, 1996. doi:10.1002/(SICI)1097-0118(199611)23:3<321::AID-JGT12>3.0.CO;2-H.
- [30] Robert Endre Tarjan and Anthony E. Trojanowski. Finding a maximum independent set. SIAM J. Comput., 6(3):537–546, 1977. doi:10.1137/0206038.
- [31] Jan Arne Telle. Complexity of domination-type problems in graphs. Nord. J. Comput., 1(1):157–171, 1994.
- [32] Carsten Thomassen. Graph decomposition with constraints on the connectivity and minimum degree. J. Graph Theory, 7(2):165–167, 1983. doi:10.1002/JGT.3190070204.
- [33] Ryan Williams. A new algorithm for optimal 2-constraint satisfaction and its implications. Theor. Comput. Sci., 348(2-3):357–365, 2005. doi:10.1016/J.TCS.2005.09.023.
- [34] Gerhard J. Woeginger. Open problems around exact algorithms. Discret. Appl. Math., 156(3):397–405, 2008. doi:10.1016/J.DAM.2007.03.023.
- [35] Mingyu Xiao. Solving directed multiway cut faster than 2^n. In Timothy M. Chan, Johannes Fischer, John Iacono, and Grzegorz Herman, editors, 32nd Annual European Symposium on Algorithms, ESA 2024, September 2-4, 2024, Royal Holloway, London, United Kingdom, volume 308 of LIPIcs, pages 104:1–104:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.104.
