A Theory of Spectral CSP Sparsification
Abstract
We initiate the study of spectral sparsification for instances of Constraint Satisfaction Problems (CSPs). In particular, we introduce a notion of the spectral energy of a fractional assignment for a Boolean CSP instance, and define a spectral sparsifier as a weighted subset of constraints that approximately preserves this energy for all fractional assignments. Our definition not only strengthens the combinatorial notion of a CSP sparsifier but also extends well-studied concepts such as spectral sparsifiers for graphs and hypergraphs.
Recent work by Khanna, Putterman, and Sudan [SODA 2024] demonstrated near-linear sized combinatorial sparsifiers for a broad class of CSPs, which they term field-affine CSPs. Our main result is a polynomial-time algorithm that constructs a spectral CSP sparsifier of near-quadratic size for all field-affine CSPs. This class of CSPs includes graph (and hypergraph) cuts, XORs, and more generally, any predicate which can be written as $P(x_1, \ldots, x_r) = \mathbb{1}\left[\sum_i a_i x_i \not\equiv b \pmod p\right]$ for a prime $p$.
Based on our notion of the spectral energy of a fractional assignment, we also define an analog of the second eigenvalue of a CSP instance. We then prove an extension of Cheeger's inequality for all even-arity XOR CSPs, showing that this second eigenvalue loosely captures the "expansion" of the underlying CSP. This extension specializes to Cheeger's inequality for graphs when all constraints are $2$-XORs, and thus gives a new generalization of this powerful inequality which converts the combinatorial notion of expansion to an analytic property.
Perhaps the most important effect of spectral sparsification is that it has led to certifiable sparsifiers for graphs and hypergraphs. This aspect remains open in our case even for XOR CSPs, since the eigenvalues we describe in our Cheeger inequality are not known to be efficiently computable. Computing these eigenvalues efficiently, and/or finding other ways to certifiably sparsify CSPs, are open questions emerging from our work. Another important open question is determining which classes of CSPs have near-linear size spectral sparsifiers.¹

¹ Find a full version of the paper at https://arxiv.org/abs/2504.16206.
Keywords and phrases: Sparsification, sketching, hypergraphs
Category: Track A: Algorithms, Complexity and Games
Funding: Sanjeev Khanna: Supported in part by NSF award CCF-2402284 and AFOSR award FA9550-25-1-0107.
2012 ACM Subject Classification: Theory of computation → Sketching and sampling
Editors: Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
1 Introduction
Spectral tools have played a powerful role in the analysis of graphs and hypergraphs and led to many celebrated successes, including the use of Cheeger's inequality to approximate computationally intractable notions like expansion [2, 1] and the use of spectral methods to sparsify graphs and hypergraphs [4, 21]. Recent years have seen the use of CSPs to add greater richness to the study of hypergraphs by considering generalized notions of when an edge (or hyperedge) is deemed to be cut by a (multi-)partition of the vertex set. In this work we initiate the study of spectral methods for CSP analysis, by introducing a concrete notion of the spectral energy of a (fractional) assignment to variables, and giving analogs to Cheeger's inequality and spectral sparsification. We describe our work in more detail below after reviewing some of the classical works on spectral analysis, notably on sparsification.
1.1 Background
Given a graph $G = (V, E)$, there are many important combinatorial properties of the graph that are hard to explicitly calculate. Perhaps the most notable is the notion of conductance: given a graph $G$, we can define its conductance as
$$\phi(G) = \min_{\emptyset \neq S \subset V \,:\, \mathrm{vol}(S) \le \mathrm{vol}(V)/2} \frac{|E(S, \bar{S})|}{\mathrm{vol}(S)}.$$
Here, $\mathrm{vol}(S)$ refers to the sum of degrees of vertices in $S$, and $E(S, \bar{S})$ refers to the edges crossing from $S$ to $\bar{S}$. In a celebrated result [2, 1], it was shown that $\phi(G)$ can actually be well-approximated by the second eigenvalue $\lambda_2$ of the normalized Laplacian of the graph:
$$\frac{\lambda_2}{2} \le \phi(G) \le \sqrt{2 \lambda_2}.$$
The normalized Laplacian is simply $L = I - D^{-1/2} A D^{-1/2}$, where $D$ is the diagonal matrix of vertex degrees, and $A$ is the (weighted) adjacency matrix of the graph. Subsequent to this discovery, spectral theory has emerged as a central topic in graph theory with wide-reaching implications and connections (see [22] for further discussion). In particular, one line of research emanating from spectral theory has been spectral sparsification, which is the key focus of this paper.
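For intuition, the following minimal sketch checks both sides of this inequality numerically on a small example graph (the graph and all names here are chosen for illustration only):

```python
# Numeric check of Cheeger's inequality lambda_2/2 <= phi(G) <= sqrt(2*lambda_2)
# on an illustrative toy graph: two triangles joined by a single edge.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 5), (5, 3)]
n = 6
A = np.zeros((n, n))
for u, v in edges:
    A[u, v] += 1
    A[v, u] += 1
deg = A.sum(axis=1)
D_inv_sqrt = np.diag(1 / np.sqrt(deg))
L = np.eye(n) - D_inv_sqrt @ A @ D_inv_sqrt      # normalized Laplacian
lam2 = np.sort(np.linalg.eigvalsh(L))[1]         # second smallest eigenvalue

def conductance(S):
    cut = sum(1 for u, v in edges if (u in S) != (v in S))
    vol = sum(deg[u] for u in S)
    return cut / min(vol, deg.sum() - vol)       # symmetric in S and its complement

phi = min(conductance(set(S))
          for k in range(1, n) for S in itertools.combinations(range(n), k))
assert lam2 / 2 <= phi <= np.sqrt(2 * lam2)
```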
1.1.1 (Spectral) Graph Sparsification
Graph sparsification has emerged as a key topic in algorithmic graph theory. First proposed in the works of Karger [14] and Benczúr and Karger [5], graph sparsification takes as input a graph $G = (V, E)$ and a parameter $\epsilon > 0$, and returns a reweighted sub-graph $\hat{G}$ such that the size of every cut in $G$ is simultaneously preserved to a multiplicative $(1 \pm \epsilon)$ factor in $\hat{G}$. Although the algorithm presented in [5] is nearly-optimal for cut sparsification, these works spurred a flurry of research in sparsification in many different directions: for instance, the design of deterministic algorithms for graph sparsification [4], generalizing notions of sparsification to hypergraphs [17, 8, 12] and to norms [9], to CSPs [17, 15], and to arbitrary sets of vectors [6]. Along the way, one of the core advances in the study of sparsification has been the notion of a spectral sparsifier [23]. Here, instead of only preserving the cut-sizes of a graph, the sparsifier instead preserves the energy of the graph, which is defined for a graph $G$ and a vector $x \in \mathbb{R}^n$ as $\sum_{(u,v) \in E} w_{(u,v)} (x_u - x_v)^2$. Specifically, a spectral sparsifier is a re-weighted subgraph $\hat{G}$ such that for every $x \in \mathbb{R}^n$, it is the case that $(1 - \epsilon) \cdot x^\top L_G x \le x^\top L_{\hat{G}} x \le (1 + \epsilon) \cdot x^\top L_G x$. This energy can be written as a quadratic form involving the graph's (un-normalized) Laplacian, which is denoted by $L_G = D - A$, and satisfies
$$x^\top L_G x = \sum_{(u,v) \in E} w_{(u,v)} (x_u - x_v)^2.$$
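For concreteness, this identity can be verified directly on a toy weighted graph (illustrative sketch):

```python
# Sanity check (illustrative): the graph energy equals the Laplacian quadratic form.
import numpy as np

edges = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 0.5}   # weighted toy graph
n = 3
L = np.zeros((n, n))
for (u, v), w in edges.items():
    L[u, u] += w; L[v, v] += w                    # degree terms
    L[u, v] -= w; L[v, u] -= w                    # adjacency terms
x = np.array([0.3, -1.2, 0.7])
energy = sum(w * (x[u] - x[v]) ** 2 for (u, v), w in edges.items())
assert np.isclose(x @ L @ x, energy)
```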
In graphs, the benefits from the use of spectral sparsification are several-fold:
1. Spectral sparsification is a stronger condition than cut sparsification, as it preserves the eigenvalues of the graph's Laplacian in addition to the cut sizes.
2. The energy of a graph allows for an interpretation of the graph as an electrical network, which allows for well-motivated sampling schemes for designing sparsifiers.
3. The energy of the graph has a convenient form as the quadratic form of the Laplacian of the graph. This means that spectral sparsifiers are efficiently certifiable (in the sense that one can verify in polynomial time whether a graph $\hat{G}$ is a spectral sparsifier of a graph $G$), a key fact that is used when designing deterministic sparsification algorithms [4].
4. This property of efficient (and deterministic) certification has also contributed to linear-size spectral sparsifiers of graphs [4], shaving off the $O(\log n)$ factor that is inherent to the sampling-based approaches that create cut-sparsifiers.
1.1.2 (Spectral) Hypergraph Sparsification
Given the wide-reaching benefits of spectral sparsification, as research on graph sparsification pivoted to hypergraph sparsification, spectral notions of sparsification in hypergraphs were quick to be proposed. In a hypergraph $H = (V, E)$, each hyperedge $e \in E$ is an arbitrary subset of vertices. A cut in the hypergraph is given by a subset $S$ of the vertices, and we say that a hyperedge $e$ is cut by $S$ if $e \cap S \neq \emptyset$ and $e \cap \bar{S} \neq \emptyset$. A cut-sparsifier of a hypergraph is then just a re-weighted subset of hyperedges which simultaneously preserves the weight of every cut to a $(1 \pm \epsilon)$ factor. Analogous to the graph case, spectral sparsifiers of hypergraphs instead operate with an energy formulation. Given a hypergraph $H$ and a vector of potentials $x \in \mathbb{R}^n$, the energy of the hypergraph is given by $\sum_{e \in E} w_e \cdot \max_{u, v \in e} (x_u - x_v)^2$, and, as with graphs, a spectral sparsifier is simply a re-weighted subset of hyperedges which preserves this energy to a $(1 \pm \epsilon)$ factor simultaneously for every $x$.
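As an illustrative sketch, the following computes this hypergraph energy and checks that $0/1$ potentials recover the weight of the corresponding cut (the toy hypergraph is chosen for illustration only):

```python
# Illustrative: hypergraph energy sum_e w_e * max_{u,v in e} (x_u - x_v)^2,
# which at 0/1 potentials equals the weight of the cut.
import itertools

def hypergraph_energy(hyperedges, weights, x):
    total = 0.0
    for e, w in zip(hyperedges, weights):
        total += w * max((x[u] - x[v]) ** 2 for u, v in itertools.combinations(e, 2))
    return total

hyperedges = [(0, 1, 2), (1, 2, 3), (0, 3)]
weights = [1.0, 2.0, 0.5]
S = {0, 1}                                        # a cut, encoded as indicator potentials
x = [1.0 if v in S else 0.0 for v in range(4)]
cut_weight = sum(w for e, w in zip(hyperedges, weights)
                 if any(v in S for v in e) and any(v not in S for v in e))
assert hypergraph_energy(hyperedges, weights, x) == cut_weight
```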
A line of works by Louis and Makarychev [20], Louis [19], and Chan, Louis, Tang, and Zhang [7] showed that this energy definition in hypergraphs enjoys many of the same properties as the energy definition in graphs: in particular, that it can be viewed as a random walk operator on the hypergraph, that it generalizes the cuts of a hypergraph, and that it admits Cheeger’s inequalities in a manner similar to graphs. Because of these connections, a long line of works simultaneously studied the ability to cut-sparsify ([17, 8]) and spectrally-sparsify ([21, 3, 13, 12, 11, 18]) hypergraphs.
1.1.3 CSP Sparsification
However, subsequent to this unified effort to understand hypergraph sparsification in the cut and spectral settings, the directions of focus from the sparsification community have largely been separate; on the one hand, combinatorial generalizations of cut-sparsification have been studied for more general objects like linear (and non-linear) codes as well as CSPs [15, 16, 6], while on the other hand, continuous generalizations of spectral sparsification have been studied in the context of semi-norms and generalized linear models [9, 10]. The techniques used by these respective research efforts are likewise separate, with combinatorial sparsifiers driven mainly by progress on counting bounds, sampling, and union bounds, while continuous sparsifiers have relied on a deeper understanding of chaining.
In this work, we aim to close this gap between recent work on combinatorial and continuous notions of sparsification by introducing the model of spectral CSP sparsification. This framework simultaneously generalizes cut and spectral sparsification of graphs, cut and spectral sparsification of hypergraphs, and the (combinatorial) sparsification of codes and CSPs. We then show that even under this more general definition, there still exist sparsifiers of small sizes, and moreover, these sparsifiers can even be computed efficiently. We summarize our contributions more explicitly in the following subsection.
1.2 Our Contributions
To start, we formally define CSP sparsification, as it will be a key building block in our discussion of spectral CSPs.
Definition 1.
A CSP $C$ on $n$ variables and $m$ constraints is given by a predicate $P : \{0,1\}^r \to \{0,1\}$, along with $m$ ordered subsets of variables, denoted $S_1, \ldots, S_m$, where each $S_j \in [n]^r$. The $j$th constraint is then denoted by $P(S_j)$, where the arguments in $S_j$ are understood to be the corresponding variables. The CSP can also be accompanied by weights $w_j \ge 0$ for each constraint $j \in [m]$. Often, for a constraint $C_j$, we will use $C_j(x)$ to denote the evaluation of the constraint on assignment $x \in \{0,1\}^n$. Here it is understood that $C_j$ may only be acting on a subset of the variables in the assignment $x$.
Definition 2.
Let $C$ be a CSP on $n$ variables and $m$ constraints. For an assignment $x \in \{0,1\}^n$, the value of the assignment is
$$\mathrm{val}_C(x) = \sum_{j=1}^{m} w_j \cdot C_j(x).$$
In words, this is simply the weight of all satisfied constraints.
Definition 3.
For a CSP $C$ on $n$ variables and $m$ constraints, a $(1 \pm \epsilon)$-sparsifier of $C$ is a re-weighted CSP $\hat{C}$, with weights $\hat{w}$, such that $\forall x \in \{0,1\}^n$:
$$(1 - \epsilon) \cdot \mathrm{val}_C(x) \le \mathrm{val}_{\hat{C}}(x) \le (1 + \epsilon) \cdot \mathrm{val}_C(x).$$
The sparsity of $\hat{C}$ is then given by the number of constraints assigned non-zero weight by $\hat{w}$.
As remarked in many previous works, when the predicate is the $2$-XOR function, then CSP sparsification captures the notion of cut-sparsification in ordinary graphs. This is because a constraint $x_u \oplus x_v$ simulates an edge $(u, v)$ and will evaluate to $1$ if and only if $x_u \neq x_v$ (equivalently, if $u, v$ are on different sides of the cut; see [15] for further discussion). Likewise, when the predicate is the not-all-equal function (i.e., evaluates to $0$ only on the all-zeros and all-ones assignments), then this exactly captures the notion of a hyperedge of arity $r$ being cut. Now, recall that in graphs, the corresponding energy of an edge $(u, v)$ is $(x_u - x_v)^2$. Spectral sparsification of graphs thereby captures cut sparsification, as when $x \in \{0,1\}^n$, then $(x_u - x_v)^2 = x_u \oplus x_v$. For arbitrary CSPs, we capture this behavior as follows:
Definition 4.
For a vector $x \in [0,1]^n$, and a value $t \in [0,1]$, the (deterministic) rounding of $x$ with respect to $t$ is the vector $x^{(t)} \in \{0,1\}^n$ such that
$$x^{(t)}_i = \begin{cases} 1 & \text{if } x_i \ge t, \\ 0 & \text{otherwise.} \end{cases}$$
This leads to a straightforward definition of the energy of a CSP:
Definition 5.
For a CSP $C$ on $n$ variables and $m$ constraints and a vector $x \in [0,1]^n$, the energy of $x$ is defined as
$$Q_C(x) = \sum_{j=1}^{m} w_j \cdot \left( \Pr_{t \sim [0,1]} \left[ C_j\big(x^{(t)}\big) = 1 \right] \right)^2.$$
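The following is a minimal sketch of Definitions 4 and 5 (illustrative code: since the rounding $x^{(t)}$ is constant for $t$ ranging between consecutive coordinate values of $x$, the probability can be computed exactly by splitting $[0,1]$ at those values):

```python
# Sketch of Definitions 4 and 5: threshold rounding and the spectral energy
# Q_C(x) = sum_j w_j * Pr_t[C_j(x^(t)) = 1]^2 with t uniform on [0,1].
# Constraints are given as (predicate, variable-tuple) pairs (illustrative API).

def rounded(x, t):
    """Deterministic rounding of x with respect to threshold t (Definition 4)."""
    return [1 if xi >= t else 0 for xi in x]

def energy(constraints, weights, x):
    """Exact spectral energy (Definition 5): the rounding is constant on each
    interval between consecutive coordinate values, so we sum interval lengths."""
    breaks = sorted(set([0.0, 1.0] + list(x)))
    total = 0.0
    for (pred, vars_), w in zip(constraints, weights):
        p = 0.0  # probability over t that this constraint is satisfied
        for lo, hi in zip(breaks, breaks[1:]):
            r = rounded(x, (lo + hi) / 2)   # representative t in (lo, hi]
            if pred(*[r[v] for v in vars_]):
                p += hi - lo
        total += w * p ** 2
    return total
```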
Our notion of spectral sparsification is then a natural continuation of this idea:
Definition 6.
For a CSP $C$ on $n$ variables and $m$ constraints, we say that a re-weighted sub-CSP $\hat{C}$ is a spectral $(1 \pm \epsilon)$-sparsifier of $C$ if for every $x \in [0,1]^n$,
$$(1 - \epsilon) \cdot Q_C(x) \le Q_{\hat{C}}(x) \le (1 + \epsilon) \cdot Q_C(x).$$
For instance, let us consider an assignment of potentials $x \in [0,1]^n$, and a constraint $x_1 \oplus x_2$. Now, our goal is to understand what the expression $\Pr_t[x^{(t)}_1 \oplus x^{(t)}_2 = 1]$ looks like; indeed, the only case where the constraint evaluates to $1$ is when $t$ falls in the interval $(\min(x_1, x_2), \max(x_1, x_2)]$. If $t$ is smaller than $\min(x_1, x_2)$, then both coordinates become $1$ in the rounded vector and the XOR becomes $0$, and if $t$ is larger than $\max(x_1, x_2)$, then they both become $0$. Thus, the only way for the XOR constraint to become $1$ is if $t$ is larger than exactly one of $x_1, x_2$: in this case one of the variables gets rounded to $0$, and the other gets rounded to $1$. Thus, $\Pr_t[x^{(t)}_1 \oplus x^{(t)}_2 = 1] = |x_1 - x_2|$, and when we consider this expression squared, we recover $(x_1 - x_2)^2$, which exactly mirrors the energy for ordinary graphs. A similar analysis shows that when we consider $r$-NAE constraints, we recover exactly the same energy expression as in hypergraphs of arity $r$.
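Plugging a single $2$-XOR constraint into the energy sketch above reproduces this calculation (illustrative usage):

```python
# Using the energy() sketch above (illustrative): a single 2-XOR constraint
# recovers the graph energy (x1 - x2)^2, matching the discussion.
xor2 = lambda a, b: a ^ b
x = [0.25, 0.8]
e = energy([(xor2, (0, 1))], [1.0], x)
assert abs(e - (x[0] - x[1]) ** 2) < 1e-12
```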
Naturally, in the context of sparsification, the next question to ask is whether this more general notion of CSP sparsification still allows for sparsifiers of small sizes. Our first result shows that for a broad class of CSPs, this is indeed the case. Specifically, we build upon the work of [15], and consider field-affine CSPs; namely, CSPs using a predicate $P$ where $P(x_1, \ldots, x_r) = \mathbb{1}\left[\sum_i a_i x_i \not\equiv b \pmod p\right]$ for a prime $p$. For these CSPs, we show the following:
Theorem 7.
Let $C$ be any CSP on $n$ variables and $m$ constraints using a predicate $P$ such that $P(x_1, \ldots, x_r) = \mathbb{1}\left[\sum_i a_i x_i \not\equiv b \pmod p\right]$, for some prime $p$. Then, there is a randomized, polynomial-time algorithm which with high probability computes a spectral $(1 \pm \epsilon)$-sparsifier of $C$ that retains only $\widetilde{O}(n^2 / \epsilon^2)$ re-weighted constraints.
Note that this theorem encapsulates a large variety of predicates, including arbitrary arity XORs, graph and hypergraph cut functions, and hedge-graph cut functions [15]. Additionally, while there has been a large research effort for sparsifying continuous functions [9, 10], many classes of CSPs do not fit into the established frameworks. For instance, if we consider a $4$-XOR constraint $C = x_1 \oplus x_2 \oplus x_3 \oplus x_4$, then we can immediately observe that for the assignment $x = (1/2, 1/2, 0, 0)$, $Q(x) = 0$, as there is no choice of $t$ for which an odd number of the $x_i$'s round to $1$. The same holds true when we consider the assignment $y = (1/2, 0, 1/2, 0)$. However, when we add these assignments together, yielding $x + y = (1, 1/2, 1/2, 0)$, then in fact $Q(x + y) = 1$, as there is always an odd number of $x_i$'s that round to $1$, for any choice of $t$. All together, this means that the XOR functions strongly disobey the triangle inequality, which is one of the key properties that results on sparsifying sums of continuous functions rely on [9, 10]. Nevertheless, Theorem 7 shows that these functions still admit sparsifiers of small size, suggesting that there may be other avenues towards showing the sparsifiability of sums of continuous functions.
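These assignments can be checked directly with the energy sketch from Definition 5 (illustrative usage):

```python
# Checking the 4-XOR example above with the energy() sketch (illustrative):
# Q(x) = Q(y) = 0 while Q(x + y) = 1, so sqrt(Q) badly violates the triangle inequality.
xor4 = lambda a, b, c, d: a ^ b ^ c ^ d
C = [(xor4, (0, 1, 2, 3))]
x = [0.5, 0.5, 0.0, 0.0]
y = [0.5, 0.0, 0.5, 0.0]
xy = [xi + yi for xi, yi in zip(x, y)]
assert energy(C, [1.0], x) == 0.0
assert energy(C, [1.0], y) == 0.0
assert energy(C, [1.0], xy) == 1.0
```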
Finally, we show that despite the generality of our definition of the spectrum of a CSP, our definitions still admit many of the same properties enjoyed by the spectra of graphs and hypergraphs. In particular, we prove that a type of Cheeger inequality holds, relating eigenvalues of the CSP to a combinatorial notion of the expansion of the CSP. Our Cheeger inequality is defined with respect to the eigenvalues of a discrepancy ratio (defined analogously to [7]), and we use $\lambda_2$ to denote the second smallest eigenvalue. Likewise, we define the expansion in an intuitive way to generalize graphs and hypergraphs: the expansion is typically measured as a ratio of the number of edges leaving a set of vertices divided by the number of edges inside this same set. Under a CSP perspective, a constraint is considered to be leaving a set $S$ if the constraint evaluates to $1$ when we apply the assignment $\mathbb{1}_S$. For example, in graphs, each edge $(u, v)$ can be modeled by an XOR of the variables $x_u$ and $x_v$. A simple check shows that the edge is only crossing between $S$ and $\bar{S}$ if $x_u \oplus x_v = 1$. We defer formal definitions of these notions to the full version. Assuming we denote the expansion of a CSP by $\phi$, we establish the following theorem:
Theorem 8.
Given any XOR CSP where each constraint is of even size and maximum arity $r$, the expansion $\phi$ is lower-bounded by $\Omega(\lambda_2)$ and upper-bounded by a quantity on the order of $\sqrt{\lambda_2}$ (up to factors depending on $r$), where $\lambda_2$ is a notion of the eigenvalue of the XOR-CSP Laplacian.²

² See the full version for a formal definition and the precise statement of the inequality.
1.3 Technical Overview
In this subsection, we summarize the main ideas that go into our proof of the ability to spectrally sparsify certain classes of CSPs.
1.3.1 Writing the Energy as a Quadratic Form
The starting point for our approach is a technique used in the work of Soma and Yoshida [21]. To convey the main idea, we will consider the simplified setting of spectrally sparsifying XOR constraints of arity $4$. That is, we consider the predicate $P(x_1, x_2, x_3, x_4) = x_1 \oplus x_2 \oplus x_3 \oplus x_4$, and each constraint $C_j$ is the XOR of four variables $x_{j_1}, x_{j_2}, x_{j_3}, x_{j_4}$. We denote the entire CSP by $C$, which consists of constraints $C_1, \ldots, C_m$, and we denote the set of variables by $x_1, \ldots, x_n$. We will often index the set $[m]$ by constraints (as there are $m$ different constraints). Note that we choose arity $4$ XORs instead of $3$ so that the all $1$'s assignment is unsatisfying (and thus we can shift any assignment by the all $1$'s vector WLOG). We later show that arity $3$ XORs can be simulated by arity $4$ XORs.
Recall then that our goal is to sparsify the set of constraints while still preserving the energy of the CSP to a $(1 \pm \epsilon)$ factor, where the energy is exactly
$$Q_C(x) = \sum_{j=1}^{m} w_j \cdot \left( \Pr_{t \sim [0,1]} \left[ C_j\big(x^{(t)}\big) = 1 \right] \right)^2.$$
Now, when we focus on a single constraint, we can actually interpret this energy definition concretely. Since a single constraint is a $4$-XOR over variables $x_{j_1}, x_{j_2}, x_{j_3}, x_{j_4}$, then with respect to a choice of the rounding parameter $t$, the constraint evaluates to $1$ if and only if an odd number of the variables are at least the rounding threshold $t$. Thus, if we sort the variables according to their value (say the sorted order is $x_{j_1} \le x_{j_2} \le x_{j_3} \le x_{j_4}$), the constraint evaluates to $1$ if and only if $t \in (x_{j_1}, x_{j_2}]$ or $t \in (x_{j_3}, x_{j_4}]$. Because $t$ is chosen uniformly at random from $[0,1]$, we can directly calculate this probability to be
$$\Pr_t\left[ C_j\big(x^{(t)}\big) = 1 \right] = (x_{j_2} - x_{j_1}) + (x_{j_4} - x_{j_3}).$$
Ultimately, the energy is the square of this expression, and so it can be written as
$$Q_C(x) = \sum_{j=1}^{m} w_j \cdot \left( (x_{j_2} - x_{j_1}) + (x_{j_4} - x_{j_3}) \right)^2.$$
Here is where we use a simple observation: for a fixed ordering of the variables (i.e., a permutation $\pi$ such that $x_{\pi(1)} \le x_{\pi(2)} \le \cdots \le x_{\pi(n)}$), the energy expression for each constraint evaluates to a fixed quadratic form. We denote the set of vectors that satisfies the ordering $\pi$ by $S_\pi$, and call a vector satisfying it $\pi$-consistent. Said another way, once we fix the ordering of the variables, then the sorted identities of $x_{j_1}, \ldots, x_{j_4}$ in the above expression would also all be fixed. For such a permutation $\pi$, we call this resulting quadratic form the induced quadratic form by $\pi$. For example, if we take the ordering to be the identity permutation (i.e., $x_1 \le x_2 \le \cdots \le x_n$), and we suppose that a constraint operates on $x_1, x_2, x_3, x_4$, then the formula for the energy of the expression would be exactly $((x_2 - x_1) + (x_4 - x_3))^2$. Note that our choice of the words quadratic form is intentional, as once the energy expression is a fixed sum of these quadratic terms, we can express the energy as $x^\top M_\pi^\top W M_\pi x$, where $M_\pi \in \mathbb{R}^{m \times n}$ is the matrix whose $j$th row places the coefficients $-1, +1, -1, +1$ on the variables of the $j$th constraint in $\pi$-sorted order, and $W$ is simply the diagonal matrix whose $j$th entry is the weight of the $j$th constraint in $C$.
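The following minimal sketch illustrates this quadratic form for $4$-XORs on a toy instance (the matrix name $M_\pi$ and the instance itself are illustrative):

```python
# Illustrative check: once the coordinate ordering is fixed, the 4-XOR energy
# equals x^T M^T W M x, with one row of M per constraint carrying coefficients
# -1, +1, -1, +1 on its variables in sorted order.
import numpy as np

def build_M(constraints, x, n):
    M = np.zeros((len(constraints), n))
    for j, vars_ in enumerate(constraints):
        order = sorted(vars_, key=lambda u: x[u])   # constraint's variables, sorted by value
        for sign, v in zip([-1, +1, -1, +1], order):
            M[j, v] += sign
    return M

constraints = [(0, 1, 2, 3), (1, 2, 3, 4)]          # two 4-XORs (toy instance)
weights = np.array([1.0, 2.0])
x = np.array([0.1, 0.3, 0.45, 0.7, 0.9])            # already sorted: identity ordering
M = build_M(constraints, x, n=5)
W = np.diag(weights)
# Direct energy: per constraint, Pr_t[satisfied] = (b - a) + (d - c) for sorted a<=b<=c<=d.
direct = sum(w * ((xs[1] - xs[0]) + (xs[3] - xs[2])) ** 2
             for w, vs in zip(weights, constraints)
             for xs in [sorted(x[v] for v in vs)])
assert np.isclose(x @ M.T @ W @ M @ x, direct)
```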
1.3.2 Sufficient Conditions for Preserving Quadratic Forms
Now, as observed in [21], in order to sparsify the CSP $C$, it suffices to choose a re-weighted subset of the indices $[m]$, such that simultaneously for every ordering $\pi$, the induced quadratic form by $\pi$ has its energy preserved to a $(1 \pm \epsilon)$ factor. Because of our above simplifications, we can re-write this another way: we want to find a new, sparser set of weights on our constraints (which we denote by the diagonal matrix $\widetilde{W}$), such that for every ordering $\pi$, and for every vector $x \in S_\pi$, it is the case that
$$(1 - \epsilon) \cdot x^\top M_\pi^\top W M_\pi x \;\le\; x^\top M_\pi^\top \widetilde{W} M_\pi x \;\le\; (1 + \epsilon) \cdot x^\top M_\pi^\top W M_\pi x.$$
We can re-write this condition as saying that
$$x^\top M_\pi^\top \left( (1 + \epsilon) W - \widetilde{W} \right) M_\pi x \ge 0 \quad \text{for all } x \in S_\pi,$$
and
$$x^\top M_\pi^\top \left( \widetilde{W} - (1 - \epsilon) W \right) M_\pi x \ge 0 \quad \text{for all } x \in S_\pi.$$
If this condition holds, then we will have indeed created a spectral sparsifier. Recall that we want to use as few re-weighted constraints as possible, and so we want the diagonal of $\widetilde{W}$ to be as sparse as possible.
Next, observe that the expression above is a type of positive definiteness condition, where we focus our attention on vectors in $S_\pi$, and want to ensure that the quadratic form of some matrix is non-negative. In particular, the work of [21] studied similar matrices, and gave an exact characterization for when such a claim holds:
Lemma 9 (Lemma 3.3 in [21]).
Let $A \in \mathbb{R}^{n \times n}$ be a symmetric matrix such that $A \mathbf{1} = 0$, and let $\pi$ be a permutation on $[n]$. Then, $x^\top A x \ge 0$ for all $\pi$-consistent $x$ if and only if $A$ is of the form $A = \Sigma_\pi^\top B \Sigma_\pi$ for a matrix $B \in \mathbb{R}^{(n-1) \times (n-1)}$ such that $y^\top B y \ge 0$ for all $y \ge 0$, and where $\Sigma_\pi \in \mathbb{R}^{(n-1) \times n}$ is the difference matrix such that $(\Sigma_\pi)_{i, \pi(i+1)} = 1$ and $(\Sigma_\pi)_{i, \pi(i)} = -1$, and otherwise is $0$.
This lemma gives us a roadmap for how to proceed. Recall that there are two matrices that we wish to show satisfy the positivity condition, namely $M_\pi^\top ((1+\epsilon) W - \widetilde{W}) M_\pi$ and $M_\pi^\top (\widetilde{W} - (1-\epsilon) W) M_\pi$, and these will each play the role of the matrix $A$ in the above lemma (though to simplify discussion, we will simply focus on the first, as the analysis will be identical). Our goal is to show that we can re-write the matrix as
$$M_\pi^\top \left( (1+\epsilon) W - \widetilde{W} \right) M_\pi = \Sigma_\pi^\top B \Sigma_\pi,$$
where the matrix $B$ is co-positive (meaning for all non-negative vectors, the quadratic form is non-negative). To do this, we will ultimately define a new, specific crossing matrix $K_\pi$ with the intention that $M_\pi = K_\pi \Sigma_\pi$. While [21] also define a crossing matrix, ours is necessarily different as it models a different structure. It is here where our analysis begins to diverge from theirs.
1.3.3 Building the Crossing Matrix
However, before defining this crossing matrix $K_\pi$, we first require some definitions. First is the notion of an active region, which is essentially the set of intervals where the contribution to the energy comes from (i.e., the choices of $t$ for which a constraint is satisfied):
Definition 10.
For a fixed ordering $\pi$ on the variables, and a 4-XOR constraint operating on elements $x_{j_1} \le x_{j_2} \le x_{j_3} \le x_{j_4}$ (listed in $\pi$-sorted order), we say the active regions are
$$(x_{j_1}, x_{j_2}] \quad \text{and} \quad (x_{j_3}, x_{j_4}].$$
For an index $i \in [n-1]$, let us consider the bisector between $x_{\pi(i)}$ and $x_{\pi(i+1)}$ (i.e., between the $i$th and $(i+1)$st smallest elements in the ordering $\pi$). This bisector takes on value $(x_{\pi(i)} + x_{\pi(i+1)})/2$. We use this to define crossing indices:
Definition 11.
For $i \in [n-1]$, we say that $i$ is a crossing index with respect to the ordering $\pi$ and constraint $C_j$ if the bisector between $x_{\pi(i)}$ and $x_{\pi(i+1)}$ is in an active region of the constraint $C_j$ with ordering $\pi$.
Likewise, we generalize this definition to a pair of indices $(i_1, i_2)$:
Definition 12.
We say that a pair of indices $(i_1, i_2)$ are crossing indices with respect to the constraint $C_j$ and ordering $\pi$ if both $i_1$ and $i_2$ individually are crossing indices with respect to $C_j$ and $\pi$.
For instance, let us consider what happens when the permutation $\pi$ is the identity, and suppose we are looking at a constraint which is the XOR of $x_1, x_2, x_3, x_4$. Then the crossing indices are exactly $1$ and $3$. More intuitively, the crossing indices correspond to the intervals which contribute to the overall energy (i.e., the choices of $t$ for which the constraint evaluates to $1$). For instance, because $(x_1, x_2]$ is an active region, the probability mass contributed from these points is $x_2 - x_1$, and so every sub-interval of $(x_1, x_2]$ contributes to the energy. This is because if $x_1 < t \le x_2$ for instance, then in the rounding $x^{(t)}$, $x_1$ maps to $0$ and $x_2, x_3, x_4$ map to $1$, and hence the constraint is satisfied.
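Combinatorially, under an ordering $\pi$, an index $i$ is crossing for a constraint precisely when an odd number of the constraint's variables occupy the first $i$ positions of the sorted order. The following illustrative sketch computes crossing indices this way (using $0$-indexing, so the crossing indices $1$ and $3$ from the example above appear as $0$ and $2$):

```python
# Illustrative computation of crossing indices (Definition 11): index i is
# crossing for a constraint iff an odd number of the constraint's variables
# land in the first i+1 positions of the pi-sorted order (0-indexed).
def crossing_indices(constraint_vars, pi, n):
    """pi[i] is the variable occupying position i of the sorted order."""
    position = {v: i for i, v in enumerate(pi)}
    crossing = []
    for i in range(n - 1):  # bisector between positions i and i+1
        if sum(1 for v in constraint_vars if position[v] <= i) % 2 == 1:
            crossing.append(i)
    return crossing

# Identity permutation, 4-XOR on the four smallest variables: the bisectors
# inside the two active regions give crossing indices 0 and 2.
assert crossing_indices([0, 1, 2, 3], pi=list(range(6)), n=6) == [0, 2]
```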
With these definitions established, we define the crossing matrix $K_\pi \in \{0,1\}^{m \times (n-1)}$ such that $(K_\pi)_{j,i} = 1$ if and only if $i$ is a crossing index for the constraint $C_j$ under permutation $\pi$ (and otherwise, the $(j,i)$th entry is $0$). In particular, this specific definition allows us to show that
$$M_\pi = K_\pi \Sigma_\pi.$$
We do not discuss this equality exactly here, as it requires an extensive case analysis; we defer this to the technical sections below.
However, using this, we can re-write our original expression:
$$M_\pi^\top \left( (1+\epsilon) W - \widetilde{W} \right) M_\pi = \Sigma_\pi^\top K_\pi^\top \left( (1+\epsilon) W - \widetilde{W} \right) K_\pi \Sigma_\pi.$$
Thus, if we revisit Lemma 9, the matrix $B$ is exactly the interior portion of this expression, $K_\pi^\top ((1+\epsilon) W - \widetilde{W}) K_\pi$, and thus our goal becomes to show that this matrix is co-positive (while allowing for sparsity in the selected constraints of course).
1.3.4 Understanding Crossing Indices Through Codes
Now, we make a simple observation: a sufficient (but not necessary) condition for $B$ to satisfy co-positivity is for every entry in $B$ to be non-negative. Thus, it remains to understand exactly what the entries in $K_\pi^\top ((1+\epsilon) W - \widetilde{W}) K_\pi$ look like: we start by looking at a simpler expression, namely $K_\pi^\top W K_\pi$. Indeed, in this matrix the $(i_1, i_2)$th entry can be written as
$$\left( K_\pi^\top W K_\pi \right)_{i_1, i_2} = \sum_{j \,:\, (i_1, i_2) \text{ crossing for } C_j} w_j = w\big( \mathrm{cross}_\pi(i_1, i_2) \big),$$
where we simply use $w(\mathrm{cross}_\pi(i_1, i_2))$ to denote the total weight of constraints that have both $i_1$ and $i_2$ as crossing indices under the permutation $\pi$.
Here comes the final, key technical lemma: we show that there is a matrix $G'$ such that for every choice of $(i_1, i_2)$ and permutation $\pi$, there is a vector $v$ such that $(G'v)_j$ (i.e., the $j$th coordinate of the vector $G'v$) is exactly the indicator of whether or not $(i_1, i_2)$ are crossing indices for the constraint $C_j$ under permutation $\pi$. Thus, the value $w(\mathrm{cross}_\pi(i_1, i_2))$ can be written as the weighted Hamming weight of the vector $G'v$.
To construct this matrix $G'$, we start with the generating matrix of our original XOR CSP $C$: this is the matrix $G \in \mathbb{F}_2^{m \times n}$, where for the constraint $C_j$ operating on $x_{j_1}, x_{j_2}, x_{j_3}, x_{j_4}$, there is a single row in the matrix corresponding to $C_j$, with $1$'s exactly in columns $j_1, j_2, j_3, j_4$. Now, the matrix $G'$ generates a type of tensor-product code built from $G$. Explicitly, when we consider the linear space generated by $G$ (denoted $\mathcal{G}$), we want $G'$ to generate
$$\mathcal{G}' = \mathrm{span}\left\{ c_1 \odot c_2 : c_1, c_2 \in \mathcal{G} \right\},$$
where $\odot$ refers to the entry-wise multiplication of the two vectors. Note that $\mathcal{G}'$ will be expressible as a space of dimension $O(n^2)$, as it essentially corresponds to a space of degree-$2$ polynomials over $n$ variables (though see the full version for a more thorough discussion).
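A minimal sketch of this construction (illustrative; over $\mathbb{F}_2$, entry-wise products distribute over addition, so the pairwise products of the rows of $G$ already generate $\mathcal{G}'$):

```python
# Illustrative sketch: generators of G' = span{ c1 ⊙ c2 : c1, c2 in span(G) }
# over F_2. Entry-wise multiplication distributes over addition mod 2, so the
# pairwise products of rows of G (a row times itself is the row) span G';
# hence dim(G') = O(n^2) when G has n columns.
import itertools
import numpy as np

def product_code_generators(G):
    rows = [np.asarray(r) % 2 for r in G]
    return np.array([(a * b) % 2 for a, b in
                     itertools.combinations_with_replacement(rows, 2)])
```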
Now, recall that our goal is to show that for a fixed permutation $\pi$, as well as indices $(i_1, i_2)$, we can find a vector in $\mathcal{G}'$ whose $j$th coordinate is exactly the indicator of whether $(i_1, i_2)$ are crossing indices for the $j$th constraint under permutation $\pi$. For simplicity, let us suppose that the permutation is just the identity permutation; under this permutation, recall that an index $i$ is considered to be a crossing index for a constraint $C_j$ if and only if the bisector between $x_i$ and $x_{i+1}$ is in an active region of $C_j$. If we denote the variables in the constraint by $x_{j_1} \le x_{j_2} \le x_{j_3} \le x_{j_4}$, then $i$ is a crossing index under $\pi$ if and only if $x_i$ is greater than or equal to an odd number of $x_{j_1}, x_{j_2}, x_{j_3}, x_{j_4}$: this is because the active regions are $(x_{j_1}, x_{j_2}]$ and $(x_{j_3}, x_{j_4}]$, and so the bisector must either be between $x_{j_1}$ and $x_{j_2}$, or between $x_{j_3}$ and $x_{j_4}$.
A priori, it may seem that this condition is completely arbitrary. However, it can be exactly captured by our matrix $G$: indeed, let us consider the vector $v \in \mathbb{F}_2^n$ which is $1$ in the first $i$ entries, and $0$ in the others. The $j$th constraint then performs an XOR on the variables $v_{j_1}, v_{j_2}, v_{j_3}, v_{j_4}$. Plugging this vector in, we obtain that
$$(Gv)_j = v_{j_1} \oplus v_{j_2} \oplus v_{j_3} \oplus v_{j_4},$$
which is exactly the indicator of whether $x_i$ is greater than or equal to an odd number of $x_{j_1}, x_{j_2}, x_{j_3}, x_{j_4}$ (since we are using the identity ordering of the variables).
Thus, the matrix $G$ alone captures when single indices are crossing a constraint. To generalize to pairs of crossing indices, we then simply take the coordinate-wise product of the two single-index indicators of whether indices $i_1, i_2$ are each crossing a constraint under a permutation $\pi$. This is exactly what is captured by the code $\mathcal{G}'$.
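The following illustrative sketch verifies the single-index case: multiplying $G$ by a prefix-indicator vector yields exactly the crossing indicators under the identity ordering:

```python
# Illustrative check: with v = (1,...,1,0,...,0) carrying i+1 ones, the jth
# coordinate of Gv over F_2 indicates whether i is a crossing index for
# constraint j under the identity ordering (0-indexed, matching the sketch above).
import numpy as np

n = 6
constraints = [(0, 1, 2, 3), (1, 3, 4, 5)]
G = np.zeros((len(constraints), n), dtype=int)
for j, vs in enumerate(constraints):
    G[j, list(vs)] = 1                                # generating matrix of the XOR CSP

for i in range(n - 1):
    v = np.array([1] * (i + 1) + [0] * (n - i - 1))   # prefix-indicator vector
    Gv = (G @ v) % 2                                  # XOR of the selected coordinates
    for j, vs in enumerate(constraints):
        is_crossing = sum(1 for u in vs if u <= i) % 2 == 1
        assert Gv[j] == int(is_crossing)
```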
1.3.5 Creating Sparsifiers through Code Sparsification
Because of this correspondence above, this means that $w(\mathrm{cross}_\pi(i_1, i_2))$ can be exactly written as the weighted Hamming weight of the codeword $G'v'$, where $v'$ is the particular vector we use to enforce
$$(G'v')_j = \mathbb{1}\left[ (i_1, i_2) \text{ are crossing indices for } C_j \text{ under } \pi \right].$$
To conclude then, we recall the work of Khanna, Putterman, and Sudan [15], who showed that every code admits a $(1 \pm \epsilon)$-sparsifier which preserves only a set of coordinates of size nearly-linear in the dimension of the code. In our case, the dimension is $O(n^2)$, so this means that there exists a re-weighted subset of coordinates (i.e., of constraints), of size $\widetilde{O}(n^2 / \epsilon^2)$, which preserves the weight of every codeword to a $(1 \pm \epsilon)$ factor. In particular, this means for the CSP defined on the same re-weighted subset of constraints (with weights $\widetilde{W}$), it must be the case that
$$\widetilde{w}\big( \mathrm{cross}_\pi(i_1, i_2) \big) \in (1 \pm \epsilon) \cdot w\big( \mathrm{cross}_\pi(i_1, i_2) \big) \quad \text{for all } \pi, i_1, i_2.$$
By sparsifying this auxiliary code, we obtain a re-weighted subset of the constraints which satisfies that the parameter $w(\mathrm{cross}_\pi(i_1, i_2))$ is approximately preserved. When we revisit the matrices we created, because the entries in $K_\pi^\top W K_\pi$ are exactly $w(\mathrm{cross}_\pi(i_1, i_2))$, and the entries in $K_\pi^\top \widetilde{W} K_\pi$ are exactly $\widetilde{w}(\mathrm{cross}_\pi(i_1, i_2))$, we see that the constraints computed by the code sparsifier do indeed lead to a matrix with all non-negative entries, as
$$\left( K_\pi^\top \left( (1+\epsilon) W - \widetilde{W} \right) K_\pi \right)_{i_1, i_2} = (1+\epsilon) \cdot w\big( \mathrm{cross}_\pi(i_1, i_2) \big) - \widetilde{w}\big( \mathrm{cross}_\pi(i_1, i_2) \big) \ge 0,$$
and thus this matrix is non-negative on all non-negative vectors (and so too constitutes a spectral sparsifier by our previous discussions).
Likewise, observe that because the matrix $G'$ we defined above is the same for all permutations (we only need to change the vector $v'$ that we multiply the matrix by), this implies that the same set of constraints will be a sparsifier across all choices of permutations, and thus we have a single set of constraints which is a spectral CSP sparsifier of our original instance. By generalizing this argument (and using the efficient constructions of code sparsifiers [16]), we obtain Theorem 7.
1.3.6 Discussion
We end the technical overview with some high-level remarks:
1. The framework from [21] is particularly suited for sparsifying hypergraphs. Our framework reveals a much more general connection between spectral sparsification and sparsifying a type of tensor product of the underlying object, one which holds across the entire CSP landscape.
2. The method presented in this paper avoids any complex continuous machinery like chaining or matrix Chernoff bounds, which have appeared in prior works on spectral sparsification [13, 12, 9]. We view it as an interesting open question whether those techniques can be combined with ours to create nearly-linear size spectral CSP sparsifiers.
References
- [1] Noga Alon. Eigenvalues and expanders. Comb., 6(2):83–96, 1986. doi:10.1007/BF02579166.
- [2] Noga Alon and V. D. Milman. λ1, isoperimetric inequalities for graphs, and superconcentrators. J. Comb. Theory B, 38(1):73–88, 1985. doi:10.1016/0095-8956(85)90092-9.
- [3] Nikhil Bansal, Ola Svensson, and Luca Trevisan. New notions and constructions of sparsification for graphs and hypergraphs. In David Zuckerman, editor, 60th IEEE Annual Symposium on Foundations of Computer Science, FOCS 2019, Baltimore, Maryland, USA, November 9-12, 2019, pages 910–928. IEEE Computer Society, 2019. doi:10.1109/FOCS.2019.00059.
- [4] Joshua D. Batson, Daniel A. Spielman, and Nikhil Srivastava. Twice-Ramanujan sparsifiers. In Michael Mitzenmacher, editor, Proceedings of the 41st Annual ACM Symposium on Theory of Computing, STOC 2009, Bethesda, MD, USA, May 31 - June 2, 2009, pages 255–262. ACM, 2009. doi:10.1145/1536414.1536451.
- [5] András A. Benczúr and David R. Karger. Approximating s-t minimum cuts in Õ(n²) time. In Gary L. Miller, editor, Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22-24, 1996, pages 47–55. ACM, 1996. doi:10.1145/237814.237827.
- [6] Joshua Brakensiek and Venkatesan Guruswami. Redundancy is all you need. arXiv preprint arXiv:2411.03451, 2024. doi:10.48550/arXiv.2411.03451.
- [7] T.-H. Hubert Chan, Anand Louis, Zhihao Gavin Tang, and Chenzi Zhang. Spectral properties of hypergraph Laplacian and approximation algorithms. J. ACM, 65(3):15:1–15:48, 2018. doi:10.1145/3178123.
- [8] Yu Chen, Sanjeev Khanna, and Ansh Nagda. Near-linear size hypergraph cut sparsifiers. In Sandy Irani, editor, 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Durham, NC, USA, November 16-19, 2020, pages 61–72. IEEE, 2020. doi:10.1109/FOCS46700.2020.00015.
- [9] Arun Jambulapati, James R. Lee, Yang P. Liu, and Aaron Sidford. Sparsifying sums of norms. CoRR, abs/2305.09049, 2023. doi:10.48550/arXiv.2305.09049.
- [10] Arun Jambulapati, James R. Lee, Yang P. Liu, and Aaron Sidford. Sparsifying generalized linear models. In Bojan Mohar, Igor Shinkar, and Ryan O’Donnell, editors, Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, Vancouver, BC, Canada, June 24-28, 2024, pages 1665–1675. ACM, 2024. doi:10.1145/3618260.3649684.
- [11] Arun Jambulapati, Yang P. Liu, and Aaron Sidford. Chaining, group leverage score overestimates, and fast spectral hypergraph sparsification. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 196–206. ACM, 2023. doi:10.1145/3564246.3585136.
- [12] Michael Kapralov, Robert Krauthgamer, Jakab Tardos, and Yuichi Yoshida. Spectral hypergraph sparsifiers of nearly linear size. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022, pages 1159–1170. IEEE, 2021. doi:10.1109/FOCS52979.2021.00114.
- [13] Michael Kapralov, Robert Krauthgamer, Jakab Tardos, and Yuichi Yoshida. Towards tight bounds for spectral sparsification of hypergraphs. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 598–611. ACM, 2021. doi:10.1145/3406325.3451061.
- [14] David R. Karger. Global min-cuts in RNC, and other ramifications of a simple min-cut algorithm. In Vijaya Ramachandran, editor, Proceedings of the Fourth Annual ACM/SIGACT-SIAM Symposium on Discrete Algorithms, 25-27 January 1993, Austin, Texas, USA, pages 21–30. ACM/SIAM, 1993. URL: http://dl.acm.org/citation.cfm?id=313559.313605.
- [15] Sanjeev Khanna, Aaron Putterman, and Madhu Sudan. Code sparsification and its applications. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 5145–5168. SIAM, 2024.
- [16] Sanjeev Khanna, Aaron L Putterman, and Madhu Sudan. Characterizations of sparsifiability for affine CSPs and symmetric CSPs. arXiv preprint arXiv:2404.06327, 2024.
- [17] Dmitry Kogan and Robert Krauthgamer. Sketching cuts in graphs and hypergraphs. In Tim Roughgarden, editor, Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS 2015, Rehovot, Israel, January 11-13, 2015, pages 367–376. ACM, 2015. doi:10.1145/2688073.2688093.
- [18] James R. Lee. Spectral hypergraph sparsification via chaining. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 207–218. ACM, 2023. doi:10.1145/3564246.3585165.
- [19] Anand Louis. Hypergraph markov operators, eigenvalues and approximation algorithms. In Rocco A. Servedio and Ronitt Rubinfeld, editors, Proceedings of the Forty-Seventh Annual ACM on Symposium on Theory of Computing, STOC 2015, Portland, OR, USA, June 14-17, 2015, pages 713–722. ACM, 2015. doi:10.1145/2746539.2746555.
- [20] Anand Louis and Yury Makarychev. Approximation algorithms for hypergraph small set expansion and small set vertex expansion. In Klaus Jansen, José D. P. Rolim, Nikhil R. Devanur, and Cristopher Moore, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, volume 28 of LIPIcs, pages 339–355. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2014. doi:10.4230/LIPICS.APPROX-RANDOM.2014.339.
- [21] Tasuku Soma and Yuichi Yoshida. Spectral sparsification of hypergraphs. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2570–2581. SIAM, 2019. doi:10.1137/1.9781611975482.159.
- [22] Daniel A. Spielman. Spectral graph theory and its applications. In 48th Annual IEEE Symposium on Foundations of Computer Science (FOCS’07), pages 29–38, 2007. doi:10.1109/FOCS.2007.56.
- [23] Daniel A. Spielman and Shang-Hua Teng. Spectral sparsification of graphs. SIAM J. Comput., 40(4):981–1025, 2011. doi:10.1137/08074489X.