Near-Optimal Hypergraph Sparsification in Insertion-Only and Bounded-Deletion Streams
Abstract
We study the problem of constructing hypergraph cut sparsifiers in the streaming model, where a hypergraph on $n$ vertices is revealed either via an arbitrary sequence of hyperedge insertions alone (insertion-only streaming model) or via an arbitrary sequence of hyperedge insertions and deletions (dynamic streaming model). For any $\epsilon \in (0,1)$, a $(1 \pm \epsilon)$ hypergraph cut-sparsifier of a hypergraph $H$ is a reweighted subgraph $H'$ whose cut values approximate those of $H$ to within a $(1 \pm \epsilon)$ factor. Prior work shows that in the static setting, one can construct a $(1 \pm \epsilon)$ hypergraph cut-sparsifier using $\widetilde{O}(nr/\epsilon^2)$ bits of space [Chen-Khanna-Nagda FOCS 2020], and in the setting of dynamic streams using $\widetilde{O}(nr \log m/\epsilon^2)$ bits of space [Khanna-Putterman-Sudan FOCS 2024]; here the $\widetilde{O}(\cdot)$ notation hides terms that are polylogarithmic in $n$, we use $r$ to denote the maximum arity of a hyperedge, and we use $m$ to denote the total number of hyperedges in the hypergraph. Up until now, the best known space complexity for insertion-only streams has been the same as that for dynamic streams. This naturally poses the question of understanding the complexity of hypergraph sparsification in insertion-only streams.
Perhaps surprisingly, in this work we show that in insertion-only streams, a $(1 \pm \epsilon)$ cut-sparsifier can be computed in $\widetilde{O}(nr/\epsilon^2)$ bits of space, matching the complexity of the static setting. As a consequence, this also establishes an $\Omega(\log m)$ factor separation between the space complexity of hypergraph cut sparsification in insertion-only streams and dynamic streams, as the latter is provably known to require $\Omega(nr \log m)$ bits of space. To better explain this gap, we then show a more general result: namely, if the stream has at most $k$ hyperedge deletions, then $\widetilde{O}(nr \log k/\epsilon^2)$ bits of space suffice for hypergraph cut sparsification. Thus the space complexity smoothly interpolates between the insertion-only regime ($k = 0$) and the fully dynamic regime ($k = m$). Our algorithmic results are driven by a key technical insight: once sufficiently many hyperedges have been inserted into the stream (relative to the number of allowed deletions), we can significantly reduce the size of the underlying hypergraph by irrevocably contracting large subsets of vertices.
Finally, we complement this result with an essentially matching lower bound of $\Omega(nr \log k)$ bits, thus providing an essentially tight characterization of the space complexity of hypergraph cut-sparsification across a spectrum of streaming models.¹
¹ A full version of the paper is available at https://arxiv.org/abs/2504.16321.
Keywords and phrases: Sparsification, sketching, hypergraphs
Category: Track A: Algorithms, Complexity and Games
Funding: Sanjeev Khanna: Supported in part by NSF award CCF-2402284 and AFOSR award FA9550-25-1-0107.
2012 ACM Subject Classification: Theory of computation → Sketching and sampling
Editors: Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction
In this work, we continue the line of study on designing hypergraph cut sparsifiers in the streaming model.
Recall that a hypergraph, denoted by $H = (V, E)$, is given by a set $E$ of hyperedges, where each hyperedge $e \in E$ is an arbitrary subset of the vertices $V$. In the streaming model, the hyperedges are revealed in a sequence of steps, where each step consists of either an insertion of a new hyperedge, or the deletion of an existing hyperedge from the hypergraph. At the end of the stream, our goal is to return a sparsifier of the resulting hypergraph. Recall that for a hypergraph $H$, and $\epsilon \in (0,1)$, a $(1 \pm \epsilon)$ hypergraph cut-sparsifier of $H$ is a re-weighted sub-hypergraph $\widehat{H}$ such that for every cut $\emptyset \neq S \subsetneq V$,
$$(1-\epsilon) \cdot \mathrm{cut}_H(S) \;\leq\; \mathrm{cut}_{\widehat{H}}(S) \;\leq\; (1+\epsilon) \cdot \mathrm{cut}_H(S).$$
Here, $\mathrm{cut}_H(S)$ denotes the total weight of hyperedges that cross from $S$ to $V \setminus S$ (i.e., the set of hyperedges $e$ with $e \cap S \neq \emptyset$ and $e \cap (V \setminus S) \neq \emptyset$). Cuts in hypergraphs capture a variety of applications, ranging from scientific computing on sparse matrices [3], to clustering and machine learning [18, 19], and to modeling transistors and other circuitry elements in circuit design [2, 14]. In each of these applications, the ability to sparsify the hypergraph while still preserving cut sizes is a key building block, as it dramatically decreases the memory footprint (and complexity) of the networks being optimized. In fact, while an arbitrary hypergraph may have as many as $2^n$ distinct hyperedges, the work of Chen, Khanna, and Nagda [5] showed that $(1 \pm \epsilon)$ hypergraph cut-sparsifiers can be constructed using only $O(n \log n/\epsilon^2)$ re-weighted hyperedges, a potentially exponential saving in the bit complexity of the object.
This dramatic reduction in the complexity of the hypergraph while still preserving key properties has also prompted a line of research into the feasibility of sparsifying hypergraphs in the streaming model of computation. Here, the hyperedges are presented one at a time, with each hyperedge either being inserted into or deleted from the hypergraph constructed so far. In the streaming model, the goal is to compute some function of the hypergraph (in our case, to construct a sparsifier of the hypergraph) after the final update in the stream has arrived, while using as few bits of memory as possible in each intermediate step of processing the stream. As mentioned above, computing sparsifiers of a hypergraph in a stream is valuable as it immediately yields streaming algorithms for any problem that relies only on a sparsifier (for instance, computing cut sizes, flows, many clustering objectives, and more), and as such has seen study in several papers [6, 5, 12].
Most recently, the work of Khanna, Putterman, and Sudan [12] studied the space complexity of hypergraph cut-sparsification in the setting of dynamic streams, where a hypergraph is revealed via an arbitrary sequence of hyperedge insertions and deletions. They showed an upper bound of $\widetilde{O}(nr \log m/\epsilon^2)$ bits for computing sparsifiers in dynamic streams (here $\widetilde{O}(\cdot)$ hides $\mathrm{polylog}(n)$ factors; importantly, in hypergraphs $m$ may be as large as $2^n$, and thus $\log m$ can potentially be as large as $n$, so $\widetilde{O}(\cdot)$ does not hide factors of $\log m$), and also established a nearly-matching lower bound of $\Omega(nr \log m)$ bits, where $n$ is the number of vertices, $r$ is the maximum size of any hyperedge, and $m$ is the number of hyperedges. In particular, because dynamic streaming is a strictly harder setting than insertion-only streaming, this also implies an $\widetilde{O}(nr \log m/\epsilon^2)$ bit upper bound for the complexity of constructing hypergraph cut-sparsifiers in the insertion-only model.
For comparison, in the static setting (i.e., when the algorithm has unrestricted random access to the underlying hypergraph), it is known that hypergraph sparsifiers require $\Omega(nr)$ bits to represent [10, 9], and moreover, can be constructed in $\widetilde{O}(nr/\epsilon^2)$ bits of space. Thus, one interpretation of the work of [12] is that hypergraph sparsifiers can be constructed in dynamic streams at the cost of only a $\log m$ factor increase in the space complexity (ignoring $\mathrm{polylog}(n)$ factors), as compared to the static setting. However, this highlights two natural questions:
1. What is the space complexity of constructing hypergraph sparsifiers in insertion-only streams? Is the complexity essentially the same as in the static setting, or can it be as large as in the dynamic setting?
2. More generally, how does the space complexity of hypergraph sparsification change as a function of the number of hyperedge deletions? Does it smoothly interpolate between the space complexity needed for insertion-only streams (no deletions) and the dynamic setting (unrestricted number of deletions)?
1.1 Our Contributions
As our first contribution, we provide an answer to the first question above regarding the complexity of constructing sparsifiers in insertion-only streams:
Theorem 1.
There is an insertion-only streaming algorithm requiring $\widetilde{O}(nr/\epsilon^2)$ bits of space which creates a $(1 \pm \epsilon)$ cut-sparsifier for a hypergraph on $n$ vertices and $m$ hyperedges of arity at most $r$, with probability $1 - 1/\mathrm{poly}(n)$.
Specifically, this improves over the prior state-of-the-art algorithms for constructing hypergraph sparsifiers in insertion-only streams by a factor of $\log m$, where $m$ denotes an upper bound on the number of hyperedges. This implies that (perhaps surprisingly) the complexity of the insertion-only setting mirrors that of the static sparsification setting, and thus there is also a strict separation between the space complexity of the insertion-only and dynamic streaming settings, as the latter is known to require $\Omega(nr \log m)$ bits of space [12].
Because of this large separation, it is natural to ask if there is some other parameter which governs the space complexity of sparsifying hypergraphs in streams. As our second contribution, we show that this is indeed the case: if we parameterize the dynamic streaming setting by the maximum number of allowed hyperedge deletions, then the space complexity smoothly interpolates between the insertion-only setting and the unrestricted dynamic setting:
Theorem 2.
For any $k \geq 0$, there is a $k$-bounded-deletion streaming algorithm requiring $\widetilde{O}(nr \log k/\epsilon^2)$ bits of space (technically, when $k \leq 1$, the $\log k$ term should be interpreted as $1$) which creates a $(1 \pm \epsilon)$ cut-sparsifier for a hypergraph on $n$ vertices and $m$ hyperedges of arity at most $r$, with probability $1 - 1/\mathrm{poly}(n)$.
When $m$ is the maximum number of hyperedges in the stream, the number of deletions is effectively bounded by $m$. Thus, the dynamic streaming setting is effectively the case where $k = m$, and the above theorem recovers the $\widetilde{O}(nr \log m/\epsilon^2)$ space complexity of this setting. Likewise, as the number of deletions decreases and approaches $0$, the setting approaches the insertion-only setting, and the above theorem explains exactly the space savings that are achieved.
Finally, building off of the prior work of Jayaram and Woodruff [8] in the bounded-deletion streaming model, we show that this space complexity is essentially optimal:
Theorem 3.
Any streaming algorithm for $k$-bounded-deletion streams which, for hypergraphs on $n$ vertices of arity at most $r$, produces a $(1 \pm \epsilon)$ cut-sparsifier, must use $\Omega(nr \log k)$ bits of space.
In summary, this provides a complete picture of the space complexity of producing hypergraph sparsifiers in the streaming setting: as the number of deletions $k$ increases from $0$ to $m$, the space complexity grows by a factor of $\Theta(\log k)$ over the space complexity of the static sparsification regime, leading to a smooth transition in the space complexity of these algorithms. In the following subsection, we explain more of the techniques that go into these results.
1.2 Technical Overview
1.2.1 Importance Sampling for Hypergraph Cut Sparsification
To start, let us recap how hypergraph sparsifiers are constructed in the static setting. After an extensive line of works studying hypergraph sparsification [13, 17, 5, 10, 9, 7, 15, 11], the work of Quanrud [16] provided the simplest lens through which one can build hypergraph sparsifiers. Roughly speaking, given a hypergraph $H$, each hyperedge $e$ is assigned a value $\lambda_e$ which is called its strength. The strength of a hyperedge is intuitively a measure of the (un)importance of the hyperedge: the smaller the strength of $e$, the smaller the cuts that $e$ crosses in the hypergraph, and so the more likely we are to need to keep $e$; if the strength is larger, then there are many other hyperedges which cross the same cuts as $e$, and thus it is not as necessary for us to keep $e$. [16] showed that for a specific definition of strength, sampling each hyperedge (independently) at a rate roughly proportional to $\frac{1}{\epsilon^2 \lambda_e}$ (ignoring constant and logarithmic factors), and assigning weight equal to the reciprocal of the sampling rate to the surviving hyperedges, yields a $(1 \pm \epsilon)$ sparsifier with high probability.
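Schematically (and hedging on the exact constants, which are as in [16]), the sampling and re-weighting can be written as follows; the expected weight of each hyperedge is preserved, and by the reciprocal-strength bound stated as Claim 6 below, the expected number of retained hyperedges is small:
$$p_e = \min\!\left(1,\; \frac{c \log n}{\epsilon^2 \lambda_e}\right), \qquad w_e = \frac{1}{p_e}, \qquad \mathbb{E}\big[w_e \cdot \mathbf{1}[e \text{ sampled}]\big] = 1, \qquad \mathbb{E}\big[\#\text{sampled}\big] \;\leq\; \frac{c \log n}{\epsilon^2} \sum_{e} \frac{1}{\lambda_e} \;\leq\; \frac{c\, n \log n}{\epsilon^2}.$$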
In fact, this procedure lends itself to a simple iterative algorithm for designing sparsifiers: starting with the original hypergraph $H$, we recover all of the hyperedges in $H$ whose strength is smaller than (roughly) $\frac{\log n}{\epsilon^2}$, and denote these hyperedges by $F$. Then, it must necessarily be the case that all hyperedges in $H \setminus F$ have large strength, and so we can afford to sample these hyperedges at rate $1/2$ (denote this sampled hypergraph by $H_1$). By the same analysis as above, it turns out that we can show that $F$ together with $H_1$ (with the weight of every hyperedge in $H_1$ doubled) will be a sparsifier of $H$ with high probability. Now, it remains only to re-sparsify $H_1$, which we can do by repeating the same procedure. Thus, after $O(\log m)$ levels of this procedure (where $m$ is the starting number of hyperedges), we can recover a sparsifier of our original hypergraph.
1.2.2 Formal Definitions of Strength
However, to continue our discussion, we will require the formal definition of strength, as well as some auxiliary facts about strength in hypergraphs. The key notion that [16] introduces to measure strength in hypergraphs is the notion of $k$-cuts in hypergraphs (and here we adopt the language used by [12]):
Definition 4.
For any $k \geq 2$, a $k$-cut in a hypergraph $H = (V, E)$ is defined by a $k$-partition of the vertices, say, $V_1, \ldots, V_k$. The un-normalized size of a $k$-cut in an unweighted hypergraph is the number of hyperedges that are not completely contained in any single $V_i$ (we refer to these as the crossing hyperedges), denoted by $\mathrm{cut}_H(V_1, \ldots, V_k)$.
The normalized size of a $k$-cut in a hypergraph is its un-normalized size divided by $k-1$. We will often use $C_H$ to denote the minimum normalized cut size over all $k$-cuts, defined formally as follows:
$$C_H \;=\; \min_{k \geq 2} \;\; \min_{\substack{V_1, \ldots, V_k \\ \text{a partition of } V}} \; \frac{\mathrm{cut}_H(V_1, \ldots, V_k)}{k-1}.$$
Note that when we generically refer to a $k$-cut, this refers to any choice of $k \geq 2$. That is, we are not restricting ourselves to a single choice of $k$, but instead allowing ourselves to range over partitions of the vertex set into any number of parts.
The work of [16] established the following result regarding normalized and un-normalized $k$-cuts:
Theorem 5 ([16]).
Let $H$ be a hypergraph on $n$ vertices, with associated minimum normalized cut size $C_H$. Then for any $\alpha \geq 1$ and $k \geq 2$, there are at most $n^{O(\alpha k)}$ un-normalized $k$-cuts of size at most $\alpha \cdot k \cdot C_H$.
A direct consequence of the above is that in order to preserve all $k$-cuts (again, simultaneously for every $k$) in a hypergraph to a $(1 \pm \epsilon)$ factor, it suffices to sample each hyperedge at rate roughly $p = \Theta\!\left(\frac{\log n}{\epsilon^2 C_H}\right)$, and re-weight each sampled hyperedge by $1/p$.
Similar to Benczúr and Karger’s [4] approach for creating graph sparsifiers of size $O(n \log n/\epsilon^2)$, Quanrud [16] next uses this notion to define ($k$-cut) strengths for each hyperedge. To do this, we fix a minimum normalized $k$-cut, with $V_1, \ldots, V_k$ denoting the partition of the vertices created by this cut. For any hyperedge $e$ crossing this minimum normalized $k$-cut, we define its strength $\lambda_e$ to be exactly $C_H$, the value of this minimum normalized cut. For the remaining hyperedges (i.e., those which are completely contained within the components $V_i$), their strengths are determined recursively (within their respective induced subgraphs) using the same scheme. This allows Quanrud [16] to calculate sampling rates for hyperedges which, when sampled, approximately preserve the size of every $k$-cut (for all $k$ simultaneously). Note that, just as in the graph setting, the sum of reciprocal strengths is bounded, which yields convenient bounds on the number of low-strength hyperedges:
Claim 6 ([16]).
Let $H$ be a hypergraph on $n$ vertices. Then,
$$\sum_{e \in E} \frac{1}{\lambda_e} \;\leq\; n - 1.$$
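A one-line consequence of Claim 6, which is used repeatedly below, bounds the number of low-strength hyperedges at any strength threshold $s > 0$:
$$\big|\{ e \in E : \lambda_e \leq s \}\big| \cdot \frac{1}{s} \;\leq\; \sum_{e \in E} \frac{1}{\lambda_e} \;\leq\; n - 1 \quad\Longrightarrow\quad \big|\{ e \in E : \lambda_e \leq s \}\big| \;\leq\; s\,(n-1).$$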
However, the power of the strength definition extends beyond just identifying sampling rates of hyperedges, and can also be used to identify sets of vertices which can be contracted away. In particular, we can define the notion of the strength of a component, as we do below:
Definition 7.
For a subset of vertices $S \subseteq V$, we say that the strength of $S$ in $H$ is $\lambda_H(S) = \min_{e \subseteq S} \lambda_e$. That is, when we look at the induced subgraph $H[S]$ obtained by restricting to $S$, $\lambda_H(S)$ is the minimum strength of any hyperedge in this induced subgraph.
We will take advantage of the following fact when working with these “contracted” versions of hypergraphs:
Claim 8 ([12]).
Let $H$ be a hypergraph, and let $S_1, \ldots, S_\ell$ be a collection of connected components each of strength at least $s$. Then, the hyperedges of strength less than $s$ in $H$ are exactly those hyperedges of strength less than $s$ in $H / \{S_1, \ldots, S_\ell\}$, where we use $H / \{S_1, \ldots, S_\ell\}$ to denote the hypergraph where $S_1, \ldots, S_\ell$ have each been contracted to their own super-vertices.
This final claim will be very important for our algorithm. In particular, it implies that once certain components have become sufficiently strongly connected, we no longer have to worry about recovering low-strength hyperedges within those components, and can instead focus only on recovering hyperedges that cross between such components. However, before diving into the details of this approach, we provide a more detailed recap of how prior work [12] recovers low-strength hyperedges generically.
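To make the contraction in Claim 8 concrete, here is a minimal sketch of contracting a family of strong components to super-vertices; the representation (hyperedges as frozensets, super-vertices as tagged tuples) is an illustrative choice and not the paper's implementation. Hyperedges that fall entirely inside one super-vertex disappear, which is exactly the information we no longer need to recover.

```python
from typing import Dict, FrozenSet, Hashable, Iterable, List

def contract(hyperedges: Iterable[FrozenSet[Hashable]],
             strong_components: List[FrozenSet[Hashable]]) -> List[FrozenSet[Hashable]]:
    """Return the hyperedges of H / {S_1, ..., S_l} from Claim 8."""
    # Map every vertex of a strong component to its super-vertex.
    representative: Dict[Hashable, Hashable] = {}
    for i, component in enumerate(strong_components):
        for v in component:
            representative[v] = ("super", i)

    contracted = []
    for e in hyperedges:
        new_e = frozenset(representative.get(v, v) for v in e)
        if len(new_e) >= 2:   # hyperedge still crosses between (super-)vertices
            contracted.append(new_e)
    return contracted

# Example: the hyperedge {a, b} is swallowed by the super-vertex formed from {a, b},
# while {b, c, d} survives with b renamed to that super-vertex.
edges = [frozenset("ab"), frozenset("bcd")]
print(contract(edges, [frozenset("ab")]))
```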
1.2.3 The Work of Khanna, Putterman, and Sudan [12]
With this formal notion of strength now in hand, we can formally present the algorithm discussed in the previous subsection for sparsification (essentially the framework of [4, 1, 6, 12]): repeatedly recover the low-strength hyperedges, set them aside with their current weights, and subsample the remaining (high-strength) hyperedges at rate $1/2$ while doubling their weights.
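A minimal sketch of this recover-and-subsample loop is given below. It assumes a helper routine recover_low_strength (a hypothetical placeholder here; in [12] this step is realized by a linear sketch, as described next) and is meant only to illustrate the control flow and the weight bookkeeping.

```python
import random

def iterative_sparsify(hyperedges, recover_low_strength, levels):
    """Recover-and-subsample framework: roughly O(log m) levels suffice,
    since each level halves the number of surviving hyperedges in expectation."""
    sparsifier = []              # list of (hyperedge, weight) pairs
    current = list(hyperedges)   # hyperedges still "alive" at this level
    weight = 1.0                 # weight carried by hyperedges at this level
    for _ in range(levels):
        low = set(recover_low_strength(current))     # low-strength hyperedges
        sparsifier.extend((e, weight) for e in low)  # keep them at the current weight
        survivors = [e for e in current if e not in low]
        current = [e for e in survivors if random.random() < 0.5]
        weight *= 2.0            # doubling compensates for the rate-1/2 sampling
    sparsifier.extend((e, weight) for e in current)  # anything left at the end
    return sparsifier
```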
This approach is exactly what was used by [12] when designing their hypergraph sparsifiers for dynamic streams.
Indeed, their primary contribution was a linear sketch which can be used to exactly recover these low-strength hyperedges at each level of the algorithm. Recall that a linear sketch is simply a set of linear measurements of the hypergraph, and thus is directly implementable as a dynamic streaming algorithm on hypergraphs, as both insertions and deletions can be modeled as linear updates. Formally, when a hypergraph on $n$ vertices is viewed as a vector indexed by all possible hyperedges (a vector of dimension $2^n$), a linear measurement is obtained by multiplying a (possibly random) matrix with this vector. Inserting a hyperedge is simply a $+1$ update in the coordinate corresponding to the hyperedge, and a deletion is simply a $-1$ update.
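The toy example below is only meant to illustrate this linearity (the actual sketches of [12] are structured $\ell_0$-samplers, not dense Gaussian matrices): the algorithm stores only the product of the sketching matrix with the indicator vector, so an insertion adds a column of the matrix, a deletion subtracts it, and the final sketch depends only on the final hypergraph, not on the order of updates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative universe of possible hyperedges on vertices {0, 1, 2, 3}.
universe = [frozenset({0, 1}), frozenset({1, 2}), frozenset({0, 2, 3}), frozenset({1, 2, 3})]
index = {e: i for i, e in enumerate(universe)}

k = 3                                             # number of linear measurements
S = rng.standard_normal((k, len(universe)))       # the (random) sketching matrix
sketch = np.zeros(k)                              # the only state the algorithm keeps

def insert(e):
    sketch[:] += S[:, index[e]]                   # +1 update on coordinate e

def delete(e):
    sketch[:] -= S[:, index[e]]                   # -1 update on coordinate e

# A stream of updates; the sketch equals S @ x for the final indicator vector x.
insert(universe[0]); insert(universe[2]); delete(universe[0])
x = np.zeros(len(universe)); x[index[universe[2]]] = 1
assert np.allclose(sketch, S @ x)
```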
With this established, we can now summarize the space complexity resulting from the linear sketching implementation of [12]:
1. At each of the $O(\log m)$ levels of sampling in the above algorithm, the linear sketch can be used to recover the low-strength hyperedges required by the framework of Section 1.2.1. This is accomplished by storing independent copies of a sketch for recovering low-strength hyperedges (one copy at each level).
2. Within each individual copy (at a fixed level of the sampling process), there is a specific linear sketch stored for the neighborhood of each of the $n$ vertices.
3. Each such vertex-neighborhood sketch requires $\widetilde{O}(r/\epsilon^2)$ bits of space.
In total then, this yields a linear sketch (and hence a dynamic streaming algorithm) which stores $\widetilde{O}(nr \log m/\epsilon^2)$ bits. Once this sketch has been stored, the algorithm can simply iteratively recover the low-strength hyperedges at each of the $O(\log m)$ levels of sampling, thereby recovering a sparsifier of the original hypergraph.
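Multiplying the three contributions above gives the total space of this dynamic-stream sketch (as reconstructed from the bounds just stated; polylogarithmic factors in $n$ are absorbed into $\widetilde{O}(\cdot)$):
$$\underbrace{O(\log m)}_{\text{levels of sampling}} \;\times\; \underbrace{n}_{\text{vertices per level}} \;\times\; \underbrace{\widetilde{O}\!\left(\tfrac{r}{\epsilon^2}\right)}_{\text{bits per vertex sketch}} \;=\; \widetilde{O}\!\left(\frac{n r \log m}{\epsilon^2}\right) \text{ bits}.$$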
1.2.4 Optimizing the Complexity in Insertion-only Streams
At first glance, it may seem that none of these parameters from the work of [12] can be optimized in the insertion-only setting: indeed, there are $m$ hyperedges in the hypergraph initially, and thus any iterative procedure will require $\Omega(\log m)$ levels before exhausting the hypergraph if the sampling rate is $1/2$. Likewise, the $n$ vertices are fixed, and the linear sketch designed by [12] requires storing a linear sketch for each vertex. Lastly, the complexity of the sketches for each vertex cannot hope to be improved, as simply recovering a single hyperedge already yields $\Omega(r)$ bits of information. Thus, it may seem that one cannot build on top of this framework while achieving a space complexity that beats $\widetilde{O}(nr \log m/\epsilon^2)$ bits of space.
However, our first theorem shows that by cleverly merging and contracting vertices as hyperedges are inserted, we can in fact improve the space complexity. Perhaps counterintuitively, our optimization actually comes from decreasing the number of vertices (the parameter $n$ above).
To illustrate, let us consider a sequence of hyperedge insertions, and let us suppose that at some point in time, a large polynomial number of hyperedges have been inserted (say, $\mathrm{poly}(n)$ hyperedges). As one might expect, if so many hyperedges have been inserted, certain components of the hypergraph will naturally emerge which are very strongly connected. More formally, if we revisit Claim 6, we can observe that the number of hyperedges of strength at most $s$ must be less than $s \cdot n$. This implies that among the hyperedges which have been inserted, the vast majority are high-strength hyperedges, and thus they also define many high-strength components.
As a consequence, after these hyperedges are inserted, there will be components for which all of the hyperedges inside the component are high-strength hyperedges, and which therefore do not need to be recovered in order to perform sampling, by Claim 8. Algorithmically, because we are dealing with an insertion-only stream, once such a component becomes a high-strength component, it will forever remain a high-strength component, and thus, as per Claim 8, we can effectively contract this component away.
Thus, our algorithmic plan is principled: we will estimate the strength of components as hyperedges are inserted (separately for each of the $O(\log m)$ levels of sampling), and whenever a component attains sufficiently large strength, we irrevocably contract the component away to a single super-vertex. We do this for the hypergraphs at each of the $O(\log m)$ levels of sampling. Thus, there are two key points that must be shown:
1. We must show that contracting these vertices saves space in our sketch.
2. We must show that we can estimate the strength of components using only $\widetilde{O}(n)$ bits of space per level in the (insertion-only) streaming setting.
In what follows, we explain how we achieve both of these goals.
1.2.5 Saving Space by Contracting Vertices
First, we show that we can save space by contracting vertices. Let us consider the top level of sampling in the hypergraph. As mentioned above, as hyperedges are inserted, certain components will naturally become strongly connected. So, let us denote one such component by $S$. Now, once this component is strong, it remains strong, and so we can be sure that we do not need to recover any hyperedges within the component, as per Claim 8. So, instead of storing the individual linear sketches for the vertices in $S$, we instead add the sketches together, yielding a single sketch for the super-vertex corresponding to $S$. Note that this operation is not reversible, and we are in fact losing information about the hypergraph when we perform this addition. However, this is precisely why we save space: indeed, if there are $t$ strong components, we end up storing only the $t$ linear sketches of [12] for the super-vertices, leading to a total space usage of $\widetilde{O}(tr/\epsilon^2)$ bits in a single level, as opposed to the $\widetilde{O}(nr/\epsilon^2)$ bits in [12] (note there is no $\log m$ factor here as we are only looking at the top level of sampling for now).
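Because the per-vertex sketches are linear, merging a strong component amounts to adding its members' sketch vectors and discarding the originals; the bookkeeping below is a minimal sketch of this idea, with plain numpy vectors standing in for the structured sketches of [12].

```python
import numpy as np

class MergedSketches:
    """One linear sketch per current super-vertex; merging is irreversible."""

    def __init__(self, n_vertices: int, sketch_dim: int):
        self.sketch = {v: np.zeros(sketch_dim) for v in range(n_vertices)}
        self.leader = list(range(n_vertices))     # vertex -> its super-vertex

    def update(self, v: int, delta: np.ndarray):
        """Apply a linear update destined for vertex v to its super-vertex."""
        self.sketch[self.leader[v]] += delta

    def merge(self, vertices):
        """Contract a strong component: add sketches together, keep only one."""
        leaders = {self.leader[v] for v in vertices}
        head = min(leaders)
        for l in leaders - {head}:
            self.sketch[head] += self.sketch.pop(l)   # add, then discard
        for v in range(len(self.leader)):
            if self.leader[v] in leaders:
                self.leader[v] = head
```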
Unfortunately, this analysis alone is not sufficient for us to claim any space savings. Indeed, it is possible that (for instance) in the top level of sampling there are no strong components. It follows then that we cannot add together any linear sketches, and must instead pay $\widetilde{O}(nr/\epsilon^2)$ bits at this level of sampling. However, this is where we use the key bound on the number of low-strength hyperedges (Claim 6, [16]). If there are no components of strength greater than, say, $\frac{c \log n}{\epsilon^2}$, then there must be fewer than $\frac{c\, n \log n}{\epsilon^2}$ hyperedges remaining. It follows then that instead of storing this sketch of size $\widetilde{O}(nr/\epsilon^2)$ bits at each of the $O(\log m)$ levels of sampling, we must only store it at $O(\log(n/\epsilon))$ levels of sampling, thereby replacing the $\log m$ factor with an $O(\log(n/\epsilon))$ factor.
This argument, as we have presented it, is far from general, and must be extended to work on all instances. In particular, it is possible that at different levels of sampling the strong components are different, and thus there is no single set of strong components that we can consider. To address this, we make the observation that the strong components form a laminar family. That is to say, the strong components at the $(i+1)$-st level of sampling are a refinement of the strong components at the $i$-th level of sampling. Thus, across all levels of sampling, the number of distinct strong components that appear is bounded by $O(n)$. Likewise, because any component of strength greater than roughly $\frac{\log n}{\epsilon^2}$ is merged away, by the same logic as above, each strong component has at most $\mathrm{poly}(n/\epsilon)$ incident hyperedges (i.e., hyperedges which touch this component, as well as some other component). Thus, each component that appears will only have a non-empty neighborhood for $O(\log(n/\epsilon))$ levels of sampling (after which point we do not need to store anything, as the sketch of an empty neighborhood is empty).
In summary then, for each of the $O(n)$ strong components that appear, we store the sketch from [12] of size $\widetilde{O}(r/\epsilon^2)$ bits for only $O(\log(n/\epsilon))$ levels of sampling. This then yields the desired complexity of $\widetilde{O}(nr/\epsilon^2)$ bits of space for the final algorithm.
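Putting these pieces together (and again absorbing polylogarithmic factors into $\widetilde{O}(\cdot)$), the accounting for the insertion-only algorithm reads:
$$\underbrace{O(n)}_{\text{distinct strong components}} \;\times\; \underbrace{O(\log(n/\epsilon))}_{\text{levels with a non-empty neighborhood}} \;\times\; \underbrace{\widetilde{O}\!\left(\tfrac{r}{\epsilon^2}\right)}_{\text{bits per sketch}} \;=\; \widetilde{O}\!\left(\frac{nr}{\epsilon^2}\right) \text{ bits}.$$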
1.2.6 Identifying Strong Components
Finally, we show that we can approximately find the strong components of the hypergraph in an insertion-only stream. Fortunately, the foundation for this algorithm was presented in [12]: let us consider again the hypergraph at the top level of sampling, which we denote by $H$. Simultaneously, we consider an auxiliary hypergraph which is the result of sampling the hyperedges of $H$ at a suitably chosen rate (inversely proportional, up to logarithmic factors, to the strength threshold at which we contract).
As shown in [12], it turns out that the connected components in this auxiliary hypergraph exactly correspond to the strong components in $H$. Thus, in the insertion-only streaming model, this admits an exceedingly simple implementation: as hyperedges arrive, we sample them at this rate, and keep track of the connected components they form. Then, whenever a new component is formed, we simply add together the corresponding linear sketches as discussed in the previous section (i.e., merging those vertices together). Note that storing the set of connected components can be done in $O(n \log n)$ bits of space, and thus across the $O(\log m)$ levels of sampling requires only $O(n \log n \log m)$ bits. Because $\log m \leq r \log n$, this is at most $\widetilde{O}(nr)$ bits and does not affect the desired space complexity. These ideas then suffice for the insertion-only implementation.
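In an insertion-only stream this component tracking is just sampling plus union-find; the sketch below is illustrative (the sampling rate is left as a parameter since its exact value depends on the contraction threshold), and the on_merge callback is where the corresponding linear sketches would be added together, e.g. via the MergedSketches.merge sketch above.

```python
import random

class ComponentTracker:
    """Track connected components of the sampled auxiliary hypergraph."""

    def __init__(self, n_vertices: int, sample_rate: float, on_merge):
        self.parent = list(range(n_vertices))
        self.sample_rate = sample_rate
        self.on_merge = on_merge          # called with the set of merged representatives

    def find(self, v: int) -> int:
        while self.parent[v] != v:        # path-halving union-find
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def insert_hyperedge(self, e):
        if random.random() >= self.sample_rate:
            return                        # hyperedge not sampled into the auxiliary graph
        roots = {self.find(v) for v in e}
        if len(roots) <= 1:
            return                        # all endpoints already lie in one component
        head = min(roots)
        for r in roots:
            self.parent[r] = head
        self.on_merge(roots)              # e.g., add the components' linear sketches
```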
1.2.7 Generalizing to Bounded-Deletions and Remarks
The key observation for generalizing to the bounded-deletion setting is that once a component has strength at least $k + s$ (where $s$ is the contraction threshold used in the insertion-only setting), then after any sequence of at most $k$ deletions, the strength of the component remains at least $s$. Thus, instead of adding samplers together once components reach strength roughly $s$, we instead add samplers together once the strength reaches $k + s$. Observe then that the $\log k$ term is a natural artifact: each strong component can now have a non-empty neighborhood of incident hyperedges for $O(\log k + \log(n/\epsilon))$ levels of sampling, as opposed to simply $O(\log(n/\epsilon))$ levels of sampling, and this yields the final complexity of $\widetilde{O}(nr \log k/\epsilon^2)$ bits.
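The threshold shift rests on a one-line observation (stated schematically here; the formal version in the paper accounts for how strengths are computed after deletions): intuitively, deleting a single hyperedge can decrease the strength of a component by at most one, so
$$\lambda(S) \;\geq\; s + k \ \text{ before the deletions} \quad\Longrightarrow\quad \lambda(S) \;\geq\; s \ \text{ after at most } k \text{ hyperedge deletions}.$$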
We now finish with some remarks:
1. The primary workhorse of [12] is a linear sketch known as an $\ell_0$-sampler, and it is tempting to simply try to replace the $\ell_0$-samplers (implemented via linear sketching) in their paper with an $\ell_0$-sampler designed specifically for insertion-only streams, thereby saving space. However, such a replacement would necessitate an entirely new analysis: even though the stream itself may not have deletions, the process of recovering hyperedges, merging vertices together, and more, all rely on the ability to linearly add together $\ell_0$-samplers (which effectively causes deletions). While the aforementioned approach may work, it would not build on the existing framework. The same goes for the bounded-deletion setting, where it is tempting to use $\ell_0$-samplers defined for bounded-deletion streams (as discussed in [8]).
2. There are several subtleties that arise in the analysis regarding the estimation of strong components, as this will never be an exact decomposition of the hypergraph. We instead provide upper and lower bounds on the strength of the components and remaining hyperedges, and use these to facilitate our analysis. We defer a more complete description to the technical sections below.
3. Likewise, in the bounded-deletion setting, there is considerable difficulty in optimizing the dependence on $k$ to not be $\log^2 k$. Roughly speaking, this is because the dependence tries to show up in both (1) the number of levels of sampling in which a strong component has a non-empty neighborhood (a factor of $\log k$, as described above) and (2) the support size of the $\ell_0$-samplers that are needed when using the sketch of [12] (another factor of $\log k$). We use a more refined analysis along with a second round of component merging to bypass this other factor of $\log k$.
4. The lower bound follows from the augmented index problem along with an argument from [8] on bounding the complexity of this problem in the bounded-deletion setting. We defer the proof to the full version of the paper.
References
- [1] Kook Jin Ahn, Sudipto Guha, and Andrew McGregor. Analyzing graph structure via linear measurements. In Yuval Rabani, editor, Proceedings of the Twenty-Third Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2012, Kyoto, Japan, January 17-19, 2012, pages 459–467. SIAM, 2012. doi:10.1137/1.9781611973099.40.
- [2] Charles J Alpert and Andrew B Kahng. Recent directions in netlist partitioning: a survey. Integration, 19(1):1–81, 1995. doi:10.1016/0167-9260(95)00008-4.
- [3] Grey Ballard, Alex Druinsky, Nicholas Knight, and Oded Schwartz. Hypergraph partitioning for sparse matrix-matrix multiplication. ACM Trans. Parallel Comput., 3(3):18:1–18:34, 2016. doi:10.1145/3015144.
- [4] András A. Benczúr and David R. Karger. Approximating s-t minimum cuts in Õ(n) time. In Gary L. Miller, editor, Proceedings of the Twenty-Eighth Annual ACM Symposium on the Theory of Computing, Philadelphia, Pennsylvania, USA, May 22-24, 1996, pages 47–55. ACM, 1996. doi:10.1145/237814.237827.
- [5] Yu Chen, Sanjeev Khanna, and Ansh Nagda. Near-linear size hypergraph cut sparsifiers. In Sandy Irani, editor, 61st IEEE Annual Symposium on Foundations of Computer Science, FOCS 2020, Durham, NC, USA, November 16-19, 2020, pages 61–72. IEEE, 2020. doi:10.1109/FOCS46700.2020.00015.
- [6] Sudipto Guha, Andrew McGregor, and David Tench. Vertex and hyperedge connectivity in dynamic graph streams. In Tova Milo and Diego Calvanese, editors, Proceedings of the 34th ACM Symposium on Principles of Database Systems, PODS 2015, Melbourne, Victoria, Australia, May 31 - June 4, 2015, pages 241–247. ACM, 2015. doi:10.1145/2745754.2745763.
- [7] Arun Jambulapati, Yang P. Liu, and Aaron Sidford. Chaining, group leverage score overestimates, and fast spectral hypergraph sparsification. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 196–206. ACM, 2023. doi:10.1145/3564246.3585136.
- [8] Rajesh Jayaram and David P. Woodruff. Data streams with bounded deletions. In Jan Van den Bussche and Marcelo Arenas, editors, Proceedings of the 37th ACM SIGMOD-SIGACT-SIGAI Symposium on Principles of Database Systems, Houston, TX, USA, June 10-15, 2018, pages 341–354. ACM, 2018. doi:10.1145/3196959.3196986.
- [9] Michael Kapralov, Robert Krauthgamer, Jakab Tardos, and Yuichi Yoshida. Spectral hypergraph sparsifiers of nearly linear size. In 62nd IEEE Annual Symposium on Foundations of Computer Science, FOCS 2021, Denver, CO, USA, February 7-10, 2022, pages 1159–1170. IEEE, 2021. doi:10.1109/FOCS52979.2021.00114.
- [10] Michael Kapralov, Robert Krauthgamer, Jakab Tardos, and Yuichi Yoshida. Towards tight bounds for spectral sparsification of hypergraphs. In Samir Khuller and Virginia Vassilevska Williams, editors, STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 598–611. ACM, 2021. doi:10.1145/3406325.3451061.
- [11] Sanjeev Khanna, Aaron Putterman, and Madhu Sudan. Code sparsification and its applications. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 5145–5168. SIAM, 2024.
- [12] Sanjeev Khanna, Aaron L Putterman, and Madhu Sudan. Near-optimal size linear sketches for hypergraph cut sparsifiers. arXiv preprint arXiv:2407.03934, 2024.
- [13] Dmitry Kogan and Robert Krauthgamer. Sketching cuts in graphs and hypergraphs. In Tim Roughgarden, editor, Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS 2015, Rehovot, Israel, January 11-13, 2015, pages 367–376. ACM, 2015. doi:10.1145/2688073.2688093.
- [14] Eugene L. Lawler. Cutsets and partitions of hypergraphs. Networks, 3(3):275–285, 1973. doi:10.1002/NET.3230030306.
- [15] James R. Lee. Spectral hypergraph sparsification via chaining. In Barna Saha and Rocco A. Servedio, editors, Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, Orlando, FL, USA, June 20-23, 2023, pages 207–218. ACM, 2023. doi:10.1145/3564246.3585165.
- [16] Kent Quanrud. Quotient sparsification for submodular functions, pages 5209–5248. SIAM, 2024. doi:10.1137/1.9781611977912.187.
- [17] Tasuku Soma and Yuichi Yoshida. Spectral sparsification of hypergraphs. In Timothy M. Chan, editor, Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 2570–2581. SIAM, 2019. doi:10.1137/1.9781611975482.159.
- [18] Naganand Yadati, Madhav Nimishakavi, Prateek Yadav, Vikram Nitin, Anand Louis, and Partha P. Talukdar. Hypergcn: A new method for training graph convolutional networks on hypergraphs. In Hanna M. Wallach, Hugo Larochelle, Alina Beygelzimer, Florence d’Alché-Buc, Emily B. Fox, and Roman Garnett, editors, Advances in Neural Information Processing Systems 32: Annual Conference on Neural Information Processing Systems 2019, NeurIPS 2019, December 8-14, 2019, Vancouver, BC, Canada, pages 1509–1520, 2019. URL: https://proceedings.neurips.cc/paper/2019/hash/1efa39bcaec6f3900149160693694536-Abstract.html.
- [19] Dengyong Zhou, Jiayuan Huang, and Bernhard Schölkopf. Learning with hypergraphs: Clustering, classification, and embedding. In Bernhard Schölkopf, John C. Platt, and Thomas Hofmann, editors, Advances in Neural Information Processing Systems 19, Proceedings of the Twentieth Annual Conference on Neural Information Processing Systems, Vancouver, British Columbia, Canada, December 4-7, 2006, pages 1601–1608. MIT Press, 2006. URL: https://proceedings.neurips.cc/paper/2006/hash/dff8e9c2ac33381546d96deea9922999-Abstract.html.