
Streaming Maximal Matching with Bounded Deletions

Sanjeev Khanna, School of Engineering and Applied Sciences, University of Pennsylvania, Philadelphia, PA, USA; Christian Konrad, School of Computer Science, University of Bristol, UK; Jacques Dark, Unaffiliated Researcher, London, UK
Abstract

We initiate the study of the Maximal Matching problem in bounded-deletion graph streams. In this setting, a graph G is revealed as an arbitrary sequence of edge insertions and deletions, where the number of insertions is unrestricted but the number of deletions is guaranteed to be at most K, for some given parameter K. The single-pass streaming space complexity of this problem is known to be Θ(n²) when K is unrestricted, where n is the number of vertices of the input graph. In this work, we present new randomized and deterministic algorithms and matching lower bound results that together give a tight understanding (up to poly-log factors) of how the space complexity of Maximal Matching evolves as a function of the parameter K: The randomized space complexity of this problem is Θ~(n√K), while the deterministic space complexity is Θ~(nK). We further show that if we relax the maximal matching requirement to an α-approximation to Maximum Matching, for any constant α>2, then the space complexity for both deterministic and randomized algorithms strikingly changes to Θ~(n+K).

A key conceptual contribution of our work that underlies all our algorithmic results is the introduction of the hierarchical maximal matching data structure, which computes a hierarchy of L maximal matchings on the substream of edge insertions, for an integer L. This deterministic data structure allows recovering a Maximal Matching even in the presence of up to L−1 edge deletions, which immediately yields an optimal deterministic algorithm with space O~(nK). To reduce the space to O~(n√K), we compute only √K levels of our hierarchical matching data structure and utilize a randomized linear sketch, i.e., our matching repair data structure, to repair any damage due to edge deletions. Using our repair data structure, we show that the level that is least affected by deletions can be repaired back to be globally maximal. The repair data structure is computed independently of the hierarchical maximal matching data structure and stores information for vertices at different scales, with a gradually smaller set of vertices storing more and more information about their incident edges. The repair process then makes progress either by rematching a vertex to a previously unmatched vertex, or by strategically matching it to another matched vertex whose current mate is in a better position to find a new mate, in that we have stored more information about its incident edges.

Our lower bound result for randomized algorithms is obtained by establishing a lower bound for a generalization of the well-known Augmented-Index problem in the one-way two-party communication setting that we refer to as Embedded-Augmented-Index, and then showing that an instance of Embedded-Augmented-Index reduces to computing a maximal matching in bounded-deletion streams. To obtain our lower bound for deterministic algorithms, we utilize a compression argument to show that a deterministic algorithm with space o(nK) would yield a scheme to compress a suitable class of graphs below the information-theoretic threshold.

Keywords and phrases:
Streaming Algorithms, Maximal Matching, Maximum Matching, Bounded-Deletion Streams
Category:
Track A: Algorithms, Complexity and Games
Funding:
Sanjeev Khanna: Supported in part by NSF award CCF-2402284 and AFOSR award FA9550-25-1-0107.
Christian Konrad: Supported by EPSRC New Investigator Award EP/V010611/1.
Jacques Dark: Part of the work of J.D. was done while at the University of Warwick, supported by an EMEA Microsoft Research PhD studentship and ERC grant ERC-2014-CoG 647557.
Copyright and License:
© Sanjeev Khanna, Christian Konrad, and Jacques Dark; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Streaming models; Theory of computation → Streaming, sublinear and near linear time algorithms; Theory of computation → Graph algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2502.15330
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

In the streaming model of computation, an algorithm is tasked with computing the solution to a given problem by performing a single pass over the input data while maintaining a memory of sublinear size in the input.

Graph Streams.

Streaming algorithms have been studied for more than 25 years [3], and, since early on, significant attention has been devoted to the study of graph problems in this setting [18, 15]. The dominant model is the insertion-only model, where the input stream consists of a sequence of edges that make up a graph. In 2012, Ahn et al. [1] initiated the study of graph problems in the insertion-deletion setting, where the input stream consists of a sequence of edge insertions and deletions so that previously inserted edges can be deleted again. Subsequent research has revealed that some problems are equally difficult to solve in both the insertion-only and the insertion-deletion models in that algorithms require roughly the same amount of space, e.g., Connectivity [1] and Maximum Independent Set [16], while others are significantly harder to solve in the context of deletions, e.g., Maximal Matching, Maximum Matching, and Minimum Vertex Cover [4, 14].

Handling deletions requires a very different algorithmic toolkit than when deletions are not allowed; in particular, randomization is crucial even for solving seemingly trivial problems. For example, in insertion-deletion streams, deterministic algorithms for outputting a single edge of the input graph require space Ω(n²), where n is the number of vertices in the input graph, while this problem can be solved via ℓ₀-sampling [19, 12] in poly-logarithmic space when randomization is allowed. The predominant algorithmic technique for designing insertion-deletion streaming algorithms is linear sketching, which is perfectly suited to the insertion-deletion model since linear sketches can naturally handle an arbitrary (unbounded) number of deletions (and insertions). It is even known that, under some conditions, linear sketches are universal for the class of insertion-deletion streaming algorithms in that the behavior of any such algorithm can be replicated by one that solely relies on the computation of linear sketches [25, 2] (see also [20], which further investigates said conditions). It is therefore not surprising that most previous work on streaming algorithms either considers the insertion-only setting with no deletions at all, or the insertion-deletion setting that allows for an unbounded/arbitrary number of deletions.

Bounded-deletion Graph Streams.

In this work, we take a more refined view and consider graph problems in bounded-deletion streams. In this setting, the input stream consists of an unrestricted number of edge insertions and at most K edge deletions, for some integer K that is known to the algorithm. We are interested in how the space requirements of streaming algorithms change as a function of K.

Bounded-deletion, and, similarly, bounded-length, streams have been considered at least since the work of Cormode et al. [13] in 2017 (see, e.g., [13, 9, 20, 28, 1, 26, 7, 17, 27, 11]), who exploited a bound on the input stream length for the design of a sampling-based algorithm for estimating the size of a largest matching in bounded-arboricity graphs. Besides this work, bounded-deletion graph streams appear also in Kallaugher and Price [20], who gave a graph problem for which linear sketching is exponentially harder than bounded-deletion streaming, and Chou et al. [11], who studied CSPs in this context, which in some cases can also be represented as graph problems.

The bounded-deletion setting is well-motivated from a practical perspective. In all growing data sets, deletions are naturally less frequent than insertions since otherwise the data set would not grow in size. Regarding massive graphs, in social networks, new connections/friendships between entities are established much more often than deleted, and significantly more new hyperlinks are introduced into the web graph than deleted.

The Maximal Matching Problem.

In this work, we study the Maximal Matching problem through the lens of bounded-deletion streams. A matching in a graph is a subset of non-adjacent edges, and a maximal matching is one that is inclusion-wise maximal, i.e., it cannot be extended by adding an edge outside the matching to it. Specifically, our goal is to understand the function f(n,K) that describes the streaming space complexity of computing a maximal matching in an n-vertex graph as a function of the bound K on the number of deletions. We will, however, also consider approximation algorithms to Maximum Matching, where the goal is to compute a matching of size at least α|M|, with approximation factor 0<α≤1 and M being a largest matching. It is well-known that a maximal matching constitutes a 1/2-approximation to Maximum Matching.

The single-pass streaming space complexity of Maximal Matching is well-understood in both insertion-only and insertion-deletion streams with unrestricted deletions. In insertion-only streams, the simple Greedy matching algorithm, which greedily inserts every incoming edge into an initially empty matching if possible and constitutes the main building block of most streaming algorithms for matchings (e.g. [15, 22, 8, 5, 23, 24]), yields an O(n log n) space algorithm for Maximal Matching, and this is also tight. In insertion-deletion streams, Dark and Konrad [14] showed an Ω(n²) space lower bound for computing a Maximal Matching, strengthening a previous lower bound of n^{2−o(1)} [4] (see also [21]). In other words, there is essentially no better algorithm than storing the entire graph. The lower bound of Dark and Konrad requires Θ(n²) deletions in the input stream. When the number of deletions is restricted to be K ∈ [n²], the Dark and Konrad lower bound can be restated as showing that, for any O(1)-approximation to Maximum Matching, Ω(K) space is necessary.

Summarizing the previous discussion, we know that f(n,0)=Θ~(n) and f(n,n²)=Θ(n²), but for arbitrary K ∈ [n²], known results only tell us that f(n,K)=Ω(n+K). On the algorithmic side, even when K is just O(n), no results are known for computing a maximal matching that utilize o(n²) space (i.e. do better than storing the entire graph). Our current state of knowledge, for instance, gives us the inequalities Ω(n) ≤ f(n,n) ≤ O(n²), leaving a huge gap between the upper and lower bounds. This raises a natural question: Does the required space f(n,K) abruptly transition to Ω(n²) when K is just O(n), or is there an algorithm that achieves a space complexity that smoothly interpolates between the O~(n) space for insertion-only streams and Ω(n²) space for unrestricted-deletion streams? We show that, indeed, the space complexity smoothly interpolates between the two extremes.

1.1 Our Results

We resolve the space complexity of bounded-deletion streaming algorithms for Maximal Matching and show that f(n,K)=Θ~(n√K). We obtain our result by designing a new space-efficient algorithm for bounded-deletion streams, as well as by establishing a stronger new lower bound.

Theorem 1.

There is a single-pass randomized O~(n√K) space streaming algorithm that, with high probability, outputs a maximal matching in any dynamic graph stream with at most K deletions.

Theorem 2.

Any possibly randomized single-pass streaming algorithm that outputs a maximal matching with probability at least 2/3 in dynamic graph streams with at most K deletions requires Ω(n√K) space.

It is worth noting that Theorem 1 implies that, up to poly-log factors, the Θ(n²) deletions used in the lower bound by Dark and Konrad are necessary in order to obtain an Ω(n²) space lower bound for Maximal Matching. Together these results show that the streaming space complexity of maximal matching increases smoothly as a function of the number of deletions, and there is no abrupt phase transition. On one extreme, when the number of deletions is O(1), that is, when the deletions change the graph negligibly, the space complexity of Θ~(n) is essentially the same as the space needed to store a maximal matching in an insertion-only stream. But then as the number of deletions reaches Ω(n²), that is, when deletions can alter almost the entire graph, the space complexity rises to Θ~(n²), essentially the same as storing the entire graph.

We also observe that our work is the first that establishes a complete characterization of the space complexity of streaming algorithms for a graph problem as a function of the number of edge deletions K.

We show that randomization is crucial to achieving the space complexity given in Theorem 1, as the deterministic space complexity of Maximal Matching is Θ~(nK).

Theorem 3.

There is a deterministic single-pass streaming algorithm that uses O~(nK) space and outputs a maximal matching in any dynamic graph stream with at most K deletions. Moreover, any deterministic algorithm for Maximal Matching requires Ω(nK) space.

Finally, we show that, unlike in unrestricted dynamic graph streams, in the bounded-deletion model, the space complexity of Maximal Matching behaves fundamentally differently from the space complexity of computing an O(1)-approximation to Maximum Matching. Let g_c(n,K) be the streaming space complexity of computing a c-approximation to Maximum Matching in an n-vertex graph when the number of deletions is bounded by K. Then we know that for any constant c>2, g_c(n,0)=Θ~(n)=f(n,0), and that g_c(n,n²)=Θ(n²)=f(n,n²). Furthermore, for arbitrary K ∈ [n²], we know that g_c(n,K)=Ω(n+K). We show that, in sharp contrast to the Maximal Matching problem, there is an algorithm that achieves the space complexity of O~(n+K), that is, g_c(n,K)=Θ~(n+K), and this is achieved by a deterministic algorithm.

Theorem 4.

For any ϵ>0, there is a deterministic single-pass streaming algorithm that uses O((n+K/ϵ)·log n) space and outputs a (2+ϵ)-approximation to Maximum Matching in any dynamic graph stream with at most K deletions.

1.2 Techniques

Bounded-deletions Maximal Matching Algorithm.

We start by motivating and explaining the main ideas underlying our main algorithmic result, namely, Theorem 1. We first observe that in the absence of any deletions, one can simply store a maximal matching to solve the problem. On the other hand, when deletions are unbounded, we can simply store Θ~(n²) ℓ₀-samplers to solve the problem by recreating the surviving graph. Our algorithmic approach below is based on a new data structure, called hierarchical maximal matching, which in conjunction with a hierarchical approach for storing ℓ₀-samplers allows us to design an algorithm whose space complexity smoothly interpolates between these two extremes.

Let I denote the edges inserted during the stream, and let D denote the edges deleted in the stream. Observe that I and D may be multisets since an edge can be inserted, subsequently deleted, then reinserted, and so on. Together, these sets define the graph G=(V,E) revealed by the dynamic stream, where E = I ∖ D.

Our algorithm has two phases where in the first phase, we build a hierarchical collection of maximal matchings using only edges in I along with a data structure 𝐃 for matching repair to account for deletions. In the second phase, we recover a maximal matching by starting with the least damaged maximal matching in our hierarchical collection, and repairing it using the edges stored in the data structure 𝐃. We now explain this approach in more detail.

Suppose that we create a sequence of hierarchical maximal matchings, say, M₁, M₂, …, M_L, using only the edges inserted in the stream (that is, the multiset I), ignoring all deletions. Specifically, we start by initializing M₁, M₂, …, M_L to be ∅. Now, whenever an edge, say (x,y), is inserted, we first try adding it to matching M₁. If one of x or y is already matched, then we try adding this edge (x,y) to M₂, and continue in this manner. If we are unsuccessful all the way up to M_L, then we simply discard this edge. It is clear that this hierarchical collection can be implemented as a streaming algorithm using only O~(nL) space, since each matching can have only O(n) edges. Now suppose there is an index ℓ ∈ [L] such that none of the edges in M_ℓ are deleted, that is, M_ℓ ∩ D = ∅. Then we can recover a maximal matching M in the graph G=(V,E) as follows. We initialize M = M_ℓ, and then greedily add edges in (M₁ ∪ M₂ ∪ ⋯ ∪ M_{ℓ−1}) ∖ D to M so that M is a maximal matching w.r.t. edges in (M₁ ∪ M₂ ∪ ⋯ ∪ M_ℓ) ∖ D. We now claim that M must be maximal in G=(V,E). Suppose not; then there is an edge (u,v) ∈ E such that neither u nor v is matched in M. But then the edge (u,v) must not be present in any of (M₁ ∪ M₂ ∪ ⋯ ∪ M_ℓ) ∖ D. This means that when the edge (u,v) arrived in the stream, at least one of u or v must have been matched in M_ℓ. Since M_ℓ ∩ D = ∅, that endpoint remains matched in M ⊇ M_ℓ, a contradiction; hence it cannot be that both u and v are unmatched in M. Thus M is maximal with respect to E = I ∖ D.
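To make the first phase concrete, the following is a minimal sketch of the hierarchical greedy construction (illustrative only; the paper's pseudocode is given as Algorithm 1 in Section 3). Here `insertions` is the multiset I presented in stream order.

```python
# A minimal sketch (not Algorithm 1 itself) of the first phase: insert each
# arriving edge into the first of L greedy matchings where both endpoints
# are still free, and discard it if all L levels reject it.

def hierarchical_greedy(insertions, L):
    matchings = [set() for _ in range(L)]   # M_1, ..., M_L
    matched = [set() for _ in range(L)]     # vertices matched at each level
    for (x, y) in insertions:
        for lvl in range(L):
            if x not in matched[lvl] and y not in matched[lvl]:
                matchings[lvl].add((x, y))
                matched[lvl].update((x, y))
                break                       # each copy of an edge is stored at most once
    return matchings

# (1,3) is rejected by M_1 (vertex 1 is taken) and lands in M_2:
M = hierarchical_greedy([(1, 2), (3, 4), (1, 3)], L=3)
assert (1, 3) in M[1]
```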

Of course, the only way to ensure that there exists some index ℓ ∈ [L] such that none of the edges in M_ℓ are deleted is to set L=Ω(K), and doing so immediately yields our deterministic algorithm that uses space O~(nK). To obtain an O~(n√K) space algorithm, we instead set L=√K, and observe that this ensures that there exists an index ℓ ∈ [L] such that at most √K edges in M_ℓ are deleted. The second phase of the algorithm now starts on the task of repairing M_ℓ to be a maximal matching. Suppose an edge (u,w) is deleted from M_ℓ. We would like to see if the edges in E ∖ (M₁ ∪ M₂ ∪ ⋯ ∪ M_ℓ) can match u and/or w again. Let us focus on vertex u. Our data structure 𝐃 will store Θ(log³ n) ℓ₀-samplers for each vertex x ∈ V, with each sampler sampling uniformly at random from the edges incident on x. We open these ℓ₀-samplers for u, and if we find an edge (u,v) such that the vertex v is unmatched, we add it to M. Otherwise, there are two possibilities: (a) the degree of u is O(log² n) and we have recovered all incident edges on u, or (b) the degree of u is Ω(log² n), and we recover at least Ω(log² n) distinct neighbors of u, all of whom are matched in M. In case (a), we are immediately in good shape because we have recovered all edges incident on u, and we can use them to match u at the end if one of the neighbors of vertex u remains unmatched. The more interesting situation is case (b), where, on the one hand, the recovered information is not sufficient to repair u. On the other hand, we cannot rule out the possibility that u could have been matched, if only we had allocated more space in 𝐃 to recover additional edges incident on u. We next describe how we eliminate this uncertainty.

We create a collection V₁, V₂, …, V_R of subsets of V, where the set V_i is a random subset of V of size n/log^i n, and R=Θ(log n/log log n). For each vertex w ∈ V_i, our data structure 𝐃 stores Θ(log^{i+3} n) ℓ₀-samplers. It is worth noting that while we are storing more and more ℓ₀-samplers per vertex as i increases, the total space used by vertices in V_i remains O~(n) as the size of V_i shrinks proportionately. We now return to repairing vertex u that ended in case (b) above: we recovered a set N(u) of Ω(log² n) neighbors of u, such that each vertex in N(u) is currently matched. Since every vertex is contained in V₁ with probability 1/log n, it follows that, with high probability, there must be a vertex v₁ ∈ N(u) such that v₁'s mate in M, say u₁, belongs to V₁. We now add the edge (u,v₁) to M, thereby matching vertex u, but now creating a new unmatched vertex, namely, u₁. It may seem as if we have not made any progress and the vertices u and u₁ have simply traded places. But in fact we have made some progress. Since u₁ ∈ V₁, compared to vertex u, our data structure 𝐃 stores Θ(log n) times more information about edges incident on u₁, better positioning us to find a mate for u₁. We now repeat the above process for vertex u₁, either successfully matching it to an unmatched vertex, or recovering all incident edges on u₁, or finding a matched vertex v₂ ∈ N(u₁) such that v₂'s mate in M, say u₂, belongs to V₂. The repair successfully terminates if either of the first two events occurs. Otherwise, we now add the edge (u₁,v₂) to M, thereby matching vertex u₁, but now creating a new unmatched vertex in M, namely, u₂. We then continue this process from u₂. The repair process is guaranteed to successfully terminate when we reach V_R since, for each vertex in the final set V_R, our data structure 𝐃 stores Θ(n log n) ℓ₀-samplers, enough to recover their entire neighborhoods.
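The following schematic sketch summarizes this repair walk. All names are ours for illustration: `samples(v, i)` stands for the neighbors of v recovered from its ℓ₀-samplers at level i, `level[v]` is the largest i with v ∈ V_i, `mate` is the matching under repair, and error handling for the low-probability failure events is omitted.

```python
def repair(u, mate, samples, level, degree, R):
    """Walk an unmatched vertex u up the levels V_0, V_1, ... until it is fixed."""
    i = 0
    while True:
        nbrs = samples(u, i)                       # neighbors seen by u's samplers
        if len(nbrs) == degree[u]:                 # case (a): all edges recovered;
            return ("all-edges-known", u)          # u is handled after all repairs finish
        free = [v for v in nbrs if v not in mate]
        if free:                                   # rematch u to a free vertex
            v = free[0]
            mate[u], mate[v] = v, u
            return ("rematched", u)
        for v in nbrs:                             # case (b): steal a neighbor v whose
            w = mate[v]                            # mate w lives one level higher, where
            if level[w] >= i + 1:                  # log(n) times more samplers are stored
                del mate[w]
                mate[u], mate[v] = v, u
                u = w                              # continue the repair from w
                break
        i += 1                                     # at i = R, full recovery is guaranteed
```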

To summarize, using O~(n) space, the data structure 𝐃 provides a mechanism to repair an unmatched vertex u in M_ℓ due to edge deletions. Since M_ℓ can have up to √K edge deletions, to repair all of them, the data structure 𝐃 independently replicates the above strategy O(√K) times. The overall space used by the algorithm is thus O~(n√K) for storing the edges in the hierarchical matching, and another O~(n√K) space for the repair data structure 𝐃, giving us the desired space bound of O~(n√K).

Finally, it is worth underlining that the computations of the hierarchical matching and the matching repair data structures are independent. Furthermore, the matching repair data structure is computed by a linear sketching algorithm, as is usual in the insertion-deletion setting, while the hierarchical matching data structure is computed by a (non-linear) deterministic sketch, i.e., a Greedy algorithm, as is typical in the insertion-only setting.

Space Lower Bound for Bounded-deletions Maximal Matching.

We now explain the ideas behind our main lower bound result, established in Theorem 2. Our lower bound is best understood as an extension of the tight Ω(n²/α³) space lower bound by Dark and Konrad [14] for one-pass insertion-deletion streaming algorithms that compute an α-approximation to Maximum Matching ([6] gives an algorithm that achieves this space bound up to constant factors; see also [21, 4, 10]). We will denote this lower bound as the DK20 lower bound in the following. Due to the well-known fact that any maximal matching is at least half the size of a largest matching, DK20 immediately yields an Ω(n²) space lower bound for Maximal Matching.

In the following, we will treat the DK20 lower bound as if it was established for Maximal Matching. DK20 is proved in the one-way two-party communication setting. In this setting, the first party, denoted Alice, holds the edge set E of a bipartite graph G=(A,B,E), and the second party, denoted Bob, holds edge deletions D ⊆ E. Alice sends a message to Bob, and, upon receipt, Bob is required to output a maximal matching in the graph G′=(A,B,E ∖ D), i.e., Alice's input graph with Bob's deletions applied. Since Alice and Bob can simulate the execution of an insertion-deletion streaming algorithm for Maximal Matching on their input, by forwarding the memory state of the algorithm from Alice to Bob in the form of a message, a lower bound on the size of the message used in the communication setting also constitutes a lower bound on the memory required by such algorithms.
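The simulation argument is standard; a schematic sketch with a hypothetical streaming-algorithm interface (`process`, a readable/writable `state`, and `output`; none of these names come from the paper) looks as follows:

```python
def one_way_protocol(algorithm, alice_edges, bob_deletions):
    # Alice's side: run the streaming algorithm on her edge insertions.
    for e in alice_edges:
        algorithm.process(("insert", e))
    message = algorithm.state          # the message is the full memory state,
                                       # so |message| = space usage of the algorithm
    # Bob's side: resume the same run from the received state.
    algorithm.state = message
    for e in bob_deletions:
        algorithm.process(("delete", e))
    return algorithm.output()          # a maximal matching of (A, B, E \ D)
```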

In DK20, Alice holds a bipartite random graph G=(A,B,E) with |A|=|B|=n so that every edge is inserted into the graph with probability 1/2. Bob holds subsets A′ ⊆ A and B′ ⊆ B, with |A′|=|B′|=(4/5)·n, and inserts deletions into the vertex-induced subgraph G[A′ ∪ B′] such that, after the deletions are applied, the remaining edges in G[A′ ∪ B′] form a large matching M. It is proved that recovering a constant fraction of the edges of M – a task that a protocol for Maximal Matching necessarily must achieve – requires space Ω(n²). On a technical level, DK20 give a sophisticated reduction to the well-known one-way two-party Augmented-Index communication problem. In Augmented-Index, Alice holds a bitstring X ∈ {0,1}ⁿ, and Bob holds an index J ∈ [n] as well as the suffix X[J+1,n]. Alice sends a message to Bob, who is tasked with reporting the bit X[J]. It is well-known that solving Augmented-Index with probability bounded away from 1/2 requires a message of size Ω(n). In DK20, the reduction is such that the bits X of an Augmented-Index instance correspond to edges in the random graph with the property that, with constant probability, the bit X[J] corresponds to an edge in the matching M. This construction is such that the mapping between X[J] and the edges in M is random and, most importantly, unknown to the underlying protocol. Hence, a protocol that reports a constant fraction of the edges of M will report the edge that corresponds to bit X[J] with constant probability, which then solves Augmented-Index.

Bob holds Θ(n²) deletions in the DK20 construction. In order to decrease the number of deletions to K, we proceed as follows. In our lower bound construction, Alice holds a graph G = G₁ ∪ G₂ ∪ ⋯ ∪ G_s, which is a vertex-disjoint union of s graphs (G_i)_{1≤i≤s}. Each graph G_i has √K vertices and constitutes a scaled-down version of Alice's input graph in the DK20 construction, namely a bipartite random graph with edge probability 1/2. Bob holds an index I ∈ {1,…,s}, which identifies one of the graphs, G_I. Furthermore, Bob holds the counterpart to Bob's input in the DK20 construction for graph G_I, i.e., edge deletions that apply to G_I. Bob leaves all the other graphs G_j, with j≠I, untouched.

We establish a direct sum argument to show that this problem requires a large message size. The key insight is that, since Alice does not know the index I, the message sent from Alice to Bob must contain sufficient information so that Bob can output a maximal matching no matter from which graph G_i Bob has deleted some edges. In other words, Alice and Bob must be able to solve s independent copies of the DK20 lower bound instance. Since each graph G_i has √K vertices, DK20 implies that Ω(K) bits are required for solving one copy. Hence, overall, space Ω(K·s) is required. Last, to make sure that the final graph has n vertices, we need to set s=Θ(n/√K), which delivers the claimed Ω(n√K) space lower bound.

To implement this approach, we first define a generalization of Augmented-Index denoted Embedded-Augmented-Index, where Alice holds s binary strings X₁,…,X_s ∈ {0,1}ᵗ, Bob holds two indices I ∈ [s] and J ∈ [t] as well as the suffix X_I[J+1,t], and the objective for Bob is to output the bit X_I[J]. Using by now standard information-theoretic arguments, we show that this problem requires Ω(t·s) bits of communication. Next, following the proof outline of DK20, we show that a bounded-deletion streaming algorithm can be used to solve Embedded-Augmented-Index, which completes the proof.

Lower Bound for Deterministic Algorithms for Bounded-Deletion Maximal Matching.

Our lower bound for deterministic algorithms works with a family of bipartite graphs G=(A,B,E) that have the property that, even when any K edges D ⊆ E are deleted from G, still every maximal matching in G ∖ D matches all A-vertices. The family of graphs 𝒢_K(n) is obtained as follows. Given the deletion budget K, define ℋ_K to be the family of bipartite graphs H=(A,B,E) with |A|=K, |B|=3K, and the degree of every A-vertex equal to 2K. Then, 𝒢_K(n) is the family of graphs consisting of the disjoint union of any n/(4K) graphs from ℋ_K.

Similar to our lower bound for randomized algorithms, we prove our lower bound in the one-way two-party communication setting. Alice holds a graph G ∈ 𝒢_K(n) as input, and Bob holds up to K edge deletions D ⊆ E(G). We now claim that a protocol π for Maximal Matching that outputs a maximal matching in the graph G ∖ D allows Bob to learn K+1 incident edges on every A-vertex. Once this claim is established, we finalize the proof by observing that, if the message from Alice to Bob was of size o(nK), then π also constitutes an encoding of these overall n(K+1)/4 edges using only o(nK) bits. This in turn can be used to encode the graph class 𝒢_K(n) with fewer bits than dictated by the information-theoretic threshold, a contradiction.

To see that Bob can learn K+1 edges incident on every A-vertex in input graph G ∈ 𝒢_K(n), Bob proceeds as follows. Let a ∈ A be any vertex and π the message received from Alice. Bob first completes the protocol without introducing any deletions, recovering a maximal matching M₁ that, as discussed above, necessarily matches the vertex a. Let e₁ be the edge incident on a in M₁. Next, Bob completes another run of the protocol starting with message π and feeding the edge e₁ as an edge deletion into the protocol. In doing so, Bob obtains another maximal matching M₂ that necessarily matches a, thereby recovering a second edge e₂ incident on a. Repeating this process, Bob learns more and more edges incident on a, feeding these edges as deletions into the protocol in the next simulation. Overall, Bob can repeat this process K times, thereby exhausting the deletion budget, which allows him to learn K+1 edges incident on a. Finally, this process can be repeated for every a ∈ A, which completes the argument.
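A sketch of Bob's extraction loop, with a hypothetical interface `complete(message, deletions)` that finishes Bob's side of the protocol π from Alice's message under a given deletion stream:

```python
def learn_incident_edges(complete, message, a, K):
    """Recover K+1 distinct edges incident on the A-vertex a, using K deletions."""
    learned, deletions = [], []
    for _ in range(K + 1):
        matching = complete(message, deletions)      # re-run Bob's side only
        edge = next(e for e in matching if a in e)   # a is always matched (see above)
        learned.append(edge)
        deletions.append(edge)                       # delete it in the next run
    return learned
```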

Bounded-deletion (𝟐+𝒐(𝟏))-Approximate Matching Algorithm.

We now briefly describe the main idea behind the algorithmic result stated in Theorem 4. Once again, similar to Theorem 1, our algorithm utilizes the hierarchical matching data structure, but with some crucial modifications.

Specifically, this time we implement a budgeted version of the hierarchical matching data structure on the insertion part of the stream, where instead of having a fixed number of levels, we now have an overall budget B on the total number of edges stored in the hierarchical matching data structure. The data structure maintains a lexicographic maximality property, whereby an edge insertion that would cause the number of stored edges to exceed the budget B triggers the removal of an edge from the current last level of the data structure. The number of levels in the data structure forms a bitonic sequence over time, in that it may keep increasing as long as the space budget B has not been reached, but once that happens, the number of levels steadily decreases, as enforcing the lexicographic property can trigger many deletions from the last levels of the hierarchical matching as the remainder of the insertion stream is processed.

An immediate consequence of this lexicographic property is that if upon termination we have a sequence of hierarchical matchings, say, M₁, M₂, …, M_L, then M₁, M₂, …, M_{L−1} must necessarily constitute an (L−1)-level hierarchical maximal matching data structure. We also separately store all deletions that appear in the stream. Now, by setting our budget B to be Θ~(n+K), we can show that when we apply deletions, one of the (L−1) maximal matchings, say M_i, necessarily loses at most an o(1)-fraction of its edges. We can then extend M_i to be an “almost” maximal matching M in the overall stream, by greedily inserting surviving edges in M₁, M₂, …, M_{i−1} into it. Here the “almost” refers to the fact that, unlike the maximal matching algorithm of Theorem 1, we do not need to repair the missing o(1)-fraction of edges in M_i to ensure maximality. The final matching M is easily shown to be a (2+o(1))-approximate matching, giving us the desired result.
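A simplified sketch of the budgeted data structure (ours, not the paper's pseudocode; in particular, the eviction rule below is a crude stand-in for the lexicographic maintenance described above):

```python
def budgeted_hierarchical_greedy(insertions, B):
    levels = []                                     # list of (matching, matched-vertices)
    stored = 0
    for (x, y) in insertions:
        for matching, matched in levels:
            if x not in matched and y not in matched:
                matching.add((x, y)); matched.update((x, y)); stored += 1
                break
        else:                                       # rejected everywhere: open a new level
            levels.append(({(x, y)}, {x, y})); stored += 1
        if stored > B:                              # over budget: evict from the last level
            matching, matched = levels[-1]
            evicted = matching.pop()
            matched.difference_update(evicted); stored -= 1
            if not matching:
                levels.pop()                        # an emptied last level disappears
    return [m for m, _ in levels]
```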

1.3 Outline

In Section 2, we give notation and the definition of bounded-deletion graph streams. We then present our bounded-deletion algorithm for Maximal Matching in Section 3, and give our matching lower bound in Section 4. Due to space restrictions, we postpone the presentation of our results on deterministic algorithms to the full version of this paper. We conclude with some directions for future research in Section 5.

2 Preliminaries

Notation.

For an integer k, we write [k] := {1,2,…,k}. For a graph G=(V,E), we denote by N(v) the neighborhood of a vertex v ∈ V.

Bounded-deletion Graph Streams.

Given an integer K, in the bounded-deletion graph stream setting, the input stream consists of a sequence of edge insertions and at most K edge deletions, which together make up the edge set of a graph G=(V,E). An edge deletion can only occur if the edge was previously inserted. Observe that it is possible that an edge is introduced, subsequently deleted, and then introduced again, and so on. We assume that every prefix of the input stream describes a simple graph, i.e., a previously inserted edge can only be inserted again if it was deleted in the meantime. Observe, however, that the substream of edge insertions I may constitute a multiset, and, when regarded as a graph, I yields a multigraph. The same holds for the substream of edge deletions D. In the following, we will write E = I ∖ D to denote the edges of the final graph described by the input stream. We will see that our algorithm considers the substream of edge insertions and deals with edge multiplicities in a rather natural way.
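For concreteness, the multiset semantics of E = I ∖ D can be expressed with counted multiplicities, e.g.:

```python
from collections import Counter

# An edge inserted, deleted, and re-inserted survives with multiplicity 1.
I = Counter({(1, 2): 2, (3, 4): 1})   # insertion substream (a multiset)
D = Counter({(1, 2): 1})              # deletion substream
E = I - D                             # multiset difference keeps positive counts
assert E == Counter({(1, 2): 1, (3, 4): 1})
```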

We also assume that the parameter K is known in advance. This is a necessary assumption since, if K was not known, any algorithm that uses space o(n²) could not produce a maximal matching in the case K=Θ(n²), as demonstrated by the Dark and Konrad lower bound [14]. As is standard, for simplicity, we also assume that algorithms know the vertex set V of the graph described by the input stream in advance.

Our algorithm makes use of ℓ₀-sampling. Given an insertion-deletion stream that describes a vector on m coordinates, ℓ₀-sampling refers to the process of sampling a uniform random non-zero coordinate of this vector. We will use the ℓ₀-samplers of Jowhari et al. [19]:

Theorem 5 (Jowhari et al. [19]).

There exists an ℓ₀-sampler that uses O(log² m · log(1/δ)) bits of space and outputs a uniform random non-zero coordinate i ∈ [m] with probability at least 1−δ.

Observe that an edge stream of an n-vertex graph describes a Θ(n²)-dimensional vector. ℓ₀-sampling on a graph stream therefore produces a uniform random edge of the input graph with space O(log² n · log(1/δ)). We will use ℓ₀-sampling to sample uniform random edges incident to a given vertex v ∈ V by feeding only edge insertions and deletions incident on v into the sampler.
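For intuition, the core primitive underlying ℓ₀-samplers is 1-sparse recovery. The toy version below (ours, not the construction of [19]) is specialized to 0/1 edge-indicator vectors: after random subsampling, if exactly one edge survives, two counters suffice to identify it.

```python
class OneSparseRecovery:
    """Toy 1-sparse recovery over a 0/1 vector of edge indicators."""
    def __init__(self):
        self.total = 0         # number of surviving edges in the substream
        self.weighted = 0      # sum of edge indices, weighted by the updates
    def update(self, index, delta):     # delta = +1 for insert, -1 for delete
        self.total += delta
        self.weighted += delta * index
    def recover(self):
        if self.total == 1:             # exactly one edge survives ...
            return self.weighted        # ... and this is its index
        return None                     # zero or >= 2 survivors: no output

s = OneSparseRecovery()
s.update(7, +1); s.update(9, +1); s.update(9, -1)
assert s.recover() == 7
```

An actual ℓ₀-sampler combines such recoveries across Θ(log) geometrically finer subsampling levels; for per-vertex sampling, only the updates incident on v are fed into v's samplers.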

3 An O~(n√K) Space Algorithm for Maximal Matching

3.1 Description of the Algorithm

We start by briefly summarizing the main steps of our algorithm. We refer the reader to the detailed overview in Section 1.2 for further intuition underlying the steps of our algorithm.

Let G=(V,E) be the underlying graph that is revealed as a sequence I of edge insertions and a sequence D of deletions, that is, E = I ∖ D. In the first phase, our algorithm builds a sequence of hierarchical matchings, say, M₁, M₂, …, M_L, using only I, the edges inserted in the stream, ignoring all deletions, where L=√K. Since there are at most K deletions, there exists an index ℓ ∈ [L] such that at most √K edges in M_ℓ are deleted in the set D. The second phase of the algorithm is then focused on restoring the maximality of the matching M_ℓ after the deletions are applied.
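The existence of such a lightly-damaged level is a pigeonhole argument; in LaTeX:

```latex
% Pigeonhole over the L = \sqrt{K} levels: each of the at most K deletions
% removes an edge copy from at most one matching, hence
\min_{\ell \in [L]} |M_\ell \cap D|
  \le \frac{1}{L}\sum_{\ell=1}^{L} |M_\ell \cap D|
  \le \frac{K}{L} = \sqrt{K}.
```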

This is achieved as follows. We create a collection V₀, V₁, V₂, …, V_R of subsets of V, where the set V_i is a random subset of V of size n/log^i n, and R=Θ(log n/log log n). For each vertex w ∈ V_i, our data structure 𝐃 stores Θ(log^{i+3} n) ℓ₀-samplers, sampling uniformly at random from the edges incident on w. Now suppose vertex u used to be matched in M_ℓ via an edge (u,v), and this edge is deleted by D; then the repair process starts by opening the ℓ₀-samplers of u. This results in one of the following three events with high probability:

(i) either u can be matched to a previously unmatched vertex, or

(ii) u recovers all its incident edges in G, or

(iii) u can be matched to an already matched vertex v₁ whose current partner u₁ is in set V₁.

Events (i) and (ii) are clearly (successful) termination events for u's repair, as we have either re-matched u, or we have all incident edges on u available for us to try to rematch u – we, however, only do this after the repair process of all vertices has terminated. In case (iii), the repair process now continues from u₁, exploiting the fact that our data structure stores more ℓ₀-samplers for each vertex in V₁. We will once again encounter one of the above three events, and in case of the third event, we will once again repair u₁ by unmatching one of its neighbors, and continue the repair from a vertex, say u₂ ∈ V₂. The process is guaranteed to terminate successfully when it reaches a vertex in V_R since we store enough ℓ₀-samplers to recover the entire neighborhood of each vertex in V_R. Finally, the algorithm replicates the data structure Θ(√K) times to ensure the repair of every vertex in M_ℓ that is affected by deletions in D.

Our main algorithm is depicted as Algorithm 2, and it uses the Hierarchical-Greedy() algorithm, which is stated as Algorithm 1, as a subroutine.

Algorithm 1 Hierarchical-Greedy(L).

We now give some more implementation details of our main algorithm that are not covered by the pseudo-code.

In Line 9 in the listing of Algorithm 2, we apply the deletions D to the matchings of the hierarchical matching. While this is straightforward if the substream I of edge insertions does not contain any multiedges, some care needs to be taken if I does contain multiedges. We apply the deletions in order as they arrived in the stream and bottom-up, i.e., we first apply deletions to M1, then M2, and so on. This is to ensure that deletions are matched to the relevant insertions.

In Line 18 of our main algorithm, Algorithm 2, we evaluate whether the entire neighborhood of a vertex is contained in a set of ℓ₀-samplers. This condition can, for example, be checked by comparing the number of distinct incident edges produced by the ℓ₀-samplers to the degree of the vertex. Observe that, in insertion-deletion streams, the degree of a vertex can easily be computed by using a counter in O(log n) space.
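A sketch of this test (illustrative names): exact degrees are maintained by counters updated on every insertion and deletion, and compared against the distinct edges produced by the samplers.

```python
from collections import defaultdict

degree = defaultdict(int)                 # one O(log n)-bit counter per vertex

def process_update(u, v, delta):          # delta = +1 for insert, -1 for delete
    degree[u] += delta
    degree[v] += delta

def neighborhood_fully_recovered(v, sampled_edges):
    """True iff v's samplers produced every edge currently incident on v."""
    return len({e for e in sampled_edges if v in e}) == degree[v]
```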

Algorithm 2 Bounded-deletion Streaming Algorithm for Maximal Matching.

3.2 Analysis

Before analyzing our algorithm, we point out that the description of the algorithm uses large constants C and C′ with C′ > C that are appropriately chosen. We will see that the analysis only imposes weak constraints on C and C′ and that such constants are easy to pick. We also assume that all our ℓ₀-samplers succeed. Since we run the ℓ₀-samplers with error parameter δ = 1/n⁴, and there are fewer than n² such samplers, by the union bound, this is a high probability event.

The first key ingredient of our analysis is the fact that, in the absence of deletions, every matching M_ℓ, 1 ≤ ℓ ≤ L, produced by Hierarchical-Greedy() can easily be extended to a globally maximal matching by greedily adding edges of the matchings M₁,…,M_{ℓ−1} to it if possible. The next lemma captures this idea and combines it with a key insight that allows us to fix edge deletions: We do not need to immediately rematch vertices incident to a deleted edge as long as we know their entire neighborhoods. This lemma is key for establishing our algorithm's correctness.

Lemma 6.

Let G=(V,E) be a graph where E = I ∖ D such that I is a multiset of edge insertions and D ⊆ I is a multiset of edge deletions. Let (M₁, M₂, …, M_L) be the output of Hierarchical-Greedy(L), i.e., a sequence of hierarchical matchings constructed using the edges in I, processed in an arbitrary order. Let F ⊆ E be a subset of edges. Then, for any ℓ ∈ [L], let M be any matching such that, for every vertex v matched in M_ℓ, either (i) v is also matched in M, or (ii) the entire neighborhood of v is known, i.e., N(v) ⊆ F. Then M can be extended to become a maximal matching of the graph G by simply greedily adding to it edges in F ∪ ((M₁ ∪ M₂ ∪ ⋯ ∪ M_{ℓ−1}) ∖ D) if possible.

Proof.

We prove this lemma by contradiction. To this end, assume that there exists an edge (u,v) ∈ E ∖ M such that M ∪ {(u,v)} is a matching.

We first argue by contradiction that V(M_ℓ) ∩ {u,v} = ∅. Indeed, suppose that this was not the case and, w.l.o.g., assume that u ∈ V(M_ℓ). By the statement of the lemma, either u is matched in M or the entire neighborhood of u is known, i.e., N(u) ⊆ F, which implies that (u,v) ∈ F. In the first case, we immediately arrive at a contradiction since the edge (u,v) could not be added to M, as u is already matched. In the second case, observe that we attempted to greedily add the edges of F to M; in particular, we already attempted to add the edge (u,v) to M, which also yields a contradiction.

Assume therefore that V(M_ℓ) ∩ {u,v} = ∅. This, however, implies that, when the edge (u,v) arrived in the stream, it was not included in M_ℓ despite both endpoints being free in M_ℓ. Suppose the edge (u,v) occurs β times in the stream I. It must then be the case that all β copies of this edge in I were added to one or more of the matchings M₁,…,M_{ℓ−1}. On the other hand, since the edge (u,v) is contained in E, the number of times (u,v) is deleted in D is strictly less than β. So at least one copy of the edge (u,v) in M₁,…,M_{ℓ−1} remains undeleted after we remove the edges in D. Since we attempted to add the surviving edges of M₁,…,M_{ℓ−1} to M, we again arrive at a contradiction.

It follows then that the matching M is indeed maximal in G.
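The greedy extension step used in the lemma is elementary; a minimal sketch:

```python
def extend_greedily(M, candidate_edges):
    """Add each candidate edge to M if both endpoints are still free."""
    matched = {v for e in M for v in e}
    for (u, v) in candidate_edges:        # here: F followed by the surviving
        if u not in matched and v not in matched:   # edges of M_1, ..., M_{l-1}
            M.add((u, v))
            matched.update((u, v))
    return M
```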

The second key ingredient of our analysis is a progress lemma that shows that the fixing process yields the desired result. In more detail, we show that, in iteration i of the loop in Line 17, when fixing any vertex u_j ∈ V_i ∖ V(M) that is currently unmatched but was matched in M_ℓ, we either (i) recover all of u_j's neighbors, (ii) are able to match u_j directly to a yet unmatched vertex, or (iii) can match u_j to an already matched vertex v such that v's mate u′ in the current matching is contained in level i+1, i.e., u′ ∈ V_{i+1}.

Lemma 7 (Progress Lemma).

Let M be any matching in G=(V,E), where G is the final graph described by the input stream, and let u ∈ V_i ∖ V(M) be an unmatched vertex contained in level i. Then, for any j, with high probability, at least one of the following assertions is true:

1. N_{i,j}(u) contains the entire neighborhood of u;

2. N_{i,j}(u) contains a vertex v such that M ∪ {(u,v)} is a matching;

3. N_{i,j}(u) contains a vertex v that is matched in M to a vertex u′ such that u′ ∈ V_{i+1}.

Proof.

We first recall that N_{i,j}(u) consists of log^{i+3}(n) ℓ₀-samplers.

Suppose that Items 1 and 2 are false. Since Item 1 is false, the degree of u cannot be too small, since otherwise the ℓ₀-samplers would have picked up every single edge incident to u. Indeed, we prove in Lemma 8 that, w.h.p., the degree of u is then at least log^{i+2}(n)/C. Since Item 2 is false, all vertices produced by the ℓ₀-samplers in N_{i,j}(u) are matched in M. Denote by U their mates in M. Then |U| ≥ log^{i+2}(n)/C, and the probability that none of the vertices in U are contained in V_{i+1} is:

binom(n−|U|, |V_{i+1}|) / binom(n, |V_{i+1}|) ≤ (1 − |V_{i+1}|/n)^{|U|} ≤ exp(−|V_{i+1}|·|U|/n) = exp(−(C′/C)·log n) ≤ 1/n⁵,

where we used Lemma 9 stated in the appendix, then the inequality 1+x ≤ exp(x), and then assumed that C and C′ are picked such that C′/C is large enough. Hence, such a vertex exists with high probability, which completes the proof.

Lemma 8.

Let v ∈ V_i be any vertex. Then, if deg(v) ≤ log^{i+2}(n)/C, then N(v) ⊆ N_{i,j}(v), for every j, with high probability.

Proof.

First, observe that N_{i,j}(v) consists of log^{i+3}(n) ℓ₀-samplers for each vertex v ∈ V_i. Let u ∈ N(v) be a vertex. Then, assuming that none of the ℓ₀-samplers fail, the probability that the outcome of an ℓ₀-sampler for v is u is at least 1/deg(v) ≥ C/log^{i+2}(n). Since all samplers operate independently, the probability that none of the samplers produce u is at most:

(1 − C/log^{i+2} n)^{log^{i+3} n} ≤ exp(−(C/log^{i+2} n) · log^{i+3} n) = exp(−C·log n) ≤ 1/n^{10},

using the inequality 1+x ≤ exp(x) and picking a sufficiently large value for C. By a union bound over the vertices in N(v), we obtain that all vertices in N(v) are contained in the ℓ₀-samplers with high probability. Last, by a union bound over all values of j, the result follows.

We observe that the previous lemma implies the important property that, for every vertex v ∈ V_R with R = (log(n)+log(C))/log log n − 2 as in the algorithm, i.e., a vertex contained in the last level R, the entire neighborhood of v is contained in N_{R,j}(v), since log^{R+2}(n)/C ≥ n.
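This choice of R is exactly what makes the premise of Lemma 8 satisfiable at the top level (assuming base-2 logarithms):

```latex
% With R = (\log n + \log C)/\log\log n - 2:
(R+2)\log\log n = \log n + \log C
\quad\Longrightarrow\quad
\log^{R+2} n = 2^{(R+2)\log\log n} = C \cdot n,
% hence \log^{R+2}(n)/C \ge n \ge \deg(v), and Lemma 8 applies to every vertex of V_R.
```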

We are now ready to prove our main result, i.e., Theorem 1.

Theorem 1.

Algorithm 2 is an O~(n√K) space streaming algorithm for Maximal Matching, where K is an upper bound on the number of deletions in the stream, and succeeds with high probability.

Proof.

We first address the space complexity and then establish correctness.

Space Complexity.

The algorithm stores the √K matchings M₁,…,M_{√K}, which requires space O~(n√K). In addition, the algorithm stores various ℓ₀-samplers. The number of these samplers is bounded by

∑_{i=0}^{R} |V_i| · 2√K · log^{i+3}(n) ≤ ∑_{i=0}^{R} (C′·n/log^i n) · 2√K · log^{i+3}(n) = O(n·√K·log³(n)·R) = O~(n√K),

and, since each ℓ₀-sampler uses space O~(1), the space bound follows.

Correctness.

To establish correctness, we will argue that after the loop in Line 12, which ends in Line 27, but before Line 28 is executed, the matching M together with the set of ℓ₀-samplers fulfills the premises of Lemma 6. Invoking the lemma then establishes that executing Line 28 turns M into a globally maximal matching, which completes the proof.

To argue that M fulfills the premises of Lemma 6 after Line 27 (but before Line 28), we need to ensure that, for every vertex u ∈ V(M_ℓ), either u is matched in M or the entire neighborhood of u is contained in our set of ℓ₀-samplers. In the following, we say that a vertex u ∈ V(M_ℓ) fulfills property P if it is either matched in M or its entire neighborhood is contained in our ℓ₀-samplers.

Observe that when entering the loop in Line 12, the vertices V(M_ℓ ∖ M) are exactly those vertices that violate property P. We will now argue that each iteration of this for-loop fixes one of these vertices and, in particular, does not introduce any new vertices that may violate this property. This then completes the proof.

Consider thus the iteration j that fixes vertex u_j. First, it may happen that u_j was already matched when fixing a vertex u_{j′}, for some j′ < j. This is checked in Line 13, and if this happens, then we are done. Otherwise, we enter the loop in Line 17. The vertex u is initialized as the vertex u_j that we attempt to fix. The loop satisfies the invariant that, in iteration i, we are guaranteed that u ∈ V_i. This is clearly the case for the first iteration i=0 since V₀=V. To see that this holds throughout, we will argue that an iteration of the loop always establishes property P for the current vertex u. Fixing vertex u may, however, come at the cost of introducing a new violation of property P for a previously matched vertex u′. This vertex u′, however, is then necessarily contained in the vertex set V_{i+1}. Since at the end of the loop we set u ← u′, we see that in the subsequent iteration i+1, the vertex u is contained in V_{i+1}, as desired.

Now, to see that u is always fixed, we invoke our progress lemma, Lemma 7, which states that either the entire neighborhood of u is contained in our ℓ₀-samplers (case 1) or u is rematched (cases 2 and 3), which implies that u now satisfies property P. Case 3 matches u to a previously matched vertex v whose mate u′ becomes unmatched. However, as proved in Lemma 7, u′ is then necessarily contained in V_{i+1}.

Last, to see that in the final iteration of the loop in Line 17, no new vertex that violates property P is introduced, we observe that the last level V_R is such that the entire neighborhood of every vertex in V_R is stored in the ℓ₀-samplers N_{R,j}(u), for every j, which is a consequence of Lemma 8.

Thus, overall, we have established that each iteration of the main for-loop fixes a vertex and does not introduce any new vertices that violate property P. Lemma 6 thus establishes that Line 28 turns M into a globally maximal matching, which completes the proof.

4 Space Lower Bound for Maximal Matching

In this section, we give our space lower bound for bounded-deletion streaming algorithms for Maximal Matching. To this end, in Subsection 4.1 we define the Embedded-Augmented-Index problem, an extension of the Augmented-Index problem, and lower-bound its communication complexity. Then, in Subsection 4.2, we show that a bounded-deletion streaming algorithm for Maximal Matching can be used to solve Embedded-Augmented-Index, which yields our main lower bound result.

4.1 The Embedded-Augmented-Index Problem

The Embedded-Augmented-Index_{s,t} problem is a one-way two-party communication problem and is parametrized by two integers s and t. Alice holds s bitstrings X := X₁,…,X_s ∈ {0,1}ᵗ. Bob holds two indices I ∈ [s] and J ∈ [t] as well as the suffix X_I[J+1,…,t]. Alice sends a single message to Bob, and, upon receipt, Bob needs to output the bit X_I[J]. Protocols can be randomized and need to be correct on every input with probability at least 1/2 + ϵ, for some small constant ϵ>0.

Let μ denote the uniform input distribution, where all variables (X,I,J) are chosen uniformly at random from their domains.

We now prove that the communication complexity of Embedded-Augmented-Index_{s,t} is Ω(s·t) (proof provided in the full version of this paper).

Theorem 9.

Every randomized communication protocol for Embedded-Augmented-Index_{s,t} that is correct with probability 1/2 + ϵ, for any ϵ>0, requires a message of size at least

(1 − H₂(1/2 − ϵ)) · s · t,

where H₂(·) denotes the binary entropy function.

4.2 From Embedded-Augmented-Index to Maximal Matching

In this section, we prove that every randomized bounded-deletion streaming algorithm for Maximal Matching requires space Ω(n√K). The proof closely follows [14], adapted to Embedded-Augmented-Index as the underlying hard communication problem, and is somewhat simplified since we only require a lower bound for the small constant-factor approximations provided by a Maximal Matching algorithm.

Theorem 2.

Every bounded-deletion streaming algorithm that computes a Maximal Matching with probability at least 2/3 requires space Ω(n√K).

Proof.

We will give a reduction to Embedded-Augmented-Index_{s,t}. For integers s,t whose values we will determine later, let X = X₁,…,X_s ∈ {0,1}ᵗ, I ∈ [s], and J ∈ [t] be an Embedded-Augmented-Index_{s,t} instance, and let 𝒜 be a bounded-deletion streaming algorithm for Maximal Matching that outputs a maximal matching with probability 2/3 on every instance. We assume that, if 𝒜 fails, then it still outputs a matching, but it may not be maximal.

Alice.

Given X, Alice constructs the bipartite graph G = G₁ ∪ G₂ ∪ ⋯ ∪ G_s, which is the vertex-disjoint union of the graphs (G_i)_{1≤i≤s}. We now describe the construction of graph G_i, for any 1 ≤ i ≤ s. The vertex set A_i ∪ B_i of G_i is such that |A_i|=|B_i|=10√t. The edge set E_i of G_i is best described via its bipartite incidence matrix M_i. To obtain M_i, we first construct the matrix M̃_i, which has the same dimensions as M_i. The construction process is as follows:

1. The top-left square sub-matrix of M̃_i with side length √t is filled with the entries of X_i from left-to-right and top-to-bottom.

2. All other entries of M̃_i are filled with uniform random bits obtained via public randomness.

3. Alice and Bob sample a random binary square matrix Y_i with side length 10√t such that each entry is 1 with probability 1/2. They compute the entry-wise XOR between M̃_i and Y_i. Let M̃′_i denote the resulting matrix.

4. Alice and Bob sample random permutations σ_i, π_i : [10√t] → [10√t] from public randomness. The matrix M_i is obtained from M̃′_i by permuting the rows and columns with the permutations σ_i and π_i, respectively. (An illustrative sketch of this construction is given after this list.)

Alice then runs algorithm 𝒜 on the edges of G, presented to 𝒜 in random order.

Bob.

Recall that Bob holds the indices I ∈ [s] and J ∈ [t] as well as the suffix X_I[J+1,…,t]. Bob introduces edge deletions, but only into the graph G_I. To this end, consider the matrix M̃_I, and let a,b be such that the entry M̃_I[a,b] corresponds to the bit X_I[J]. Then, observe that Bob knows all entries of the submatrix M̃′_I(a,b) with top-left corner at position (a,b) and bottom-right corner being the bottom-right corner of M̃′_I, except the top-left entry M̃′_I(a,b)[a,b], either because the bits X_I[J+1,t] (XORed with entries of Y_I) correspond to these entries or because the entries were constructed entirely from public randomness. Denote by

ℒ = {(i,j) : a ≤ i ≤ 10√t, b ≤ j ≤ 10√t, and i−a ≠ j−b}

the entries of M̃′_I that constitute off-diagonal entries in the submatrix M̃′_I(a,b). Then, for each index (i,j) ∈ ℒ, if M_I[σ_I(i),π_I(j)]=1, then Bob feeds an edge deletion that turns this entry to 0 into the algorithm 𝒜, i.e., M_I[σ_I(i),π_I(j)]=0 after the deletion. All deletions are fed into 𝒜 in random order.

Bob then obtains the output matching OUT produced by algorithm 𝒜.

Analysis.

We denote by OUT_i ⊆ OUT the subset of edges that are contained in graph G_i.

We assume from now on that the algorithm 𝒜 does not fail. Then, we claim that, with high probability, |OUT_I| ≥ (9/4)·√t − o(√t). Indeed, observe that none of the diagonal entries in M̃′_I(a,b) are subsequently deleted, and the edges corresponding to the 1-entries of this diagonal form a matching. Observe that both dimensions of M̃′_I(a,b) are at least 9√t. By concentration bounds, with high probability, there are at least (9/2)·√t − o(√t) 1-entries. The claim then follows from the observation that M_I is obtained from M̃′_I by permuting the rows and columns, which are operations that preserve the matching size, and by the fact that a maximal matching is at least half the size of a maximum matching.

Hence, with probability 2/3 − o(1), the algorithm 𝒜 succeeds and |OUT_I| ≥ (9/4)·√t − o(√t) holds. We assume that these events hold from now on.

We say that the entry X_I[J] materializes as an edge in graph G_I if M̃′_I[a,b]=1. Recall that M̃′_I[a,b] = M̃_I[a,b] XOR Y_I[a,b], where Y_I[a,b] is a uniform random bit. Hence, X_I[J] materializes as an edge with probability 1/2.

We will now argue that, if the edge corresponding to X_I[J] materializes, then it is also contained in OUT_I with constant probability. Indeed, due to the symmetry of the construction, achieved by the applications of the random permutations σ_I and π_I and the XOR computations with Y_I, each of the diagonal entries of M̃′_I(a,b) with value 1 is reported in the matching OUT_I with equal probability. Hence, conditioned on the entry X_I[J] materializing as an edge in G_I, the algorithm 𝒜 succeeding, and the event |OUT_I| ≥ (9/4)·√t − o(√t), this edge is contained in OUT_I with probability at least

(|OUT_I| − 2√t) / (10√t) ≥ ((9/4)·√t − o(√t) − 2√t) / (10√t) ≥ 1/40 − o(1),

since OUT_I contains at least |OUT_I| − 2√t diagonal entries from the matrix M̃′_I(a,b), and any matching in G_I is of size at most 10√t.

Hence, the probability that the edge corresponding to X_I[J] materializes and is reported in OUT_I is at least 1/80 − o(1), conditioned on 𝒜 succeeding and on the event |OUT_I| ≥ (9/4)·√t − o(√t). If the edge is not observed in OUT_I, then Bob reports either 0 or 1 as a guess for X_I[J], each with probability 1/2.

Using the following definition of p:

p = Pr[𝒜 succeeds and |OUT_I| ≥ (9/4)·√t − o(√t) and X_I[J] materializes as an edge and is reported in OUT_I] ≥ (2/3 − o(1)) · (1/80 − o(1)) = 1/120 − o(1),

we can bound the overall success probability of this strategy as follows:

Pr[X_I[J] recovered correctly] ≥ p + (1−p)·(1/2) ≥ 1/120 + (119/120)·(1/2) − o(1) = 1/2 + 1/240 − o(1).

By Theorem 9, 𝒜 therefore requires space Ω(st). Lastly, to ensure that at most K deletions are introduced, we set t=Θ(K), and to ensure that the input graph has n vertices, we set s = n/(20√t). The lower bound thus gives Ω(st) = Ω((n/√K)·K) = Ω(n√K), as desired.

5 Conclusion

In this work, we initiated the study of Maximal Matching in bounded-deletion streams. We settled the space complexity of both randomized and deterministic algorithms up to polylog factors by giving algorithms as well as lower bounds, where the randomized space complexity is Θ~(n√K), and the deterministic space complexity is Θ~(nK). Our results constitute the first trade-off results between space complexity and the number of deletions for a fundamental graph problem.

We hope that our work will spark work on other fundamental graph problems in the bounded-deletion stream setting, especially where the presence of deletions seems to significantly increase the space complexity of streaming algorithms. Two natural and closely related problems for future investigation are approximate Maximum Matching and approximate Minimum Vertex Cover, where a refined understanding of space complexity in terms of the deletion parameter K will reveal how the space complexity changes as we gradually move from insertion-only to unrestricted-deletion streams. Another problem that will be interesting to study through the lens of bounded deletions is that of building low-stretch graph spanners. Similar to Maximum Matching and Minimum Vertex Cover, there is a polynomial-factor separation in the space needed to build a low-stretch spanner in an insertion-only stream and in dynamic streams with unrestricted deletions.

Finally, we believe that our algorithmic approach of building a complementary pair of sketches, one that is non-linear, focusing on the insertions, and another one that is non-uniform and linear, focusing on the dynamic component of the stream, will also prove useful for other graph problems.


Appendix A Technical Lemma

Lemma 9.

Let $a, b, n$ be integers such that $a < n$ and $b \leq n - a$. Then, the following inequality holds:

$$\frac{\binom{n-a}{b}}{\binom{n}{b}} \;\leq\; \left(1 - \frac{b}{n}\right)^{a}.$$

Proof.

We compute as follows:

$$\frac{\binom{n-a}{b}}{\binom{n}{b}} = \frac{(n-a)!\,(n-b)!}{(n-a-b)!\,n!} = \frac{(n-b)(n-b-1)\cdots(n-a-b+1)}{n(n-1)\cdots(n-a+1)} \;\leq\; \left(\frac{n-b}{n}\right)^{a} = \left(1 - \frac{b}{n}\right)^{a},$$

where we used the inequality $\frac{n-b-j}{n-j} \leq \frac{n-b}{n}$, which holds for every $0 \leq j < a$.
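
The lemma can also be confirmed numerically. The following sketch (an illustrative check, not a substitute for the proof above) verifies the inequality exhaustively for all valid parameter triples with $n < 40$:

```python
from math import comb

def lemma_holds(n: int, a: int, b: int) -> bool:
    """Check comb(n-a, b) / comb(n, b) <= (1 - b/n)**a, up to float rounding."""
    return comb(n - a, b) / comb(n, b) <= (1 - b / n) ** a + 1e-12

# Exhaustive check over all integers with a < n and b <= n - a.
assert all(
    lemma_holds(n, a, b)
    for n in range(2, 40)
    for a in range(1, n)
    for b in range(0, n - a + 1)
)
print("Lemma verified for all n < 40")
```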