
Dynamic Algorithms for Submodular Matching

Kiarash Banihashem (equal contribution, corresponding author), Department of Computer Science, University of Maryland, College Park, MD, USA · Leyla Biabani (equal contribution), Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands · Samira Goudarzi (equal contribution, corresponding author), Department of Computer Science, University of Maryland, College Park, MD, USA · MohammadTaghi Hajiaghayi (equal contribution), Department of Computer Science, University of Maryland, College Park, MD, USA · Peyman Jabbarzade (equal contribution), Department of Computer Science, University of Maryland, College Park, MD, USA · Morteza Monemizadeh (equal contribution), Department of Mathematics and Computer Science, TU Eindhoven, The Netherlands
Abstract

The Maximum Submodular Matching (MSM) problem is a generalization of the classical Maximum Weight Matching (MWM) problem. In this problem, given a monotone submodular function f : 2^E → ℝ_{≥0} defined over subsets of edges of a graph G(V,E), we are asked to return a matching whose submodular value is maximum among all matchings in the graph G(V,E). In this paper, we consider this problem in a fully dynamic setting against an oblivious adversary. In this setting, we are given a sequence 𝒮 of insertions and deletions of edges of the underlying graph G(V,E), along with oracle access to the monotone submodular function f. The goal is to maintain a matching M such that, at any time t of the sequence 𝒮, its submodular value is a good approximation of the value of an optimal submodular matching, while keeping the number of operations minimal.

We develop the first dynamic algorithm for the submodular matching problem: we maintain a matching whose submodular value is within an expected (8+ϵ)-approximation of the optimal submodular matching at any time t of the sequence 𝒮, using expected amortized poly(log n, 1/ϵ) update time.

Our approach incorporates a range of novel techniques, notably the concept of Uniform Hierarchical Caches (UHC) data structure along with its invariants, which lead to the first algorithm for fully dynamic submodular matching and may be of independent interest for designing dynamic algorithms for other problems.

Keywords and phrases:
Matching, Submodular, Dynamic, Polylogarithmic
Category:
Track A: Algorithms, Complexity and Games
Copyright and License:
© Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi,
Peyman Jabbarzade, and Morteza Monemizadeh; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Dynamic graph algorithms
Funding:
The work is partially supported by DARPA QuICC, ONR MURI 2024 award on Algorithms, Learning, and Game Theory, Army-Research Laboratory (ARL) grant W911NF2410052, NSF AF:Small grants 2218678, 2114269, 2347322.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

Matching theory.

The theory of matching is a fundamental part of graph theory, not only due to its applications but also because it has been a source of important concepts during the rapid growth of combinatorics in recent years [52]. Two pivotal problems within matching theory are the maximum cardinality matching (MCM) and the maximum weight matching (MWM). In MCM [37], the objective is to find a matching in an unweighted graph G(V,E) that maximizes its cardinality. In contrast, MWM [29] extends MCM to an edge-weighted graph G(V,E), aiming to compute a matching that maximizes the collective weight (i.e., the sum of edge weights) among all matchings within G. Extensive research has been conducted on MCM [37, 52, 29, 53] and MWM [62, 61, 22, 66].

Submodularity.

Submodularity, a fundamental concept with extensive applications, is widely utilized in various domains such as combinatorial optimization [17, 45, 54], machine learning [42, 50, 68], and online auction theory [31, 2, 40]. A function f : 2^N → ℝ_{+} defined on a ground set N is termed submodular if for all A ⊆ B ⊆ N and any element e ∉ B, we have f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B). The practical applications of submodularity are significant, especially in real-world scenarios where various objectives demonstrate diminishing returns. Indeed, its applications span diverse domains, ranging from machine translation [49] to internet advertising [41, 21], along with substantial contributions to combinatorial auctions [14, 47, 67].
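The diminishing-returns inequality above can be checked concretely on a coverage function, a standard example of a monotone submodular function (the ground set and sets below are illustrative, not from the paper):

```python
# f(A) = number of distinct items covered by the sets indexed by A.
# Coverage is monotone submodular, so for A ⊆ B and e ∉ B we expect
# f(A ∪ {e}) − f(A) ≥ f(B ∪ {e}) − f(B).

SETS = {
    "a": {1, 2, 3},
    "b": {3, 4},
    "c": {4, 5, 6},
}

def coverage(A):
    covered = set()
    for x in A:
        covered |= SETS[x]
    return len(covered)

def marginal(e, A):
    """f(e | A) = f(A ∪ {e}) − f(A)."""
    return coverage(A | {e}) - coverage(A)

gain_small = marginal("c", {"a"})        # "c" adds the three new items 4, 5, 6
gain_large = marginal("c", {"a", "b"})   # item 4 is already covered by "b"
```

Here the marginal gain of "c" shrinks from 3 to 2 as the base set grows, exactly the diminishing-returns behavior the definition formalizes.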

In this paper, we consider the problem of approximating the maximum submodular matching (MSM) in the (fully) dynamic model. The MSM problem is defined as follows: Given a monotone (non-negative) submodular function f : 2^E → ℝ_{+} defined over subsets of edges of a graph G(V,E), the goal is to return a matching whose submodular value is OPT = max_{M is a matching in G} f(M), i.e., the maximum submodular value of any matching in G(V,E). We say a matching M is an α-approximation of a maximum submodular matching in G(V,E) if f(M) ≥ OPT/α for α ≥ 1. Given its applications in areas such as computational linguistics (e.g., see Lin and Bilmes [51, 3]), and the fact that it is a natural generalization of MWM, many works have previously studied MSM [28, 44, 46, 31, 27], and the current best approximation factor for the problem is (2+ϵ), obtained by [27].

Fully dynamic model.

The increasing prevalence of massive data applications and models has generated significant interest in algorithms tailored for real-time scenarios. In these contexts, the time constraints are too tight for recomputing solutions after every update. To address these real-world constraints, researchers have developed methods to maintain problem solutions with fast update times in the (fully) dynamic model [63, 36, 35, 34]. In this model, we are given a sequence 𝒮 of updates (insertions and deletions) of the edges of the underlying graph G(V,E), and the objective is to efficiently maintain an approximate or exact solution for a problem defined on G as updates occur. The fully dynamic model is often regarded as the “holy grail” of dynamic algorithm design [60].

Onak and Rubinfeld [58] explored approximating the maximum cardinality matching in the dynamic model, significantly increasing interest in this area. Subsequent works further advanced dynamic algorithms for matching and maximum weighted matching [65, 12, 32]. This model has also been studied extensively for many other graph problems. We refer to the recent survey by [33] for more details.

Our focus is on the maximum submodular matching (MSM) problem within the fully dynamic model.

In this problem, our goal is to maintain an approximate submodular matching at any time using a few (ideally polylogarithmic) operations. Despite the extensive research on developing dynamic graph algorithms, obtaining a fully dynamic algorithm for maximum submodular matching has remained open. This was formally posed as an open problem by Levin and Wajc [48] in SODA’21.

Inapplicability of Dynamic Maximum Weight Matching.

One might wonder whether it is feasible to employ a reduction from maximum submodular matching to maximum weight matching in the dynamic model in a manner similar to how maximum weight matching is reduced to maximum cardinality matching. However, existing reduction methods for solving maximum weight matching in the dynamic model, as demonstrated in the works of Stubbs and Williams [65] and Bernstein, Dudeja, and Langley [12], require that each edge has a predetermined weight, and they heavily rely on these weights throughout their algorithms. While it is possible to assign a “weight” to each edge in submodular matching based on its marginal gain (the value it adds to a set), this “weight” is not predetermined and can vary depending on the other edges collected in a set.

In this paper, we resolve this open problem by providing the first dynamic algorithm for the maximum submodular matching problem with polylogarithmic update time. We state our main result as follows.

Theorem 1.

For any ϵ > 0, there exists a fully dynamic randomized algorithm for the MSM problem that achieves an expected (8+ϵ)-approximation against an oblivious adversary. The algorithm has an expected amortized update time of poly(log n, 1/ϵ), where n is the number of vertices in the graph.

To establish this result, we introduce several new techniques, including the introduction of the Uniform Hierarchical Caches (UHC) data structure. Although inspired by recent dynamic algorithms for submodular maximization with cardinality constraints [55, 43, 4, 7], our approach addresses the unique challenges of matchings, which these works cannot handle. As noted by [23], these approaches do not immediately extend to more complex constraints such as matroids and matchings. Recent works have also developed efficient algorithms for submodular optimization under matroid constraints [6, 23]. However, their update time depends linearly (or even quadratically) on k, the maximum size of their solution, which is analogous to Θ(n) for matchings, making their methods impractical for our problem.

To sidestep these issues, we borrow techniques from the streaming literature, specifically the work of [16], which designed a 7.75-approximation algorithm for the problem. Inspired by this work, we define the notions of admissibility and extension of an edge with respect to a matching, which loosely correspond to the behavior of the streaming algorithm when it sees an edge in the stream. However, their algorithm relies on a set of “shadow elements”, a subset of the elements previously seen in the stream. These elements cause the algorithm to behave in a “non-monotone” way; specifically, the marginal gain of an element may actually grow as the solution evolves. This causes problems in the dynamic setting, as existing approaches heavily rely on the reduction of marginal gain to bound the number of levels in the data structure and, consequently, the running time.

To avoid this problem, we consider a simplification of the approach in [16]; specifically, we calculate the marginal gain of an element with respect to the union of all previous solutions. Since, by definition, this set is always increasing, the marginal gain always decreases by submodularity. While the approximation factor increases slightly (from 7.75 to 8+ϵ), this allows us to obtain the first efficient algorithm for dynamic submodular matching.

1.1 Overview of approach

Here, we provide an overview of our techniques and outline our approach. We begin by defining a few fundamental concepts used throughout the paper. Next, we design an offline algorithm that recursively constructs a hierarchical data structure, which we call Uniform Hierarchical Caches (UHC). The primary objective of UHC is to store a matching M in a graph G whose submodular value provides an (8+ϵ)-approximation of an optimal submodular matching of G. In Maximum Weight Matching (MWM), edge weights remain fixed throughout the algorithm, whereas in Maximum Submodular Matching (MSM), the marginal gain of an edge depends on the current matching. To address this challenge, we introduce the concept of admissibility. To efficiently handle edge insertions and deletions, we develop Insert and Delete subroutines that maintain the UHC data structure (and matching M) within polylog(n) update time, periodically invoking the offline algorithm to partially reconstruct UHC. Finally, to ensure that the update time of our dynamic algorithm remains polylogarithmic, we design a sampling mechanism that makes it difficult for an adversary to predict our matching edges.

We begin by defining the fundamental concepts of admissibility, filtering, and extension. Let M be any matching in G. An edge e ∈ E \ M is admissible to M if it can be added to M while removing its neighboring edges (two edges e and e′ are neighbors if they share an endpoint) in a way that sufficiently increases the submodular value of M. Otherwise, we say that e is not admissible to M. Given an ordered sequence of edges S ⊆ E, we define the extension of M by S as the matching obtained by iterating through e ∈ S and substituting e for its neighbors in M if e is admissible. We denote this operation as Extend(M, S), which produces a new matching M′. Edges in E \ M that are not admissible to M can be safely filtered out from E, as their submodular value can be charged to their neighboring edges within M.

1.1.1 Offline algorithm

Next, we describe the offline construction of the UHC data structure. We begin by initializing the cache C_0, which consists of an empty matching M_0 and a set R_0 containing all edges in graph G whose submodular values are not too low. At each level ℓ of the UHC data structure, we construct a cache C_ℓ that includes a matching M_ℓ and a set R_ℓ of edges that are admissible with respect to M_ℓ.

Figure 1: Overview of our UHC data structure: We maintain a hierarchical cache structure where each cache C_ℓ contains a matching M_ℓ and a set of edges R_ℓ that are admissible with respect to M_ℓ. Observe that as we go from the first cache C_0 toward the final cache C_T, the sizes of the sets R_ℓ are monotonically decreasing while the submodular values of the matchings M_ℓ are monotonically increasing.

In order to obtain the sets R_ℓ and M_ℓ at every cache C_ℓ, we develop a sampling mechanism. This sampling mechanism samples a set S_ℓ of edges according to a distribution 𝒟_ℓ that is defined over R_{ℓ−1} and modifies the matching M_{ℓ−1} using these sampled edges to obtain the matching M_ℓ. During this modification process, we first let M_ℓ be a copy of M_{ℓ−1} and then extend M_ℓ by S_ℓ using the operation Extend(M_ℓ, S_ℓ). That is, we insert those edges of S_ℓ that are admissible to M_ℓ and remove their neighbors from M_ℓ to preserve the matching constraint. We then obtain the set R_ℓ from the set R_{ℓ−1} by filtering out those edges that are not admissible with respect to M_ℓ. At the end of this process, we obtain the set R_ℓ and the matching M_ℓ. Our final cache C_T consists of the reported matching M_T and an empty set R_T. An overview of our UHC data structure can be seen in Figure 1.
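A minimal sketch of this offline construction, under our own simplifying assumptions: edges are frozensets of endpoints, f is a marginal-gain oracle f(e, U), sampling is plain uniform sampling rather than the paper's EstimateSampleSize mechanism, and the admissibility test (sketched inline with an illustrative predicate) follows the description given later in the text.

```python
import random

def admissible(e, M, U, W, f, tau_min):
    """e is admissible to M if f(e | U) >= max(tau_min, 2 * weight of e's neighbours in M)."""
    if e in M:
        return False
    nbr_weight = sum(W[x] for x in M if x & e)  # edges sharing an endpoint with e
    return f(e, frozenset(U)) >= max(tau_min, 2 * nbr_weight)

def build_uhc(edges, f, tau_min, sample_size=4, max_levels=100):
    """Return the list of caches (M_l, R_l); the matching of the last cache is reported."""
    M, U, W = set(), set(), {}
    R = [e for e in edges if f(e, frozenset()) >= tau_min]   # cache C_0
    caches = [(set(M), list(R))]
    for _ in range(max_levels):
        if not R:
            break
        S = random.sample(R, min(sample_size, len(R)))       # sampled edge set S_l
        for e in S:                                          # Extend(M_l, S_l)
            if admissible(e, M, U, W, f, tau_min):
                W[e] = f(e, frozenset(U))                    # fix w(e) = f(e | U)
                for x in [x for x in M if x & e]:
                    M.discard(x)                             # neighbours leave M (never U)
                M.add(e)
                U.add(e)
        R = [e for e in R if admissible(e, M, U, W, f, tau_min)]  # filter step
        caches.append((set(M), list(R)))
    return caches

# Toy run with a modular f (fixed edge weights), which is trivially submodular.
w = {frozenset({1, 2}): 5.0, frozenset({2, 3}): 3.0, frozenset({3, 4}): 4.0}
f = lambda e, U: 0.0 if e in U else w[e]
caches = build_uhc(list(w), f, tau_min=1.0)
M_final = caches[-1][0]
```

On this toy instance every sampling order ends with a valid matching and an empty final remainder set, mirroring the termination condition at C_T.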

In the previous section, we introduced the admissibility concept and used it in the offline algorithm, but we have not yet defined it precisely. In addition, we explained that there exists a distribution 𝒟_ℓ for every cache C_ℓ from which the set S_ℓ of sampled edges is taken. However, we have not explained how we obtain this distribution and why we need such a sampling distribution (instead of, for example, a simple greedy algorithm to obtain M_ℓ and R_ℓ). Intuitively, the admissibility concept is needed because the marginal gains of edges are not predetermined and can change, and the sampling mechanism makes it hard for the adversary to guess our matching edges, which in turn lets us bound the update time of our dynamic algorithm polylogarithmically. Next, we explain these two notions in detail.

1.1.2 Computing submodular weights and admissibility concept

The big difference between maximum weight matching (MWM) and maximum submodular matching (MSM) is that, in contrast to MWM where edges have predetermined weights (that do not change in the course of a dynamic algorithm), in MSM the marginal gain (given a subset A ⊆ E and an element e ∈ E \ A, we denote by f(e|A) = f(A + e) − f(A) the marginal gain of adding e to A) of any edge with respect to a matching M may change as edges are deleted from or inserted into M. To handle this issue, we introduce the concept of admissibility, which describes when an element can be added to a matching.

For defining admissibility, we maintain two additional objects in every cache C_ℓ. The first one is a union set U_ℓ, the union of all matchings up to level ℓ, i.e., M_0 ∪ M_1 ∪ ⋯ ∪ M_ℓ. The second one is a weight function W_ℓ : U_ℓ → ℝ_{≥0} that serves as a proxy for the submodular function f and records the marginal gain of each edge e ∈ U_ℓ at the time it was added to U_ℓ.

Figure 2: Here, an edge e is checked for admissibility to a matching M. The two edges e′ and e′′ are in M. The marginal gain of e is, however, computed with respect to the union set U (not the matching M).
Admissibility.

Now, we define the admissibility concept as follows. An edge e ∈ E is admissible to a matching M_ℓ if its marginal gain f(e|U_ℓ) = f(U_ℓ ∪ {e}) − f(U_ℓ) with respect to the union set U_ℓ (not the matching M_ℓ) is more than the maximum of a threshold τ_min (which will be clear from our analysis) and two times the total weight of its neighboring edges in M_ℓ. Recall that in the offline algorithm, at every cache C_ℓ, we extend M_ℓ by S_ℓ using the operation Extend(M_ℓ, S_ℓ). To this end, for each edge e ∈ S_ℓ, if f(e|U_ℓ) is less than two times the total weight of its neighbors in M_ℓ or less than the threshold τ_min, we ignore the edge e. Otherwise, we fix the (submodular) weight w(e) = f(e|U_ℓ), add e to M_ℓ and U_ℓ, and then delete its neighbors from M_ℓ. Afterwards, we obtain the set R_ℓ from the set R_{ℓ−1} by filtering out those edges that are not admissible with respect to M_ℓ.
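As a concrete illustration, the acceptance step for a single edge can be sketched as follows (our naming, not the paper's; edges are frozensets of endpoints and marginal is an f(e | U) oracle):

```python
def try_insert(e, M, U, W, marginal, tau_min):
    """Swap e into matching M if it is admissible; fix w(e) = f(e | U)."""
    nbrs = [x for x in M if x != e and (x & e)]    # edges of M sharing an endpoint with e
    if marginal(e, U) < max(tau_min, 2 * sum(W[x] for x in nbrs)):
        return False                               # e is not admissible: ignore it
    W[e] = marginal(e, U)                          # weight fixed at insertion time
    for x in nbrs:
        M.discard(x)                               # neighbours leave M, but never U
    M.add(e)
    U.add(e)                                       # the union set only ever grows
    return True

# Toy check with fixed (modular) gains: the second edge is blocked because
# its gain 3 is below twice the weight 10 of its neighbour already in M.
gains = {frozenset("ab"): 10.0, frozenset("bc"): 3.0}
marg = lambda e, U: 0.0 if e in U else gains[e]
M, U, W = set(), set(), {}
ok1 = try_insert(frozenset("ab"), M, U, W, marg, tau_min=1.0)
ok2 = try_insert(frozenset("bc"), M, U, W, marg, tau_min=1.0)
```

Note that the test reads the gain from U while the neighbor weights come from M, reflecting the distinction emphasized above.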

Here we state a few facts about the admissibility concept. First, while edges are added to and removed from M_ℓ, they are only added to U_ℓ, never deleted from it. Second, the weight of each element is fixed when it is added to a matching M_ℓ, and can only change if the corresponding cache C_ℓ is reconstructed. Finally, the reason that we consider the marginal gain with respect to the union set U_ℓ (and not the matching M_ℓ) is twofold: it ensures that the marginal gains of edges in the remainder sets always decrease as we go from a cache C_ℓ to the next cache C_{ℓ+1}, and it ensures that any edge e added to a matching increases the submodular value of the matching by at least twice the total weight of its neighboring edges in the matching. The former observation helps us bound the update time, and the latter helps us prove the approximation guarantee.

We note that similar weight assignments have been used previously, but in the streaming model, which differs from the dynamic setting considered here. In particular, Chakrabarti and Kale [16] presented a one-pass algorithm for maximum submodular matching by employing the maximum weight matching algorithm of Zelke [69] in their streaming algorithm. However, beyond the difference in models, with theirs being insertion-only streaming and ours being fully dynamic, our algorithm differs from theirs in two key ways. First, in their algorithm, the weight of an edge e is its marginal gain f(e | M ∪ A), where M is a matching and A is selected from an O(1)-sized neighborhood of e. In contrast, we define the weight w(e) = f(e | U), where U is the set of all edges added to the matching in previous caches. Second, to ensure that the number of caches remains polylogarithmic, we discard any edge e whose weight falls below a threshold τ_min.

Approximation guarantee.

Using the admissibility notion, we can now obtain the approximation guarantee of our offline algorithm as follows. Let M_T and U_T be the final matching and union set at the final cache C_T, and let M* be an optimal matching. First, observe that for any edge e ∈ E \ U_T, the (submodular) weight has not yet been defined. For such an edge e, the weight w(e) is defined as f(e|U_T), and for an edge set X, the weight w(X) is the sum of the weights of all edges in X, i.e., w(X) = Σ_{e∈X} w(e). Then, we show that f(M*) ≤ 2w(M_T) + w(M*), and also w(M*) ≤ 6w(M_T) + ϵ·MAX by charging the weight of the edges in M* to the edges in M_T. Here MAX = max_e f({e}) is the maximum submodular value of any edge in the update sequence 𝒮. Putting things together, we prove that M_T is an (8+ϵ)-approximate submodular matching.
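Combining the two bounds, the chain of inequalities behind the approximation factor can be written as follows (a sketch; relating w(M_T) back to f(M_T) is done in the full analysis):

```latex
f(M^{\star}) \;\le\; 2\,w(M_T) + w(M^{\star})
             \;\le\; 2\,w(M_T) + \bigl(6\,w(M_T) + \epsilon\,\mathrm{MAX}\bigr)
             \;=\;   8\,w(M_T) + \epsilon\,\mathrm{MAX}.
```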

1.1.3 Designing a dynamic algorithm

Now, we explain how to maintain the UHC data structure dynamically. Our main idea is to deploy a lazy strategy for insertions and deletions, which helps us bound the running time of our dynamic algorithm polylogarithmically. Let us fix a cache C_ℓ. In the lazy strategy for insertions, we consider two sets R_ℓ^{snap} and R_ℓ of edges, which we call the snapshot and remainder sets, respectively. The remainder set R_ℓ stores edges that have been inserted before they are added to the snapshot set R_ℓ^{snap}. When a new edge is inserted, we iterate through caches C_1, …, C_T and add the edge to each remainder set until the edge is no longer admissible with respect to the matching stored in that cache. We should mention that the matching is always constructed from snapshot sets, not remainder sets. Once a remainder set R_ℓ has changed enough (i.e., it differs from the snapshot R_ℓ^{snap} by an O(ϵ)-fraction of the snapshot's size), we set R_ℓ^{snap} to R_ℓ and reconstruct all follow-up caches C_{>ℓ}. (For any ℓ, we use C_{≥ℓ} to denote the set {C_j : j ≥ ℓ} and C_{<ℓ} to denote the set {C_j : j < ℓ}.)
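A hedged sketch of this insertion path (the data layout and names are ours; on a trigger, the real algorithm rebuilds the follow-up caches via the offline construction rather than merely promoting the remainder set):

```python
def insert_edge(e, caches, is_admissible, eps):
    """caches: list of dicts with keys 'snap', 'rem' (sets) and 'M' (a matching)."""
    for level, c in enumerate(caches):
        if not is_admissible(e, c["M"]):
            break                          # stop at the first non-admissible level
        c["rem"].add(e)                    # lazy: only the remainder set changes
        if len(c["rem"]) > (1 + eps) * max(1, len(c["snap"])):
            c["snap"] = set(c["rem"])      # promote remainder to snapshot
            return level                   # signal: rebuild follow-up caches C_{>level}
    return None

# Toy run: the new edge is admissible only at level 0 (empty matching there),
# and its arrival pushes the level-0 remainder past the (1+eps) threshold.
caches = [
    {"snap": {1}, "rem": {1}, "M": set()},
    {"snap": {1}, "rem": {1}, "M": {1}},
]
trigger = insert_edge(2, caches, lambda e, M: len(M) == 0, eps=0.2)
```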

For deletions, a similar lazy strategy is used. However, for deletions, we do not consider an extra set for each cache. Instead, we consider a global deletion buffer that we denote by D. When an edge e is deleted, we mark it as deleted and add it to D, and then we iterate through caches C_1, …, C_T and delete the edge from all remainder sets (remainder sets should contain up-to-date admissible edges with respect to their corresponding matchings). However, we do not remove the edge e from any of the matchings or snapshot sets. Once an O(ϵ)-fraction of the elements of a snapshot set R_ℓ^{snap} are marked deleted, we remove these elements from R_ℓ^{snap} and reconstruct all caches C_{≥ℓ}. When reporting the final matching, we output M_T \ D, i.e., we drop all edges that were deleted.
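A hedged sketch of this deletion path (data layout and names are ours; edges are abstract ids, and the flush of D from the snapshot plus the rebuild of the caches C_{≥ℓ} are only signalled, not performed):

```python
def delete_edge(e, caches, D, eps):
    """Mark e deleted in the global buffer D; matchings and snapshots stay untouched."""
    D.add(e)
    for c in caches:
        c["rem"].discard(e)               # remainder sets stay up to date
    for level, c in enumerate(caches):
        if len(c["snap"] & D) > eps * max(1, len(c["snap"])):
            return level                  # signal: flush D from snap, rebuild C_{>=level}
    return None

def report_matching(caches, D):
    """Reported matching: the final matching minus all deleted edges (M_T minus D)."""
    return caches[-1]["M"] - D

caches = [
    {"snap": {1, 2, 3, 4, 5}, "rem": {1, 2, 3, 4, 5}, "M": {1, 3}},
    {"snap": {2, 4}, "rem": {2, 4}, "M": {1, 2, 3}},
]
D = set()
trigger = delete_edge(2, caches, D, eps=0.3)
```

Deleting edge 2 is below the ϵ-threshold for the large level-0 snapshot but crosses it at level 1, so only the deeper caches would be rebuilt.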

In the dynamic setting, we need to maintain the relationship between the remainder and snapshot sets in each cache and across two consecutive caches. To achieve this, we introduce two invariants and ensure they are upheld during edge insertions and deletions. The first is the update invariant, which asserts that the snapshot set R_ℓ^{snap} and the remainder set R_ℓ are O(ϵ)-close to each other. If this invariant is violated, we reconstruct the caches C_{≥ℓ} to restore the O(ϵ)-closeness. The second is the monotonicity invariant, which ensures consistency between snapshot sets in consecutive caches. Specifically, it states that if an update (either insertion or deletion of an edge) is reflected in the snapshot R_{ℓ−1}^{snap}, it must also be reflected in the snapshot R_ℓ^{snap}.

1.1.4 Sampling strategy

Recall that for every cache C_ℓ, we develop a sampling mechanism that samples a set S_ℓ of edges from the snapshot set R_{ℓ−1}^{snap} and modifies the matching M_{ℓ−1} using these sampled edges to obtain the matching M_ℓ. A subtle point here is that the admissible edges stored in R_{ℓ−1}^{snap} may have different marginal gains with respect to the union set U_{ℓ−1}. This in turn makes handling deletions and insertions difficult, as a few edges may have very high marginal gains, and the adversary may produce a sequence of updates based on these edges that forces a dynamic algorithm to have a high running time. To resolve this issue, the distribution 𝒟_ℓ that we use to extract the sample set S_ℓ is defined as a combination of a uniform distribution over B_{ℓ−1}^{max} (defined shortly) and a suitable sample size s_ℓ = |S_ℓ|, so that the following properties hold:

  • Property 1: All edges in S_ℓ should have roughly the same marginal gain with respect to the union set U_{ℓ−1}.

  • Property 2: Almost all of them (indeed, a (1−ϵ)-fraction of the edges in S_ℓ) must be admissible with respect to the matching M_{ℓ−1}, so these admissible edges will be added to the matching M_ℓ.

  • Property 3: At least an Ω(ϵ)-fraction of the edges of R_{ℓ−1} are filtered out and do not appear in R_ℓ.

For the first property, we partition the edges in the snapshot set R_{ℓ−1}^{snap} into buckets based on their marginal gains with respect to U_{ℓ−1}, so that edges that have almost the same (up to a factor of (1+ϵ)) marginal gain are placed in the same bucket. We then choose the largest bucket B_{ℓ−1}^{max} (i.e., the one that contains the most edges; if there are multiple such buckets, we choose one arbitrarily), estimate a suitable sample size s_ℓ for this bucket, and sample a set S_ℓ of s_ℓ edges from B_{ℓ−1}^{max}.
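The bucketing step can be sketched as follows (illustrative names and gains, not from the paper; gains within a (1+ϵ) factor of each other land in the same geometric bucket keyed by ⌊log_{1+ϵ}(g/τ_min)⌋):

```python
import math
from collections import defaultdict

def largest_bucket(edges, gain, eps, tau_min):
    """Group edges by marginal gain into (1+eps)-geometric buckets; return the biggest."""
    buckets = defaultdict(list)
    for e in edges:
        g = gain(e)
        if g < tau_min:
            continue                      # below the threshold: discarded
        j = math.floor(math.log(g / tau_min, 1 + eps))
        buckets[j].append(e)
    return max(buckets.values(), key=len) if buckets else []

# Edges "a", "b", "c" have gains within a (1+0.1) factor of each other,
# so they share a bucket, which outnumbers the singleton bucket of "d".
gains = {"a": 1.0, "b": 1.05, "c": 1.04, "d": 3.0}
B_max = largest_bucket(gains, gains.get, eps=0.1, tau_min=1.0)
```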

As for the second and third properties, we develop a procedure EstimateSampleSize that computes a suitable estimator s_ℓ for the number of admissible edges of the largest bucket B_{ℓ−1}^{max} that can be added to the matching M_{ℓ−1}. The suitable estimator s_ℓ plays an important role in our analysis. On one hand, we prefer a smaller sample size, as this ensures that a larger fraction of the sampled edges are accepted (i.e., they can be added to the matching M_{ℓ−1}); this helps us fool the adversary when it tries to delete matching edges in M_ℓ. On the other hand, we prefer a larger sample size, as it ensures that more edges are filtered out from the remainder set R_{ℓ−1} when we construct the follow-up remainder set R_ℓ; this in turn decreases the number of caches, leading to a better running time.

To deal with this issue, we devise the procedure EstimateSampleSize to optimize this trade-off: it repeatedly adds elements of the largest bucket B_{ℓ−1}^{max} in different random permutations and checks the length of the prefix of each such permutation that can be added to M_{ℓ−1}. This is an intricate task, however, since the expected fraction of edges that can be added to the matching M_{ℓ−1} is a function of the sample size that is not necessarily decreasing. Therefore, ideas based on binary-search sampling (e.g., the “Threshold-Sampling” algorithm of [25]) cannot be directly utilized here.

These three properties together help us obtain an amortized expected polylogarithmic bound on the running time of our dynamic algorithm. Indeed, using the first two properties, we show that any sequence of updates that influences (i.e., inserts and/or deletes) at least Ω(ϵ·s_ℓ) edges in S_ℓ must be of size Ω(ϵ·|R_ℓ|). This means that whenever an Ω(ϵ)-fraction of the samples in S_ℓ is influenced by a sequence of updates, we have enough credit to rebuild all caches C_{≥ℓ}. This gives us the amortized expected bound.

The third property helps us bound the number of caches constructed in the UHC data structure. In particular, using this property, at least an ϵ/2-fraction of the edges of R_{ℓ−1} are filtered out and do not appear in R_ℓ. Therefore, after O(ϵ^{−1} log n) recursive steps of the offline algorithm, at the final cache C_T, the set R_T is empty and the offline algorithm terminates.

To maintain the properties of our sampling mechanism, we introduce two invariants and ensure they hold in the dynamic setting. The first is the sample size invariant, which asserts that with high probability, |S_ℓ| satisfies a notion of suitability (see Definition 4). The second is the uniform invariant, which asserts that S_ℓ is a uniformly random sample set of size s_ℓ from the largest bucket B_{ℓ−1}^{max}. This invariant allows us to bound the effect of edge deletions performed by the adversary. Intuitively, if the adversary aims to delete at least an ϵ-fraction of the edges in the sample set S_ℓ, they must delete at least an ϵ-fraction of the edges from B_{ℓ−1}^{max}.

2 Related work

Dynamic algorithms for MCM.

The first breakthrough result for the approximate matching problem in the dynamic model was developed in 2010 by Onak and Rubinfeld [57]. In particular, Onak and Rubinfeld [57] were the first to develop a dynamic algorithm that maintains an O(1)-approximate matching with polylog(n) amortized update time. Baswana, Gupta, and Sen [9] (FOCS’11) showed how to maintain a maximal matching of a graph in O(log n) amortized update time.

Later, Neiman and Solomon [56] (STOC’13) developed a dynamic algorithm that maintains a 3/2-approximate matching of an underlying graph G using a deterministic worst-case update time of Õ(√m), where m is the maximum number of edges that exist at any time in the graph G and Õ(f(n,m)) hides polylog(n) factors. Gupta and Peng [32] (FOCS’13) improved the approximation factor down to (1+ϵ) using the same update time as Neiman and Solomon [56]. Subsequently, Bhattacharya, Henzinger, and Italiano [13] (SODA’15) developed (3+ϵ)-approximate and (4+ϵ)-approximate dynamic matching algorithms using deterministic amortized and worst-case update times of O(ϵ^{−2} m^{1/3}), respectively. Bhattacharya, Henzinger, and Nanongkai [13] showed how to maintain a (2+ϵ)-approximate matching with poly(ϵ^{−1} log n) deterministic amortized update time. Solomon [64] (FOCS’16) showed that we can maintain a (2+ϵ)-approximate matching in constant amortized update time. Recently, Behnezhad, Łącki, and Mirrokni [11] (SODA’20) developed a (2−ϵ)-approximate dynamic matching algorithm with O(n^ϵ) update time.

Dynamic algorithms for MWM.

Anand, Baswana, Gupta, and Sen [1] were the first to study MWM in the dynamic model and showed how to maintain a matching whose expected weight is within a factor 4.9 of the optimal solution. Their algorithm has O(log n log C) update time, where C is the ratio of the highest edge weight to the smallest edge weight in the given graph. Later, Gupta and Peng [32] studied dynamic MWM and developed a deterministic (1+ϵ)-approximate algorithm for MWM using O(√m log N) update time, where edge weights are assumed to be in the range [1, N].

Stubbs and Vassilevska Williams [65] established a reduction from the dynamic MWM problem to the dynamic MCM problem. This reduction shows that if there exists an α-approximation algorithm for MCM with update time t(m,n) in a graph with m edges and n vertices, then there is a (2+ϵ)α-approximation algorithm for MWM with update time O(t(m,n)·ϵ^{−2} log⁴ n + log n·log log C + log log N), where C is the ratio between the largest edge weight N and the smallest edge weight L. Bernstein, Dudeja, and Langley [12] (STOC’21) developed two dynamic algorithms for the MWM problem. First, they present a randomized dynamic algorithm that maintains a (2+ϵ)-approximate MWM with worst-case update time (1/ϵ)^{O(1/ϵ)}·polylog(n)·log log C, with high probability against an adaptive adversary. Second, they propose a deterministic dynamic algorithm that maintains a (3/2+ϵ)-approximate MWM with update time (1/ϵ)^{O(1/ϵ)}·m^{1/4}·log log C.

Known dynamic algorithms for submodular functions.

The study of submodular maximization in the dynamic model was initiated by two independent works [43, 55]. Lattanzi et al. [43] presented a randomized dynamic algorithm that maintains an expected (2+ϵ)-approximate solution for a submodular function under a cardinality constraint k, with polylogarithmic update time over a dynamic sequence of updates 𝒮. Monemizadeh [55] presented a randomized dynamic algorithm with a (2+ϵ)-approximation guarantee and expected amortized query complexity polynomial in k and log n. An alternative algorithm with polylogarithmic update time was later proposed by Banihashem et al. [4]. These results were subsequently generalized by [5], who provided a dynamic algorithm for non-monotone submodular maximization under a cardinality constraint.

Chen and Peng [18] (STOC’22) show two lower bounds for submodular maximization in the dynamic model. Their first lower bound shows that any randomized algorithm that achieves an approximation ratio of (2−ϵ) for dynamic submodular maximization under a cardinality constraint k requires amortized query complexity n^{Ω̃(ϵ)}/k³. They also prove a stronger result by showing that any randomized algorithm for dynamic submodular maximization under a cardinality constraint k that obtains an approximation guarantee of 1.712 must have an amortized query complexity of Ω(n/k³).

The problem of fully dynamic algorithms for submodular maximization under a matroid constraint was recently studied in [23] and [6]. Dütting et al. [23] provide a (4+ϵ)-approximation algorithm with O((k²/ϵ)·log(k)·log²(n)·log³(k/ϵ)) amortized expected query complexity. Banihashem et al. [6] propose a (4+ϵ)-approximation algorithm with O(k·log(k)·log³(k/ϵ)) expected query complexity per update. While our method shares some similarities with theirs, their update times depend on the solution size k, which for matchings can be as large as Θ(n). Therefore, even if adapted to the matching setting, their approaches are unlikely to yield update times sublinear in n. In contrast, our update time depends polylogarithmically on n.

Known streaming algorithms for the MSM and MWM problems.

Chakrabarti and Kale [16] pioneered the first streaming algorithm for the maximum submodular matching (MSM) problem. They developed a one-pass streaming algorithm that uses O(n) space to compute a matching with a 7.75-approximation of the optimal submodular matching in the graph. Their work established a streaming framework for submodular maximization under matroid and matching constraints, including constraints defined by p-hypergraphs or the intersection of p matroids.

For the maximum weight matching (MWM) problem, Crouch and Stubbs [19] proposed a (4+ϵ)-approximation in the streaming model, improving on earlier algorithms by Zelke [69] and Epstein et al. [24]. Paz and Schwartzman [59] introduced the first (2+ϵ)-approximate streaming algorithm for MWM using the local-ratio technique. Ghaffari and Wajc [30] optimized the space complexity of this algorithm, reducing it from O(n log^2 n) to O(n log n) using a primal-dual method. The connection between the local-ratio and primal-dual methods was first explored by Bar-Yehuda and Rawitz [8].

Levin and Wajc [48] (SODA’21) improved the streaming algorithm for the MSM problem initially proposed by Chakrabarti and Kale [16]. They applied the (randomized) primal-dual method, also explored by Ghaffari and Wajc [30] for MWM in data streams, achieving a 5.828-approximation. This improves the 7.75-approximation from Chakrabarti and Kale [16] after almost a decade. We also note that Devanur, Jain, and Kleinberg [20] (SODA’13) first introduced a primal-dual analysis for online matching in bipartite graphs.

3 Preliminaries

Set notation.

For two natural numbers x<y, we define [x,y] := {x, x+1, …, y} and [x] := {1, 2, …, x}. For a set A and an element e, we denote by A+e the union of the two sets A and {e}. Similarly, for a set A and an element e∈A, we denote by A−e or A∖e the set A from which the element e is deleted. Given two ordered sets A and B such that A∩B=∅, we will use A+B to denote the set obtained by concatenating the two sets. For instance, if A=[a,b] and B=[c,d,e], then A+B=[a,b,c,d,e]. Throughout our results, we will find it convenient to group together closely related values using the notation (X,Y). In such cases, we often abbreviate indexing notation by using (X,Y)_0 instead of (X_0,Y_0).

Probability notations.

We will use the notations ℙ[X] and 𝔼[X] to denote the probability and the expectation of a random variable X, and use 𝔼[X|Y] to denote the conditional expectation of X given Y. In addition, for an event A, we define 𝟙[A] as the indicator function of A. That is, 𝟙[A] is set to one if A holds and is set to zero otherwise.

Graph notations.

Let G(V,E) be an unweighted graph with n=|V| vertices and m=|E| edges. The neighbors of an edge e are all edges e′∈E such that e and e′ share an endpoint. A matching M of G is a set of edges such that no two of them share an endpoint. We denote the number of edges in M (i.e., the size of M) by |M|.

Submodular function.

Given a ground set N, a function f : 2^N → ℝ_{≥0} is called submodular if it satisfies f(A∪{e}) − f(A) ≥ f(B∪{e}) − f(B) for all A⊆B⊆N and e∈N∖B. In this paper, we assume that f is normalized, i.e., f(∅)=0, and that it is monotone, i.e., it satisfies the additional property f(A∪{e}) − f(A) ≥ 0 for all A and e∉A. For a subset A⊆N and an element e∈N∖A, we denote by f(e|A) = f(A+e) − f(A) the marginal gain of adding e to A.
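To make these definitions concrete, here is a minimal Python sketch (illustrative only; the names are ours, not part of the paper's algorithm) of a coverage function, a standard example of a normalized monotone submodular function, together with the marginal gain f(e|A):

```python
# Coverage function: f(A) = size of the union of the sets covered by A.
# Coverage functions are normalized, monotone, and submodular.
def make_coverage(cover):
    """cover maps each ground-set element to the subset it covers."""
    def f(A):
        covered = set()
        for e in A:
            covered |= cover[e]
        return len(covered)
    return f

def marginal(f, e, A):
    """Marginal gain f(e | A) = f(A + e) - f(A)."""
    return f(set(A) | {e}) - f(A)

cover = {"a": {1, 2}, "b": {2, 3}, "c": {3}}
f = make_coverage(cover)
# Diminishing returns: the gain of "b" can only shrink as the set grows,
# e.g. marginal(f, "b", set()) = 2 while marginal(f, "b", {"a"}) = 1.
```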

Oracle model.

The conventional approach in the submodular literature [39, 23, 43, 26] is to assume the existence of an external oracle. This oracle takes an arbitrary set S as input and computes f(S). As is standard in submodular optimization, we measure the running time of the algorithm by the number of queries made to this oracle.

Oblivious adversarial model.

We work in the oblivious adversarial model as is common for analysis of dynamic algorithms [58, 10, 56, 38], as well as other randomized data structures such as universal hashing [15]. In this model, the sequence of insertions and deletions is determined by an adversary who knows the dynamic algorithm used, but crucially does not know the random bits used by the algorithm. This is equivalent to assuming that the adversary prepares the full sequence of insertions and deletions before the algorithm runs.

4 Dynamic algorithm for submodular matching

In this section, we present our dynamic algorithm for the submodular matching problem. We first introduce the notions of admissibility and extension in Section 4.1. Next, in Section 4.2, we overview our data structure, which uses caches, and explain how caches are constructed. We then describe the Estimate Sample Size operation, the building block we develop for cache construction to calculate the number of edges sampled for each cache, in Section 4.3. Finally, in Section 4.4, we show how to handle insertions and deletions. Throughout these sections, we assume a given parameter MAX approximating max_{e∈E} f(e) within a constant factor, that is, max_{e∈E} f(e) ≤ MAX ≤ 2·max_{e∈E} f(e). We also assume that deleted edges are not reinserted. In Section 4.5, we relax these assumptions to obtain a fully general algorithm.

4.1 Admissibility and extension

In this section, we define the notions of admissibility and extension that will be used in our algorithm. Given an edge e, let Neighbor(e,M) = {e′∈M : e′≠e and e′ shares an endpoint with e} denote the set of e’s neighbors in M, i.e., the edges in M that share an endpoint with e. As outlined in the introduction, an edge is admissible with respect to a matching if its marginal gain is sufficiently high compared to its neighbors in the matching. In order to formalize this argument, our algorithm will keep track of a matching M and two additional quantities for each cache: a matching union set U⊇M and a weight function W : U → ℝ_{≥0}. Together, these three quantities are referred to as a matching triple, as we formally define here.

Definition 2 (Matching triple).

We say the triple (M,U,W) is a matching triple if

  • M is a matching

  • U is a superset of M

  • W : U → ℝ_{≥0} is a weight function

We can now formally define the notion of admissibility, and a closely related notion of filtering, which will be used throughout our algorithm.

Definition 3 (Admissibility and filtering).

Let (M,U,W) be a matching triple (Definition 2), and let τ>0 be a threshold parameter. We say that edge e is admissible with respect to the triple (M,U,W) and threshold τ if its marginal gain is at least τ and at least twice the total weight of its neighbors in M. Formally,

Admissible(e,(M,U,W),τ) = True if f(e|U) ≥ max{τ, 2·W(Neighbor(e,M))}, and False otherwise,

where, for a set of edges A⊆U, W(A) denotes Σ_{e′∈A} W(e′).

Given a set of edges R, the filtering of R is the subset of edges of R that are admissible. Formally,

Filter(R,(M,U,W),τ) = {e∈R : Admissible(e,(M,U,W),τ) = True}.
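The two definitions above translate directly into code. The following Python sketch is illustrative only (edges are represented as frozensets of endpoints, f(e|U) is supplied as a callback, and the function names are ours):

```python
def neighbors(e, M):
    # Edges of the matching M that share an endpoint with e (excluding e).
    return {ep for ep in M if ep != e and ep & e}

def admissible(e, triple, tau, f_marginal):
    # Definition 3: admissible iff f(e | U) >= max(tau, 2 * W(Neighbor(e, M))).
    M, U, W = triple
    gain = f_marginal(e, U)
    return gain >= max(tau, 2 * sum(W[ep] for ep in neighbors(e, M)))

def filter_edges(R, triple, tau, f_marginal):
    # The filtering of R: the admissible subset of R.
    return {e for e in R if admissible(e, triple, tau, f_marginal)}
```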

While admissibility determines when an edge should be added to the matching, it does not specify how the triple (M,U,W) should change upon edge additions. To address this, we introduce the extension operator. Given a matching triple (M,U,W), a threshold τ>0, and an ordered set of edges S, we iterate through the edges e∈S in the given order. Whenever e is deemed admissible, we perform the following operations: we set the weight of e to its marginal gain with respect to U, i.e., f(e|U), add e to M, remove its neighbors from M, and finally, add e to U.

Algorithm 1 Extension of a matching triple.

We call the matching triple obtained by this procedure the extension of (M,U,W) along S and τ, denoted by Extend(S,(M,U,W),τ). Formal pseudocode is provided in Algorithm 1.
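A minimal Python sketch of this extension operator (illustrative only: edges are frozensets of endpoints and f(e|U) is a supplied callback) could look as follows:

```python
def extend(S, M, U, W, tau, f_marginal):
    # Iterate through S in order; whenever an edge is admissible, record
    # its marginal gain as its weight, swap it into the matching M in place
    # of its neighbors, and add it to the matching union set U.
    M, U, W = set(M), set(U), dict(W)
    for e in S:
        nbrs = {ep for ep in M if ep != e and ep & e}
        gain = f_marginal(e, U)
        if gain >= max(tau, 2 * sum(W[ep] for ep in nbrs)):
            W[e] = gain           # weight of e is f(e | U) at insertion time
            M = (M - nbrs) | {e}  # e replaces its neighbors in M
            U = U | {e}           # U only ever grows
    return M, U, W
```

Note that U retains replaced edges, which is what makes extending along S+S′ equivalent to extending along S and then along S′, a fact used in Section 4.3.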

4.2 Cache construction

Let MAX denote the maximum submodular value of a single edge in the graph and set τ_min := ϵ·MAX/n^4. As outlined in the introduction, our dynamic algorithm will maintain a step-by-step construction of the final matching, where each step is represented by a cache. Letting T denote the total number of steps in our construction, we will maintain T+1 caches C_0, …, C_T, where C_0 is the first cache and C_T is the final cache. Each cache C_ℓ consists of the following:

  • A matching triple (M,U,W)_ℓ.

  • A remaining set R_ℓ of edges that are admissible with respect to (M,U,W)_ℓ and τ_min.

  • A “snapshot” R_ℓ^{snap} that stores the value of R_ℓ at the last time the caches were constructed.

  • An (ordered) sample set S_ℓ. The edges in S_ℓ are sampled from R_{ℓ-1} during the construction of C_ℓ and are used to build the matching M_ℓ. Cache C_0 does not contain a sample set.

We will additionally use a set D to keep track of all deleted edges.

Next, we explain the cache construction process. Given a graph G=(V,E), we initialize cache C_0 with an empty matching triple and set its remaining set to contain all the edges admissible with respect to this triple and τ_min. Since (M,U,W)_0 is empty, R_0 is the set of all the edges that have submodular value at least τ_min. We will additionally initialize R_0^{snap} to R_0 and D to ∅. For each ℓ≥1, we build cache C_ℓ in the following four steps.

  1. Bucketing. We first bucket the edges e∈R_{ℓ-1} based on the value of f(e|U_{ℓ-1}). Each bucket corresponds to a range whose maximum-to-minimum ratio is 1+ϵ and contains the edges whose marginal gain falls within this range. We let B_{ℓ-1}^{max} denote the largest (maximum-cardinality) bucket and let [τ_ℓ, (1+ϵ)τ_ℓ) denote its corresponding range. We note that the number of buckets is logarithmic: every edge e∈R_{ℓ-1} satisfies f(e|U_{ℓ-1}) ≥ τ_min, since it is admissible with respect to cache ℓ-1, and f(e|U_{ℓ-1}) ≤ f(e) ≤ MAX, since f is submodular.

  2. Sampling. We choose a suitable sample size s_ℓ ≥ 1 and sample s_ℓ edges S_ℓ := [e_{ℓ,1}, …, e_{ℓ,s_ℓ}] uniformly at random from B_{ℓ-1}^{max}. (We use the notation [·] instead of {·} to denote S_ℓ because fixing the ordering of the edges of S_ℓ will later be important in our analysis. From an algorithmic perspective, iterating through S_ℓ in any random order would not change our algorithm, as S_ℓ is sampled uniformly at random.) We will discuss the choice of s_ℓ in Section 4.3.

  3. Matching. We build the matching triple of C_ℓ by extending the triple (M,U,W)_{ℓ-1} along S_ℓ to obtain (M,U,W)_ℓ := Extend(S_ℓ, (M,U,W)_{ℓ-1}, τ_ℓ).

  4. Filter. We filter the edges in R_{ℓ-1} with respect to the new triple (M,U,W)_ℓ to obtain the remaining set R_ℓ := Filter(R_{ℓ-1}, (M,U,W)_ℓ, τ_min).

After completing the above steps, we increment ℓ by one and repeat the process as long as R_{ℓ-1} ≠ ∅. Once this condition fails, we terminate the construction and let T denote the index of the last non-empty cache. During the construction, we also update the set R_ℓ^{snap} for each cache C_ℓ by setting it to R_ℓ. Formal pseudocode is provided in Algorithm 2.

Algorithm 2 Initialization and cache construction.
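The bucketing step in particular is easy to sketch. The snippet below is an illustrative Python fragment under our own naming (the marginal gains f(e|U_{ℓ-1}) are assumed precomputed in a dictionary); it groups edges into geometric buckets of ratio 1+ϵ above τ_min and returns the largest bucket together with its lower threshold τ_ℓ:

```python
import math

def largest_bucket(gains, tau_min, eps):
    # Bucket j holds the edges whose marginal gain lies in
    # [tau_min * (1+eps)^j, tau_min * (1+eps)^(j+1)); since all gains lie
    # in [tau_min, MAX], only O(log_{1+eps}(MAX / tau_min)) buckets arise.
    buckets = {}
    for e, g in gains.items():
        j = int(math.log(g / tau_min, 1 + eps))
        buckets.setdefault(j, set()).add(e)
    j_max = max(buckets, key=lambda j: len(buckets[j]))
    return buckets[j_max], tau_min * (1 + eps) ** j_max  # (bucket, tau)
```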

4.3 Estimate sample size

A key part of the cache construction algorithm is the choice of the sample size s_ℓ, which we explain in this section. Intuitively, choosing a small sample size ensures that a larger fraction of the edges in S_ℓ will be added to the matching and thus appear in U_ℓ. In particular, the first sampled edge e_{ℓ,1} is always added to the matching because all edges in R_{ℓ-1} are admissible with respect to (M,U,W)_{ℓ-1}. On the other hand, choosing a large s_ℓ results in more edges being removed during the filtering step. This reduces the number of caches in the algorithm, which is important for bounding the algorithm’s running time. To formalize these concepts, we define the notion of a suitable sample size as follows.

Definition 4 (Suitable sample size).

Let (M,U,W) be a matching triple, B⊆E be a set of edges, and let τ>0 be a threshold. Let s∈[|B|] be an integer and let S be a uniformly random subset of size s from B. We denote the output of Extend(S,(M,U,W),τ) by the triple (M′,U′,W′).

We say s is a suitable sample size if

𝔼[|S∩U′|] ≥ (1−2ϵ)s,  and  𝔼[|Filter(B,(M′,U′,W′),τ)|] ≤ (1−ϵ/2)|B|.

We will denote by Suit(B,(M,U,W),τ) the set of all such s, and in particular, denote by Suit_ℓ the set Suit(B_{ℓ-1}^{max}, (M,U,W)_{ℓ-1}, τ_ℓ).

In order to choose a suitable sample size, we will find the largest sample size s such that each of the samples S[1], …, S[s] is added to the matching with probability at least 1−ϵ. This firstly ensures that, in expectation, at least (1−ϵ)s edges are added to the matching. Additionally, since we are choosing the largest such sample size, the probability that the (s+1)-th edge is added to the output matching is strictly less than 1−ϵ, which will allow us to bound the size of the filtered set.

Since exactly calculating the probability that each S[i] is added to the matching is inefficient, we instead estimate these probabilities by simulating the extension of (M,U,W). Letting t be a parameter to be specified later, we perform t runs, each time randomly permuting the elements of B to obtain B′ and running the extension process on B′. In each run, for every j∈[|B|], we check whether B′[j] is added to the matching union set. We then average over the t runs to estimate the probability that each S[j] is added to the matching. Using a polylogarithmic value for t, we ensure the estimation error is bounded by ϵ. Finally, we use these estimates to find a suitable s. Formal pseudocode is provided in Algorithm 3. We prove that the algorithm finds a suitable sample size with high probability.

Algorithm 3 Estimate Sample Size.

We note that our approach crucially relies on the fact that extending a matching triple along a set S+S′ is equivalent to first extending it along S and then extending it along S′. This implies that, as long as s≥i, the exact value of s does not affect the probability that S[i] is added to the matching. This allows us to use a single run of the extension operator to gather statistics about all of the values i∈[|B|], rather than using a separate run for each value. This ensures that the running time of EstimateSampleSize is (almost) linear in |B|, rather than quadratic, which will be important when we analyze the amortized update time of our algorithm.
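The simulation described above can be sketched as follows. This is an illustrative Python version with our own simplifications: the triple starts empty, edges are frozensets of endpoints, and a modular per-edge value stands in for the marginal gain f(e|U):

```python
import random

def estimate_sample_size(B, tau, eps, t, val):
    # For each of t runs, extend an initially empty matching triple along a
    # random permutation of B and record, per position j, whether the j-th
    # edge joins the matching.
    hits = [0] * len(B)
    for _ in range(t):
        perm = random.sample(B, len(B))  # uniformly random ordering of B
        M, W = set(), {}
        for j, e in enumerate(perm):
            nbrs = {ep for ep in M if ep & e}
            if val[e] >= max(tau, 2 * sum(W[ep] for ep in nbrs)):
                W[e] = val[e]
                M = (M - nbrs) | {e}
                hits[j] += 1
    # Largest s such that every position 1..s is hit with estimated
    # probability at least 1 - eps (the first edge is always admissible).
    s = 0
    while s < len(B) and hits[s] / t >= 1 - eps:
        s += 1
    return max(s, 1)
```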

4.4 Insertions and deletions

In this section, we describe how we handle insertions and deletions of edges in our algorithm. After deleting an edge, we remove it from all remaining sets R_ℓ. Next, we check each cache C_ℓ to see if a significant proportion of its latest snapshot has been deleted (i.e., an ϵ/log_{1+ϵ}(n^4) fraction of its size). If this is the case, we reconstruct the caches starting from C_{ℓ+1}. The formal pseudocode can be found in Algorithm 4.

Regarding insertions, when we insert an edge e, we start from ℓ=0 and add e to the remaining set R_ℓ of each cache until we reach a point where e is no longer admissible, or we detect that too many edges have been added to R_ℓ since its last snapshot. In the latter case, which occurs when |R_ℓ∖R_ℓ^{snap}| is greater than an ϵ/log_{1+ϵ}(n^4) fraction of |R_ℓ^{snap}|, we reconstruct all the caches starting from C_{ℓ+1}. The formal pseudocode can be found in Algorithm 4.

Algorithm 4 Insert and Delete.
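The deletion-side bookkeeping can be sketched as follows. This is an illustrative Python fragment; the cache fields "R", "Rsnap", and "deleted" and the returned rebuild index are our own simplifications of Algorithm 4:

```python
import math

def delete_edge(e, caches, n, eps):
    # Remove e from every remaining set; if some cache has lost more than
    # an eps / log_{1+eps}(n^4) fraction of its snapshot, report the index
    # from which the caches should be reconstructed.
    frac = eps / math.log(n ** 4, 1 + eps)
    for ell, c in enumerate(caches):
        c["R"].discard(e)
        if e in c["Rsnap"]:
            c["deleted"].add(e)
        if len(c["deleted"]) > frac * len(c["Rsnap"]):
            return ell + 1   # rebuild caches starting from ell + 1
    return None              # no reconstruction needed
```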

4.5 Relaxing simplifying assumptions

Up to now, we have assumed that we are given a parameter MAX such that MAX/max_{e∈E} f(e) ∈ [1,2], and assumed that deleted edges are not reinserted. With these assumptions, our dynamic algorithm provides an (8+ϵ)-approximation ratio using polylogarithmic update time, as formalized by the following theorem.

Theorem 5.

Given a (multi)graph G such that the number of edges present in G (i.e., inserted but not yet deleted) is always bounded by n^2, the dynamic algorithm for submodular maximization specified by Algorithms 2 and 4 has an expected amortized poly(log n, 1/ϵ) update time. In addition, whenever MAX/max_{e∈E} f(e) ∈ [1,2], the output M_T∖D provides an (8+cϵ) expected approximation factor for some universal constant c, i.e., 𝔼[f(M_T∖D)] ≥ (1/(8+cϵ))·f(M*), where M* denotes the optimal matching.

The above result considers a multigraph instead of a simple graph, as we handle re-insertions by inserting parallel copies of the reinserted edges.

In order to relax the assumption that MAX is known, we maintain parallel runs of the algorithm with different values of the MAX parameter. Formally, we consider a run of the above algorithm with MAX = 2^i for every integer i, though we do not actually instantiate most of these runs, as no element will ever be inserted into them. Whenever an edge e is inserted, we insert it into the runs whose MAX parameter satisfies f(e) ∈ [ϵ·MAX/n^4, MAX]. If a run was not initialized before, we initialize it at this point. Whenever an edge is deleted, we delete it from the same runs as before. Formal pseudocode is provided in Algorithm 5.

At any time, i.e., after any insertion or deletion, our overall output is the set with the maximum submodular value across all parallel runs. Since after each update, at least one of the runs uses a valid guess for MAX, the output satisfies the same (8+ϵ) approximation as before. Furthermore, because each update affects only a polylogarithmic number of runs, the time complexity remains polylogarithmic.

Algorithm 5 Unknown MAX.
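As a quick illustration of this scheme, the following Python helper (our own sketch, not part of the paper) computes which geometric guesses MAX = 2^i an inserted edge belongs to; their number is logarithmic in n/ϵ:

```python
import math

def runs_for_edge(fe, n, eps):
    # An edge with value f(e) = fe goes into every run whose guess
    # MAX = 2^i satisfies fe in [eps * MAX / n^4, MAX], i.e.
    # fe <= 2^i <= fe * n^4 / eps.
    lo = math.ceil(math.log2(fe))                  # smallest valid i
    hi = math.floor(math.log2(fe * n ** 4 / eps))  # largest valid i
    return list(range(lo, hi + 1))
```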

References

  • [1] Abhash Anand, Surender Baswana, Manoj Gupta, and Sandeep Sen. Maintaining approximate maximum weighted matching in fully dynamic graphs. In IARCS Annual Conference on Foundations of Software Technology and Theoretical Computer Science, FSTTCS 2012, December 15-17, 2012, Hyderabad, India, volume 18 of LIPIcs, pages 257–266. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2012. doi:10.4230/LIPIcs.FSTTCS.2012.257.
  • [2] Moshe Babaioff, Nicole Immorlica, David Kempe, and Robert Kleinberg. Matroid secretary problems. J. ACM, 65(6):35:1–35:26, 2018. doi:10.1145/3212512.
  • [3] Ramakrishna Bairi, Rishabh K. Iyer, Ganesh Ramakrishnan, and Jeff A. Bilmes. Summarization of multi-document topic hierarchies using submodular mixtures. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing, ACL 2015, July 26-31, 2015, Beijing, China, Volume 1: Long Papers, pages 553–563. The Association for Computer Linguistics, 2015. doi:10.3115/v1/p15-1054.
  • [4] Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. Dynamic constrained submodular optimization with polylogarithmic update time. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 1660–1691. PMLR, 2023. URL: https://proceedings.mlr.press/v202/banihashem23a.html.
  • [5] Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. Dynamic non-monotone submodular maximization. In Alice Oh, Tristan Naumann, Amir Globerson, Kate Saenko, Moritz Hardt, and Sergey Levine, editors, Advances in Neural Information Processing Systems 36: Annual Conference on Neural Information Processing Systems 2023, NeurIPS 2023, New Orleans, LA, USA, December 10 - 16, 2023, 2023. URL: http://papers.nips.cc/paper_files/paper/2023/hash/387982dbf23d9975c7fc45813dd3dabc-Abstract-Conference.html.
  • [6] Kiarash Banihashem, Leyla Biabani, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. Dynamic algorithms for matroid submodular maximization. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3485–3533. SIAM, 2024. doi:10.1137/1.9781611977912.125.
  • [7] Kiarash Banihashem, Samira Goudarzi, MohammadTaghi Hajiaghayi, Peyman Jabbarzade, and Morteza Monemizadeh. A dynamic algorithm for weighted submodular cover problem. In Proceedings of the 41st International Conference on Machine Learning, ICML’24. JMLR.org, 2024.
  • [8] Reuven Bar-Yehuda and Dror Rawitz. On the equivalence between the primal-dual schema and the local ratio technique. SIAM J. Discret. Math., 19(3):762–797, 2005. doi:10.1137/050625382.
  • [9] S. Baswana, M. Gupta, and S. Sen. Fully dynamic maximal matching in O(log n) update time. In Proceedings of the 52nd IEEE Symposium on Foundations of Computer Science (FOCS), pages 383–392, 2011.
  • [10] Surender Baswana, Manoj Gupta, and Sandeep Sen. Fully dynamic maximal matching in O(log n) update time. SIAM J. Comput., 44(1):88–113, 2015. doi:10.1137/130914140.
  • [11] Soheil Behnezhad, Jakub Łącki, and Vahab S. Mirrokni. Fully dynamic matching: Beating 2-approximation in Δ^ϵ update time. In Proceedings of the 2020 ACM-SIAM Symposium on Discrete Algorithms, SODA 2020, Salt Lake City, UT, USA, January 5-8, 2020, pages 2492–2508. SIAM, 2020. doi:10.1137/1.9781611975994.152.
  • [12] Aaron Bernstein, Aditi Dudeja, and Zachary Langley. A framework for dynamic matching in weighted graphs. In STOC ’21: 53rd Annual ACM SIGACT Symposium on Theory of Computing, Virtual Event, Italy, June 21-25, 2021, pages 668–681. ACM, 2021. doi:10.1145/3406325.3451113.
  • [13] Sayan Bhattacharya, Monika Henzinger, and Giuseppe F. Italiano. Deterministic fully dynamic data structures for vertex cover and matching. In Proceedings of the Twenty-Sixth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2015, San Diego, CA, USA, January 4-6, 2015, pages 785–804, 2015. doi:10.1137/1.9781611973730.54.
  • [14] Gruia Călinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a monotone submodular function subject to a matroid constraint. SIAM J. Comput., 40(6):1740–1766, 2011. doi:10.1137/080733991.
  • [15] Larry Carter and Mark N. Wegman. Universal classes of hash functions (extended abstract). In Proceedings of the 9th Annual ACM Symposium on Theory of Computing, May 4-6, 1977, Boulder, Colorado, USA, pages 106–112, 1977. doi:10.1145/800105.803400.
  • [16] Amit Chakrabarti and Sagar Kale. Submodular maximization meets streaming: Matchings, matroids, and more. Mathematical Programming, 154(1):225–247, 2015. doi:10.1007/S10107-015-0900-7.
  • [17] Chandra Chekuri, Jan Vondrák, and Rico Zenklusen. Multi-budgeted matchings and matroid intersection via dependent rounding. In Proceedings of the Twenty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2011, San Francisco, California, USA, January 23-25, 2011, pages 1080–1097. SIAM, 2011. doi:10.1137/1.9781611973082.82.
  • [18] Xi Chen and Binghui Peng. On the complexity of dynamic submodular maximization. In Proceedings of the Fifty-Fourth Annual ACM on Symposium on Theory of Computing, STOC 2022, to appear, 2022. arXiv:2111.03198.
  • [19] Michael S. Crouch and Daniel M. Stubbs. Improved streaming algorithms for weighted matching, via unweighted matching. In Klaus Jansen, José D. P. Rolim, Nikhil R. Devanur, and Cristopher Moore, editors, Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2014, September 4-6, 2014, Barcelona, Spain, volume 28 of LIPIcs, pages 96–104. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2014. doi:10.4230/LIPIcs.APPROX-RANDOM.2014.96.
  • [20] Nikhil R. Devanur, Kamal Jain, and Robert D. Kleinberg. Randomized primal-dual analysis of RANKING for online bipartite matching. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 101–107. SIAM, 2013. doi:10.1137/1.9781611973105.7.
  • [21] John P. Dickerson, Karthik Abinav Sankararaman, Aravind Srinivasan, and Pan Xu. Balancing relevance and diversity in online bipartite matching via submodularity. In The Thirty-Third AAAI Conference on Artificial Intelligence, AAAI 2019, The Thirty-First Innovative Applications of Artificial Intelligence Conference, IAAI 2019, The Ninth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2019, Honolulu, Hawaii, USA, January 27 - February 1, 2019, pages 1877–1884. AAAI Press, 2019. doi:10.1609/aaai.v33i01.33011877.
  • [22] Ran Duan and Seth Pettie. Linear-time approximation for maximum weight matching. J. ACM, 61(1):1:1–1:23, 2014. doi:10.1145/2529989.
  • [23] Paul Duetting, Federico Fusco, Silvio Lattanzi, Ashkan Norouzi-Fard, and Morteza Zadimoghaddam. Fully dynamic submodular maximization over matroids. In International Conference on Machine Learning, ICML 2023, 23-29 July 2023, Honolulu, Hawaii, USA, volume 202 of Proceedings of Machine Learning Research, pages 8821–8835. PMLR, 2023. URL: https://proceedings.mlr.press/v202/duetting23a.html.
  • [24] Leah Epstein, Asaf Levin, Julián Mestre, and Danny Segev. Improved approximation guarantees for weighted matching in the semi-streaming model. SIAM J. Discret. Math., 25(3):1251–1265, 2011. doi:10.1137/100801901.
  • [25] Matthew Fahrbach, Vahab S. Mirrokni, and Morteza Zadimoghaddam. Submodular maximization with nearly optimal approximation, adaptivity and query complexity. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019, San Diego, California, USA, January 6-9, 2019, pages 255–273. SIAM, 2019. doi:10.1137/1.9781611975482.17.
  • [26] Uriel Feige, Vahab S. Mirrokni, and Jan Vondrák. Maximizing non-monotone submodular functions. SIAM J. Comput., 40(4):1133–1153, 2011. doi:10.1137/090779346.
  • [27] Moran Feldman, Joseph Naor, Roy Schwartz, and Justin Ward. Improved approximations for k-exchange systems. In Algorithms–ESA 2011: 19th Annual European Symposium, Saarbrücken, Germany, September 5-9, 2011. Proceedings 19, pages 784–798. Springer, 2011.
  • [28] Marshall L Fisher, George L Nemhauser, and Laurence A Wolsey. An analysis of approximations for maximizing submodular set functions—II. Springer, 1978.
  • [29] Harold N. Gabow and Robert Endre Tarjan. Faster scaling algorithms for general graph-matching problems. J. ACM, 38(4):815–853, 1991. doi:10.1145/115234.115366.
  • [30] Mohsen Ghaffari and David Wajc. Simplified and space-optimal semi-streaming (2+epsilon)-approximate matching. In 2nd Symposium on Simplicity in Algorithms, SOSA 2019, January 8-9, 2019, San Diego, CA, USA, volume 69 of OASIcs, pages 13:1–13:8. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/OASIcs.SOSA.2019.13.
  • [31] Anupam Gupta, Aaron Roth, Grant Schoenebeck, and Kunal Talwar. Constrained non-monotone submodular maximization: Offline and secretary algorithms. In Internet and Network Economics - 6th International Workshop, WINE 2010, Stanford, CA, USA, December 13-17, 2010. Proceedings, volume 6484 of Lecture Notes in Computer Science, pages 246–257. Springer, 2010. doi:10.1007/978-3-642-17572-5_20.
  • [32] Manoj Gupta and Richard Peng. Fully dynamic (1+ϵ)-approximate matchings. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA, USA, pages 548–557. IEEE Computer Society, 2013. doi:10.1109/FOCS.2013.65.
  • [33] Kathrin Hanauer, Monika Henzinger, and Christian Schulz. Recent advances in fully dynamic graph algorithms. arXiv preprint arXiv:2102.11169, 2021.
  • [34] Kathrin Hanauer, Monika Henzinger, and Christian Schulz. Recent advances in fully dynamic graph algorithms (invited talk). In 1st Symposium on Algorithmic Foundations of Dynamic Networks, SAND 2022, March 28-30, 2022, Virtual Conference, volume 221 of LIPIcs, pages 1:1–1:47. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.SAND.2022.1.
  • [35] Monika Henzinger. Modern dynamic data structures (invited talk). In 47th International Symposium on Mathematical Foundations of Computer Science, MFCS 2022, August 22-26, 2022, Vienna, Austria, volume 241 of LIPIcs, pages 2:1–2:2. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.MFCS.2022.2.
  • [36] Monika Rauch Henzinger and Valerie King. Randomized fully dynamic graph algorithms with polylogarithmic time per operation. J. ACM, 46(4):502–516, 1999. doi:10.1145/320211.320215.
  • [37] John E. Hopcroft and Richard M. Karp. An n^{5/2} algorithm for maximum matchings in bipartite graphs. SIAM J. Comput., 2(4):225–231, 1973. doi:10.1137/0202019.
  • [38] Bruce M. Kapron, Valerie King, and Ben Mountjoy. Dynamic graph connectivity in polylogarithmic worst case time. In Proceedings of the Twenty-Fourth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2013, New Orleans, Louisiana, USA, January 6-8, 2013, pages 1131–1142. SIAM, 2013. doi:10.1137/1.9781611973105.81.
  • [39] Ehsan Kazemi, Marko Mitrovic, Morteza Zadimoghaddam, Silvio Lattanzi, and Amin Karbasi. Submodular streaming in all its glory: Tight approximation, minimum memory and low adaptive complexity. In Proceedings of the 36th International Conference on Machine Learning, ICML 2019, 9-15 June 2019, Long Beach, California, USA, volume 97 of Proceedings of Machine Learning Research, pages 3311–3320. PMLR, 2019. URL: http://proceedings.mlr.press/v97/kazemi19a.html.
  • [40] Robert Kleinberg and S. Matthew Weinberg. Matroid prophet inequalities. In Proceedings of the 44th Symposium on Theory of Computing Conference, STOC 2012, New York, NY, USA, May 19 - 22, 2012, pages 123–136. ACM, 2012. doi:10.1145/2213977.2213991.
  • [41] Nitish Korula, Vahab S. Mirrokni, and Morteza Zadimoghaddam. Online submodular welfare maximization: Greedy beats 1/2 in random order. SIAM J. Comput., 47(3):1056–1086, 2018. doi:10.1137/15M1051142.
  • [42] Andreas Krause. Submodularity in machine learning and vision. In Tilo Burghardt, Dima Damen, Walterio W. Mayol-Cuevas, and Majid Mirmehdi, editors, British Machine Vision Conference, BMVC 2013, Bristol, UK, September 9-13, 2013. BMVA Press, 2013. doi:10.5244/C.27.2.
  • [43] Silvio Lattanzi, Slobodan Mitrovic, Ashkan Norouzi-Fard, Jakub Tarnawski, and Morteza Zadimoghaddam. Fully dynamic algorithm for constrained submodular optimization. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL: https://proceedings.neurips.cc/paper/2020/hash/9715d04413f296eaf3c30c47cec3daa6-Abstract.html.
  • [44] Jon Lee, Vahab S Mirrokni, Viswanath Nagarajan, and Maxim Sviridenko. Non-monotone submodular maximization under matroid and knapsack constraints. In Proceedings of the forty-first annual ACM symposium on Theory of computing, pages 323–332, 2009. doi:10.1145/1536414.1536459.
  • [45] Jon Lee, Maxim Sviridenko, and Jan Vondrák. Matroid matching: the power of local search. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 369–378. ACM, 2010. doi:10.1145/1806689.1806741.
  • [46] Jon Lee, Maxim Sviridenko, and Jan Vondrák. Submodular maximization over multiple matroids via generalized exchange properties. Mathematics of Operations Research, 35(4):795–806, 2010. doi:10.1287/MOOR.1100.0463.
  • [47] Benny Lehmann, Daniel Lehmann, and Noam Nisan. Combinatorial auctions with decreasing marginal utilities. Games and Economic Behavior, 55(2):270–296, 2006. Mini Special Issue: Electronic Market Design. doi:10.1016/j.geb.2005.02.006.
  • [48] Roie Levin and David Wajc. Streaming submodular matching meets the primal-dual method. In Dániel Marx, editor, Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms, SODA 2021, Virtual Conference, January 10 - 13, 2021, pages 1914–1933. SIAM, 2021. doi:10.1137/1.9781611976465.114.
  • [49] Hui Lin and Jeff Bilmes. Word alignment via submodular maximization over matroids. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 170–175, Portland, Oregon, USA, June 2011. Association for Computational Linguistics. URL: https://aclanthology.org/P11-2030.
  • [50] Hui Lin and Jeff A. Bilmes. A class of submodular functions for document summarization. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA, pages 510–520. The Association for Computer Linguistics, 2011. URL: https://www.aclweb.org/anthology/P11-1052/.
  • [51] Hui Lin and Jeff A. Bilmes. Word alignment via submodular maximization over matroids. In The 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference, 19-24 June, 2011, Portland, Oregon, USA - Short Papers, pages 170–175. The Association for Computer Linguistics, 2011. URL: https://aclanthology.org/P11-2030/.
  • [52] László Lovász and Michael D. Plummer. Matching Theory. North-Holland, Amsterdam-New York, 1986.
  • [53] Aleksander Madry. Navigating central path with electrical flows: From flows to matchings, and back. In 54th Annual IEEE Symposium on Foundations of Computer Science, FOCS 2013, 26-29 October, 2013, Berkeley, CA, USA, pages 253–262. IEEE Computer Society, 2013. doi:10.1109/FOCS.2013.35.
  • [54] Dimitris Magos, Ioannis Mourtos, and Leonidas S. Pitsoulis. The matching predicate and a filtering scheme based on matroids. J. Comput., 1(6):37–42, 2006. doi:10.4304/jcp.1.6.37-42.
  • [55] Morteza Monemizadeh. Dynamic submodular maximization. In Hugo Larochelle, Marc’Aurelio Ranzato, Raia Hadsell, Maria-Florina Balcan, and Hsuan-Tien Lin, editors, Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual, 2020. URL: https://proceedings.neurips.cc/paper/2020/hash/6fbd841e2e4b2938351a4f9b68f12e6b-Abstract.html.
  • [56] Ofer Neiman and Shay Solomon. Simple deterministic algorithms for fully dynamic maximal matching. In Symposium on Theory of Computing Conference, STOC’13, Palo Alto, CA, USA, June 1-4, 2013, pages 745–754. ACM, 2013. doi:10.1145/2488608.2488703.
  • [57] Krzysztof Onak and Ronitt Rubinfeld. Maintaining a large matching and a small vertex cover. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 457–464. ACM, 2010. doi:10.1145/1806689.1806753.
  • [58] Krzysztof Onak and Ronitt Rubinfeld. Maintaining a large matching and a small vertex cover. In Proceedings of the 42nd ACM Symposium on Theory of Computing, STOC 2010, Cambridge, Massachusetts, USA, 5-8 June 2010, pages 457–464. ACM, 2010. doi:10.1145/1806689.1806753.
  • [59] Ami Paz and Gregory Schwartzman. A (2+ϵ)-approximation for maximum weight matching in the semi-streaming model. In Proceedings of the Twenty-Eighth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2017, Barcelona, Spain, Hotel Porta Fira, January 16-19, pages 2153–2161. SIAM, 2017. doi:10.1137/1.9781611974782.140.
  • [60] Binghui Peng and Aviad Rubinstein. Fully-dynamic-to-incremental reductions with known deletion order (e.g. sliding window). In 2023 Symposium on Simplicity in Algorithms, SOSA 2023, Florence, Italy, January 23-25, 2023, pages 261–271. SIAM, 2023. doi:10.1137/1.9781611977585.ch24.
  • [61] Seth Pettie and Peter Sanders. A simpler linear time 2/3 − ϵ approximation for maximum weight matching. Inf. Process. Lett., 91(6):271–276, 2004. doi:10.1016/j.ipl.2004.05.007.
  • [62] Robert Preis. Linear time 1/2-approximation algorithm for maximum weighted matching in general graphs. In STACS 99, 16th Annual Symposium on Theoretical Aspects of Computer Science, Trier, Germany, March 4-6, 1999, Proceedings, volume 1563 of Lecture Notes in Computer Science, pages 259–269. Springer, 1999. doi:10.1007/3-540-49116-3_24.
  • [63] Monika Rauch. Fully dynamic biconnectivity in graphs. In 33rd Annual Symposium on Foundations of Computer Science, Pittsburgh, Pennsylvania, USA, 24-27 October 1992, pages 50–59. IEEE Computer Society, 1992. doi:10.1109/SFCS.1992.267819.
  • [64] Shay Solomon. Fully dynamic maximal matching in constant update time. In IEEE 57th Annual Symposium on Foundations of Computer Science, FOCS 2016, 9-11 October 2016, Hyatt Regency, New Brunswick, New Jersey, USA, pages 325–334. IEEE Computer Society, 2016. doi:10.1109/FOCS.2016.43.
  • [65] Daniel Stubbs and Virginia Vassilevska Williams. Metatheorems for dynamic weighted matching. In 8th Innovations in Theoretical Computer Science Conference, ITCS 2017, January 9-11, 2017, Berkeley, CA, USA, volume 67 of LIPIcs, pages 58:1–58:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2017. doi:10.4230/LIPIcs.ITCS.2017.58.
  • [66] Daisuke Takafuji, Satoshi Taoka, and Toshimasa Watanabe. Efficient approximation algorithms for the maximum weight matching problem. In Proceedings of the 2002 International Symposium on Circuits and Systems, ISCAS 2002, Scottsdale, Arizona, USA, May 26-29, 2002, pages 457–460. IEEE, 2002. doi:10.1109/ISCAS.2002.1010491.
  • [67] Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In Proceedings of the 40th Annual ACM Symposium on Theory of Computing, Victoria, British Columbia, Canada, May 17-20, 2008, pages 67–74. ACM, 2008. doi:10.1145/1374376.1374389.
  • [68] Kai Wei, Rishabh K. Iyer, and Jeff A. Bilmes. Submodularity in data subset selection and active learning. In Proceedings of the 32nd International Conference on Machine Learning, ICML 2015, Lille, France, 6-11 July 2015, volume 37 of JMLR Workshop and Conference Proceedings, pages 1954–1963. JMLR.org, 2015. URL: http://proceedings.mlr.press/v37/wei15.html.
  • [69] Mariano Zelke. Weighted matching in the semi-streaming model. In STACS 2008, 25th Annual Symposium on Theoretical Aspects of Computer Science, Bordeaux, France, February 21-23, 2008, Proceedings, volume 1 of LIPIcs, pages 669–680. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, Germany, 2008. doi:10.4230/LIPIcs.STACS.2008.1312.