
Near-Optimal Algorithm for Directed Expander Decompositions

Aurelio L. Sulser, ETH Zurich, Switzerland; Maximilian Probst Gutenberg, ETH Zurich, Switzerland
Abstract

In this work, we present the first algorithm to compute expander decompositions in an m-edge directed graph with near-optimal time Õ(m). (Throughout this article, we use Õ(·) notation to suppress factors logarithmic in m, i.e. O(m log^c m) = Õ(m) for every constant c > 0.) Further, our algorithm can maintain such a decomposition in a dynamic graph, again with near-optimal update times. Our result improves over previous algorithms [2, 18], which were only optimal up to subpolynomial factors.

In order to obtain our new algorithm, we present a new push-pull-relabel flow framework that generalizes the classic push-relabel flow algorithm [14] which was later dynamized for computing expander decompositions in undirected graphs [16, 31]. We then show that the flow problems formulated in recent work [18] to decompose directed graphs can be solved much more efficiently in the push-pull-relabel flow framework.

Recently, our algorithm has already been employed to obtain the currently fastest algorithm to compute min-cost flows [33]. We further believe that our algorithm can be used to speed-up and simplify recent breakthroughs in combinatorial graph algorithms towards fast maximum flow algorithms [11, 12, 1].

Keywords and phrases:
Directed Expander Decomposition, Push-Pull-Relabel Algorithm
Category:
Track A: Algorithms, Complexity and Games
Copyright and License:
© Aurelio L. Sulser and Maximilian Probst Gutenberg; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation Dynamic graph algorithms
Acknowledgements:
The authors are thankful for many fruitful discussions and feedback on an earlier version of the paper from Rasmus Kyng, Simon Meierhans and Thatchaphol Saranurak.
Funding:
The research leading to these results has received funding from the starting grant “A New Paradigm for Flow and Cut Algorithms” (no. TMSGI2_218022) of the Swiss National Science Foundation.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

Over the past two decades, expanders and expander decompositions have been pivotal in advancing on fundamental algorithmic graph problems. The development and application of the first fast algorithm to compute near-expander decompositions was given in the development of the first near-linear time Laplacian solvers [32], a breakthrough in modern graph algorithms. Subsequently, a line of research [16, 36, 28, 29] has focused on strengthening this result by developing fast flow-based pruning techniques that refine near-expander decompositions into expander decompositions. This line of research culminated in [31] where a new, faster, simpler, and more user-friendly expander decomposition framework was presented. This advancement has catalyzed the widespread use of expander decompositions as a tool in graph algorithms and was instrumental in the recent surge of applications of expander decompositions in both static and dynamic graph settings for various cut, flow, and shortest path problems [32, 21, 16, 36, 28, 29, 10, 2, 26, 5, 35, 30, 6, 25, 13, 15, 9, 3, 3, 24, 19, 34, 23, 7, 20, 12, 1, 33]. In this work, we study the problem of computing and maintaining expander decompositions in directed graphs, defined as follows.

Definition 1 (Directed Expander Decomposition).

Given an m-edge directed graph G, we say a partition 𝒳 of the vertex set of G and a subset of edges E^r ⊆ E form a (β,ϕ)-expander decomposition if

  1. ∀X ∈ 𝒳, G[X] is a ϕ-expander, meaning for all cuts (S, S̄): min{e_G(S, S̄), e_G(S̄, S)} ≥ ϕ · min{vol_G(S), vol_G(S̄)}, and

  2. |E^r| ≤ βϕm, and

  3. the graph (G ∖ E^r)/𝒳, that is, the graph G minus the edges in E^r where the expander components in 𝒳 are contracted into supernodes, is a directed acyclic graph (DAG).
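For intuition, the expansion condition of Item 1 can be checked by brute force on toy instances. The following sketch (our own helper, exponential in the number of vertices and thus for illustration only) tests the cut condition directly.

```python
from itertools import combinations

def is_phi_expander(vertices, edges, phi):
    """Brute-force check of Item 1: for every cut (S, Sbar),
    min(e_G(S, Sbar), e_G(Sbar, S)) >= phi * min(vol_G(S), vol_G(Sbar)).
    Exponential in |V|; for toy instances only."""
    deg = {v: 0 for v in vertices}
    for (u, v) in edges:
        deg[u] += 1          # deg counts outgoing ...
        deg[v] += 1          # ... and incoming edges
    vs = sorted(vertices)
    for r in range(1, len(vs)):
        for S in map(set, combinations(vs, r)):
            Sbar = set(vs) - S
            out_cut = sum(1 for (u, v) in edges if u in S and v in Sbar)
            in_cut = sum(1 for (u, v) in edges if u in Sbar and v in S)
            vol_S = sum(deg[v] for v in S)
            vol_Sbar = sum(deg[v] for v in Sbar)
            if min(out_cut, in_cut) < phi * min(vol_S, vol_Sbar):
                return False
    return True
```

On a directed triangle, every singleton cut has one edge in each direction and volume 2, so the graph is a 1/2-expander, while a directed path fails for any ϕ > 0.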

In our algorithm, we implicitly maintain an ordering of the partition sets in 𝒳 and let E^r be the edges that go 'backward' in this ordering of expander components. Note that we can only obtain a meaningful bound on the number of such 'backward' edges, since a bound on all edges between expander components cannot be achieved, as can be seen from any acyclic graph G (acyclicity implies that 𝒳 has to be a collection of singletons by Item 1, forcing all edges to be between components). Directed expanders and expander decompositions were introduced in [2] in an attempt to derandomize algorithms to maintain strongly connected components and single-source shortest paths in directed graphs undergoing edge deletions. Recently, [18] gave an alternative algorithm to compute and maintain directed expander decompositions that refines the framework from [2] and significantly improves the subpolynomial factors. Besides working on directed graphs, this algorithm also yields additional properties for expander decompositions that cannot be achieved with existing techniques, even in undirected graphs. In this article, we further refine these techniques to obtain an algorithm that is optimal up to logarithmic factors in m, as opposed to subpolynomial factors. Since their invention, directed expander decompositions and the techniques to maintain such decompositions have been pivotal in the design of fast dynamic graph data structures and in ongoing research on fast 'combinatorial' maximum flow algorithms. We discuss these recent lines of research, and how our algorithm benefits recent breakthrough results, in Section 1.2.

1.1 Our Contribution

In this article, we finally give a simple algorithm that generalizes the algorithm from [31] in a clean way. Further, our algorithm is the first to obtain near-optimal runtimes for both static and dynamic expander decompositions in directed graphs. Our result is summarized in the theorem below.

Theorem 2.

Given a parameter ϕ ≤ c/log^12 m for a fixed constant c > 0, and a directed m-edge graph G undergoing a sequence of edge deletions, there is a randomized data structure that constructs and maintains an (O(log^19 m), Ω(ϕ/log^12 m), O(1/log^8 m))-expander decomposition (𝒳, E^r) of G (here we use the augmented expander decomposition of Definition 7). The initialization of the data structure takes time O(m log^20(m)/ϕ) and the amortized time to process each edge deletion is O(log^28(m)/ϕ^2).

Further, our algorithm has the property that it is refining for up to Õ(ϕm) edge deletions, meaning that 𝒳 is a refinement of its earlier versions (every expander component in 𝒳 is a subset of an expander component in any earlier expander decomposition) and the size of E^r never exceeds Õ(ϕm). Our algorithms are deterministic; however, they rely on calling a fast randomized algorithm to find balanced sparse cuts or certify that no such cut exists (see [22, 27]). Our new techniques are much simpler and more accessible than previous work, besides also being much faster. We hope that by giving a simpler algorithm for directed expander decompositions, we can make this tool more accessible to other researchers in the field, and thereby further accelerate recent advances in dynamic and static graph algorithms.

1.2 Applications

Our new algorithms have direct applications to the currently fastest approaches for bipartite matching, maximum flow, and min-cost flow, and to the data structures that are employed to obtain these results:

  1.

    Our algorithm is already used in the fastest min-cost flow algorithm known to date [33], which achieves runtime m·e^{O(log^{3/4}(m) log log(m))}, yielding the first improvement over the recent breakthrough in [8], the first near-linear time algorithm for min-cost flows. In the framework of [33], our algorithmic techniques are used to maintain an expander decomposition of an undirected graph, where it is heavily exploited that our algorithm maintains the expander decomposition such that it refines over time. This guarantee is pivotal in the construction of a fully-dynamic algorithm to maintain dynamic expander hierarchies, the key data structure in that paper. While [33] could also have relied solely on the techniques of [18] to obtain such a data structure with subpolynomial update and query times, these subpolynomial factors would have been substantially larger and would thus not have yielded a faster min-cost flow algorithm overall.

  2.

    In [2], directed expander decompositions for decremental graphs were used to obtain the first algorithm to maintain (1+ϵ)-approximate Single-Source Shortest Paths (SSSP) in a decremental graph in time o(mn), though only for the special case of dense graphs. This problem is well-motivated, as a simple reduction based on the Multiplicative Weights Update (MWU) framework implies that the maximum flow problem can be solved approximately by solving approximate decremental SSSP. By standard refinement of flows, this yields an exact maximum flow algorithm. Very recently, Chuzhoy and Khanna [11, 12] showed that for the update sequence generated by the MWU to solve the bipartite matching problem (a special case of the maximum flow problem), decremental SSSP can be maintained in n^{2+o(1)} time by refining the techniques from [4, 2]. This yields the first near-optimal 'combinatorial' algorithm for the bipartite matching problem for very dense graphs (we refrain from defining the scope of combinatorial algorithms here and refer the reader to [11, 12] for a discussion). While the above algorithms are 'combinatorial', they are still very intricate. Most of these complications stem from the maintenance of the directed expander decomposition used internally by the decremental SSSP data structure. We hope that our techniques can help to simplify and speed up these components to yield a simpler algorithm overall.

  3.

    In independent work [1], an alternative 'combinatorial' maximum flow algorithm that runs in n^{2+o(1)} time was given. This algorithm cleverly extends push-relabel algorithms to run more efficiently when given an ordering of vertices that roughly aligns with the topological order induced by the acyclic graph formed from the support of an optimal maximum flow solution. To obtain this approximate ordering, they compute a static expander hierarchy of the directed input graph, which generalizes the notion of directed expander decompositions further. In their work, they heavily build on the techniques from [18] to obtain the expander hierarchy. Unfortunately, the algorithm to obtain this hierarchy is very involved. We hope that our new techniques can help to simplify and speed up their algorithms.

1.3 Our Techniques

High-Level Strategy

We obtain our result by following the high-level strategy of [31] for undirected graphs: we draw on existing literature (specifically [22, 27]) for an algorithm that either outputs a balanced sparse cut, which allows us to recurse on both sides, or outputs a witness that no such cut exists. This witness can be represented as an expander graph W that embeds into G ∪ F with low congestion, where F is a set of few fake edges. In the second case, we set up a flow problem to extract a large expander (the first algorithm only finds balanced sparse cuts, so many unbalanced sparse cuts might remain), which suffices to again recurse efficiently.

The (Dynamic) Flow Problem in [31]

To outline our algorithm, we first sketch the techniques of [31], where the following sequence of flow problems is formulated: initially, we add 2/ϕ units of source commodity to each endpoint of an edge in F and then ask to route the commodity in G, where each vertex v is a sink of value deg_G(v) and each edge has capacity 2/ϕ. The algorithm then runs an (approximate) maximum flow algorithm on the flow problem. Whenever the algorithm detects that no feasible flow exists (technically, the algorithm might already output cuts when some cut has capacity less than a constant times the amount of flow that is required to be routed through the cut), it finds a cut (A, Ā), where A is the smaller side of the cut, and then poses the same problem for the network G[Ā], where this time the source commodity is assigned for each edge in E_G(A, Ā) ∪ F. The algorithm terminates once the flow problem can be solved and outputs the final induced graph. In [31], it is shown that once a feasible flow exists, the (induced) graph is an Ω(ϕ)-expander. Further, it is shown that in the sequence of flow problems, each problem can be warm-started by re-using the flow computed in the previous instance to detect a cut induced on the remaining vertex set. This result is obtained by two main insights:

  1. if the flow 𝒇 used to find the cut (A, Ā) is a pre-flow, that is, a flow that respects capacities and has no negative excess at any vertex (i.e. it does not route away more flow from a vertex than is input by the source and the incoming flow), then injecting additional source flow for any edge in E_G(A, Ā) guarantees that the induced flow 𝒇|_Ā is a pre-flow in the flow problem formulated for G[Ā]. That is because the amount of flow that was routed via such a cut edge is at most 2/ϕ, and thus placing 2/ϕ new source commodity at the endpoint ensures that no negative excess exists in the induced flow 𝒇|_Ā,

  2. the classic push-relabel framework can naturally be extended to warm-start on such a flow 𝒇|_Ā, as it is built to further refine pre-flows at every step.

This dynamization of the push-relabel framework allows one to bound the cost of computing all flow problems linearly in the amount of source commodity, which in turn is bounded by the number of edges that appear in either F or one of the identified min-cuts. Finally, [31] shows that the amount of source commodity remaining in Ā decreases over the sequence of flow problems proportionally to the volume of the set A of vertices that are removed at each step. Indeed, they observe that at each vertex v in A at least deg(v) commodity units are absorbed and that the total amount of source injected due to the cut edges E_G(A, Ā) is bounded by (2/ϕ)·e_G(A, Ā) ≤ vol_G(A)/2. This yields that the final induced graph is still large. Thus, the final graph output is a large expander, as desired.
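To illustrate the first insight, the following small Python check (our own toy example; the capacity value 4 plays the role of 2/ϕ with ϕ = 1/2) verifies that inducing a pre-flow on Ā creates negative excess at a cut endpoint, and that injecting new source there restores the pre-flow property.

```python
def excesses(V, source, flow):
    """Excess of each vertex: source minus net outflow; flow maps
    directed pairs to non-negative values, and the net flow u -> v
    is flow[(u, v)] - flow[(v, u)]."""
    def net_out(v):
        return sum(f for (a, b), f in flow.items() if a == v) \
             - sum(f for (a, b), f in flow.items() if b == v)
    return {v: source.get(v, 0) - net_out(v) for v in V}

# toy instance: A = {a}, Abar = {b, c}; source and capacities are 2/phi = 4
preflow = {('a', 'b'): 3, ('b', 'c'): 1}
assert all(g >= 0 for g in excesses({'a', 'b', 'c'}, {'a': 4}, preflow).values())

# induce on Abar: drop the cut edge (a, b); b loses 3 units of inflow
induced = {('b', 'c'): 1}
assert excesses({'b', 'c'}, {}, induced)['b'] < 0    # only a pseudo-flow now

# inject 2/phi = 4 units of new source at b, the Abar-endpoint of the cut edge
assert all(g >= 0 for g in excesses({'b', 'c'}, {'b': 4}, induced).values())
```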

(a) In directed graphs, cuts are asymmetric. While (A, Ā) might be a sparse cut, the cut (Ā, A) might contain many edges. A straightforward extension of [31] would inject 2/ϕ units of commodity to each endpoint of E_G(A, Ā) ∪ F. However, it is not clear with this approach how to bound the total amount of flow injected throughout the algorithm.
(b) We inject Θ(1/ψ) units of commodity at the endpoints of any witness embedding path (green) going through an edge of E_G(A, Ā) ∪ F. We can bound the total amount by O(vol(A)/poly(ψ)). But the injection might well be in the interior of Ā, possibly leaving negative excess at the endpoints of the cut edges.
Figure 1: Injection of commodity due to edges in E_G(A, Ā) ∪ F.

The (Dynamic) Flow Problem in Directed Graphs

In directed graphs, while the above flow problem, upon becoming feasible, also certifies that the remaining graph is an Ω(ϕ)-expander, the argument that the sequence of flow problems terminates does not go through: the asymmetry of cuts might force us, for a small cut E_G(A, Ā), to induce on Ā while having many edges in E_G(Ā, A), each of which would add 2/ϕ source flow to the new flow problem (see Figure 1(a)). Hence, we might end up injecting up to Ω(vol_G(A)/ϕ) additional flow due to the cut edges. This makes it seemingly impossible to argue that the amount of source commodity in the next flow problem is smaller. To recover the argument that the sequence of flow problems terminates (with the remaining expander graph being large), both [2, 18] suggest setting up the flow problems more carefully such that each cut (A, Ā) that is found in this sequence and induced upon is a sparse cut. Here, we only describe the less lossy flow problem formulation developed in [18]. To ensure that each cut (A, Ā) that is found is a sparse cut, [18] proposes a slightly different flow problem: instead of adding source commodity 2/ϕ per endpoint of an edge that is fake or not fully contained in the induced graph, it tailors the amount of new source commodity using the witness graph W, possibly injecting much less source commodity in the process. Concretely, we have that W is a ψ-expander over the same vertex set as G, with degrees similar to the degrees in G up to a factor of Θ(poly(ψ)), and an embedding Π into G ∪ F with congestion O(ψ/ϕ), for ψ = Θ̃(1). To set up the flow problem, we inject Θ(1/ψ) units at the endpoints of any edge e in the witness W whose embedding path Π(e) goes through an edge in E_G(A, Ā) ∪ F. We note that any such witness embedding path Π(e) either crosses the sparse cut (A, Ā) or has an endpoint in A. This allows us to bound the additional source injected by O(vol_G(A)/poly(ψ)) = Õ(vol_G(A)), where we use that ψ = Θ̃(1).
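The injection rule of [18] as described above can be sketched in a few lines (our own naming; the constant 1/ψ per endpoint stands in for the Θ(1/ψ) of the text):

```python
from collections import defaultdict

def injected_source(witness_embedding, cut_edges, psi):
    """Source injection rule sketched above (constants illustrative):
    for each witness edge whose embedding path meets a cut or fake edge,
    add 1/psi units of source at each of its two endpoints."""
    cut = set(cut_edges)
    src = defaultdict(float)
    for (u, v), path in witness_embedding.items():
        if cut & set(path):              # path Π(e) goes through a cut edge
            src[u] += 1 / psi
            src[v] += 1 / psi
    return dict(src)
```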
But while correctness and termination of the flow problem sequence are now ensured, a significant problem remains: the current flow 𝒇 that was used to find the cut (A, Ā) no longer has the property that 𝒇|_Ā is a pre-flow in the flow problem formulated on the network G[Ā], even if 𝒇 is a pre-flow. While capacity constraints are still enforced, i.e. 𝒇|_Ā is still a pseudo-flow (see [17] for a reference on pseudo-flows), some vertices might now have negative excess, since the additional commodity might be injected in the interior of Ā, far from the cut (A, Ā) (see Figure 1(b)). Indeed, for an edge (u,v) in E_G(A, Ā), a lot of flow might have been routed through (u,v), but no additional commodity might be injected at v, since all witness embedding paths passing through (u,v) might not end in v. Thus, dynamizing the push-relabel framework does not appear natural for this sequence of problems, as it crucially requires that the maintained flow is a pre-flow at all times. In [18], an involved batching technique (based on the technique in [29]) is used instead that does not use dynamic flow problems but reduces to a few static flow problems, however, at the loss of quality and runtime by subpolynomial factors.

The Push-Pull-Relabel Framework

The main technical contribution of this paper is a new framework that refines pseudo-flows as efficiently as the push-relabel framework refines pre-flows. Thus, we give a generalization of this widely-used and well-studied framework that we believe might have applications well beyond our problem. Recall that the classic push-relabel framework maintains labels for all vertices and a pre-flow 𝒇. In each iteration, it 'pushes' positive excess flow at a vertex v to a vertex at a lower label (to be precise, to a vertex at level ℓ(v)−1); or, if no 'push' is possible, it increases the labels of some vertices: it 'relabels'. Using a clever potential-based analysis, one can show that it suffices to only increase the labels to a certain threshold before all flow is settled. In our framework, we allow deleting edges without compensating by adding source commodity at the endpoints, which might create negative excess, leaving 𝒇 a pseudo-flow (instead of a pre-flow). While our framework applies the same strategy for 'pushes' and 'relabels', we also need a new operation: the 'pull'. Intuitively, our algorithm tries to 'pull' back the source commodity that now causes the negative excess (this commodity was 'pushed' earlier to some other vertices). To do so, a vertex v with negative excess can 'pull' commodity from vertices at a higher level (again, it can only pull from a vertex at level ℓ(v)+1). But it is not difficult to construct an example where this strategy does not suffice: therefore, we also sometimes need to decrease the label of a vertex to ensure correctness. However, the latter change to the 'relabel' operation breaks the property that labels are non-decreasing over time, a property that is crucial in the existing efficiency analysis. Instead, we give a much more careful argument to analyze the potentials that bound the number of push, pull, and relabel operations, dealing with the non-monotonicity of the levels over time. The argument is sketched below.
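To make the mechanics concrete, here is a minimal, self-contained sketch of such a push-pull-relabel loop. It is purely illustrative: the function names, the simplified negative-excess threshold Γ(v) < 0, the brute-force vertex scans, and the way an edge deletion is simulated by passing a leftover flow f0 and levels level0 are all our own choices, and all of the potential-based bookkeeping needed for the actual run-time bound is omitted.

```python
from collections import defaultdict

def push_pull_relabel(V, cap, delta, sink, h, f0=None, level0=None):
    """Toy push-pull-relabel: drive a pseudo-flow into a valid state,
    i.e. positive excess only at level h, negative excess only at level 0."""
    f = defaultdict(int)                  # anti-symmetric: f[(u,v)] = -f[(v,u)]
    for (u, v), val in (f0 or {}).items():
        f[(u, v)] += val
        f[(v, u)] -= val
    level = dict(level0 or {v: 0 for v in V})

    def gamma(v):                         # excess: source minus net outflow
        return delta[v] - sum(f[(v, u)] for u in V if u != v)

    def residual(u, v):
        return cap.get((u, v), 0) - f[(u, v)]

    def send(u, v, amt):                  # move amt units along (u, v)
        f[(u, v)] += amt
        f[(v, u)] -= amt

    changed = True
    while changed:
        changed = False
        for v in sorted(V, key=lambda x: level[x]):       # push / relabel up
            while gamma(v) > sink[v] and level[v] < h:
                tgt = next((u for u in V if u != v and residual(v, u) > 0
                            and level[u] == level[v] - 1), None)
                if tgt is None:
                    level[v] += 1                         # relabel upward
                else:
                    send(v, tgt, min(gamma(v) - sink[v], residual(v, tgt)))
                changed = True
        for v in sorted(V, key=lambda x: -level[x]):      # pull / relabel down
            while gamma(v) < 0 and level[v] > 0:
                src = next((u for u in V if u != v and residual(u, v) > 0
                            and level[u] == level[v] + 1), None)
                if src is None:
                    level[v] -= 1                         # relabel downward
                else:
                    send(src, v, min(-gamma(v), residual(src, v)))
                changed = True
    return f, level
```

With no initial flow, the loop behaves like classic push-relabel; starting from a leftover flow whose supporting edge was deleted, the pull phase walks the deficient vertex down to level 0, where negative excess is permitted.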
Combining this framework with the above-discussed set-up of dynamic flow problems as proposed in [18] then yields the first near-optimal algorithm to compute an expander decomposition in a directed graph. Further, our technique extends seamlessly to also deal with edge deletions to G, yielding an algorithm to prune expander graphs that undergo edge deletions.

A Sketch of the Runtime of Push-Pull-Relabel

The run-time analysis of standard push-relabel considers the run-time contributions of the push and relabel operations separately. Using a potential argument, one can then relate the contribution of the push operations to that of the relabel operations. A similar, albeit much more delicate, argument also allows us to relate the contribution of the push and the pull operations to the contribution of the relabel operations in the extended push-pull-relabel algorithm. What might seem more daunting is to bound the run-time contribution of the relabelings, given that the level function is no longer point-wise non-decreasing. Let us revisit the argument bounding the run-time contribution of the relabelings in the push-relabel algorithm of [31]. Any relabeling of v incurs a cost of O(deg(v)), as each incident edge has to be checked before a relabeling. At such a relabeling of v, the sink of v, which by choice has capacity deg_G(v), must be full, and any commodity unit in it remains there until termination. Since the label of v does not decrease and is bounded by h, we may charge the run-time contribution of the relabelings of v to the units in the sink of v, charging each unit exactly h. Over all vertices, we conclude that any commodity unit is charged at most h units. In the push-pull-relabel algorithm, a commodity unit might end up in various sinks, so a more clever charging argument needs to be devised. To guide intuition, an analogy to flows of protons and electrons is useful. The protons correspond to commodity units and the electrons to the lack of commodity units. We move protons and electrons around in the network, and whenever a proton and an electron meet at a vertex, they form a neutron which stays put at the vertex indefinitely. The neutrons will provide a means to charge work in the analysis. The sink of each vertex v can absorb deg(v) commodity units, or, put differently, deg(v) protons. Electrons only form when we delete an edge (u,v): exactly 𝒇⁺(u,v) new electrons form at v.
By choice, we now say that if we perform a push from u to v of one unit, then exactly one proton moves from u to v, while if we perform a pull from u to v, then exactly one electron moves from v to u. Since we only perform a push away from v if there are more than deg_G(v) free protons (not part of a neutron) at v, and a pull towards v only if there are free electrons at v, we find that min(n(v) + deg_G(v)/2, p(v)) is non-decreasing, where p(v) denotes the number of protons at v (including the ones in a neutron) and n(v) denotes the number of electrons at v. This fact allows us to conclude that whenever we change from increasing ℓ(v) to decreasing ℓ(v), at least deg(v)/2 new neutrons have formed at v. Since any neutron at v stays put indefinitely, we can charge the run-time contributions of the relabelings of v to the number of neutrons at v.

Roadmap

In the remainder of the article, we first give preliminaries in Section 2, then present our new push-pull-relabel framework in Section 3 and finally show how to obtain our result in Theorem 2 using the new framework in Section 4.

2 Preliminaries

Graphs

We let deg_G ∈ ℝ^{V(G)} denote the degree vector of the graph G. For every vertex v ∈ V, deg_G(v) equals the number of edges incident to v (both incoming and outgoing edges are counted). Moreover, for any subset of edges D ⊆ E(G), we denote by G_D the graph with vertex set V(G) and edge set D, and by deg_D the degree vector of the subgraph G_D. We denote by vol_G(S), for any S ⊆ V, the sum of degrees of vertices in S. We denote by E_G(A,B), for any A, B ⊆ V, the set of directed edges in E(G) with tail in A and head in B. We define e(G) = |E(G)| and e_G(A,B) = |E_G(A,B)|. For any partition 𝒫 of V(G), we denote by G/𝒫 the graph obtained by contracting each partition class to a single vertex. Two vertices in G/𝒫 are adjacent if there is an edge between the corresponding partition classes in G. Moreover, for any vertex v ∈ V(G), we denote by 𝟏_v ∈ ℝ^{V(G)} the vector with all entries equal to zero apart from the entry at v, which equals one.
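These conventions can be mirrored in a few lines of Python (the helper names are ours):

```python
from collections import Counter

def degrees(vertices, edges):
    """deg_G: counts incoming and outgoing edges alike, as in the text."""
    deg = Counter({v: 0 for v in vertices})
    for (u, v) in edges:
        deg[u] += 1
        deg[v] += 1
    return deg

def vol(vertices, edges, S):
    """vol_G(S): sum of degrees over S."""
    deg = degrees(vertices, edges)
    return sum(deg[v] for v in S)

def E_between(edges, A, B):
    """E_G(A, B): directed edges with tail in A and head in B."""
    return [(u, v) for (u, v) in edges if u in A and v in B]
```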

Flows

We call a tuple (G, 𝒄, Δ, ∇) a flow problem if G is a directed graph, the capacity function 𝒄 : V(G)×V(G) → ℝ≥0 is such that for all (u,v) ∉ E we have 𝒄(u,v) = 0, and Δ, ∇ : V(G) → ℝ≥0 denote the source and the sink capacities. We denote flows on G as functions 𝒇 : V(G)×V(G) → ℝ such that 𝒇 is anti-symmetric, i.e. 𝒇(u,v) = −𝒇(v,u). Given a vertex v ∈ V(G), we introduce the notation 𝒇(v) = Σ_u 𝒇(v,u) and likewise 𝒄(v) = Σ_u 𝒄(v,u) + 𝒄(u,v). Moreover, we write 𝒇⁺(u,v) = max(𝒇(u,v), 0). Given a flow 𝒇, we say a vertex v ∈ V(G) has excess Γ(v) = Δ(v) − 𝒇(v). We say that v has positive excess if Γ(v) > ∇(v), and that v has negative excess if Γ(v) < −∇(v)/2. For a subset Ṽ ⊆ V(G), we induce the flow 𝒇, the sink ∇, the source Δ and the edge capacities 𝒄 onto G[Ṽ] in a function sense and write 𝒇|_Ṽ, ∇|_Ṽ, Δ|_Ṽ, 𝒄|_Ṽ. Moreover, for any subset E′ ⊆ E(G), we write 𝒇|_{E′} for the flow induced by 𝒇 onto the subgraph of G consisting only of the edges E′. We say that a flow 𝒇 is a pseudo-flow if it satisfies the capacity constraints:

∀(u,v) ∈ V(G)×V(G): −𝒄(v,u) ≤ 𝒇(u,v) ≤ 𝒄(u,v).

We say 𝒇 is a pre-flow if 𝒇 is a pseudo-flow with no negative excess at any vertex. We say a flow 𝒇 is feasible if it is a pre-flow and additionally no vertex has positive excess. Moreover, for any subset S ⊆ V, we denote by Δ(S), ∇(S) the sums Σ_{v∈S} Δ(v), Σ_{v∈S} ∇(v) respectively, and for any subset D ⊆ E(G), we denote by 𝒄(D) the sum Σ_{d∈D} 𝒄(d).
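A small sketch classifying a flow according to these definitions (our own naming; flows are stored as non-negative values per directed pair, with the anti-symmetric convention applied when computing net flows, and the negative-excess threshold is −∇(v)/2 as above):

```python
def net(flow, u, v):
    """Net flow u -> v; flow stores non-negative values per directed pair."""
    return flow.get((u, v), 0) - flow.get((v, u), 0)

def gamma(V, delta, flow, v):
    """Excess: source at v minus net outflow of v."""
    return delta.get(v, 0) - sum(net(flow, v, u) for u in V if u != v)

def classify(V, cap, delta, sink, flow):
    """pseudo-flow: respects capacities; pre-flow: additionally no negative
    excess; feasible: additionally no positive excess."""
    if any(net(flow, u, v) > cap.get((u, v), 0)
           for u in V for v in V if u != v):
        return 'not a pseudo-flow'
    if any(gamma(V, delta, flow, v) < -sink[v] / 2 for v in V):
        return 'pseudo-flow'
    if any(gamma(V, delta, flow, v) > sink[v] for v in V):
        return 'pre-flow'
    return 'feasible'
```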

Expanders

Given a graph G=(V,E), we say a cut (S, S̄) is ϕ-out-sparse if vol_G(S) ≤ e(G) and e_G(S, V∖S) < ϕ·vol_G(S). We say G is a ϕ-out-expander if it has no ϕ-out-sparse cut. We say G is a ϕ-expander if both G and G^rev, the graph where all edges of G are reversed, are ϕ-out-expanders. The next lemma is folklore and crucial in our expander pruning argument.

Lemma 3.

Given a ϕ-expander G=(V,E), take S ⊆ V and a set of edge deletions D. Then e_{G∖D}(S, V∖S) < (ϕ/4)·vol_G(S) implies min(vol_G(S), vol_G(V∖S)) < 4|D|/(3ϕ).

Proof.

If min(vol_G(S), vol_G(V∖S)) ≥ 4|D|/(3ϕ), then, since G is a ϕ-expander and at most |D| cut edges are deleted, e_{G∖D}(S, V∖S) ≥ ϕ·min(vol_G(S), vol_G(V∖S)) − |D| ≥ (ϕ/4)·min(vol_G(S), vol_G(V∖S)).
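The inequality step in the proof can be sanity-checked numerically; the following loop (illustrative only) verifies that ϕ·x − |D| ≥ (ϕ/4)·x whenever x ≥ 4|D|/(3ϕ), where x stands for min(vol_G(S), vol_G(V∖S)):

```python
# if x >= 4|D|/(3*phi), then phi*x - |D| >= (phi/4)*x, i.e. deleting |D|
# edges from a phi-expander cannot make the cut (phi/4)-sparse w.r.t. x
for phi in (0.1, 0.25, 0.5):
    for d in range(20):
        for x in (4 * d / (3 * phi), 4 * d / (3 * phi) + 7):
            assert phi * x - d >= (phi / 4) * x - 1e-9
```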

Graph Embeddings

Given two graphs H and G over the same vertex set V, we say that Π is an embedding of H into G if for every edge e=(u,v) ∈ E(H), Π(e) is a simple u-v-path in G. We define the congestion of an edge e ∈ E(G) induced by the embedding Π to be the number of paths in the image of Π that contain e. We define the congestion of Π to be the maximum congestion achieved by any such edge e ∈ E(G). Moreover, for any edge e ∈ E(G), we denote by Π⁻¹(e) the set of edges f ∈ E(H) such that e is an edge on the path Π(f). Given an entire set of edges D ⊆ E(G), we denote by Π⁻¹(D) the set of edges f ∈ E(H) such that some edge of the path Π(f) is in D. Given two graphs H₁, H₂ on the same vertex set and embeddings Π₁ : H₁ → G, Π₂ : H₂ → G, we denote by Π₁ ∪ Π₂ : H₁ ∪ H₂ → G the embedding of the graph H₁ ∪ H₂ = (V(H₁), E(H₁) ∪ E(H₂)). We denote by G^rev the graph obtained from G by reversing all edges. Given an embedding Π : H → G, we denote by Π^rev : H^rev → G^rev the embedding obtained by reversing the direction of all embedding paths.
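Congestion and the preimage map Π⁻¹ are straightforward to compute when an embedding is given explicitly as a map from H-edges to G-paths; a sketch with our own naming:

```python
def congestion(embedding, G_edges):
    """Congestion of the embedding: the maximum number of embedding paths
    through any single G-edge."""
    count = {e: 0 for e in G_edges}
    for path in embedding.values():       # embedding: H-edge -> list of G-edges
        for e in path:
            count[e] += 1
    return max(count.values(), default=0)

def preimage(embedding, D):
    """Pi^{-1}(D): H-edges whose embedding path uses some edge in D."""
    D = set(D)
    return {h for h, path in embedding.items() if D & set(path)}
```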

Expander Decompositions with Witnesses

For the rest of the article, we use a definition of expander decompositions that encodes much more structure than given in Definition 1. In particular, the definition below incorporates the use of witness graphs which are instrumental to our algorithm.

Definition 4.

Given a directed graph G, we say (W,Π) is a (ϕ,ψ)-out-witness for G if 1) W is a ψ-out-expander, 2) Π embeds W into G with congestion at most ψ/ϕ, and 3) ∀v ∈ V(W): deg_G(v) ≤ deg_W(v) ≤ deg_G(v)/ψ. If (W,Π) is a (ϕ,ψ)-out-witness for G and (W^rev, Π^rev) is a (ϕ,ψ)-out-witness for G^rev, then we say that (W,Π) is a (ϕ,ψ)-witness for G.

Fact 5.

If (W₁,Π₁) is a (ϕ,ψ)-out-witness for G and (W₂^rev, Π₂^rev) is a (ϕ,ψ)-out-witness for G^rev, then (W₁ ∪ W₂^rev, Π₁ ∪ Π₂^rev) is a (ψϕ/4, ψ²/2)-witness for G.

Proof.

We observe that for any cut (S, V∖S):

e_{W₁∪W₂^rev}(S, V∖S) ≥ ψ · min(vol_{W₁}(S), vol_{W₂^rev}(S), vol_{W₁}(V∖S), vol_{W₂^rev}(V∖S)) ≥ (ψ²/2) · min(vol_{W₁∪W₂^rev}(S), vol_{W₁∪W₂^rev}(V∖S)).

Items 2 and 3 are immediate.

The next fact establishes that a (ϕ,ψ)-(out-)witness for G certifies that G is a ϕ-(out-)expander, justifying the name witness.

Fact 6 (see [18], Claim 2.1).

If (W,Π) is a (ϕ,ψ)-witness for G, then G is a ϕ-expander.

Proof.

For any set S ⊆ V(G), we have in the witness that

e_W(S, V∖S) ≥ ψ · min(vol_W(S), vol_W(V∖S)) ≥ ψ · min(vol_G(S), vol_G(V∖S)).

Since for any one of these cut edges there is an embedding path crossing the cut, and since the congestion of the embedding is at most ψ/ϕ, we have e_G(S, V∖S) ≥ (ϕ/ψ) · e_W(S, V∖S) ≥ ϕ · min(vol_G(S), vol_G(V∖S)).

Definition 7 (Augmented Expander Decomposition).

We call a collection 𝒳 and a subset E^r ⊆ E a (β,ϕ,ψ)-expander decomposition of a graph G if

  1. ∀(X,W,Π) ∈ 𝒳, G[X] has a (ϕ,ψ)-witness (W,Π),

  2. |E^r| ≤ βϕ·e(G),

  3. (G ∖ E^r)/𝒫 is a DAG, where 𝒫 = {X : (X,W,Π) ∈ 𝒳}.

Given two expander decompositions (𝒳₁, E₁^r) of the graph G and (𝒳₂, E₂^r) of the graph G∖𝒟, where 𝒟 ⊆ E(G), we say (𝒳₂, E₂^r) refines (𝒳₁, E₁^r) if 1) for all partition classes X₂, where (X₂,W₂,Π₂) ∈ 𝒳₂, there is a class X₁, where (X₁,W₁,Π₁) ∈ 𝒳₁, such that X₂ ⊆ X₁, and 2) E₁^r ⊆ E₂^r ∪ 𝒟.

3 The Push-Pull-Relabel Framework

In the push-relabel framework as presented in [14], we try to compute a feasible flow for a flow problem (G, 𝒄, Δ, ∇) by maintaining a pre-flow 𝒇 together with a level function ℓ. The algorithm runs in iterations, terminating once 𝒇 has no positive excess at any vertex. In each iteration, the algorithm identifies a vertex v that still has positive excess. This positive excess is then pushed to neighbors on lower levels such that the capacity constraints remain enforced. If this is not possible, v is relabeled, meaning that its label ℓ(v) is increased. In this section, we extend the push-relabel framework to the dynamic setting, where we allow increasing the source function Δ and deleting edges from G. To do so, we need to introduce the notion of negative excess: for a flow 𝒇 of the flow problem, it might happen that once we delete an edge (u,v), more flow leaves the vertex v than enters it or is sourced at it, i.e. Γ(v) < 0. Hence, we need to extend the discussion to include negative excess in the dynamic version. Similar to the standard push-relabel algorithm, we maintain a pseudo-flow 𝒇 and a vertex labeling ℓ in so-called valid states (𝒇, ℓ). The only difference is that in the standard push-relabel algorithm, 𝒇 is a pre-flow (not only a pseudo-flow).

Definition 8.

Given a level function ℓ : V(G) → [h] and a pseudo-flow 𝒇, we call the tuple (𝒇, ℓ) a state for (G, 𝒄, Δ, ∇) if for all edges e=(u,v), ℓ(u) > ℓ(v)+1 implies 𝒇(e) = 𝒄(e).
We call the state (𝒇, ℓ) valid if for all vertices v ∈ V(G): 1) Γ(v) < −∇(v)/2 implies ℓ(v) = 0, and 2) ∇(v) < Γ(v) implies ℓ(v) = h.
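A direct check of Definition 8 on explicit data (our own naming, and our reading of the definition: every edge descending more than one level is saturated, negative excess only at level 0, positive excess only at level h; flows are stored as non-negative values per directed pair):

```python
def is_valid_state(V, cap, delta, sink, flow, level, h):
    """Check the state and validity conditions of Definition 8."""
    def net(u, v):
        return flow.get((u, v), 0) - flow.get((v, u), 0)

    def gamma(v):
        return delta.get(v, 0) - sum(net(v, u) for u in V if u != v)

    for u in V:                           # state: steep edges are saturated
        for v in V:
            if u != v and level[u] > level[v] + 1 \
                    and net(u, v) < cap.get((u, v), 0):
                return False
    for v in V:                           # validity: excess pinned to 0 / h
        if gamma(v) < -sink[v] / 2 and level[v] != 0:
            return False
        if gamma(v) > sink[v] and level[v] != h:
            return False
    return True
```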

For the remainder of the paper, we have h = O(log(n)/ϕ). To provide the reader with some intuition about these definitions, we remark that the pseudo-flow of a valid state is not feasible in the usual sense. Indeed, some vertices might have positive or negative excess, but these vertices are guaranteed to be at level h or at level 0, respectively. Moreover, we point out that even if a valid state (𝒇, ℓ) has no vertices at level h, 𝒇 might still not be a feasible flow, since there might be a vertex v with Γ(v) < 0. But it is straightforward to obtain a feasible flow from 𝒇: extract from 𝒇 at each vertex v exactly Δ(v) unit flow paths (possibly empty paths starting and ending in v). Let us briefly outline how we will use Lemma 9 in the directed expander pruning algorithm (Algorithm 3). In the directed expander pruning algorithm, we start with an out-expander, and an adversary performs a number of vertex or edge deletions. We aim to find a small pruning set of vertices such that if we prune this set away from the remaining graph, we can certify that the obtained graph is still an expander. The classical way (see [31]) to either certify expansion of the remaining graph or to find a pruning set is to set up a flow problem where we inject some additional source flow for any deletion and where each vertex has sink capacity equal to its degree (this is the reason for the condition ∇ ≥ deg in Lemma 9). We will solve this flow problem using the flow provided by our ValidState algorithm (see Algorithm 1). We maintain this flow problem using the interface functions IncreaseSource and RemoveEdges under both adversarial deletions and the deletions caused by pruning. If the valid state computed after any such update has an additional vertex on level h, this indicates that more vertices need to be pruned away. If, on the other hand, all vertices are on levels strictly below h, we will certify that the remaining graph is an expander.

Lemma 9.

Given a flow problem (G=(V,E),𝒄,Δ,∇), where ∇ ≥ deg, there is a deterministic data structure ValidState(G,𝒄,Δ,∇) (see Algorithm 1) that initially computes a valid state (𝒇,ℓ) and after every update of the form

  • IncreaseSource(δ): where δ∈ℝ_{≥0}^n, sets Δ to Δ+δ,

  • RemoveEdges(D): where D⊆E∖𝒟, sets 𝒟 to 𝒟∪D (initially 𝒟=∅),

the algorithm explicitly updates the tuple (𝒇,ℓ) such that thereafter (𝒇,ℓ) is a valid state for the current flow instance (G∖𝒟, 𝒄|_{E∖𝒟}, Δ, ∇). The run-time is O(h·(‖Δ‖₁ + Σ_{d∈𝒟}(1+|𝒇_{t_d}(d)|))), where 𝒇_{t_d}(d) is equal to 𝒇(d) at the time d is deleted and Δ is the variable at the end of the algorithm.

Algorithm 1 ValidState(G=(V,E),𝒄,Δ,,h).

The user interface of the data structure ValidState consists of the functions Init, IncreaseSource and RemoveEdges. These functions are used to initialize or update the flow problem. After any such update, the internal state (𝒇,ℓ) has to be updated so that it remains valid for the new flow problem. To facilitate these necessary updates to (𝒇,ℓ), the interface functions make calls to the internal functions PushRelabel and PullRelabel. As described before, the function PushRelabel performs push and relabel operations to handle any positive excess, while the function PullRelabel performs pull and relabel operations to handle any negative excess.

Let us take a closer look at the individual functions. Using Init, we set up the flow problem. It makes an internal call to PushRelabel to compute the first valid state using the standard push-relabel algorithm; at this point no negative excess has been introduced and thus we do not need to resort to any pull operations. The function IncreaseSource can be used to increase the source of the current flow problem; again this does not introduce any negative excess, and hence we can again update the valid state using the standard push-relabel algorithm. Using the function RemoveEdges, we can restrict the flow problem to a subgraph. The function restricts the capacities 𝒄, the functions Δ, ∇ and the flow 𝒇 to the subgraph and then makes calls to both PushRelabel and PullRelabel to update the state.

The function PushRelabel is just the standard push-relabel algorithm. We look for a vertex v with positive excess Γ(v) > ∇(v) at a level below h, picking a vertex on the lowest possible level. We try to push flow along some unsaturated outgoing edge to a vertex on a lower level. If it is not possible to push flow, we increase the label of the vertex and repeat. The function PullRelabel is the analogue of the push-relabel algorithm for negative excess. Here, we look for a vertex v with negative excess Γ(v) < −∇(v)/2 at a level above 0, picking a vertex on the highest possible level. We try to pull flow along some unsaturated incoming edge from a vertex on a higher level. If it is not possible to pull flow, then all edges (u,v) with ℓ(u) > ℓ(v) must be saturated, and it is safe to decrease the label of the vertex and repeat.
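The interplay of these functions can be made concrete in a short, self-contained Python sketch. This is an illustration under our reading of the thresholds (positive excess Γ(v) > ∇(v), negative excess Γ(v) < −∇(v)/2), not the paper's implementation: it pushes and pulls maximal amounts instead of 1/2-unit steps, selects vertices by brute-force scans, and recomputes excesses on the fly, so it does not achieve the stated run-time bound.

```python
from collections import defaultdict

class ValidState:
    """Illustrative sketch of the push-pull-relabel data structure."""

    def __init__(self, cap, Delta, nabla, h):
        self.cap = dict(cap)          # (u, v) -> capacity
        self.f = {e: 0 for e in cap}  # pseudo-flow value per edge
        self.Delta = dict(Delta)      # source function
        self.nabla = dict(nabla)      # sink function, >= deg
        self.h = h                    # top level
        self.level = {v: 0 for v in nabla}
        self.out, self.inc = defaultdict(list), defaultdict(list)
        for (u, v) in cap:
            self.out[u].append((u, v))
            self.inc[v].append((u, v))
        self.push_relabel()           # Init: no negative excess yet

    def excess(self, v):              # Gamma(v) = Delta(v) + in - out
        return (self.Delta.get(v, 0)
                + sum(self.f[e] for e in self.inc[v])
                - sum(self.f[e] for e in self.out[v]))

    def push_relabel(self):
        """Route positive excess downhill; relabel upwards when stuck."""
        while True:
            active = [v for v in self.level if self.level[v] < self.h
                      and self.excess(v) > self.nabla[v]]
            if not active:
                return
            v = min(active, key=self.level.get)
            for e in self.out[v]:     # look for an admissible edge (v, w)
                if self.f[e] < self.cap[e] and self.level[e[1]] == self.level[v] - 1:
                    q = min(self.excess(v) - self.nabla[v], self.cap[e] - self.f[e])
                    self.f[e] += q    # push q units one level down
                    break
            else:
                self.level[v] += 1    # every admissible edge saturated

    def pull_relabel(self):
        """Pull flow in from one level above; relabel downwards when stuck."""
        while True:
            needy = [v for v in self.level if self.level[v] > 0
                     and self.excess(v) < -self.nabla[v] / 2]
            if not needy:
                return
            v = max(needy, key=self.level.get)
            for e in self.inc[v]:     # look for an admissible edge (u, v)
                if self.f[e] < self.cap[e] and self.level[e[0]] == self.level[v] + 1:
                    q = min(-self.excess(v), self.cap[e] - self.f[e])
                    self.f[e] += q    # pull q units from one level up
                    break
            else:
                self.level[v] -= 1    # every admissible edge saturated

    def increase_source(self, delta):   # IncreaseSource: only positive excess
        for v, d in delta.items():
            self.Delta[v] = self.Delta.get(v, 0) + d
        self.push_relabel()

    def remove_edges(self, D):          # RemoveEdges: dropping f(e) units leaves
        for e in D:                     # positive excess at the tail and
            self.out[e[0]].remove(e)    # negative excess at the head
            self.inc[e[1]].remove(e)
            del self.f[e], self.cap[e]
        self.push_relabel()
        self.pull_relabel()
```

Note how, after a removal, a vertex that can no longer discharge its excess relabels all the way to h (signalling, in the pruning application, that something must be pruned), while a vertex that forwarded flow it no longer receives either pulls replacement flow from one level above or descends to level 0.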

Proof of Lemma 9..

Since we only decrease the node label ℓ(v) when all edges (u,v) with ℓ(u) > ℓ(v) are saturated, and since we only increase the node label when all edges (v,w) with ℓ(v) > ℓ(w) are saturated, we maintain that (𝒇,ℓ) is a state. Since we keep performing push, pull or relabel operations as long as there is a vertex v with Γ(v) > ∇(v), ℓ(v) < h or with Γ(v) < −∇(v)/2, ℓ(v) > 0, the algorithm is guaranteed to eventually compute a valid state. It thus suffices to bound the run-time.
Note that by maintaining at each vertex v∈V a linked list L[v] containing all non-saturated edges (v,u) with ℓ(v)=ℓ(u)+1, we can implement a push in time O(1). We can maintain such a linked list for every vertex by spending time O(deg(v)) every time we relabel a vertex v, plus O(1) time for each push. We maintain a corresponding list for the pull operations. It thus suffices to bound the contribution of the push, pull and relabel operations to the run-time. Let us index by t, starting with 0, the push, pull and relabel operations as well as the calls to IncreaseSource and RemoveEdges in the order they occur. We denote by Ẽ_t, 𝒇_t, ℓ_t, Γ_t the state of the variables at index t. We note that 𝒇_t is supported on the edges Ẽ_t.
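The bookkeeping of the previous paragraph can be illustrated with a small Python helper (names hypothetical): after a relabel of v we rebuild, in O(deg(v)) time, the list of admissible push edges out of v, so that each subsequent push can pop a candidate in O(1) instead of rescanning the adjacency list.

```python
def rebuild_eligible(v, out_edges, level, f, cap):
    """Recompute L[v] after v is relabelled: the unsaturated out-edges
    (v, u) with level[v] == level[u] + 1 along which a push is allowed."""
    return [e for e in out_edges[v]
            if f[e] < cap[e] and level[v] == level[e[1]] + 1]
```

The symmetric list of unsaturated in-edges (u,v) with ℓ(u)=ℓ(v)+1 serves the pull operations.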
Let us introduce functions p_t, n_t, where p_t(v), n_t(v) can be interpreted as the amount of positive/negative units present at vertex v∈V at time t. These functions will satisfy

∀v∈V: Γ_t(v) = p_t(v) − n_t(v). (1)

Initially, we have p_0 = Δ_0, n_0 = 𝟎, where Δ_0 denotes the variable Δ at initialization of the data structure. We describe how p_t, n_t evolve over time. If the t-th action is

  1. a push of 1/2 units from u to v, then p_{t+1} = p_t − (1/2)·𝟏_u + (1/2)·𝟏_v,

  2. a pull of 1/2 units from u to v, then n_{t+1} = n_t + (1/2)·𝟏_u − (1/2)·𝟏_v,

  3. IncreaseSource(δ), then p_{t+1} = p_t + δ,

  4. RemoveEdges(D), then for all v∈V:

     p_{t+1}(v) = p_t(v) + Σ_{(v,w)∈D} (𝒇_t)⁺(v,w),  n_{t+1}(v) = n_t(v) + Σ_{(u,v)∈D} (𝒇_t)⁺(u,v).

Clearly, ‖p_t‖₁ and ‖n_t‖₁ are non-decreasing; we denote the final values by π and ν. Let us verify that the transitions above indeed preserve property (1). For actions of type 1/2/3, this is immediate. For type 4, we observe that for any vertex v∈V, when restricting the flow 𝒇_t to Ẽ_{t+1}, every unit of Σ_{(v,w)∈D}(𝒇_t)⁺(v,w) increases the excess Γ_t(v) at v by one and every unit of Σ_{(u,v)∈D}(𝒇_t)⁺(u,v) decreases the excess Γ_t(v) at v by one.

Claim 10.

We can bound π and ν by

O(‖Δ‖₁ + Σ_{d∈𝒟} |𝒇_{t_d}(d)|),

where Δ denotes the final state of the variable.

Proof.

Only IncreaseSource and RemoveEdges can increase ‖p_t‖₁ and ‖n_t‖₁. The bound follows immediately from the transitions of type 3 and 4 above.

Claim 11.

The contribution to the run-time of the relabeling operations is bounded by

O(Σ_{v∈V} ∇(v)·Σ_{t≥0} |ℓ_{t+1}(v) − ℓ_t(v)|) ≤ O(h·π).
Proof.

Since deg(v) ≤ ∇(v), we can bound the run-time contribution of the relabeling operations by O(Σ_{v∈V} ∇(v)·Σ_{t≥0}|ℓ_{t+1}(v)−ℓ_t(v)|). It thus suffices to bound this expression. We treat each v∈V(G) individually. W.l.o.g. ℓ_t(v) increases at some time t, otherwise the contribution of v to the sum is trivially zero. Let us consider the following time stamps s_0 < s_1 < s_2 < … < s_l, where s_0 is the first time with Γ_t(v) ≥ ∇(v) and

∀i≥0: s_{2i+1} = min{t ≥ s_{2i} : Γ_t(v) ≤ −∇(v)/2},
∀i≥1: s_{2i} = min{t ≥ s_{2i−1} : Γ_t(v) ≥ ∇(v)}.

Additionally, we define the time stamps t_0 < t_1 < t_2 < … such that

∀i≥0: t_{2i} = max{t < s_{2i+1} : Γ_t(v) ≥ ∇(v)},
∀i≥1: t_{2i−1} = max{t < s_{2i} : Γ_t(v) ≤ −∇(v)/2}.

Let us introduce the vector z_t(v) = min(p_t(v), n_t(v) + ∇(v)/2) (this corresponds to the neutrons introduced in the introduction). We observe that z_t(v) is non-decreasing. Indeed, if we push away from v there must be positive excess, i.e. Γ_t(v) ≥ ∇(v) + 1/2, or equivalently p_t(v) ≥ n_t(v) + ∇(v) + 1/2, and if we pull towards v there must be negative excess, i.e. Γ_t(v) ≤ −∇(v)/2 − 1/2, or equivalently p_t(v) ≤ n_t(v) − ∇(v)/2 − 1/2. We will now establish that z_{s_{2(i+1)}}(v) ≥ z_{s_{2i}}(v) + ∇(v)/2. Consider the time stamps t_{2i} ≤ t < s_{2i+1}: since Γ_{t+1}(v) < ∇(v), the t-th operation cannot be a push away from v and thus p_{t_{2i}}(v) ≤ p_{s_{2i+1}}(v). Combining this with the definition of the time stamp s_{2i+1}, we find

n_{s_{2i+1}}(v) + ∇(v)/2 ≥ p_{s_{2i+1}}(v) ≥ p_{t_{2i}}(v).

This in turn implies that

z_{s_{2(i+1)}}(v) ≥ z_{s_{2i+1}}(v) ≥ p_{t_{2i}}(v) ≥ z_{t_{2i}}(v) + ∇(v)/2 ≥ z_{s_{2i}}(v) + ∇(v)/2,

where we used in the first and the last inequality that z_t(v) is non-decreasing and in the third the definition of the time stamp t_{2i}. By induction, we have z_{s_{2i}}(v) ≥ z_{s_0}(v) + i·∇(v)/2 ≥ (i+1)·∇(v)/2, using z_{s_0}(v) ≥ ∇(v)/2. This allows us to bound

∇(v)·Σ_{s_0 ≤ t} |ℓ_{t+1}(v) − ℓ_t(v)| ≤ ∇(v)·(l+1)·2h ≤ 2h·(2·⌊l/2⌋+2)·∇(v) = O(h·z_{s_{2⌊l/2⌋}}(v)),

where the factor 2h is due to the fact that ℓ_t(v) is non-decreasing between s_{2i}, s_{2i+1} and non-increasing between s_{2i+1}, s_{2(i+1)}. Summing over v and bounding z_t(v) ≤ p_t(v) concludes the argument.

Claim 12.

The contribution to the run-time of the push operations and the pull operations is bounded by

O(h·(‖Δ‖₁ + Σ_{d∈𝒟} |𝒇_{t_d}(d)|)).
Proof.

Let us introduce the potential function Φ(t) = Σ_{v∈V} Γ_t(v)·ℓ_t(v). We remark that any push or pull operation of 1/2 units from u to v, where ℓ_t(u) > ℓ_t(v), decreases Γ_t(u) by 1/2 and increases Γ_t(v) by at most 1/2 while preserving all other entries of Γ_t. Hence, any such push or pull operation decreases the function Φ. This allows us to bound the run-time contribution of the push and pull operations by Σ_{t≥0} max(Φ(t)−Φ(t+1), 0). Since at termination the state is valid and thus all vertices with negative excess are on level 0, the final value of Φ is non-negative. Together with Φ(0)=0, this implies that it suffices to bound the increases of Φ. As argued before, push and pull operations can only decrease the potential. So let 𝒯 be the subset of indices that do not correspond to a push or a pull. Let us denote by t_1, t_2, …, t_k all indices at which we perform a call to IncreaseSource or RemoveEdges. For any i ≥ 0, we denote by 𝒯_{t_i,t_{i+1}} all time indices t∈𝒯 with t_i ≤ t < t_{i+1} (where t_{k+1}=∞ for notational convenience). To ease notation, we write

R = h·(‖Δ‖₁ + Σ_{d∈𝒟} |𝒇_{t_d}(d)|).

As a first step, we observe

Σ_{t∈𝒯_{t_i,t_{i+1}}} max(Φ(t+1)−Φ(t), 0) ≤ Σ_{t∈𝒯_{t_i+1,t_{i+1}}} Σ_{v∈V} |Γ_t(v)|·|ℓ_{t+1}(v)−ℓ_t(v)| + Σ_{v∈V} (Γ_{t_i+1}(v)−Γ_{t_i}(v))·ℓ_{t_i+1}(v),

where the first term of the expression is due to the relabeling operations in 𝒯_{t_i,t_{i+1}} and the second term is due to the function call at index t_i. Since Γ_{t_i}(v) can only increase if p_{t_i}(v) increases, we may thus bound the contribution of a vertex to the second term by

(p_{t_i+1}(v) − p_{t_i}(v))·h. (2)

Combining Claim 10 and Claim 11, we find that Σ_i Σ_{t∈𝒯_{t_i+1,t_{i+1}}} Σ_{v∈V} 2∇(v)·|ℓ_{t+1}(v)−ℓ_t(v)| = O(R). We may thus subtract Σ_{t∈𝒯_{t_i+1,t_{i+1}}} Σ_{v∈V} 2∇(v)·|ℓ_{t+1}(v)−ℓ_t(v)| and reduce the discussion to bounding the expression

Σ_{t∈𝒯_{t_i+1,t_{i+1}}} Σ_{v∈V} (|Γ_t(v)| − 2∇(v))·|ℓ_{t+1}(v)−ℓ_t(v)| + Σ_{v∈V} (Γ_{t_i+1}(v)−Γ_{t_i}(v))·ℓ_{t_i+1}(v). (3)

We treat each vertex v∈V individually. Let τ_i(v) ≥ 0 be maximal such that for all t_i ≤ t < t_i+τ_i(v): Γ_t(v) > 2∇(v) or Γ_t(v) < −2∇(v). Let us consider the first term of expression (3). Note that if |Γ_t(v)| ≤ 2∇(v), then after any push or pull operation at time t we still have |Γ_{t+1}(v)| ≤ 2∇(v). This implies that for all t∈𝒯_{t_i+τ_i(v),t_{i+1}}: |Γ_t(v)| ≤ 2∇(v). It thus suffices to consider for each v∈V the expression

Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} (|Γ_t(v)| − 2∇(v))·|ℓ_{t+1}(v)−ℓ_t(v)|
≤ (|Γ_{t_i+1}(v)| − 2∇(v))·Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} |ℓ_{t+1}(v)−ℓ_t(v)|. (4)

Let us consider the vertices v for which the quantity above is strictly positive. Clearly, Γ_{t_i+1}(v) > 2∇(v) or Γ_{t_i+1}(v) < −2∇(v). Let us consider the first case.
Since at time t_i the tuple (𝒇_{t_i}, ℓ_{t_i}) is valid, we must have Γ_{t_i}(v) < 2∇(v), as otherwise ℓ_{t_i}(v) = h and hence Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} |ℓ_{t+1}(v)−ℓ_t(v)| = 0. Since Γ_{t_i}(v) can only increase if p_{t_i}(v) increases, we can bound

(|Γ_{t_i+1}(v)| − 2∇(v)) ≤ (Γ_{t_i+1}(v) − Γ_{t_i}(v)) ≤ p_{t_i+1}(v) − p_{t_i}(v).

Moreover, the label can only increase during 𝒯_{t_i+1,t_i+τ_i(v)} and hence Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} |ℓ_{t+1}(v)−ℓ_t(v)| ≤ h. We may thus bound the contribution of such a vertex to (3) by

(p_{t_i+1}(v) − p_{t_i}(v))·h. (5)

If on the other hand Γ_{t_i+1}(v) < −2∇(v), then, since at time t_i the tuple (𝒇_{t_i}, ℓ_{t_i}) is valid, we must have Γ_{t_i}(v) ≥ −∇(v)/2, as otherwise ℓ_{t_i}(v) = 0 and hence Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} |ℓ_{t+1}(v)−ℓ_t(v)| = 0. Since Γ_{t_i}(v) can only decrease if n_{t_i}(v) increases, we can bound

(|Γ_{t_i+1}(v)| − 2∇(v)) ≤ (Γ_{t_i}(v) − Γ_{t_i+1}(v)) ≤ n_{t_i+1}(v) − n_{t_i}(v).

Moreover, the label can only decrease during 𝒯_{t_i+1,t_i+τ_i(v)} and hence Σ_{t∈𝒯_{t_i+1,t_i+τ_i(v)}} |ℓ_{t+1}(v)−ℓ_t(v)| ≤ h. We may thus bound the contribution of such a vertex to (3) by

(n_{t_i+1}(v) − n_{t_i}(v))·h. (6)

Summing over all time indices t_1, t_2, … and combining Claim 10 and Claim 11, we can bound the total contribution of (2), (5) and (6) by O(R). This yields the conclusion.

4 Directed Expander Decompositions via the Push-Pull-Relabel Framework

In this section, we discuss our main technical result, which states that if G is initially a ϕ-expander, then an expander decomposition of G can be maintained efficiently. The algorithm behind this theorem relies heavily on the new push-pull-relabel algorithm presented in the previous section.

Theorem 13.

Given a ϕ-expander G=(V,E) with a (ϕ,ψ)-witness (W,Π), there is a deterministic data structure BidirectedExpanderPruning(G,W,Π) (see Algorithm 2). After every update of the form

  • RemoveEdges(D): where D⊆E, sets 𝒟 to 𝒟∪D (initially 𝒟=∅),

the algorithm explicitly updates Ṽ⊆V and the tuple (Eʳ,𝒫), where Eʳ⊆E(G) and 𝒫 is a partition of V∖Ṽ (initially Eʳ=∅, 𝒫=∅), such that both Eʳ and 𝒫 are non-decreasing and such that after the update

  1. the graph G̃=(G∖𝒟)[Ṽ] has a (ϕψ/632000, ψ/4800)-witness (W̃,Π̃),

  2. |Eʳ| ≤ (ϕ/4)·Σ_{P∈𝒫} vol_G(P),

  3. Σ_{P∈𝒫} vol_{W̃}(P) ≤ 8ψ·|Π⁻¹(𝒟)|,

  4. (G∖(𝒟∪Eʳ))/(𝒫∪{Ṽ}) is a directed acyclic graph (DAG),

provided 200ψ²·|Π⁻¹(𝒟)| < e(G). The run-time is O(hψ²·|Π⁻¹(𝒟)| + h·Σ_{d∈𝒟∪E^𝒫}(1+|𝒇_{t_d}(d)|)) = O(h·|𝒟|·ψ²/ϕ), where 𝒟 denotes the variable at the end of all updates and E^𝒫 denotes all intercluster edges of the partition 𝒫∪{Ṽ}.

Following the high-level approach of [31], we obtain our main result Theorem 2 for the special case where G is static by running a generalization of the cut-matching game to directed graphs [22, 27] and then applying Theorem 13 to handle unbalanced cuts. Combining this result again with Theorem 13, we obtain our main result Theorem 2 in full generality. Both of these reductions are known from previous work [2, 18] and are omitted.

4.1 Reduction to Out-Expanders

In this subsection, we show that the task of maintaining an expander decomposition as described in Theorem 13 can be reduced to a simpler problem that only requires maintaining out-expanders (however, under both edge and vertex deletions). In particular, we reduce to the following statement, whose algorithm is deferred to Section 5 and whose proof is omitted.

Lemma 14.

For every ϕ-out-expander G=(V,E) with a (ϕ,ψ)-witness (W,Π), there is a deterministic data structure DirectedExpanderPruning(G,W,Π) (see Algorithm 3). After every update of the form (initially Ṽ=V, 𝒮=∅, E^𝒮=∅, E^{𝒮,+}=∅, E^{𝒮,−}=∅, 𝒟=∅)

  • RemoveEdges(D): where D⊆E(Ṽ)∖𝒟, sets 𝒟 to 𝒟∪D,

  • RemoveVertices(S): where S⊆Ṽ, sets Ṽ to Ṽ∖S, 𝒮 to 𝒮∪S, E^{𝒮,+} to E^{𝒮,+}∪E_G(S,Ṽ∖S), E^{𝒮,−} to E^{𝒮,−}∪E_G(Ṽ∖S,S), and E^𝒮 to E^{𝒮,+}∪E^{𝒮,−},

the algorithm explicitly updates Ṽ⊆V and the tuple (Eʳ,𝒫), where Eʳ⊆E and 𝒫 is a partition of V∖(𝒮∪Ṽ) (initially Eʳ=∅, 𝒫=∅), such that both Eʳ and 𝒫 are non-decreasing and such that after the update

  1. the graph (G∖𝒟)[Ṽ] has a (ϕψ/4400, ψ/220)-out-witness (W̃,Π̃),

  2. |Eʳ| ≤ (ϕ/4)·Σ_{P∈𝒫} vol_G(P),

  3. Σ_{P∈𝒫} vol_{W̃}(P) ≤ 43ψ·|Π⁻¹(𝒟∪E^𝒮)|,

  4. Eʳ = ⋃_{P∈𝒫} E_{G∖(𝒟∪E^𝒮)}(P, V_P∖P), where V_P is equal to Ṽ at the time P is added to 𝒫,

provided 20ψ·|Π⁻¹(𝒟∪E^𝒮∪E^𝒫)| < e(G), where 𝒟 denotes the variable at the end of all updates and E^𝒫 denotes all intercluster edges of the partition 𝒫∪{Ṽ}. The algorithm runs in total time O(hψ²·|Π⁻¹(𝒟∪E^𝒮∪E^𝒫)| + h·Σ_{d∈𝒟∪E^𝒮∪E^𝒫}(1+|𝒇_{t_d}(d)|)).

Algorithm 2 BiDirectedExpanderPruning(G,W,Π).

We present BiDirectedExpanderPruning (see Algorithm 2), which implements this reduction. The Init function initializes two out-expander pruning data structures: one for the graph G with witness (W,Π) and variables Ṽ₁, 𝒫₁, E₁ʳ, and one for the reverse graph G^{rev} with the reversed witness and variables Ṽ₂, 𝒫₂, E₂ʳ. From the guarantees of the one-directional DirectedExpanderPruning algorithm, we will have that after each update G[Ṽ₁] and G^{rev}[Ṽ₂] are out-expanders. To conclude that G[Ṽ] is in fact an expander in both directions, we ensure, using the function AdjustPartition, that Ṽ₁ and Ṽ₂ agree. If they do not agree, we remove the vertices Ṽ₁∖Ṽ₂ from Ṽ₁ using the function RemoveVertices of the algorithm DirectedExpanderPruning (see Algorithm 3), and symmetrically for Ṽ₂∖Ṽ₁. Using the function RemoveEdges of BiDirectedExpanderPruning, we can remove edges from G[Ṽ]. This is again accomplished by calling the analogous function RemoveEdges of DirectedExpanderPruning for both directions. Again, we have to adjust the partition such that Ṽ₁ and Ṽ₂ agree afterwards. The algorithm and the proof are omitted.
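The AdjustPartition loop described above can be sketched in a few lines of Python. The DirectedExpanderPruning instances are replaced by a toy stub (all names hypothetical), since the point is only the alternating removal of the disagreements Ṽ₁∖Ṽ₂ and Ṽ₂∖Ṽ₁ until the two maintained vertex sets coincide, where each removal may itself trigger further pruning on that side.

```python
class PruningStub:
    """Toy stand-in for the DirectedExpanderPruning interface of
    Lemma 14: removing vertices may force extra vertices out of the
    maintained set, modelled here by a fixed cascade map."""

    def __init__(self, vertices, cascade=None):
        self.V_tilde = set(vertices)
        self.cascade = cascade or {}

    def remove_vertices(self, S):
        for v in set(S):
            self.V_tilde.discard(v)
            for w in self.cascade.get(v, ()):
                self.V_tilde.discard(w)  # pruning forced by removing v


def adjust_partition(fwd, rev):
    """Sketch of AdjustPartition: alternately remove the disagreement
    between the vertex set maintained for G (fwd) and for the reverse
    graph (rev) until the two sets agree."""
    while fwd.V_tilde != rev.V_tilde:
        fwd.remove_vertices(fwd.V_tilde - rev.V_tilde)
        rev.remove_vertices(rev.V_tilde - fwd.V_tilde)
    return fwd.V_tilde
```

The loop terminates because both vertex sets only shrink; each iteration removes at least one vertex from one of them.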

5 Maintaining Out-Expanders

It remains to prove Lemma 14. Algorithm 3 implements DirectedExpanderPruning. The Init function initializes, for an out-expander (G,W,Π), the expander pruning variables Ṽ, 𝒫, Eʳ, 𝒟 as well as a first valid state (𝒇,ℓ) of the ValidState algorithm (see Algorithm 1).

Algorithm 3 DirectedExpanderPruning(G,W,Π).

We will use this valid state (𝒇,ℓ) to find the pruning cuts in the function Prune. Using RemoveEdges, the user can remove a set of edges D from G[Ṽ]. Removing edges might force us to prune away some part of G[Ṽ] and add the pruned part to the collection of pruning sets 𝒫. To find the pruning set P, we adopt a similar strategy as in [31]. We inject additional source flow into the flow problem: for any witness edge e=(u,v)∈E(W) such that an edge on the embedding path Π(e) is in D, we increase the sources Δ(u), Δ(v) by 4ψ. Since we updated the flow problem by increasing the source capacities and removing edges, we need to update the valid state (𝒇,ℓ).

The new valid state (𝒇,ℓ) then allows us to locate the pruning set in the function AdjustPartition. In AdjustPartition, we check whether there is a vertex in the remaining graph G[Ṽ] on level h. If there is no such vertex, this certifies that G[Ṽ] already has a good out-witness and we do not need to prune away any subgraph. If, on the other hand, there is a vertex on level h, we find a pruning set P using the function Prune. This set P is then added to the collection of pruning sets 𝒫 and the vertices in P are removed from Ṽ. The cut edges E_{G∖𝒟}(P,Ṽ) are added to the removed edges Eʳ. Since Ṽ has become smaller, we need to update the valid state (𝒇,ℓ) again. We do this once more by injecting additional source at the endpoints of any witness edge e∈E(W) such that some edge on the path Π(e) is in E_{G∖𝒟}(Ṽ,P). Thereafter, we again check whether in the new valid state (𝒇,ℓ) there is a vertex in Ṽ on level h, and we repeat this procedure until no vertex on level h remains. We will prove that the volume of the pruning sets can be related to the size of the set of edges D initially removed by the user.

So far we have not explained how the pruning set is found in the function Prune. This is accomplished by a standard level-cut procedure. We start with P being all vertices on level h and then check whether the vertices on the next level have volume at least ϕ·vol_G(P). If this is the case, we add the vertices on the next level to P; otherwise we return P.

Through the function RemoveVertices, the user can remove a set of vertices S from Ṽ. Similarly to the function RemoveEdges, we inject additional source at the endpoints of the witness edges e∈E(W) such that some edge of Π(e) is in E_{G∖𝒟}(Ṽ,S), and update the valid state (𝒇,ℓ) accordingly. This might again leave some vertices on level h and force us to prune some vertices, which is again accomplished by a call to AdjustPartition. Similarly to RemoveEdges, we will prove that the volume of the pruning sets can be related to the volume of the set S initially removed by the user.
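The level-cut inside Prune can be sketched as follows. The function name and the `vol` map are hypothetical; the sweep direction and the ϕ·vol_G(P) threshold follow the description above, and with h = O(log(n)/ϕ) a light level must appear before the sweep exhausts all levels, since the collected volume grows geometrically otherwise.

```python
def prune_level_cut(level, vol, h, phi):
    """Sketch of Prune's level-cut: start from the vertices on the top
    level h and sweep downwards, absorbing the next level while it is
    still heavy relative to the volume collected so far."""
    P = [v for v in level if level[v] == h]   # seed: all level-h vertices
    vol_P = sum(vol[v] for v in P)
    for k in range(h - 1, 0, -1):
        layer = [v for v in level if level[v] == k]
        layer_vol = sum(vol[v] for v in layer)
        if layer_vol < phi * vol_P:
            break                 # next level is light: cut here
        P += layer                # next level is heavy: keep sweeping
        vol_P += layer_vol
    return P
```

In the data structure, Prune is only invoked when at least one vertex sits on level h, so the seed set is non-empty.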

References

  • [1] Aaron Bernstein, Joakim Blikstad, Thatchaphol Saranurak, and Ta-Wei Tu. Maximum flow by augmenting paths in n^{2+o(1)} time. to appear at FOCS'24, 2024.
  • [2] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental reachability, scc, and shortest paths via directed expanders and congestion balancing. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 1123–1134. IEEE, 2020. doi:10.1109/FOCS46700.2020.00108.
  • [3] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental sssp and approximate min-cost flow in almost-linear time. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 1000–1008. IEEE, 2022.
  • [4] Aaron Bernstein, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen. Near-optimal decremental sssp in dense weighted digraphs. In 2020 IEEE 61st Annual Symposium on Foundations of Computer Science (FOCS), pages 1112–1122. IEEE, 2020. doi:10.1109/FOCS46700.2020.00107.
  • [5] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-dynamic graph sparsifiers against an adaptive adversary. In 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022.
  • [6] Parinya Chalermsook, Syamantak Das, Yunbum Kook, Bundit Laekhanukit, Yang P Liu, Richard Peng, Mark Sellke, and Daniel Vaz. Vertex sparsification for edge connectivity. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1206–1225. SIAM, 2021. doi:10.1137/1.9781611976465.74.
  • [7] Li Chen, Rasmus Kyng, Yang P Liu, Simon Meierhans, and Maximilian Probst Gutenberg. Almost-linear time algorithms for incremental graphs: Cycle detection, sccs, s-t shortest path, and minimum-cost flow. to appear at STOC’24, 2024.
  • [8] Li Chen, Rasmus Kyng, Yang P Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Maximum flow and minimum-cost flow in almost-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 612–623. IEEE, 2022. doi:10.1109/FOCS54457.2022.00064.
  • [9] Julia Chuzhoy. Decremental all-pairs shortest paths in deterministic near-linear time. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 626–639, 2021. doi:10.1145/3406325.3451025.
  • [10] Julia Chuzhoy and Sanjeev Khanna. A new algorithm for decremental single-source shortest paths with applications to vertex-capacitated flow and cut problems. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, pages 389–400, 2019. doi:10.1145/3313276.3316320.
  • [11] Julia Chuzhoy and Sanjeev Khanna. A faster combinatorial algorithm for maximum bipartite matching. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2185–2235. SIAM, 2024. doi:10.1137/1.9781611977912.79.
  • [12] Julia Chuzhoy and Sanjeev Khanna. Maximum bipartite matching in n^{2+o(1)} time via a combinatorial algorithm. to appear at STOC'24, 2024.
  • [13] Julia Chuzhoy and Thatchaphol Saranurak. Deterministic algorithms for decremental shortest paths via layered core decomposition. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2478–2496. SIAM, 2021. doi:10.1137/1.9781611976465.147.
  • [14] Andrew V Goldberg and Robert E Tarjan. A new approach to the maximum-flow problem. Journal of the ACM (JACM), 35(4):921–940, 1988. doi:10.1145/48014.61051.
  • [15] Gramoz Goranci, Harald Räcke, Thatchaphol Saranurak, and Zihan Tan. The expander hierarchy and its applications to dynamic graph algorithms. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2212–2228. SIAM, 2021. doi:10.1137/1.9781611976465.132.
  • [16] Monika Henzinger, Satish Rao, and Di Wang. Local flow partitioning for faster edge connectivity. SIAM Journal on Computing, 49(1):1–36, 2020. doi:10.1137/18M1180335.
  • [17] Dorit S Hochbaum. The pseudoflow algorithm: A new algorithm for the maximum-flow problem. Operations research, 56(4):992–1009, 2008. doi:10.1287/OPRE.1080.0524.
  • [18] Yiding Hua, Rasmus Kyng, Maximilian Probst Gutenberg, and Zihang Wu. Maintaining expander decompositions via sparse cuts. In Proceedings of the 2023 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 48–69. SIAM, 2023. doi:10.1137/1.9781611977554.CH2.
  • [19] Wenyu Jin and Xiaorui Sun. Fully dynamic s-t edge connectivity in subpolynomial time. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 861–872. IEEE, 2022.
  • [20] Wenyu Jin, Xiaorui Sun, and Mikkel Thorup. Fully dynamic min-cut of superconstant size in subpolynomial time. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2999–3026. SIAM, 2024. doi:10.1137/1.9781611977912.107.
  • [21] Jonathan A Kelner, Yin Tat Lee, Lorenzo Orecchia, and Aaron Sidford. An almost-linear-time algorithm for approximate max flow in undirected graphs, and its multicommodity generalizations. In Proceedings of the twenty-fifth annual ACM-SIAM symposium on Discrete algorithms, pages 217–226. SIAM, 2014. doi:10.1137/1.9781611973402.16.
  • [22] Rohit Khandekar, Satish Rao, and Umesh Vazirani. Graph partitioning using single commodity flows. Journal of the ACM (JACM), 56(4):1–15, 2009. doi:10.1145/1538902.1538903.
  • [23] Rasmus Kyng, Simon Meierhans, and Maximilian Probst Gutenberg. A dynamic shortest paths toolbox: Low-congestion vertex sparsifiers and their applications. to appear at STOC’24, 2024.
  • [24] Rasmus Kyng, Simon Meierhans, and Maximilian Probst. Derandomizing directed random walks in almost-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 407–418. IEEE, 2022. doi:10.1109/FOCS54457.2022.00046.
  • [25] Jason Li. Deterministic mincut in almost-linear time. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 384–395, 2021. doi:10.1145/3406325.3451114.
  • [26] Yang P Liu. Vertex sparsification for edge connectivity in polynomial time. In 14th Innovations in Theoretical Computer Science Conference (ITCS 2023). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023.
  • [27] Anand Louis. Cut-matching games on directed graphs. arXiv preprint arXiv:1010.1047, 2010. arXiv:1010.1047.
  • [28] Danupon Nanongkai and Thatchaphol Saranurak. Dynamic spanning forest with worst-case update time: adaptive, Las Vegas, and O(n^{1/2−ε})-time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1122–1129, 2017. doi:10.1145/3055399.3055447.
  • [29] Danupon Nanongkai, Thatchaphol Saranurak, and Christian Wulff-Nilsen. Dynamic minimum spanning forest with subpolynomial worst-case update time. In 2017 IEEE 58th Annual Symposium on Foundations of Computer Science (FOCS), pages 950–961. IEEE, 2017. doi:10.1109/FOCS.2017.92.
  • [30] Thatchaphol Saranurak. A simple deterministic algorithm for edge connectivity. In Symposium on Simplicity in Algorithms (SOSA), pages 80–85. SIAM, 2021. doi:10.1137/1.9781611976496.9.
  • [31] Thatchaphol Saranurak and Di Wang. Expander decomposition and pruning: Faster, stronger, and simpler. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2616–2635. SIAM, 2019. doi:10.1137/1.9781611975482.162.
  • [32] Daniel A Spielman and Shang-Hua Teng. Nearly-linear time algorithms for graph partitioning, graph sparsification, and solving linear systems. In Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, pages 81–90, 2004. doi:10.1145/1007352.1007372.
  • [33] Jan Van Den Brand, Li Chen, Rasmus Kyng, Yang P Liu, Simon Meierhans, Maximilian Probst Gutenberg, and Sushant Sachdeva. Almost-linear time algorithms for decremental graphs: Min-cost flow and more via duality. to appear at FOCS’24, 2024.
  • [34] Jan Van Den Brand, Li Chen, Richard Peng, Rasmus Kyng, Yang P Liu, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford. A deterministic almost-linear time algorithm for minimum-cost flow. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 503–514. IEEE, 2023. doi:10.1109/FOCS57990.2023.00037.
  • [35] Jan Van Den Brand, Yin Tat Lee, Yang P Liu, Thatchaphol Saranurak, Aaron Sidford, Zhao Song, and Di Wang. Minimum cost flows, MDPs, and ℓ₁-regression in nearly linear time for dense instances. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, pages 859–869, 2021. doi:10.1145/3406325.3451108.
  • [36] Christian Wulff-Nilsen. Fully-dynamic minimum spanning forest with improved worst-case update time. In Proceedings of the 49th Annual ACM SIGACT Symposium on Theory of Computing, pages 1130–1143, 2017. doi:10.1145/3055399.3055415.