
A Simple Dynamic Spanner via APSP

Rasmus Kyng, ETH Zurich, Switzerland; Simon Meierhans, ETH Zurich, Switzerland; Gernot Zöcklein, ETH Zurich, Switzerland
Abstract

We give a simple algorithm for maintaining an n^{o(1)}-approximate spanner H of a graph G with n vertices as G receives edge updates by reduction to the dynamic All-Pairs Shortest Paths (APSP) problem. Given an initially empty graph G, our algorithm processes m insertions and n deletions in total time m^{1+o(1)} and maintains an initially empty spanner H with total recourse n^{1+o(1)}. When the number of insertions is much larger than the number of deletions, this notably yields recourse sub-linear in the total number of updates.

Our simple algorithm can be extended via batching to maintain a δ-approximate spanner, for any δ = ω(1), with n^{1+o(1)} edges throughout a sequence of m insertions and D deletions, with amortized update time n^{o(1)} and total recourse n^{1+o(1)} + n^{o(1)} D.

Keywords and phrases:
Dynamic graph algorithms, Spanner, Dynamic Greedy Spanner
Category:
Track A: Algorithms, Complexity and Games
Funding:
Rasmus Kyng: The research leading to these results has received funding from grant no. 200021 204787 and from the starting grant “A New Paradigm for Flow and Cut Algorithms” (no. TMSGI2_218022) of the Swiss National Science Foundation.
Simon Meierhans: The research leading to these results has received funding from grant no. 200021 204787 of the Swiss National Science Foundation. Simon Meierhans is supported by a Google PhD Fellowship.
Copyright and License:
© Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Dynamic graph algorithms; Theory of computation → Sparsification and spanners
Related Version:
Full Version: https://arxiv.org/abs/2408.11368
Acknowledgements:
The authors are very grateful for feedback from Maximilian Probst Gutenberg and Pratyai Mazumder on a draft of this article that improved the presentation.
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

Given a graph G = (V, E), a δ-approximate spanner H is a subgraph of G with the same vertex set such that dist_H(u, v) ≤ δ · dist_G(u, v) for every pair of vertices u, v ∈ V; the parameter δ is called the stretch of H. Typically, the spanner H has far fewer edges than G and can therefore be used as a sparse proxy for computing approximate distances in downstream applications. Althöfer, Das, Dobkin, Joseph and Soares developed the now-classical greedy spanner [1].

Given an integer δ, their algorithm incrementally constructs a (2δ − 1)-approximate spanner H of G = (V, E) as follows.

  1. Initialize H = (V, ∅).

  2. Consider the edges (u, v) ∈ E in arbitrary order. If dist_H(u, v) ≥ 2δ, add (u, v) to H.

When this process finishes, every edge on the shortest path between two arbitrary vertices in G can be replaced by a detour of length at most 2δ − 1 in H. The claimed approximation dist_H(u, v) ≤ (2δ − 1) · dist_G(u, v) directly follows.
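The two steps above can be sketched as follows (a static construction with exact BFS distances; the function names are ours, not from [1]):

```python
from collections import deque

def bfs_dist(adj, u, v, cutoff):
    """Length of a shortest u-v path in adj if it is at most cutoff, else None."""
    if u == v:
        return 0
    seen = {u: 0}
    q = deque([u])
    while q:
        x = q.popleft()
        if seen[x] >= cutoff:       # all remaining queue entries are at least this deep
            break
        for y in adj[x]:
            if y not in seen:
                seen[y] = seen[x] + 1
                if y == v:
                    return seen[y]
                q.append(y)
    return None

def greedy_spanner(vertices, edges, delta):
    """Greedy (2*delta - 1)-spanner: keep (u, v) iff dist_H(u, v) >= 2*delta."""
    adj = {x: set() for x in vertices}
    H = []
    for u, v in edges:
        if bfs_dist(adj, u, v, 2 * delta - 1) is None:   # no short detour in H yet
            H.append((u, v))
            adj[u].add(v)
            adj[v].add(u)
    return H
```

For example, on the complete graph K_4 with δ = 2 and edges listed from vertex 0 first, the construction keeps a 3-edge spanning star, while a 5-cycle is kept in full since its girth is 2δ + 1 = 5.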

Crucially, they show that the graph H is also sparse. This follows from the folklore observation that high-girth graphs are sparse; the girth of a graph is the length of its shortest cycle. By construction, the girth of H is at least 2δ + 1, which implies that its total edge count is bounded by 2|V|^{1+1/δ}. Whenever δ grows with n, this leads to an almost-linearly sized spanner.

Observation 1 (Folklore).

Every graph H = (V, E_H) with girth at least 2δ + 1 contains at most 2|V|^{1+1/δ} edges.

Proof.

Repeatedly remove vertices of degree at most d = |V|^{1/δ} + 1 from H. This process removes at most d · |V| ≤ 2|V|^{1+1/δ} edges. Assume for the sake of a contradiction that some non-empty graph H̃ ⊆ H remains after this process terminates; every vertex of H̃ then has degree larger than d. Fix an arbitrary remaining vertex r, and consider the breadth-first search tree T ⊆ H̃ of depth δ rooted at r. Since the girth of H̃ is at least 2δ + 1, no two vertices at depth < δ in T share an edge in H̃ ∖ T, so every such vertex has at least d − 1 distinct children in T. Thus, the number of vertices in T is at least Σ_{i=0}^{δ} (d − 1)^i > (d − 1)^δ = |V|, which is a contradiction.

Despite the simplicity of this framework, previous deterministic dynamic algorithms for maintaining a spanner H ⊆ G as G undergoes updates were based on expander decompositions [7, 5].

However, in [4] the authors observed that the greedy spanner construction can be used to maintain a spanner H ⊆ G with low amortized recourse as G receives edge deletions. As part of their almost-linear time incremental min-cost flow algorithm, [6] presented a computationally efficient algorithm for maintaining a spanner of a fully-dynamic graph G based on the greedy spanner; they crucially exploit that the recourse is sub-linear in the number of insertions, a property that expander decomposition based algorithms seem less suited to achieve. Their algorithm is complicated by the fact that it needs to process vertex splits in addition to edge updates, and that the total number of decremental updates is limited by the number of vertices.

1.1 A simple dynamic spanner for graphs with few deletions

We present a simplified algorithm for maintaining a spanner of an edge-dynamic graph by reduction to an All-Pairs Shortest Paths (APSP) oracle. The underlying APSP oracle is a dynamic algorithm providing query access to approximate pairwise distances in a dynamic graph [9, 12, 11, 13].

Theorem 2.

Given an initially empty edge-dynamic graph G = (V, E) on n = |V| vertices receiving m edge insertions and up to n edge deletions, where m ≥ n, there is an algorithm maintaining an n^{o(1)}-approximate spanner H with total update time m^{1+o(1)} and total recourse n^{1+o(1)}.

We note in particular that the total recourse is almost-optimal and only depends on the number of vertices and deletions, and is therefore completely independent of the number of insertions.

Our simple algorithm yielding Theorem 2 is presented in Section 4. It can use any path-reporting approximate APSP data structure and only adds an O(log n) factor overhead in runtime and approximation. We refer the reader to Definition 5 and Theorem 12 for a more fine-grained analysis of the dependency on the underlying APSP data structure.

 Remark 3.

Theorem 2 can be extended to graphs with polynomially bounded edge lengths via standard length bucketing. The number of edges in the final spanner increases by a logarithmic factor.

1.2 More deletions and lower stretch dynamic spanners

Batching techniques allow us to extend our spanner to handle an arbitrary number of deletions, where the recourse becomes almost-linear in the number of deletions and vertices.

An originally independent work [8] recently introduced the idea of obtaining a dynamic spanner with stretch δ = o(log n). (The first version of this article on arXiv precedes the announcement of [8], but their article prompted us to add Theorem 4 and Section 5.) They remark that prior to their work, no algorithm with update time o(n), n^{1+o(1)} edges and stretch o(log n) existed. This motivated us to extend our spanner to handle arbitrary non-constant stretch as in Theorem 18. They directly use ideas from [11], which we completely black-box. We remark that our algorithm loses large sub-polynomial factors in update time and density compared to theirs.

Theorem 4.

Given an edge-dynamic graph G = (V, E) with n = |V| vertices receiving m insertions and D deletions, and a parameter δ = ω(1), there is an algorithm maintaining a δ-approximate spanner H containing at most n^{1+o(1)} edges with total recourse n^{1+o(1)} + n^{o(1)} D and amortized update time n^{o(1)}.

Our algorithm yielding Theorem 4 is presented in Section 5. It depends on the APSP data structure of [11], which can get arbitrarily good approximation by trading it off for update time. We point out that the recourse achieved by our algorithm is again almost-optimal, since scaling linearly in the number of deletions is unavoidable.

1.3 Prior Work

In [2], the authors present a randomized dynamic spanner with poly-logarithmic stretch and amortized update time against an oblivious adversary. (Oblivious adversaries generate the sequence of updates at the start, i.e. independently of the random choices of the algorithm, while adaptive adversaries can use the previous output to fool a randomized algorithm.) Then [10] significantly reduced the poly-logarithmic overhead in both size and update time. In [3], the authors gave the first non-trivial algorithm for maintaining a dynamic spanner against an adaptive adversary: it maintains a spanner of near-linear size with poly-logarithmic amortized update time and stretch. Deterministic algorithms for maintaining spanners of dynamic graphs with bounded degree that additionally undergo vertex splits were developed in the context of algorithms for minimum-cost flow [7, 5, 6]; these have sub-polynomial overhead.

2 Preliminaries

2.1 Static and Dynamic Graphs

We let G=(V,E) denote unweighted and undirected graphs with vertex set V and edge set E. When we want to refer to the vertex/edge set of some graph G, we sometimes use V(G) and E(G) respectively.

We refer to |E(G)|/|V(G)| as the density of a graph, and say a graph is sparse whenever its density is n^{o(1)}.

For a graph G = (V, E) and a pair of vertices u, v ∈ V we let dist_G(u, v) denote the length of a shortest uv-path in G. We call a graph edge-dynamic if it receives arbitrary updates to E (i.e. edge insertions and deletions). We refer to the total number of such edge updates a graph receives as its recourse.

2.2 Edge Embeddings

Given two graphs H and G on the same vertex set such that H ⊆ G, we let Π_{G→H} be a function that maps edges in E(G) to paths in H. Given an embedding Π_{G→H}, we define the edge congestion econg(Π_{G→H}, e) := |{f ∈ E(G) : e ∈ Π_{G→H}(f)}| for e ∈ E(H).
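As a toy illustration (the example embedding and its edge names are ours), the congestion of an edge of H simply counts the embedding paths routed over it:

```python
# Embedding Pi_{G->H}: each edge of G maps to a path in H, given as a list of H-edges.
Pi = {
    ("a", "d"): [("a", "b"), ("b", "c"), ("c", "d")],
    ("a", "c"): [("a", "b"), ("b", "c")],
}

def econg(Pi, e):
    """econg(Pi, e) = |{f in E(G) : e lies on the path Pi(f)}|."""
    return sum(1 for path in Pi.values() if e in path)
```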

3 Dynamic All-Pairs Shortest Paths

We first define approximate All-Pairs Shortest Paths (APSP) data structures for edge-dynamic graphs.

Definition 5 (APSP).

For an initially empty edge-dynamic graph G = (V, E), a γ-approximate α-update β-query APSP data structure supports the following operations.

  • AddEdge(u, v) / RemoveEdge(u, v): Add/remove the edge (u, v) to/from G in (amortized) time α.

  • ApxDist(u, v): Returns a distance estimate dist~(u, v) such that

    dist(u, v) ≤ dist~(u, v) ≤ γ · dist(u, v)

    in (amortized) time β.

  • Path(u, v): Returns a simple uv-path P in G of length |P| ≤ ApxDist(u, v) in time β · |P|.

We refer the reader to [9, 12, 11, 13] for recent results on path-reporting dynamic APSP. The parameters γ, α and β should currently be thought of as sub-polynomial factors n^{o(1)} in the number of vertices n, of the form e^{O(log^τ n)} for some constant 0 < τ < 1.
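For intuition, here is a trivially correct but inefficient instantiation of this interface in Python, with γ = 1, O(1) updates and one BFS per query; the class and method names are ours, and none of the sub-polynomial guarantees of the cited data structures apply to it:

```python
from collections import deque

class NaiveAPSP:
    """Exact (gamma = 1) APSP for an unweighted edge-dynamic graph.
    Updates take O(1); each query runs a BFS, so beta = O(m) here."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def remove_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def path(self, u, v):
        """A shortest simple u-v path, or None if v is unreachable."""
        parent = {u: None}
        q = deque([u])
        while q and v not in parent:
            x = q.popleft()
            for y in self.adj.get(x, ()):
                if y not in parent:
                    parent[y] = x
                    q.append(y)
        if v not in parent:
            return None
        p = [v]
        while parent[p[-1]] is not None:
            p.append(parent[p[-1]])
        return p[::-1]

    def apx_dist(self, u, v):
        p = self.path(u, v)
        return len(p) - 1 if p is not None else float("inf")
```

Any of the data structures cited above can replace this naive stand-in; the reduction in Section 4 relies only on the interface.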

In [13], the authors present the shortest APSP algorithm yet by simplifying the algorithm of [12] at the expense of a larger sub-polynomial overhead in runtime, i.e. α = m^{1/(log log m)^c} for some c > 0. Their algorithm internally bootstraps a more complicated version of the spanner presented in this article that is tailored to their application.

The algorithm of [11] can obtain a better approximation γ = log^ε n at the cost of a large sub-polynomial update time α = n^{1/(log log n)^{Θ(ε)}}.

4 A Dynamic Greedy Spanner

4.1 A Simple Low-Recourse Algorithm

We first present a very simple dynamic variant of the greedy spanner that merely limits the number of changes to H, i.e. its recourse. Then we describe the changes necessary to obtain an efficient version using a dynamic APSP data structure.

Let G = (V, E) be an initially empty graph on n vertices. We maintain an O(log n)-spanner H ⊆ G as follows.

  • When an edge (u, v) is inserted into G, we check whether dist_H(u, v) > 2 log n. If so, we add it to H.

  • When an edge (u, v) is deleted from G, we remove it from H if it is present and re-insert all edges in E(G) ∖ E(H) using the procedure described above. If (u, v) is not in H, we do not need to update our spanner.

This algorithm maintains that the girth of H is at least 2 log n + 1, and therefore the graph H contains at most O(n) edges at any point by Observation 1. Edges only leave the spanner H when they are deleted. If at most D edges get deleted throughout, a total recourse bound of O(n + D) follows directly, since the total recourse is bounded by (maximum # of edges in H) + (# of edges leaving H).
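This recourse-limited variant can be sketched as follows, with exact BFS standing in for the distance checks (so only the recourse behaviour, not the running time, is illustrative; all names are ours):

```python
from collections import deque
from math import log2

class LowRecourseSpanner:
    """Recourse-only variant of the dynamic greedy spanner (no efficiency claimed)."""

    def __init__(self, n):
        self.n = n
        self.G = set()        # current edges of G, stored as frozensets
        self.adj = {}         # adjacency lists of the spanner H
        self.recourse = 0     # total number of changes made to H

    def _dist_at_most(self, u, v, cutoff):
        """True iff dist_H(u, v) <= cutoff, via bounded BFS."""
        if u == v:
            return True
        seen = {u: 0}
        q = deque([u])
        while q:
            x = q.popleft()
            if seen[x] >= cutoff:
                break
            for y in self.adj.get(x, ()):
                if y == v:
                    return True
                if y not in seen:
                    seen[y] = seen[x] + 1
                    q.append(y)
        return False

    def _try_add_to_H(self, u, v):
        if not self._dist_at_most(u, v, int(2 * log2(self.n))):
            self.adj.setdefault(u, set()).add(v)
            self.adj.setdefault(v, set()).add(u)
            self.recourse += 1

    def insert(self, u, v):
        self.G.add(frozenset((u, v)))
        self._try_add_to_H(u, v)

    def delete(self, u, v):
        self.G.discard(frozenset((u, v)))
        if v in self.adj.get(u, set()):       # only deletions hitting H cost recourse
            self.adj[u].discard(v)
            self.adj[v].discard(u)
            self.recourse += 1
            for e in list(self.G):            # re-run the insertion check on G \ H
                a, b = tuple(e)
                if b not in self.adj.get(a, set()):
                    self._try_add_to_H(a, b)
```

Deleting an edge outside H leaves the spanner untouched, which is what makes the recourse independent of the number of insertions.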

4.2 A Simple Efficient Algorithm

To turn the above framework into an efficient algorithm, we maintain an embedding Π_{G→H} that maps each edge of G to a short path in H and thus certifies that such a path still exists. We then only have to re-insert the edges whose embedding paths use a deleted edge. To bound the number of re-insertions, we carefully manage the edge congestion of Π_{G→H}. Finally, we use an APSP data structure (see Definition 5) to obtain distances and paths. In the following, we assume there are at most m insertions and n deletions, and that |V| = n.

Our algorithm internally maintains two graphs: the current spanner H and a not yet congested subgraph Ĥ ⊆ H. We maintain that the edge congestion is strictly less than K for every edge in Ĥ, and maintain access to distances and paths in Ĥ via a γ-approximate α-update β-query APSP data structure 𝒟_Ĥ.

  • When an edge e is inserted, we query 𝒟_Ĥ for the distance between its endpoints. If the returned estimate is at most 2γ log n, we query 𝒟_Ĥ for a witness path P and store P as the embedding path of e; then we check whether any edge on P now has K embedding paths using it, and remove every such edge from Ĥ. Otherwise, we add the edge e to H and Ĥ.

  • When an edge e is deleted, we remove it from Ĥ and H and re-insert all edges whose embedding path is now broken, using the procedure above.

Choosing the threshold K = m/n allows us to tolerate up to n deletions, which yield at most m additional calls to the insertion procedure. See Algorithm 1 for detailed pseudocode.

Algorithm 1 DynamicSpanner().
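A self-contained sketch of this bookkeeping follows; we plug in a trivially exact BFS oracle (γ = 1) for 𝒟_Ĥ, and all class and method names are ours rather than taken from Algorithm 1:

```python
from collections import deque
from math import log2

class ExactOracle:
    """Stand-in for the APSP data structure over H_hat (gamma = 1, BFS per query)."""

    def __init__(self):
        self.adj = {}

    def add_edge(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def remove_edge(self, u, v):
        self.adj[u].discard(v)
        self.adj[v].discard(u)

    def path(self, u, v):
        parent = {u: None}
        q = deque([u])
        while q and v not in parent:
            x = q.popleft()
            for y in self.adj.get(x, ()):
                if y not in parent:
                    parent[y] = x
                    q.append(y)
        if v not in parent:
            return None
        p = [v]
        while parent[p[-1]] is not None:
            p.append(parent[p[-1]])
        return p[::-1]

    def apx_dist(self, u, v):
        p = self.path(u, v)
        return len(p) - 1 if p is not None else float("inf")

class EmbeddingSpanner:
    """Congestion bookkeeping of the efficient algorithm (all names ours)."""

    def __init__(self, n, gamma, K):
        self.n, self.gamma, self.K = n, gamma, K
        self.oracle = ExactOracle()   # plays the role of the APSP structure on H_hat
        self.H = set()                # spanner edges, stored as frozensets
        self.H_hat = set()            # not-yet-congested subgraph, mirrored in oracle
        self.emb = {}                 # G-edge -> list of H-edges on its embedding path
        self.cong = {}                # H-edge -> number of embedding paths using it
        self.users = {}               # H-edge -> set of G-edges embedded over it

    def insert(self, u, v):
        g = frozenset((u, v))
        if self.oracle.apx_dist(u, v) <= 2 * self.gamma * log2(self.n):
            p = self.oracle.path(u, v)            # witness path in H_hat
            edges = [frozenset(t) for t in zip(p, p[1:])]
            self.emb[g] = edges
            for e in edges:
                self.cong[e] = self.cong.get(e, 0) + 1
                self.users.setdefault(e, set()).add(g)
                if self.cong[e] >= self.K and e in self.H_hat:
                    self.H_hat.discard(e)         # retire congested edge (stays in H)
                    self.oracle.remove_edge(*tuple(e))
        else:
            self.H.add(g)
            self.H_hat.add(g)
            self.oracle.add_edge(u, v)

    def _unembed(self, g):
        for e in self.emb.pop(g, ()):
            self.cong[e] -= 1
            self.users.get(e, set()).discard(g)

    def delete(self, u, v):
        g = frozenset((u, v))
        self._unembed(g)                          # drop g's own embedding, if any
        if g in self.H:
            self.H.discard(g)
            if g in self.H_hat:
                self.H_hat.discard(g)
                self.oracle.remove_edge(u, v)
            # re-insert every edge whose embedding path used the deleted edge
            for f in list(self.users.pop(g, set())):
                self._unembed(f)
                self.insert(*tuple(f))
```

Retired edges remain in H, so embedding paths over them stay valid; only an actual deletion breaks paths and triggers re-insertions.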

4.3 Analysis

We first show that H is a (2γ log n)-approximate spanner.

Lemma 6.

The graph H maintained by DynamicSpanner() (Algorithm 1) is a (2γ log n)-approximate spanner throughout.

Proof.

When an edge e ∈ E(G) is inserted, it afterwards either belongs to H or has a valid embedding path of length at most 2γ log n in H by the description of InsertEdge(e). If an edge on this path is later deleted, the edge e is re-inserted by the description of DeleteEdge(e), which restores such a path. Therefore every edge of G can be replaced by a detour of length at most 2γ log n in H at all times, and the claim follows.

We then prove that the routine InsertEdge(e) is called at most 2m times in total.

Claim 7.

For all f ∈ E(H), econg(Π_{G→H}, f) ≤ K throughout.

Proof.

Whenever the edge congestion of an edge in Ĥ reaches K, the edge is removed from Ĥ and 𝒟_Ĥ. By the description of InsertEdge(), such an edge is never used in embedding paths again. Therefore econg(Π_{G→H}, f) is bounded by K for every edge f.

Corollary 8.

A sequence of at most m external insertions and m/K deletions causes at most 2m calls to InsertEdge() in total.

Proof.

By Claim 7, the edge congestion is at most K throughout. Since the algorithm re-inserts every edge with a broken embedding path, each of the at most m/K deletions triggers at most K calls to InsertEdge(), for a total of at most m additional calls issued by the DeleteEdge() routine. The corollary follows.

Given the previous corollary, we are ready to bound the total recourse of H.

Claim 9.

|E(H)| ≤ O(γm log n / K + n) throughout a sequence of m insertions and m/K deletions.

Proof.

We bound the edges in E(H) ∖ E(Ĥ) and in E(Ĥ) separately, starting with the former. Each such edge was congested by K embedding paths when it was removed from Ĥ. Since at most 2m edges are ever inserted and each embedding path has length at most 2γ log n by the description of InsertEdge(), the total cumulative congestion that all edges ever experience is at most 4γm log n. Therefore, the total number of edges in E(H) ∖ E(Ĥ) is at most (4γm log n)/K. Finally, since the distance estimates of Definition 5 never underestimate, an edge is only added to Ĥ if the distance in Ĥ between its endpoints exceeds 2 log n; hence Ĥ has girth at least 2 log n + 1 and therefore at most O(n) edges by Observation 1.

Corollary 10.

The recourse of H is O(γm log n / K + n).

Proof.

Follows from Claim 9 since at most m/K edges get removed from H.

It remains to bound the total time spent processing updates.

Lemma 11.

The algorithm processes m insertions and m/K deletions in total time O(βγm log n + αγ(m/K) log n + αn).

Proof.

There are at most 2m calls to InsertEdge(). Each such call causes a distance query and possibly a path-reporting query to 𝒟_Ĥ. Whenever a path-reporting query is issued, the resulting path P has length at most 2γ log n. This yields a total runtime of O(βγm log n).

Then, by Corollary 10, at most O(γm log n / K + n) edges get added to, and therefore also removed from, Ĥ. These cause O(γm log n / K + n) update calls to 𝒟_Ĥ, for a total runtime of O(αγ(m/K) log n + αn).

All other operations can be subsumed in the times above.

Our first theorem then follows directly after setting K=m/n.

Theorem 12 (Simple Spanner to APSP Reduction).

Given a γ-approximate α-update β-query APSP data structure (Definition 5), there is an algorithm that maintains a (2γ log n)-approximate spanner H of an initially empty graph G on n vertices throughout a sequence of up to m insertions and n deletions in total time O(βγm log n + αγn log n). The total recourse of H is O(γn log n).

Proof.

Follows from Corollary 10 and Lemma 11 with K=m/n.

Then, Theorem 2 follows from Theorem 12 in conjunction with any of the APSP data structures of [9, 12, 11]. We point out that the quality of the spanner could be improved to O(log^{1+ε} n) at the expense of a rather large sub-polynomial factor n^{1/(log log n)^{Θ(ε)}} in update time and recourse using the APSP data structure of [11] (see Theorem 19). In the next section, we exploit their data structure to obtain lower stretch spanners.

5 Lower Stretch & More Deletions

The advantage of the data structure from Theorem 12 is that it has sub-linear recourse when the number of insertions is much larger than the number of deletions. On the flip side, processing many deletions is expensive if the input graph is dense. In some applications, spanners are used to further sparsify graphs that are already fairly sparse [7, 5, 12, 13], but in others this overhead may be significant.

We thus want to modify our spanner to allow for (1) an arbitrary number of edge deletions instead of O(n) as before, and (2) arbitrary stretch δ = ω(1) instead of n^{o(1)} as before. We start by noting that replacing the factor 2 log n in Line 6 of Algorithm 1 with 2κ for some arbitrary κ > 0 directly yields the following.

Lemma 13.

Let κ > 0. If we replace the factor 2 log n in Line 6 of Algorithm 1 by 2κ, the following properties hold for a sequence of m insertions and m/K deletions:

  1. The graph H maintained is a (2γκ)-approximate spanner throughout.

  2. |E(H)| ≤ O(γmκ/K + n^{1+1/κ}) throughout.

  3. The recourse of H is O(γmκ/K + n^{1+1/κ}).

The algorithm runs in total time O(βγmκ + αγ(mκ/K + n^{1+1/κ})).

Proof.

The proof of the first item is completely analogous to the proof of Lemma 6. The second item follows from Observation 1 together with the bound on the number of congested edges obtained by applying Claim 7 as in the proof of Claim 9. Finally, the recourse bound follows from the bound on the number of edges and the fact that edges only leave the spanner when they get deleted.

5.1 Stepwise sparsification for processing more deletions

We now show that we can allow for an arbitrary amount of deletions at the cost of a subpolynomial overhead in both stretch and update time.

Previously, we chose the density reduction parameter K=m/n, which corresponds to completely sparsifying the graph in one step. To handle more deletions, we choose K much smaller, but then recursively compute spanners of spanners until we reach the desired sparsity. We call each such round of sparsification a layer.

Concretely, we let the density reduction K be a tunable parameter and assume m = K^Λ n^{1+1/κ} for some integer number of layers Λ = O(log_K(m/n^{1+1/κ})). Reducing the density by a factor of K on each of the Λ layers ensures that we reach the desired sparsity via the algorithm outlined below.

5.2 Layered algorithm

We initialize H_0 = G, and recursively apply Algorithm 1 to obtain H_i from H_{i−1} for i = 1, …, Λ. We let H_Λ be the desired spanner. We then dynamically maintain this hierarchy throughout a sequence of up to m insertions and m/K deletions to H_0 = G. Concretely, we go over the layers in ascending order, and at layer i we feed all changes to H_{i−1} into the spanner data structure maintaining H_i. This may lead to updates of H_i, which then get passed on to the next layer i+1 as long as i < Λ.

Whenever the graph G has experienced some multiple of m/K^{i+1} deletions, we re-initialize layers i, …, Λ. We call this a rebuild. Rebuilds are required to maintain sparsity: without them, the number of congested edges would grow too much on layers with higher indices due to re-insertions. We stress that the timing of rebuilds only depends on the update sequence of the original graph G.

The key insight enabling this algorithm is that edge deletions in H_i always correspond to edge deletions in G, i.e. deletions to H_i might cause some extra insertions to H_{i+1}, but never more deletions. This is crucial since deletions cause a large amount of recourse, whereas recourse due to insertions is effectively sparsified.
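The rebuild schedule can be illustrated with a small toy calculation (it models only the schedule, not the spanners themselves; the function names and concrete numbers are ours):

```python
def rebuilt_layers(d, m, K, num_layers):
    """Layers re-initialized after the d-th deletion to G: layers i, ..., Lambda
    are rebuilt whenever d is a multiple of m / K^(i+1)."""
    for i in range(num_layers + 1):
        if d % (m // K ** (i + 1)) == 0:
            return list(range(i, num_layers + 1))
    return []

def rebuild_count(i, m, K, num_layers):
    """How often layer i is rebuilt over a sequence of m/K deletions (cf. Claim 15)."""
    return sum(1 for d in range(1, m // K + 1)
               if i in rebuilt_layers(d, m, K, num_layers))
```

For example, with m = 1024, K = 4 and Λ = 2, layer i is rebuilt K^i times: once for i = 0, 4 times for i = 1 and 16 times for i = 2, matching Claim 15.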

We first show that the final spanner HΛ has the desired sparsity.

Claim 14 (Sparsity).

At any time, |E(H_i)| = O((γκ)^i m/K^i).

Proof.

We show the claim by induction on i. For H_0 = G, the claim follows directly. For i > 0, the claim follows from Lemma 13 and the fact that the layers with indices ≥ i are restarted whenever an interval of m/K^{i+1} deletions has passed.

We bound the number of rebuilds of layer i directly.

Claim 15 (Rebuilds).

After m edge insertions and m/K edge deletions, the total number of rebuilds of layer i is K^i.

Proof.

The claim follows directly from the description of our algorithm: layer i is rebuilt whenever a multiple of m/K^{i+1} deletions has occurred, which happens (m/K)/(m/K^{i+1}) = K^i times over m/K deletions.

We then bound the recourse of the graph H_Λ in a more fine-grained manner.

Claim 16.

The recourse of H_Λ after D deletions is bounded by O((γκ)^Λ n^{1+1/κ} + D · (γκ)^Λ).

Proof.

In between two rebuilds, the recourse of H_Λ is at most O((γκ)^Λ n^{1+1/κ}). The total number of rebuilds is bounded by D/n^{1+1/κ}. Therefore, the claim follows.

Finally, we prove that the stretch of the final spanner is sufficiently bounded as long as Λ is very small.

Claim 17 (Stretch).

The total stretch of the spanner computed at layer Λ with respect to G is at most (2γκ)^Λ.

Proof.

The claim follows from applying Item 1 in Lemma 13 Λ times.

We conclude with the main theorem of this section.

Theorem 18.

Given a γ-approximate α-update β-query APSP data structure (Definition 5) and parameters κ > 0 and 1 ≤ K ≤ m/n, there is an algorithm that maintains a (2γκ)^{O(log_K(m/n^{1+1/κ}))}-approximate spanner H of an initially empty graph G on n vertices throughout an arbitrary sequence of edge insertions and deletions such that |E(H)| = O((γκ)^{O(log_K(m/n^{1+1/κ}))} n^{1+1/κ}) with amortized update time O((β+α) K (2γκ)^{O(log_K(m/n^{1+1/κ}))}).

Proof.

Sparsity and stretch follow from Claim 14 and Claim 17, respectively. It remains to show the statement about run-time. Suppose we received m edge insertions and m edge deletions. Each layer i handles up to (γκ)^i m/K^i insertions and m/K^{i+1} deletions before being restarted, which takes time O((β+α)(γκ)^i m/K^i) by Lemma 13. The total number of restarts of layer i is K^{i+1}, so the total time spent on layer i is O(K^{i+1} · (β+α)(γκ)^i m/K^i) = O(m(β+α)(γκ)^i K). Summing these costs over all Λ layers yields the statement.

The following theorem from [11] allows us to instantiate our parameters suitably. Since our approximation factor is larger than γ^Λ, we rely on the ability to choose a very small approximation factor γ in their APSP algorithm. Then, we choose K to be a very large sub-polynomial factor to obtain very few layers Λ.

Theorem 19 (Theorem 1.3 in [11]).

For some small constant c > 0, given a parameter 1/log^c n ≤ ε ≤ 1, there is a 2^{poly(1/ε)}-approximate, O(n^ε)-update and O(log log n / ε^4)-query APSP data structure.

Theorem 20.

Given parameters 1/log^c n ≤ ε ≤ 1, κ > 0 and 1 ≤ K ≤ m/n^{1+1/κ}, there is an algorithm that maintains a (2^{poly(1/ε)} κ)^{O(log_K(m/n^{1+1/κ}))}-approximate spanner H of an initially empty graph G on n vertices throughout an arbitrary sequence of edge insertions and deletions such that |E(H)| = O((2^{poly(1/ε)} κ)^{O(log_K(m/n^{1+1/κ}))} n^{1+1/κ} κ) with amortized update time O(n^ε K (2^{poly(1/ε)} κ)^{O(log_K(m/n^{1+1/κ}))}).

Now Theorem 4 follows from the previous theorem by a specific choice of parameters.

Proof of Theorem 4.

We choose κ = log δ, ε = 1/log log δ and K = n^{1/log log δ}. Note that then Λ = O(log_K(m/n^{1+1/κ})) = O(log log δ).

So for this choice of parameters, the stretch is

(2γκ)^Λ = (2^{poly(1/ε)} κ)^Λ = 2^{poly(log log δ)} = O(δ).

The sparsity is

|E(H)| = O((2^{poly(1/ε)} κ)^Λ n^{1+1/κ} κ) = O(2^{poly(log log δ)} · n^{1+1/log δ} · log δ) = n^{1+o(1)},

and the amortized update time is

O(n^ε K (2^{poly(1/ε)} κ)^Λ) = O(n^{2/log log δ} · 2^{poly(log log δ)}) = n^{o(1)}.

Finally, the desired recourse bound follows from Claim 16.

References

  • [1] Ingo Althöfer, Gautam Das, David Dobkin, Deborah Joseph, and José Soares. On sparse spanners of weighted graphs. Discrete & Computational Geometry, 9(1):81–100, 1993. doi:10.1007/BF02189308.
  • [2] Surender Baswana, Sumeet Khurana, and Soumojit Sarkar. Fully dynamic randomized algorithms for graph spanners. ACM Trans. Algorithms, 8(4), 2012. doi:10.1145/2344422.2344425.
  • [3] Aaron Bernstein, Jan van den Brand, Maximilian Probst Gutenberg, Danupon Nanongkai, Thatchaphol Saranurak, Aaron Sidford, and He Sun. Fully-Dynamic Graph Sparsifiers Against an Adaptive Adversary. In Mikołaj Bojańczyk, Emanuela Merelli, and David P. Woodruff, editors, 49th International Colloquium on Automata, Languages, and Programming (ICALP 2022), volume 229 of Leibniz International Proceedings in Informatics (LIPIcs), pages 20:1–20:20, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2022.20.
  • [4] Sayan Bhattacharya, Thatchaphol Saranurak, and Pattara Sukprasert. Simple Dynamic Spanners with Near-Optimal Recourse Against an Adaptive Adversary. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics (LIPIcs), pages 17:1–17:19, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2022.17.
  • [5] Jan van den Brand, Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford. A deterministic almost-linear time algorithm for minimum-cost flow. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 503–514, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00037.
  • [6] Li Chen, Rasmus Kyng, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg. Almost-linear time algorithms for incremental graphs: Cycle detection, SCCs, s-t shortest path, and minimum-cost flow. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1165–1173, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649745.
  • [7] Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Maximum flow and minimum-cost flow in almost-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 612–623, 2022. doi:10.1109/FOCS54457.2022.00064.
  • [8] Julia Chuzhoy and Merav Parter. Fully dynamic algorithms for graph spanners via low-diameter router decomposition. In Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 785–823, 2025. doi:10.1137/1.9781611978322.23.
  • [9] Julia Chuzhoy and Ruimin Zhang. A new deterministic algorithm for fully dynamic all-pairs shortest paths. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1159–1172, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585196.
  • [10] Sebastian Forster and Gramoz Goranci. Dynamic low-stretch trees via dynamic low-diameter decompositions. In Proceedings of the 51st Annual ACM SIGACT Symposium on Theory of Computing, STOC 2019, pages 377–388, New York, NY, USA, 2019. Association for Computing Machinery. doi:10.1145/3313276.3316381.
  • [11] Bernhard Haeupler, Yaowei Long, and Thatchaphol Saranurak. Dynamic deterministic constant-approximate distance oracles with n^ε worst-case update time, 2024. doi:10.48550/arXiv.2402.18541.
  • [12] Rasmus Kyng, Simon Meierhans, and Maximilian Probst Gutenberg. A dynamic shortest paths toolbox: Low-congestion vertex sparsifiers and their applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1174–1183, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649767.
  • [13] Rasmus Kyng, Simon Meierhans, and Gernot Zöcklein. Bootstrapping dynamic APSP via sparsification, 2024. doi:10.48550/arXiv.2408.11375.