Bootstrapping Dynamic APSP via Sparsification
Abstract
We give a simple algorithm for the dynamic approximate All-Pairs Shortest Paths (APSP) problem. Given a graph with polynomially bounded edge lengths, our data structure processes edge insertions and deletions in total time and provides query access to -approximate distances in time per query.
We produce a data structure that mimics Thorup-Zwick distance oracles [17], but is dynamic and deterministic. Our algorithm selects a small number of pivot vertices. Then, for every other vertex, it reduces distance computation to maintaining distances to a small neighborhood around that vertex and to the nearest pivot. We maintain distances between pivots efficiently by representing them in a smaller graph and recursing. We maintain these smaller graphs by (a) reducing vertex count using the dynamic distance-preserving core graphs of Kyng-Meierhans-Probst Gutenberg [16] in a black-box manner and (b) reducing edge-count using a dynamic spanner akin to Chen-Kyng-Liu-Meierhans-Probst Gutenberg [6]. Our dynamic spanner internally uses an APSP data structure. Choosing a large enough size reduction factor in the first step allows us to simultaneously bootstrap a spanner and a dynamic APSP data structure. Notably, our approach does not need expander graphs, an otherwise ubiquitous tool in derandomization.
Keywords and phrases:
Dynamic Graph Algorithms, Spanners, Vertex Sparsification, Bootstrapping
Funding:
Rasmus Kyng: The research leading to these results has received funding from grant no. 200021 204787 and from the starting grant "A New Paradigm for Flow and Cut Algorithms" (no. TMSGI2_218022) of the Swiss National Science Foundation.
2012 ACM Subject Classification:
Theory of computation → Dynamic graph algorithms; Theory of computation → Sparsification and spanners
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The dynamic All-Pairs Shortest Paths (APSP) problem asks to maintain query access to pairwise distances for a dynamic graph . Recently, the setting of maintaining crude distance estimates at low computational cost has received significant attention [15, 5, 11, 9, 8, 2, 12, 13], culminating in deterministic algorithms with sub-polynomial update and query time for graphs receiving both edge insertions and deletions [10, 16, 14]. In contrast to other deterministic approaches, [16] allows efficient implicit access to approximate shortest paths via a small collection of dynamic trees. This is crucial for applications to decremental single-source shortest paths and incremental minimum-cost flows [6].
All existing dynamic approximate APSP data structures that work against adaptive adversaries are highly complex [10, 16, 14].
The algorithm of [16] establishes the following framework, inspired by Thorup-Zwick distance oracles: To maintain approximate APSP in the graph, we select a small (dynamic) set of pivot vertices. For every other vertex, we can reduce distance computation to (a) computing distances in a small neighborhood around the vertex and (b) computing the distance to the nearest pivot vertex. Then, [16] maintains a smaller graph in which distances between the pivot vertices can be computed, and recursively maintains this graph with the same approach. To maintain this smaller graph, [16] designs a dynamic vertex sparsification procedure and performs edge sparsification using the dynamic spanner of [7, 4]. This spanner internally relies on complex data structures for decremental APSP on expanders [9] and uses expander embedding techniques in a non-trivial way. We will use the overall dynamic Thorup-Zwick approach of [16], but we will maintain the small graph for computing distances between pivots in a simpler way.
A key ingredient for us is a new, more powerful fully-dynamic spanner from [6]. This spanner requires both dynamic APSP and expander embedding techniques to maintain. In this work, we observe that this spanner can be simplified so that it only requires dynamic APSP to maintain. Thus, given a dynamic APSP data structure, we can maintain a fully-dynamic spanner. But, [16] also showed that given a fully-dynamic spanner, we can obtain dynamic APSP. This presents a striking opportunity: we can simultaneously bootstrap a dynamic APSP data structure and spanner. This gives our main result, a dynamic approximate APSP data structure.
Theorem 1.
For a graph with polynomially bounded integral edge lengths undergoing up to edge updates (insertions and deletions), there is an algorithm with total update time that maintains query access to approximate distances such that
The query time is .
While we present our theorem in the setting of processing updates in time to simplify the presentation, we believe that this algorithm can be extended directly to the worst-case update time setting of [16]. Furthermore, the witness paths can be implicitly represented as forest paths as in [16], which enables the cheap flow maintenance via dynamic tree data structures necessary for applications.
1.1 Roadmap
2 Preliminaries
2.1 Graph Notation
We use to refer to a graph with vertex set , edge set and length vector . We will restrict our attention to polynomially bounded integral lengths, and will denote their upper bound by .
2.2 Paths, Distances and Balls
For two vertices , we let denote the shortest path from to in (lowest sum of edge lengths). Furthermore, we let be the length of said path.
We let and . Then, for a set we let and respectively.
Finally, we define the neighborhood of a vertex as and of a set as . For a vertex , we let .
When it is clear from the context, we sometimes omit the subscript/superscript .
Definition 2 (Spanner).
Given a graph , we say that a subgraph is an -spanner of if for any we have .
2.3 Dynamic Graphs
Vertex splits are a crucial operation in our algorithms. They arise since we contract graph components and represent them as vertices. When such a component needs to be divided, this naturally corresponds to a vertex split defined as follows.
Definition 3 (Vertex Split).
For a graph , a vertex split takes a vertex and a set , and replaces the vertex with two vertices such that and .
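As an illustration, a vertex split on an adjacency-set representation might look as follows; this is a hypothetical sketch, and the function name and representation are ours, not from the paper.

```python
# Hypothetical sketch (names and representation are ours, not the paper's):
# applying a vertex split from Definition 3 to an adjacency-set graph.
def split_vertex(adj, v, v1, v2, moved):
    """Replace v by v1 and v2; edges to neighbours in `moved` go to v2,
    all remaining edges of v go to v1."""
    neighbours = adj.pop(v)
    adj[v1], adj[v2] = set(), set()
    for u in neighbours:
        target = v2 if u in moved else v1
        adj[target].add(u)
        adj[u].discard(v)
        adj[u].add(target)
```

Note that each edge of the split vertex is reassigned to exactly one of the two copies, which is why a vertex split is naturally counted as a single update when measuring recourse.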
Definition 4 (Master Nodes).
Initially, each is associated with a unique master node . Then, at any time , if some is split into and , we set .
We can also define our notion of fully-dynamic graphs, which includes vertex splits as an operation.
Definition 5 (Fully-Dynamic Graph).
A fully-dynamic graph is a graph undergoing edge insertions/deletions, vertex splits and isolated vertex insertions.
Whenever we disallow vertex splits, we refer to edge-dynamic graphs instead.
Definition 6 (Edge-Dynamic Graph).
An edge-dynamic graph is a graph undergoing edge insertions/deletions and isolated vertex insertions.
2.4 Dynamic Objects and Recourse
For a dynamic object defined on an index set , we refer to as the full sequence up to time . We can then apply functions that operate on sequences to , e.g. . This is, for example, very useful for keeping track of how an object changes over time. By providing a difference function for a dynamic object , one may define the recourse as the sum of differences .
For functions that operate on a static object instead of on a sequence, we overload the notation by saying is simply applied to the last object in the sequence, that is, . For example, is defined to be the maximum degree of the graph at time .
Whenever we make a statement about a dynamic object without making reference to a specific time , we implicitly mean that the statement holds at all times. For example, formally means . Meanwhile, formally means , and by our convention for applying functions of static objects to a dynamic object, means .
The time scales we use are fine-grained as we will consider multiple dynamic objects that change more frequently than the input graph does. We will choose an index set for the dynamic sequence which is fine-grained enough to capture changes to all the objects relevant to a given proof.
A dynamic object might not change in a time span in which another object changes multiple times. In this case, to obtain sequences indexed by the same set, we simply pad the sequence of with repetitions of the same object, so that both dynamic objects share the same index set .
For fully-dynamic graphs , we let denote the number of edge insertions, edge deletions, vertex splits and isolated vertex insertions that happened between and . Note that generally, vertex splits cannot be implemented in constant time, and thus recourse bounds are not directly relevant to bounding running time. Nevertheless, this convention for measuring recourse is useful, as it is tailored precisely to the type of recourse bounds we prove.
We can store anything that occurs at various times throughout our algorithm in a dynamic object and define what recourse means for it. For example, we will repeatedly call a procedure . If at time we call the function with input , we might want to track the quantity , so we simply let be the dynamic object such that . By defining the difference , we can determine how much the value changed between two points in time. A statement such as then means that at any time (suitably fine-grained), the total number of changes that underwent is at most the amount of , plus some extra number of updates determined by the calls to made up to that time.
By viewing the run-time of a dynamic algorithm as evolving over a sequence of updates and thinking of it as a dynamic object, we also give meaning to the statement that, say, an algorithm runs in time .
2.5 Dynamic APSP
We introduce APSP data structures for edge-dynamic graphs.
Definition 7 (Dynamic APSP).
For an edge-dynamic graph , a -approximate -query APSP data structure supports the following operations.
-
/ : Add/Remove edge from/to .
-
: Returns in time a distance estimate such that at all times
-
: Returns a -path in of length in time .
Given an APSP data structure, we let denote the total run time of initialization and calls to AddEdge and RemoveEdge on an edge-dynamic graph with and initial edges.
We sometimes also call the (worst-case) stretch of the data structure.
2.6 Degree Reduction Through Binary Search Trees
Given an edge-dynamic graph , we can maintain in time another edge-dynamic graph in which we replace vertices by Binary Search Trees with one leaf node per edge of the vertex in the original graph. It is immediate that if we can find shortest paths in , we can translate them into shortest paths in . The advantage of this technique is that while , and , we have .
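The gadget from this section can be sketched as follows. This is a minimal static illustration under the assumption that the internal tree edges get length 0 (so distances are preserved); the function name and edge-list representation are ours.

```python
from collections import defaultdict

def degree_reduce(edges):
    """Given edges as (u, v, length) tuples, return a new edge list in
    which every original vertex of degree d is replaced by a binary tree
    with d leaves (zero-length internal edges), so that every vertex of
    the new graph has degree at most 3. Hypothetical sketch; names ours."""
    incident = defaultdict(list)
    for i, (u, v, l) in enumerate(edges):
        incident[u].append(i)
        incident[v].append(i)
    leaf_of = {}  # (vertex, edge index) -> leaf node of that vertex's tree
    new_edges = []
    for x, idxs in incident.items():
        nodes = [("leaf", x, i) for i in idxs]
        for i in idxs:
            leaf_of[(x, i)] = ("leaf", x, i)
        level = 0
        # pair nodes up level by level until a single root remains
        while len(nodes) > 1:
            nxt = []
            for j in range(0, len(nodes), 2):
                if j + 1 < len(nodes):
                    parent = ("int", x, level, j)
                    new_edges.append((parent, nodes[j], 0))
                    new_edges.append((parent, nodes[j + 1], 0))
                    nxt.append(parent)
                else:
                    nxt.append(nodes[j])
            nodes = nxt
            level += 1
    # re-attach the original edges between the corresponding leaves
    for i, (u, v, l) in enumerate(edges):
        new_edges.append((leaf_of[(u, i)], leaf_of[(v, i)], l))
    return new_edges
```

On a star graph, for instance, the high-degree center is replaced by a small binary tree whose leaves carry the original edges, which is exactly why shortest paths translate back immediately.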
3 Warm up: Low-Recourse Thorup-Zwick’05
Before we describe our algorithm, we turn our attention to a much more modest goal that still showcases the main ideas used in our full construction: maintaining a relaxed version of Thorup-Zwick pivots [17] in a fully-dynamic graph with low recourse. This allows us to do away with the careful control of maximum degrees in intermediate graphs, which is the root cause of most of the technicalities in making the construction computationally efficient. Furthermore, we do not need to bootstrap our data structure in this setting.
3.1 Pivot Hierarchies
We first introduce pivot hierarchies.
Definition 8 (Pivot Hierarchy).
For a graph , we let a pivot hierarchy of depth and cluster size be
-
a collection of vertex sets with for all , and ,
-
pivot maps for all and
-
cluster maps for all such that for all and .
Pivot maps should be thought of as mapping vertices to smaller and smaller sets of representatives, and the cluster maps should be thought of as small neighborhoods of pivots for which (approximate) distances are stored. In classical Thorup-Zwick, the cluster consists of all vertices in that are closer to than . In our algorithms, these balls become somewhat distorted due to sparsification. We then define the approximate distances maintained by the hierarchy.
Definition 9 (Distances).
For a pivot hierarchy of depth , we define for and
We let for and call a pivot hierarchy distance preserving if for all .
We note that efficient distance queries can be implemented as long as the distances within clusters and the distances to pivots are efficiently maintained. In this warm up, we only focus on maintaining the collection with low recourse. However, our full algorithm will maintain this additional information.
Formally, the goal of this warm-up section is to prove the following theorem.
Theorem 10.
For an edge-dynamic graph with vertices, a -depth -cluster size pivot hierarchy can be maintained in polynomial time per update with total recourse on the pivot sets.
We emphasize that this theorem is not particularly useful on its own, as it comes without meaningful guarantees on the update time, and one could simply use Dijkstra's algorithm to obtain exact distances in this regime. The recourse guarantee is also of limited use, as we do not control the recourse of the pivot maps or the cluster maps of the pivot hierarchy. Nevertheless, our algorithm for proving this theorem is instructive and closely mirrors our final dynamic APSP algorithm.
The illustrative algorithm for maintaining pivot hierarchies repeatedly decreases the vertex count via low-recourse core graphs, and the edge count via a low-recourse spanner. We present these two pieces separately.
3.2 Low-Recourse KMG-vertex sparsifier
In this section, we state a theorem about maintaining a KMG-vertex sparsifier with low recourse [16]. Although their algorithm only works for edge-dynamic graphs, it can be extended to fully-dynamic graphs rather straightforwardly in the recourse setting. Recall that we count a vertex split as a single operation when defining the recourse of a fully-dynamic graph.
Definition 11 (Vertex Sparsifier).
Given and , we call a graph with vertex set a -approximate vertex sparsifier of with respect to the set if for every two vertices we have and if we further have .
Remark 12.
We always associate a map from to with a vertex sparsifier, where multiple vertices in might map to the same vertex in . This clarifies what we mean by and in Definition 11. When we apply multiple vertex sparsifiers to a graph , we refer to recursively applying this map as mapping vertices back to .
Theorem 13 (See Theorem 3.1 in [16]).
Consider a size reduction parameter and a fully-dynamic graph with lengths in . Then, there is a deterministic algorithm that explicitly maintains,
-
1.
a monotonically growing pivot set and pivot function , such that and for all where .
-
2.
a fully-dynamic graph , such that is a -approximate vertex sparsifier of with respect to . We have that the initial graph has at most edges and vertices, where . The following properties hold:
-
(a)
has lengths in , and
-
(b)
given any edge , there exists a -path in with .
-
(c)
and where is the dynamic object containing the decremental operations (deletions and vertex splits) on .
-
3.
The algorithm runs in polynomial time.
3.3 Low-Recourse Dynamic Spanner
In this section, we describe a simple algorithm for maintaining a spanner of a fully-dynamic graph with remarkably low recourse. The construction is based on the greedy spanner of [1]. The low-recourse property of a variant of this algorithm has been observed in [3].
Theorem 14 (Dynamic Spanner).
There exists a data structure that given a fully-dynamic graph with polynomially bounded edge lengths, maintains a fully-dynamic -spanner of such that and for , where is a dynamic object that contains all decremental updates to (deletions and vertex splits).
can be maintained in polynomial time per update.
Proof.
We assume unit lengths by maintaining separate spanners for length ranges and returning the union of the appropriately scaled spanners. Furthermore, we let and re-start after decremental updates.
Then, there can be at most vertex splits, and therefore contains at most vertices before re-starting. We initialize to the greedy spanner of , i.e. we consider the edges in arbitrary order and add them to if . This ensures that the graph is a -spanner of , and directly follows from the fact that a girth graph contains at most edges. (In [1], this folklore bound was already used to obtain a spanner.)
Whenever is updated, we perform the corresponding operation on in case of edge deletions and vertex splits such that . Then, we iteratively check for every edge if and insert the edges for which the condition is true. In particular, for the insertion of an edge , this corresponds to inserting the edge whenever .
Clearly, is a spanner of throughout, because every edge is stretched by at most . Furthermore, vertex splits and edge deletions only increase the girth of , and therefore contains at most edges throughout. Finally, the total recourse bound holds until the data structure is re-started, because an edge only leaves the graph when it is deleted.
To conclude, it suffices to observe that the total number of restarts is bounded by and there are data structures in total, so .
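The greedy construction underlying this proof can be sketched as follows. This is an illustrative static version, not the paper's dynamic data structure: the stretch parameter `t` stands in for the threshold used above, and the early-exit Dijkstra is just one simple way to check for a short detour.

```python
import heapq
from collections import defaultdict

def spanner_dist(adj, s, goal, cutoff):
    """Dijkstra in the current spanner, abandoning paths longer than `cutoff`."""
    dist = {s: 0}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")) or d > cutoff:
            continue
        if u == goal:
            return d
        for v, l in adj[u]:
            nd = d + l
            if nd <= cutoff and nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")

def greedy_spanner(edges, t):
    """Greedy spanner in the spirit of Althoefer et al. [1]: keep edge
    (u, v, l) only if the spanner built so far has no u-v path of
    length at most t * l."""
    adj = defaultdict(list)
    kept = []
    for u, v, l in edges:
        if spanner_dist(adj, u, v, t * l) > t * l:
            adj[u].append((v, l))
            adj[v].append((u, l))
            kept.append((u, v, l))
    return kept
```

The girth argument from the proof applies to the kept edge set: any cycle in the output must contain an edge whose endpoints were already connected by a short path when it was considered, which the insertion rule forbids.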
3.4 Low-Recourse Pivot Hierarchies via Iterated Sparsification
Given the low-recourse vertex and edge sparsification routines described above, we are ready to give the algorithm for Theorem 10.
Our algorithm consists of layers where . (We remark that our full algorithm uses a much more drastic size reduction to enable bootstrapping, which results in only layers for some small constant ; this is necessary because the stretch is powered every time we recurse.) Each layer consists of two graphs and , where . We let . Then we obtain from and from as follows:
-
1.
is the output of the vertex sparsifier from Theorem 13 on the graph for size reduction parameter . For simplicity, we assume that is an integer without loss of generality.
-
2.
is the dynamic spanner from Theorem 14 on the graph .
Whenever an update to occurs, we pass the updates up the hierarchy until we reach the first layer that has received more than since it was initialized. Then, we re-initialize all the layers via the steps described above. Notice in particular that the recourse of each layer is much smaller than the size reduction, leading to manageable recourse overall.
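The update-routing logic above can be sketched with simple per-layer counters. This is a schematic illustration only (counter budgets stand in for the rebuild thresholds of the hierarchy, and resetting a counter stands in for re-initializing the layer).

```python
def route_update(counters, budgets):
    """Propagate one update through the layers: each layer's counter is
    incremented; at the first layer whose budget is exhausted, that layer
    and all layers above it are rebuilt (counters reset) and its index is
    returned. Returns None if the update passes through all layers."""
    for i in range(len(counters)):
        counters[i] += 1
        if counters[i] > budgets[i]:
            for j in range(i, len(counters)):
                counters[j] = 0  # stands in for re-initializing layer j
            return i
    return None
```

Since each layer is rebuilt from the (much smaller) output of the layer below, rebuilding a high layer is cheap, which is what makes the amortized recourse manageable.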
We let the sets be the vertex sets of the graphs mapped back to , and we let contain an arbitrary vertex in (also mapped back to ). Furthermore, we let the pivot map be the pivot function maintained by the data structure from Theorem 13 at layer for , and the cluster map is the cluster map from vertex sparsifier data structure at layer . Finally, we define the cluster map and pivot map for every vertex .
Proof of Theorem 10.
We first bound the sizes and recourse of the sets by induction. The recourse of the set and the recourse of is by definition, and the recourse of is at most .
We then observe that recourse caused by non-decremental updates on can be ignored, because they do not show up in the recourse of the subsequent spanner by Theorem 14. The recourse of the decremental updates to is . The recourse of is . Therefore, updates to cause at most vertices to be added to . Since we re-start every layer after updates have arrived and the total number of updates that reach layer is , the total recourse of the layer is , and the total recourse of all the sets is at most by Theorem 13 and Theorem 14.
Then, we show that the final set is of small enough size for bounding the size of the clusters . By the recourse bound above, there are at most updates to a level before it gets rebuilt. Therefore, the size of level is at most at all times. The bound on the other cluster sizes directly follows from Theorem 13 and the description of our algorithm.
It remains to show that the distance estimates are good. To do so, we first notice that for every we have , since each layer only loses in distance approximation by Theorem 13 and Theorem 14 and there are levels. Then, the distance approximation follows by induction, since the length of the detour to the pivots can lose at most a poly-logarithmic factor per level, because the pivot is closer in the graph and . (The bounds on the detour are analogous to [17] after taking into account the extra distortion of the distances in as compared to the original graph . This extra distortion causes the sub-polynomial overhead.)
4 The Algorithm
We solve the APSP problem on a large graph using a pivot hierarchy obtained by iterated vertex and edge sparsification, just like in the warm up in Section 3. Again, we will select a set of pivots for the first level of a pivot hierarchy, and perform edge and vertex sparsification to represent the distances between the pivots using a smaller dynamic graph, and then recurse on this graph. The main difference to the previous section is that (a) we want to bootstrap dynamic APSP along the way and (b) we want to develop and use a computationally efficient spanner. This spanner creates a few technical complications. In particular, to make the spanner efficient, we need all our graphs to have somewhat bounded maximum degree.
4.1 Vertex sparsification
First, our algorithm reduces the APSP problem to a graph with significantly fewer vertices using the KMG-vertex sparsifier (see Section 4.3). Compared to the low-recourse version presented in Section 3, our vertex sparsifier has an additional routine called , which forces some of the vertices in the vertex sparsifier to be split and therefore have smaller degree. We elaborate on this point in the sections below.
The vertex sparsification algorithm is presented in Section 4.3.
4.2 Edge sparsification
Although is supported on a much smaller vertex set, it may still contain most of the edges from . To achieve a proper size reduction, we additionally employ edge sparsification on top of the vertex sparsifier .
There are two key differences between the warm up and the efficient spanner algorithm.
-
To quickly check for detours in the spanner, the efficient algorithm recursively uses APSP data structures on small instances (see Section 4.4).
-
To efficiently maintain the dynamic spanners, it is crucial that the degree of the graph remains bounded. Although we can assume that the degree of the initial graph is bounded by a constant, repeatedly sparsifying the graph could seriously increase its maximum degree. Fortunately, our dynamic spanner guarantees that the average degree after edge sparsification is low. However, the same cannot be achieved for the maximum degree, which becomes apparent when trying to sparsify a star graph.
To remedy this issue, we use that the underlying graph has bounded degree and call the routine of the KMG-vertex sparsifier to reduce the degree of high-degree vertices in the spanner. This operation can lead to changes in and its spanner, but we show that repeatedly splitting vertices with high degree in the spanner quickly converges because the progress of splits is larger than the recourse they cause to the spanner.
The edge sparsification algorithm is presented in Section 4.4.
4.3 Pivots and Vertex Sparsification
We use the KMG-vertex sparsifier algorithm [16], which is based on dynamic core graphs, which in turn are based on static low-stretch spanning trees.
We first introduce certified pivots. These map vertices that get sparsified away to vertices in , and they certify the distance between vertices that are closer to each other than to their respective pivots.
Definition 15 (Certified Pivot).
Consider a graph with edge lengths and a set . We say that a function is a pivot function for , if for all , we have that is the nearest vertex in (ties broken arbitrarily but consistently).
Additionally, we say that is a certified pivot function if for all vertices , for all we have computed the exact shortest paths and distances.
To motivate Definition 15, we show that pivots can be used to approximate the distance between sparsified vertices whenever their exact distance is not already stored.
Claim 16.
Consider a graph with pivot set and corresponding pivot function . Then for any , if , we have .
Proof.
By we directly obtain , and since is (one of) the nearest vertices to in , we have as well. Using the triangle inequality, we get .
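The query pattern behind Claim 16 can be sketched as follows. The interface is hypothetical: `certified[u]` models the set of vertices whose exact distance from u is stored, and an exact distance oracle stands in for the maintained estimates, purely for illustration.

```python
def pivot_estimate(dist, u, v, pivot, certified):
    """Distance estimate in the spirit of Claim 16 (hypothetical interface).
    `certified[u]` is the set of vertices whose exact distance from u is
    stored; `dist` is an exact distance oracle used here for illustration."""
    if v in certified[u]:
        return dist(u, v)  # exact pre-computed distance
    p = pivot[u]
    # v is at least as far from u as u's pivot, so the detour through the
    # pivot costs at most a factor 3 by the triangle inequality
    return dist(u, p) + dist(p, v)
```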
Next, we state a version of the KMG-vertex sparsifier [16], which is based on dynamic core graphs. We refer the reader to the full version of our article for a more detailed discussion of their vertex sparsifier.
The following theorem should be thought of as an efficient analog to Theorem 13 with the extra operation . As mentioned at the start of this section, this routine comes into play when the spanner produces a high-degree vertex down the line.
Theorem 17 (KMG-vertex sparsifier, see Theorem 3.1 in [16]).
Consider a size reduction parameter and an edge-dynamic graph , with and lengths in . Then, there is a deterministic algorithm that explicitly maintains,
-
1.
a monotonically growing pivot set and certified pivot function , such that .
-
2.
a fully-dynamic graph , such that is a -approximate vertex sparsifier of with respect to . We have that has at most edges and vertices, where . The following properties hold:
-
(a)
, and has lengths in , and
-
(b)
given any edge , the algorithm can return a -path in with in time .
-
(c)
a procedure , where for some , and is a positive integer. It updates the graph , performing at most vertex splits, such that afterwards any vertex with is adjacent to at most edges in . This procedure runs in time at most .
-
(d)
If is the dynamic object containing the total number of vertex splits performed by ReduceDegree, then .
-
3.
The algorithm runs in time .
4.4 Edge Sparsification
We state our main dynamic spanner theorem, whose proof is deferred to the full version of our article. The theorem assumes access to a -approximate, -query APSP data structure (see Definition 7).
Theorem 18 (Dynamic Spanner).
There exists a data structure that given a fully-dynamic graph with unit lengths and , and a parameter , maintains a fully-dynamic -spanner of such that and , where and . It runs in time .
Remark 19.
We can directly extend Theorem 18 to graphs with edge lengths in by bucketing edges in the intervals for .
Note also that as at all times is a subgraph of , it does not undergo more vertex splits and isolated vertex insertions than , i.e., all extra updates to maintain are edge insertions/deletions.
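The bucketing from Remark 19 can be sketched as follows; this is a schematic helper of our own, showing only how edges are grouped so that all lengths within a bucket agree up to a factor of 2.

```python
import math

def bucket_edges(edges, L):
    """Split weighted edges into O(log L) near-unit-length instances:
    bucket i holds the edges with length in [2**i, 2**(i+1)), where all
    lengths agree up to a factor of 2, so a unit-length spanner applies
    to each bucket after scaling."""
    buckets = {i: [] for i in range(math.ceil(math.log2(L)) + 1)}
    for u, v, l in edges:
        buckets[int(math.log2(l))].append((u, v))
    return buckets
```

The final spanner is then the union of the per-bucket spanners, at the cost of a factor 2 in stretch and a logarithmic factor in size.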
4.5 Bootstrapping via Edge and Vertex Sparsification
We are given an edge-dynamic graph on initially vertices and edges, that at all times has maximum degree at most . Applying Theorem 17 to reduces the problem to finding short paths in , a graph of vertices. Although the number of vertices significantly decreases, the graph still contains up to edges, preventing efficient recursion. To address this, we apply Theorem 18 on to produce a graph on vertices and edges, where crucially depends only on the parameter of Theorem 18.
Note that the APSP data structure that Theorem 18 requires also only needs to run on a graph of at most vertices and edges, so by choosing a large enough size reduction factor , the problem size reduces by a factor of , while the approximation factor increases from to . (We only have a valid size reduction if also ; we will see that this is indeed the case, as we can assume that initially .) Balancing both parameters and allows us to bootstrap an APSP data structure with the guarantees from Theorem 1.
While is sparse, it might still have high maximum degree, preventing efficient recursion. The sparsity still implies, however, that there cannot be too many high degree vertices. Hence, we carefully perform a few vertex splits in by leveraging the functionality from Theorem 17 to enforce that also has small maximum degree.
The recursive APSP instance that is running on expects an edge-dynamic graph, while the spanner from Theorem 18 is fully-dynamic. To circumvent this issue, we simulate the vertex splits by suitable edge insertions/removals and vertex insertions. As the maximum degree of gets controlled during the while loops, this simulation can lead to at most an extra factor of in the number of updates.
When queried for the distance between two vertices and , our algorithm checks if the certified pivot function stores the distance, and if this is not the case it recursively asks the smaller instance for the distance between and in .
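The recursive query described above can be sketched over a list of hierarchy levels. The field names are hypothetical: `certified[u]` models the exact distances stored by the certified pivot function, `pivot[u]` is u's pivot, and `d_pivot[u]` is the stored exact distance from u to its pivot.

```python
def query(levels, u, v):
    """Recursive distance query over the pivot hierarchy (hypothetical
    field names). Each level stores: certified[u], a dict of exact
    distances from u; pivot[u], u's pivot; d_pivot[u], the exact
    distance from u to its pivot."""
    level, rest = levels[0], levels[1:]
    exact = level["certified"].get(u, {})
    if v in exact:
        return exact[v]  # distance stored at this level
    # otherwise recurse between the pivots in the sparsified graph
    return (level["d_pivot"][u]
            + query(rest, level["pivot"][u], level["pivot"][v])
            + level["d_pivot"][v])
```

Each level answers either immediately from its stored distances or by a detour through the two pivots, which is where the per-level stretch of Claim 22 accumulates.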
4.6 Analysis
We first show that the while loops in Algorithm 1 and Algorithm 2 terminate sufficiently fast. This is the most crucial claim for the analysis.
Claim 20.
If is the dynamic object counting the total number of vertex splits performed by in Algorithm 1 and Algorithm 2, then . In particular, the while loops always terminate and immediately thereafter has maximum degree at most . The total runtime of all calls to is .
Proof.
For tracking progress achieved by vertex splits, we introduce the following potential function
Let be the time before we enter the while loop in Algorithm 1. Then by Theorem 17 we have that and thus by Theorem 18, we have that .
By the handshake lemma, . Note that whenever an update is made to , the potential can increase by at most . This is because edge deletions and vertex splits can only decrease it, while an edge insertion can increase it by at most .
Let be the dynamic object counting the number of edges that were passed to ReduceDegree, and note that if is called, we have by the while loop condition.
By the properties of in Theorem 17, we have that , and that none of the vertices that result from the splits to will be adjacent to more than of the edges in . So if we just forwarded these splits to , we would have , and the potential decreases by at least .
However, forwarding these vertex splits to , Theorem 18 will cause further updates to to maintain the spanner. Fortunately by Item 2d of Theorem 17 we know that . Let . Putting everything together we show that the potential decreases as follows.
Thus, choosing , we get . As , this directly implies that . Remembering that then yields the bound on .
Since the total runtime is given by , this bound follows directly.
We then show a bound on the total runtime of our dynamic APSP algorithm in terms of costs for smaller instances, setting us up for our final recursion.
To do so, assume the recursive APSP instance that is running on and used in Theorem 18 is a -approximate, -query APSP. Let be the stretch parameter from Theorem 18.
Theorem 21.
The total combined runtime of Algorithm 1 and Algorithm 2 over a sequence of up to updates is for some .
Proof.
We first show that for at most updates, initializing and maintaining is efficient. Note that by the previous claim, we can bound the total costs incurred in the while loops by . The total costs of initializing and maintaining are by Theorem 17. Finally, by the previous lemma , so that we can further bound the runtime of Theorem 18 by . Note that here we also crucially used that by Theorem 17, .
It remains to consider the APSP data structure that we have to run on . First note that by the guarantees of Theorem 17 and Theorem 18, initially . Secondly, note that as , we in particular have . However, as is vertex-dynamic, we also have to simulate all the vertex splits occurring in , which can lead to up to extra updates per split. Hence, the total number of updates that the APSP data structure receives is up to . By the while loop conditions, we know that the maximum degree of the graph is always bounded by . Therefore, the total runtime bound follows.
We show that the stretch does not increase too much as the size of the graph increases.
Claim 22.
Algorithm 1 provides a worst case stretch of .
Proof.
If , there is nothing to show, as then by Theorem 17 we have already pre-computed the exact distance between and .
But if , then by Claim 16, we have that . As , we have .
Remark 23.
We can directly extend this to obtain a procedure that provides a -path with and runs in time . This is because Theorem 17 also explicitly stores exact shortest paths between vertices and their pivots .
In order to prove Theorem 1, we need to carefully assemble the previous insights, which we defer to the full version of our article.
References
- [1] Ingo Althöfer, Gautam Das, David Dobkin, Deborah Joseph, and José Soares. On sparse spanners of weighted graphs. Discrete & Computational Geometry, 9(1):81–100, 1993. doi:10.1007/BF02189308.
- [2] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental SSSP and approximate min-cost flow in almost-linear time. In 2021 IEEE 62nd Annual Symposium on Foundations of Computer Science (FOCS), pages 1000–1008, Los Alamitos, CA, USA, February 2022. IEEE Computer Society. doi:10.1109/FOCS52979.2021.00100.
- [3] Sayan Bhattacharya, Thatchaphol Saranurak, and Pattara Sukprasert. Simple Dynamic Spanners with Near-Optimal Recourse Against an Adaptive Adversary. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics (LIPIcs), pages 17:1–17:19, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2022.17.
- [4] Jan van den Brand, Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, Sushant Sachdeva, and Aaron Sidford. A deterministic almost-linear time algorithm for minimum-cost flow. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 503–514, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00037.
- [5] Shiri Chechik. Near-optimal approximate decremental all pairs shortest paths. In 2018 IEEE 59th Annual Symposium on Foundations of Computer Science (FOCS), pages 170–181, 2018. doi:10.1109/FOCS.2018.00025.
- [6] Li Chen, Rasmus Kyng, Yang P. Liu, Simon Meierhans, and Maximilian Probst Gutenberg. Almost-linear time algorithms for incremental graphs: Cycle detection, SCCs, s-t shortest path, and minimum-cost flow. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1165–1173, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649745.
- [7] Li Chen, Rasmus Kyng, Yang P. Liu, Richard Peng, Maximilian Probst Gutenberg, and Sushant Sachdeva. Maximum flow and minimum-cost flow in almost-linear time. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 612–623, 2022. doi:10.1109/FOCS54457.2022.00064.
- [8] Julia Chuzhoy. Decremental all-pairs shortest paths in deterministic near-linear time. In Proceedings of the 53rd Annual ACM SIGACT Symposium on Theory of Computing, STOC 2021, pages 626–639, New York, NY, USA, 2021. Association for Computing Machinery. doi:10.1145/3406325.3451025.
- [9] Julia Chuzhoy and Thatchaphol Saranurak. Deterministic Algorithms for Decremental Shortest Paths via Layered Core Decomposition. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), Proceedings, pages 2478–2496. Society for Industrial and Applied Mathematics, January 2021. doi:10.1137/1.9781611976465.147.
- [10] Julia Chuzhoy and Ruimin Zhang. A new deterministic algorithm for fully dynamic all-pairs shortest paths. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1159–1172, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585196.
- [11] Sebastian Forster, Gramoz Goranci, and Monika Henzinger. Dynamic maintenance of low-stretch probabilistic tree embeddings with applications. In Proceedings of the 2021 ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1226–1245, 2021. doi:10.1137/1.9781611976465.75.
- [12] Sebastian Forster, Gramoz Goranci, Yasamin Nazari, and Antonis Skarlatos. Bootstrapping Dynamic Distance Oracles. In Inge Li Gørtz, Martin Farach-Colton, Simon J. Puglisi, and Grzegorz Herman, editors, 31st Annual European Symposium on Algorithms (ESA 2023), volume 274 of Leibniz International Proceedings in Informatics (LIPIcs), pages 50:1–50:16, Dagstuhl, Germany, 2023. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2023.50.
- [13] Sebastian Forster, Yasamin Nazari, and Maximilian Probst Gutenberg. Deterministic incremental apsp with polylogarithmic update time and stretch. In Proceedings of the 55th Annual ACM Symposium on Theory of Computing, STOC 2023, pages 1173–1186, New York, NY, USA, 2023. Association for Computing Machinery. doi:10.1145/3564246.3585213.
- [14] Bernhard Haeupler, Yaowei Long, and Thatchaphol Saranurak. Dynamic deterministic constant-approximate distance oracles with worst-case update time, 2024. doi:10.48550/arXiv.2402.18541.
- [15] Monika Henzinger, Sebastian Krinninger, and Danupon Nanongkai. Decremental single-source shortest paths on undirected graphs in near-linear total update time. J. ACM, 65(6), November 2018. doi:10.1145/3218657.
- [16] Rasmus Kyng, Simon Meierhans, and Maximilian Probst Gutenberg. A dynamic shortest paths toolbox: Low-congestion vertex sparsifiers and their applications. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 1174–1183, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649767.
- [17] Mikkel Thorup and Uri Zwick. Approximate distance oracles. J. ACM, 52(1):1–24, January 2005. doi:10.1145/1044731.1044732.
