Undirected 3-Fault Replacement Path in Nearly Cubic Time
Abstract
Given a graph $G=(V,E)$ and two vertices $s,t\in V$, the $f$-fault replacement path ($f$FRP) problem computes, for every set $F$ of at most $f$ edges, the distance from $s$ to $t$ when the edges in $F$ fail. A recent result shows that 2FRP in directed graphs can be solved in $\tilde{O}(n^3)$ time [Vassilevska Williams, Woldeghebriel, Xu 2022]. In this paper, we show a 3FRP algorithm in deterministic $\tilde{O}(n^3)$ time for undirected weighted graphs, which almost matches the size of the output. This implies that $f$FRP in undirected graphs can be solved in nearly optimal $\tilde{O}(n^f)$ time for all $f\geq 3$.
To construct our 3FRP algorithm, we introduce an incremental distance sensitivity oracle (DSO) for undirected graphs with $\tilde{O}(n^2)$ worst-case update time, while preprocessing time, space, and query time are still $\tilde{O}(mn)$, $\tilde{O}(n^2)$ and $O(1)$, respectively, which match the static DSO [Bernstein and Karger 2009]. Here in a DSO, we can preprocess a graph so that the distance between any pair of vertices given any failed edge can be answered efficiently. From the recent result in [Peng and Rubinstein 2023], we can obtain an offline dynamic DSO from the incremental worst-case DSO, which makes the construction of our 3FRP algorithm more convenient. By the offline dynamic DSO, we can also construct a 2-fault single-source replacement path (2-fault SSRP) algorithm in $\tilde{O}(n^3)$ time, that is, from a given vertex $s$, we want to find the distance to every vertex $t$ when any pair of edges fail. Thus the time complexity for 2-fault SSRP is also nearly optimal.
Now we know that in undirected graphs 1FRP can be solved in $\tilde{O}(m)$ time [Nardelli, Proietti, Widmayer 2001], and 2FRP and 3FRP in undirected graphs can be solved in $\tilde{O}(n^3)$ time. In this paper, we also show that a truly subcubic algorithm for 2FRP in undirected weighted graphs does not exist under the APSP hypothesis.
Keywords and phrases:
Graph Algorithm, Shortest Path, Replacement Path
Category:
Track A: Algorithms, Complexity and Games
Copyright and License:
2012 ACM Subject Classification:
Theory of computation → Shortest paths; Theory of computation → Graph algorithms analysis
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis
Series and Publisher:

1 Introduction
The shortest path problem is one of the most fundamental problems in computer science, and the single-source shortest path problem is known to have an almost linear time algorithm [14, 15, 17]. In a failure-prone graph $G=(V,E)$, some edges may fail on the shortest path, so we want to find a replacement path. In the classical replacement path (RP) problem, given a source $s$ and a destination $t$, we want to find the $s$-$t$ shortest path when edge $e$ fails, for every edge $e$. Of course, if $e$ is not on the original $s$-$t$ shortest path in $G$, the replacement path is still the original shortest path, so in fact we just need to find an $s$-$t$ replacement path for every edge $e$ on the $s$-$t$ shortest path in $G$, which accounts for $O(n)$ replacement paths.
We can generalize the RP problem to any number $f$ of failed edges, that is, finding $s$-$t$ shortest paths for every failed edge set $F$ where $|F|\leq f$. This problem is called the $f$-fault replacement path ($f$FRP) problem [30]; therefore 1FRP is just the original RP problem. As before, given the first $f-1$ failed edges and the corresponding $s$-$t$ replacement shortest path, we still only need to consider the cases when the $f$-th failed edge is on it. Because there are $O(n)$ edges on each shortest path, the total number of replacement paths we need to find for the $f$FRP problem is $O(n^f)$. The current well-known results of $f$FRP algorithms for real-weighted directed and undirected graphs are summarized as follows.
-
The RP problem in directed graphs can be solved in $O(mn+n^2\log\log n)$ time [21].
-
Under the APSP hypothesis¹, RP in directed graphs does not have an $O(n^{3-\epsilon})$-time algorithm for any constant $\epsilon>0$ [31]. So the $O(mn+n^2\log\log n)$-time algorithm is (conditionally) nearly optimal for dense graphs. ¹APSP is an abbreviation for the all-pairs shortest paths problem, and the APSP hypothesis [31] suggests that APSP cannot be solved in truly subcubic $O(n^{3-\epsilon})$ time for any constant $\epsilon>0$.
-
The RP problem in undirected graphs can be solved in $\tilde{O}(m)$ time² [26], thus it is also almost optimal. ²Here $\tilde{O}$ hides polylogarithmic factors.
-
Recently, Vassilevska Williams, Woldeghebriel and Xu [30] gave a 2FRP algorithm with $\tilde{O}(n^3)$ running time for directed graphs, almost matching the 1FRP case. (Although not formulated there, it also works for undirected graphs with slight modifications.)
Thus, the 1FRP and 2FRP problems both have $\tilde{O}(n^3)$-time algorithms, so it is natural to ask whether 3FRP still has an $\tilde{O}(n^3)$-time algorithm. In this paper, we give such a 3FRP algorithm for undirected graphs. (Note that all algorithms in this paper are deterministic.)
Theorem 1.
The 3FRP problem in undirected real-weighted graphs can be solved in $\tilde{O}(n^3)$ time.
Denote the shortest path between $s$ and $t$ in graph $G$ by $P_G(s,t)$. Then in $\tilde{O}(n^3)$ time, for every edge $e_1\in P_G(s,t)$, we can find all edges of $P_{G\setminus\{e_1\}}(s,t)$; then for every edge $e_2\in P_{G\setminus\{e_1\}}(s,t)$, we can find all edges of $P_{G\setminus\{e_1,e_2\}}(s,t)$; then for every edge $e_3\in P_{G\setminus\{e_1,e_2\}}(s,t)$, we can find $d_{G\setminus\{e_1,e_2,e_3\}}(s,t)$, so the algorithm outputs $O(n^3)$ distances in total. One can see the difficulty of this problem since every answer only takes $\tilde{O}(1)$ time on average.
Extend to $f$FRP for $f>3$.
For any $f>3$, by Theorem 1, the $f$FRP problem can be solved in $\tilde{O}(n^f)$ time in undirected graphs (see the reduction in [30]). Since the size of the output of $f$FRP can be $\Omega(n^f)$, the running time is almost optimal. As in [30], we also take the 1-failure distance sensitivity oracle as an important subroutine, since it only takes $O(1)$ query time. Here an $f$-failure distance sensitivity oracle (DSO) is a data structure that supports distance queries between any pair of vertices given any $f$ failed edges. It is widely known that a 1-failure DSO takes $\tilde{O}(n^2)$ space, $O(1)$ query time, and $\tilde{O}(mn)$ construction time [11, 4]. Since current 2-failure DSOs are still hard to construct efficiently [18], we instead construct an incremental 1-failure DSO. (In the following, DSO means 1-failure DSO for convenience.)
Theorem 2.
For a given undirected graph $G$, there is an incremental DSO that can be constructed in $\tilde{O}(mn)$ time, so that when we insert an edge into $G$, the DSO can be maintained in $\tilde{O}(n^2)$ worst-case time. The DSO also has $\tilde{O}(n^2)$ space and $O(1)$ query time.
Recently Peng and Rubinstein [27] gave a reduction from offline fully dynamic structures to worst-case incremental structures, where "offline" means all updates are known ahead of time. So we can obtain an efficient offline dynamic DSO. (We also give a simple proof of the reduction we need by a different method from [27]; see Theorem 11.)
Theorem 3 ([27]).
Let $t$ be the total number of updates. If there exists an incremental dynamic algorithm with query time $q$ and worst-case update time $u$, then there is an offline dynamic algorithm with query time $O(q)$ and worst-case update time $O(u\log t)$.
Corollary 4.
Given an undirected graph $G$ and a sequence of $t$ edge insertions and deletions, there is an offline dynamic DSO which can be constructed in $\tilde{O}(mn)$ time, and the total update time is $\tilde{O}(tn^2)$. It can answer the distance between any pair of vertices when any edge fails in $O(1)$ time.
We have seen that 1FRP is as hard as APSP in directed graphs [31], but in undirected graphs, 1FRP can be solved in $\tilde{O}(m)$ time [26], and 2FRP in undirected graphs can be solved in $\tilde{O}(n^3)$ time. One may wonder whether 2FRP in undirected graphs has a truly subcubic time algorithm. However, we show that this is not possible under the APSP hypothesis.
Theorem 5.
Assuming the APSP hypothesis, i.e. that APSP cannot be solved in $O(n^{3-\epsilon})$ time for any constant $\epsilon>0$, the 2FRP problem in undirected weighted graphs also cannot be solved in truly subcubic time.
We can also apply the offline dynamic DSO to get a 2-fault single-source replacement path (2-fault SSRP) algorithm for undirected graphs, that is, given a source $s$, for every other vertex $t$, we can find $d_{G\setminus\{e_1,e_2\}}(s,t)$ for all pairs of failed edges $e_1,e_2$. The reduction is very simple: each time, we remove an edge $e_1$ of the shortest path tree rooted at $s$, and then put it back. By the offline dynamic DSO, we can answer the distance between $s$ and $t$ avoiding any $e_2$ in the current graph without $e_1$.
Theorem 6.
The 2-fault single-source replacement path problem can be solved in $\tilde{O}(n^3)$ time for undirected graphs.
Note that Theorem 6 can be extended to an undirected $k$-fault SSRP algorithm in $\tilde{O}(n^{k+1})$ time for all $k\geq 2$, which is almost optimal. Previously there were 1-fault single-source replacement path algorithms of running time $\tilde{O}(m\sqrt{n}+n^2)$ for unweighted undirected graphs [7, 12] and unweighted directed graphs [8].
Note that the 2FRP, 3FRP, and 2-fault SSRP algorithms in this paper obtain every distance from DSOs, which are essentially similar to the one in [4], or from the APSP table. By the properties of the DSOs and the construction of our algorithms, in these algorithms we can in fact obtain an oracle (of size matching the respective running time) from which a shortest path under the given failures can be retrieved in $O(1)$ time per edge.
Other related work.
The current fastest running time for the directed all-pairs shortest paths (APSP) problem is $n^3/2^{\Theta(\sqrt{\log n})}$ [29]. Finding truly subcubic time algorithms for APSP with arbitrary weights is considered a major open problem in algorithms research, and its hardness is one of the major assumptions in proving conditional lower bounds.
For the replacement path (RP) and single-source replacement path (SSRP) problems, there are also many subcubic time results for graphs with small integer edge weights. In unweighted directed graphs, the RP problem can be solved in $\tilde{O}(m\sqrt{n})$ time [2]. If the edge weights are integers in $[-M,M]$, Vassilevska Williams gave an $\tilde{O}(Mn^{\omega})$-time RP algorithm [29], where $\omega$ is the exponent of the complexity of matrix multiplication [1, 32, 16]. Moreover, [30] gave a 2FRP algorithm for directed graphs with small integer edge weights that also runs in subcubic time. For the SSRP problem, there is an $\tilde{O}(m\sqrt{n}+n^2)$-time algorithm for unweighted graphs, and algorithms with matrix-multiplication-based running times for graphs with integer weights in $[-M,M]$ [23, 22, 25].
After the breakthrough result of an efficient 1-failure DSO by Demetrescu, Thorup, Chowdhury and Ramachandran [11], there have been many efforts to improve the preprocessing time [3, 4, 22, 23, 7, 28, 24] and space [20], and to extend it to more failures. However, keeping $\tilde{O}(n^2)$ space and $O(1)$ query time is difficult when generalizing to any constant number of failures. The 2-failure DSO of $\tilde{O}(n^2)$ space and $O(\log n)$ query time [18] is much more complicated than the 1-failure case, so it seems hard to generalize it to an $f$-failure DSO in arbitrary graphs. For undirected graphs, recently Dey and Gupta gave an $f$-failure DSO with near-quadratic space for constant $f$ and query time depending only on $f$ and polylogarithmic factors [13], improving the result in [19], but their construction time is still large, thus still not suitable for efficient $f$FRP algorithms.
Organization.
In Section 2 we introduce basic notations, assumptions and concepts. In Section 3 we give a brief description of the 2FRP algorithm for undirected graphs, which is a little simpler than the one for directed graphs in [30], and then give an overview of the ideas for our incremental DSO and 3FRP algorithm. We also discuss the hardness of 2FRP and of single-source replacement paths under multiple edge failures in Sections 4 and 5, respectively. Due to the page limit, many details are omitted and can be found in the full version of this paper [9].
2 Preliminaries
2.1 Notations and Assumptions
We consider an undirected weighted graph $G=(V,E)$ with $|V|=n$ and $|E|=m$. For any two vertices $u,v\in V$, we use $P_G(u,v)$ to denote the shortest path from $u$ to $v$ in $G$, and $d_G(u,v)$ its length. If the graph is clear in the context, we use $P(u,v)$ and $d(u,v)$ to be short for $P_G(u,v)$ and $d_G(u,v)$. For a path $P$, we use $|P|$ and $\|P\|$ to denote the length of $P$ and the number of edges in $P$, respectively.
If the context is clear that a vertex $v$ is on the shortest path from $u$ to $w$ in a graph, we use $v+i$ to represent the vertex that is $i$ vertices after $v$ in the direction from $u$ to $w$. Similarly, we define $v-i$ to represent the vertex that is $i$ vertices before $v$. When the vertices $u$ and $w$ and the shortest path are clear in the context, for two vertices $a$ and $b$ on $P(u,w)$, we use the notation $a\leq b$ to say that $b$ is farther than $a$ from $u$, or $a=b$. Similarly $a<b$ means $a\leq b$ and $a\neq b$, $a\geq b$ means $b\leq a$, and $a>b$ means $b<a$.
If $a\leq b$, $P[a,b]$ will be the subpath from $a$ to $b$ on $P$, and if the path is clear in the context, we use the interval $[a,b]$ to denote this subpath. Thus the interval between the $i$-th vertex and the $j$-th vertex after $v$ on the path is denoted as $[v+i,v+j]$.
Let $P_1,P_2$ be two paths. If they share a same endpoint, we use $P_1\cdot P_2$ to denote the concatenation of them. If they do not share any edge, we say $P_1$ avoids $P_2$. Sometimes for brevity, we use $\min\{P_1,P_2\}$ to denote the shorter path of $P_1,P_2$, which means $P_1$ if $|P_1|\leq|P_2|$, and $P_2$ otherwise.
In this paper, for a path $Q$ in $G$, $G\setminus Q$ denotes the graph obtained by removing all edges of $Q$ from $G$. (We do not need to remove vertices in this paper.)
As in many distance oracle papers, we also use the unique shortest path assumption, which means the shortest path between any two vertices in any subgraph of $G$ is unique. For the incremental DSO, we also make the unique shortest path assumption at any time. So, for example, we can check whether an edge $(u,v)$ is on $P(s,t)$ or not by checking whether $d(s,u)+w(u,v)+d(v,t)=d(s,t)$ or $d(s,v)+w(u,v)+d(u,t)=d(s,t)$. To assume unique shortest paths, we can break the ties by adding small perturbations to the edge weights (see [10]). One way to assume unique shortest paths deterministically is to compare two paths of the same length by their number of edges, then by their vertex sequences from the end node to the start node.
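The deterministic tie-breaking rule above can be sketched as follows. This is an illustrative helper of ours, not the paper's implementation: candidate paths are compared first by total weight, then by number of edges, then by the vertex sequence read from the end node back to the start node.

```python
# Deterministic shortest-path tie-breaking (illustrative sketch):
# compare by (total weight, number of edges, reversed vertex sequence).

def path_key(path, weight):
    """path: list of vertices; weight: dict mapping (u, v) -> edge weight."""
    total = sum(weight[(path[i], path[i + 1])] for i in range(len(path) - 1))
    return (total, len(path) - 1, tuple(reversed(path)))

def shorter(p1, p2, weight):
    """Return the 'unique shortest' of two s-t paths under the tie-breaking rule."""
    return p1 if path_key(p1, weight) <= path_key(p2, weight) else p2
```

Under this total order, any two distinct $s$-$t$ paths compare unequal, so the "shortest" path is always unique.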
2.2 Replacement Paths
We first consider the divergence and convergence points of two paths. (As before, although our graph is undirected, we may assume a path starts from a vertex $s$ and ends at $t$.)
-
For two paths $P_1,P_2$ both starting from $s$, we say they diverge at vertex $v$ if $v$ is the last vertex of their common prefix, that is, the subpaths of $P_1$ and $P_2$ before $v$ coincide and their edges after $v$ differ; we call $v$ the divergence point of $P_1$ and $P_2$. If $P_1,P_2$ are both shortest paths, their divergence point is unique.
-
Symmetrically, for $P_1,P_2$ both ending at $t$, we say they converge at vertex $v$ if the subpaths of $P_1$ and $P_2$ after $v$ coincide and their edges before $v$ differ; we call $v$ the convergence point of $P_1$ and $P_2$.
-
For two shortest paths which do not share endpoints, if they intersect, the intersection part is also a shortest path, that is, they must first converge and then diverge.
The following theorem is well known for replacement paths.
Theorem 7 (Equivalent to Claim 1 in [11]).
Let $P'$ be a 1-fault replacement path in $G$, i.e. $P'=P_{G\setminus\{e\}}(s,t)$ for an edge $e$. Let $A$ and $B$ be two subpaths of $P'$ with no intersection, and let $C$ be the subpath of $P'$ between them. If a shortest path avoids both $A$ and $B$, then it avoids $C$.
From this theorem, we can see that for any $f$-fault replacement path where only one failed edge in $F$ is on the original shortest path $P(s,t)$, the replacement path diverges from and converges to $P(s,t)$ only once. And if two failed edges in $F$ are on $P(s,t)$, they will cut $P(s,t)$ into three intervals $I_1,I_2,I_3$. Because $I_1,I_2,I_3$ are all shortest paths in the graph after the failures, the replacement path may:
-
diverge in $I_1$ and converge in $I_3$.
-
diverge in $I_1$, converge at some point in $I_2$, diverge from $I_2$, then converge in $I_3$.
In undirected weighted graphs, a notable theorem on the structure of replacement paths says that a 1-fault replacement path is a concatenation of a shortest path $P(s,u)$, an edge $(u,v)$, and a shortest path $P(v,t)$, for some vertices $u$ and $v$. (Any part can degenerate to a single vertex.) More generally,
Theorem 8 ([6]).
For any undirected weighted graph $G$ and any set $F$ of failed edges with $|F|\leq f$, an $f$-fault replacement path can be partitioned into a concatenation interleaving at most $f+1$ subpaths and $f$ edges, where each subpath is a shortest path in the graph $G$.
From this theorem, we have:
Lemma 9 ([5]).
For a given pair of vertices $s,t$, the union of all 1-fault $s$-$t$ replacement paths has $O(n)$ edges in total in undirected graphs.
Proof.
From Theorem 8, a 1-fault $s$-$t$ replacement path in undirected graphs is composed of a shortest path $P(s,u)$, an edge $(u,v)$, and a shortest path $P(v,t)$. The total number of the middle edges is bounded by $O(n)$ since there are $O(n)$ 1-fault replacement paths, and the other edges are on the shortest path trees rooted at $s$ and $t$, thus there are also at most $O(n)$ of them.
2.3 DSOs
The paper uses the 1-fault DSO given by Bernstein and Karger [4] to initialize the incremental DSO. While [4] gives a 1-fault DSO in directed weighted graphs, it also works for undirected weighted graphs by a simple reduction. Suppose $G$ is an undirected weighted graph; we introduce a directed graph $G'$ by replacing each edge in $G$ by a pair of directed edges with the same weight. Suppose that in graph $G$ the edge $e=(u,v)$ on the shortest path $P(s,t)$ is removed ($u$ is closer to $s$); then we remove the directed edge $(u,v)$ on the shortest path from $s$ to $t$ in $G'$. We can see that the 1-fault replacement path in $G'$ need not go through the reversed edge $(v,u)$: no edge on $P(s,u)$ is removed, so a path reaching $u$ through $(v,u)$ can be replaced by the intact shortest path $P(s,u)$, which is not longer. Therefore, the answers of the directed DSO on $G'$ coincide with the undirected single-failure distances in $G$.
There is a result in [27] that allows viewing worst-case incremental structures as offline fully dynamic structures.
Theorem 10 ([27]).
Let $t$ be the total number of updates. Suppose there exists an incremental dynamic algorithm with query time $q$ and worst-case update time $u$; then there is a dynamic algorithm for the deletions-look-ahead setting with query time $O(q)$ and worst-case update time $O(u\log t)$.
This theorem can be used to utilize our incremental DSO in Section 3.2 as an offline fully dynamic DSO, and here we also prove the reduction we need in a self-contained way:
Theorem 11.
Starting from a graph of $n$ vertices, if we have an incremental DSO of space $S$ with preprocessing time $P$, worst-case update time $U$ and query time $Q$, then it can be transformed into an offline fully dynamic DSO, such that if the total number of updates is $t$, the total preprocessing and update time is $\tilde{O}(P+t(S+U))$ and the query time is still $O(Q)$.
Proof.
In the offline fully dynamic DSO, let the initial graph be $G_0$ and the graph after the $i$-th update be $G_i$. We first list all graphs $G_0,\ldots,G_t$. For an interval of integers $I$, define the graph $G_I=\bigcap_{i\in I}G_i$, the graph containing exactly the edges present in every $G_i$ for $i\in I$. Then for intervals $I'\subseteq I$, we know that $G_I\subseteq G_{I'}$, so $G_{I'}$ can be obtained from $G_I$ by edge insertions only.
Then we build a binary range tree on $[0,t]$, in which the root contains the graph $G_{[0,t]}$, the children of the root contain the graphs of the left and right half intervals, respectively, and each leaf contains a graph $G_{[i,i]}=G_i$. So we can build the DSOs on this tree from the root downward using the incremental structure, since $G_I\subseteq G_{I'}$ if $I'$ is a child interval of $I$. Then we can see the total construction and update time is as claimed, and we can use the DSOs on the leaves to answer queries.
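The construction above can be sketched in a few lines. In this sketch the role of the incremental structure is played by a plain edge set that only grows from parent to child, and `G_I` is the set of edges present throughout the whole interval `I` of versions; the function names and encoding are ours.

```python
def offline_versions(initial_edges, updates):
    """Materialize G_0..G_t from an initial edge set and ('add'/'del', edge) updates."""
    versions = [set(initial_edges)]
    for op, e in updates:
        g = set(versions[-1])
        (g.add if op == 'add' else g.discard)(e)
        versions.append(g)
    return versions

def build_range_tree(versions, lo, hi, leaves):
    """G_[lo,hi] = intersection of versions lo..hi; recurse, recording leaves.
    Going from a parent interval to a child interval, edges are only INSERTED
    (the intersection over a smaller interval is a superset), so an
    incremental structure suffices at every node."""
    g = set.intersection(*(versions[i] for i in range(lo, hi + 1)))
    if lo == hi:
        leaves[lo] = g
        return g
    mid = (lo + hi) // 2
    build_range_tree(versions, lo, mid, leaves)
    build_range_tree(versions, mid + 1, hi, leaves)
    return g
```

Each leaf ends up holding exactly the graph of its version, so queries against version $i$ are answered by the structure at leaf $i$.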
3 Technical Overview
In this section, we first give a brief description of the 2FRP algorithm for undirected graphs, which can list all edges of all 2-fault replacement paths in $\tilde{O}(n^3)$ time, so that we can find the third failed edge for our 3FRP algorithm. Then the high-level ideas for the incremental DSO and the 3FRP algorithm are discussed.
3.1 2FRP Algorithm between $s$ and $t$
All 1-fault replacement paths are easy to find in $\tilde{O}(mn)$ time since we can delete each edge on $P(s,t)$ and then run Dijkstra's algorithm [14] from $s$. Then for edges $e_1\in P(s,t)$ and $e_2\in P_{G\setminus\{e_1\}}(s,t)$, consider whether $e_2$ is on $P(s,t)$ or not.
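This first step can be sketched directly (the graph encoding and names are ours): recover $P(s,t)$ from one Dijkstra run, then rerun Dijkstra once per path edge with that edge banned.

```python
import heapq

def dijkstra(adj, s, banned=frozenset()):
    """adj: {u: [(v, w), ...]} with both directions listed (undirected graph).
    Returns (dist, parent) from s, ignoring edges given as frozenset pairs in `banned`."""
    dist, parent, pq = {s: 0}, {s: None}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float('inf')):
            continue
        for v, w in adj[u]:
            if frozenset((u, v)) in banned:
                continue
            if d + w < dist.get(v, float('inf')):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def one_fault_rp(adj, s, t):
    """For each edge e on P(s,t), the s-t distance when e fails."""
    _, parent = dijkstra(adj, s)
    path, v = [], t
    while parent[v] is not None:          # recover the edges of P(s,t)
        path.append(frozenset((parent[v], v)))
        v = parent[v]
    return {e: dijkstra(adj, s, banned={e})[0].get(t, float('inf')) for e in path}
```

With $O(n)$ edges on $P(s,t)$ and one Dijkstra run per failure, this matches the $\tilde{O}(mn)$ bound stated above.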
3.1.1 Only $e_1$ is on $P(s,t)$
By the methods in [30], we create a graph $H$ from $G$ by adding new vertices as follows. Let $M$ be a number that is large enough. (Namely, $M$ can be the sum of all edge weights in $G$.)
-
First let $H$ be a copy of $G\setminus P(s,t)$, that is, remove all edges on $P(s,t)$ from $G$.
-
For every edge $e_1=(u_1,v_1)\in P(s,t)$, where $u_1$ is before $v_1$ on $P(s,t)$, introduce two new vertices $x_{e_1}$ and $y_{e_1}$.
-
–
For every vertex $v$ on $P(s,u_1)$, the node $x_{e_1}$ is connected with $v$ by an edge with weight $d(s,v)+M$.
-
–
For every vertex $v$ on $P(v_1,t)$, the node $y_{e_1}$ is connected with $v$ by an edge with weight $d(v,t)+M$.
-
–
The number of vertices in $H$ is still $O(n)$, so we construct a 1-failure DSO on $H$ with $\tilde{O}(mn)$ construction time, $\tilde{O}(n^2)$ space and $O(1)$ query time by [4]. We say the edge $(x_{e_1},v)$ corresponds to the path $P(s,v)$ since their lengths differ by the fixed number $M$; similarly $(y_{e_1},v)$ corresponds to the path $P(v,t)$. (Here $M$ guarantees that any shortest path between new vertices will not travel through other new vertices.) Since $e_2$ is not on $P(s,t)$, it is an edge of $H$, and we can obtain $d_{H\setminus\{e_2\}}(x_{e_1},y_{e_1})$ by querying the DSO. We have:
Lemma 12.
Given $e_1\in P(s,t)$ and $e_2\notin P(s,t)$, in the graph $H$ we defined, $d_{G\setminus\{e_1,e_2\}}(s,t)=d_{H\setminus\{e_2\}}(x_{e_1},y_{e_1})-2M$.
Proof.
The proof is easy to see since the first edge and the last edge on the path $P_{H\setminus\{e_2\}}(x_{e_1},y_{e_1})$ have weights of the form $d(s,v)+M$ and $d(v',t)+M$ for some vertex $v$ on $P(s,u_1)$ and some vertex $v'$ on $P(v_1,t)$.
Also $v$ is before $u_1$ and $v'$ is after $v_1$, so the first edge and last edge can be replaced by the original shortest paths $P(s,v)$ and $P(v',t)$, respectively, so it becomes a path in $G\setminus\{e_1,e_2\}$, and $d_{G\setminus\{e_1,e_2\}}(s,t)\leq d_{H\setminus\{e_2\}}(x_{e_1},y_{e_1})-2M$. Similarly, the subpaths of $P_{G\setminus\{e_1,e_2\}}(s,t)$ before the divergence point and after the convergence point on $P(s,t)$ can be replaced by edges from $x_{e_1}$ and $y_{e_1}$ in $H$, respectively (with value $M$ added), so the reverse inequality also holds. Since the 1-failure DSO supports path retrieval [11, 4] in $O(1)$ time per edge, we can restore all paths in $\tilde{O}(n^3)$ time.
3.1.2 Both $e_1$ and $e_2$ are on $P(s,t)$
Our structure in this case is also similar to [30], but is simpler since $G$ is undirected here. If we use the graph $H$ from before, the distance $d_{H\setminus\{e_2\}}(x_{e_1},y_{e_1})$ cannot capture $d_{G\setminus\{e_1,e_2\}}(s,t)$, since it only formulates the subpaths before the divergence point and after the convergence point on $P(s,t)$, but the replacement path may go through some edges between $e_1$ and $e_2$ on $P(s,t)$, which are not included in $H$.
W.l.o.g., assume $e_1$ is before $e_2$ on $P(s,t)$, so $e_2$ is on the shortest path $P_{G\setminus\{e_1\}}(s,t)$. For all pairs of $e_1,e_2$, we want to find the shortest $s$-$t$ paths that diverge from $P(s,t)$ before $e_1$ and converge back to $P(s,t)$ between $e_1$ and $e_2$, and also the ones that diverge before $e_1$ and converge after $e_2$. (All divergence and convergence points are on $P(s,t)$.)
First we run the APSP algorithm on the graph $G\setminus P(s,t)$, then find the paths $q_e(v)$, the shortest path from $s$ to $v$ that diverges from $P(s,t)$ before $e$ and converges exactly at $v$, for all edges $e$ and vertices $v$ on $P(s,t)$ in $O(n^3)$ time, since we need $O(n)$ time to enumerate the divergence point for every pair of $e$ and $v$. Symmetrically, we also find the paths $q'_e(v)$ from $t$ for all edges $e$ and vertices $v$ on $P(s,t)$ in $O(n^3)$ time. Then for every pair of $e_1$ and $v$, we can find $p_{e_1}(v)$:
the shortest path from $s$ to $v$ that diverges before $e_1$, converges with $P(s,t)$ at some point between $e_1$ and $v$, then goes through the subpath of $P(s,t)$ to $v$. If the convergence point is $v$ itself, it is just $q_{e_1}(v)$. Then we can compute $p_{e_1}(v)$ for $v$ going forward from $e_1$ to $t$. If $(v-1,v)$ is an edge on $P(s,t)$, the path can either first go through $v-1$ or directly converge at $v$, so $p_{e_1}(v)=\min\{p_{e_1}(v-1)\cdot(v-1,v),\ q_{e_1}(v)\}$. So the total time to get all of the $p_{e_1}(v)$ is still $O(n^3)$.
Thus, given $e_1,e_2$, we can get $d_{G\setminus\{e_1,e_2\}}(s,t)$ from the $p_{e_1}(v)$ and $q'_{e_2}(v)$ by enumerating all possible meeting vertices $v$ between $e_1$ and $e_2$ in $O(n)$ time; the paths converging directly after $e_2$ are handled similarly. So the total time is $O(n^3)$, and it is also easy to list all edges on every $P_{G\setminus\{e_1,e_2\}}(s,t)$.
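The forward combination step can be sketched as a one-dimensional DP. Here `q[v]` stands for the cost of a path converging exactly at `v`, and `p[v]` for the best cost of converging at or before `v` and then following the original path; these names and the encoding are ours.

```python
def combine_along_path(path_vertices, edge_w, q):
    """path_vertices: vertices of P(s,t) in order (after the first failure);
    edge_w[(u, v)]: weight of each consecutive path edge;
    q[v]: cost of converging exactly at v (inf if impossible).
    Returns p[v] = min cost of reaching v with convergence at some point <= v."""
    p, prev = {}, None
    for v in path_vertices:
        p[v] = q.get(v, float('inf'))
        if prev is not None:
            # either converge earlier and walk along the path, or converge at v
            p[v] = min(p[v], p[prev] + edge_w[(prev, v)])
        prev = v
    return p
```

Each vertex is processed once, so all $p$ values for one failed edge cost $O(n)$ after the $q$ values are known.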
3.2 Ideas of Incremental DSO
In the 3FRP problem, consider the case that only $e_1$ is on $P(s,t)$ but $e_2,e_3$ are not. The 2-failure DSO of $\tilde{O}(n^2)$ space and $O(\log n)$ query time [18] cannot be directly applied here since its construction time is too large. However, since the number of possible "second failures" is $O(n)$ by Lemma 9, a dynamic 1-failure DSO with $\tilde{O}(n^2)$ edge update time will help. We can delete each such $e_2$, query for every $e_1$ and $e_3$ as in Section 3.1.1, then add $e_2$ back.
Thus below we first introduce the ideas toward Theorem 2 to build an incremental DSO with $\tilde{O}(mn)$ construction time, $\tilde{O}(n^2)$ worst-case update time and $O(1)$ query time. By the reduction in Theorem 11, we can construct an offline dynamic DSO with $\tilde{O}(n^2)$ update time. In the incremental DSO, when inserting an edge $(x,y)$, let $G'$ be the new graph $G\cup\{(x,y)\}$.
Inspired by APSP update.
It is not hard to maintain APSP incrementally in $O(n^2)$ time per insertion. When inserting an edge $(x,y)$ with weight $w(x,y)$, we determine whether it provides a shorter path for any pair of vertices $u,v$. That is, for each pair $(u,v)$, we compute the updated shortest path length: $d_{G'}(u,v)=\min\{d_G(u,v),\ d_G(u,x)+w(x,y)+d_G(y,v),\ d_G(u,y)+w(x,y)+d_G(x,v)\}$.
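This incremental APSP step can be sketched as follows (the table encoding is ours). The in-place minimum is safe because intermediate values never drop below the true distances in the new graph.

```python
def insert_edge_apsp(d, vertices, x, y, w):
    """Update an all-pairs distance table d[(u, v)] in place after inserting
    the undirected edge (x, y) of weight w; O(n^2) time."""
    for u in vertices:
        for v in vertices:
            d[(u, v)] = min(d[(u, v)],
                            d[(u, x)] + w + d[(y, v)],
                            d[(u, y)] + w + d[(x, v)])
```

A shorter $u$-$v$ path in the new graph uses $(x,y)$ at most once, which is exactly what the three-way minimum captures.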
Following previous works [11], our goal is to construct a DSO that maintains the following information:
-
A shortest path tree rooted at each vertex of $G$.
-
For all vertex pairs $(u,v)$, the detour $P_{G\setminus[u+a,v-b]}(u,v)$, where $a$ and $b$ are either $0$ or integer powers of $2$, satisfying $a+b\leq\|P(u,v)\|$.
Here, $\|P(u,v)\|$ denotes the number of edges in the shortest path $P(u,v)$. Note that in our definition, $G\setminus[u+a,v-b]$ represents the graph obtained by removing the edges in the range rather than removing vertices, which slightly differs from previous works [11, 4]. However, under the weak subpath constraint (as will be formally defined later), we indeed only consider the detours avoiding one edge in the range which also avoid all the edges and vertices in the whole range $[u+a,v-b]$. We want to maintain such detours after inserting a new edge in $\tilde{O}(1)$ time per detour.
Weak subpath and proper form.
As in [4], to have an efficient construction time, instead of maintaining a detour avoiding the whole range, we can alternatively maintain the shortest detour avoiding one edge in the range $[u+a,v-b]$, for the failed edge that makes this detour longest. Moreover, it is easy to observe that we only need to maintain the detour when the detour avoids the entire range $[u+a,v-b]$.
So we define a subpath $[u+a,v-b]$ to be weak under $(u,v)$, if there exists an edge $e\in[u+a,v-b]$ such that $P_{G\setminus\{e\}}(u,v)$ avoids the whole subpath $[u+a,v-b]$. Here $e$ is called a weak point of $[u+a,v-b]$ (under $(u,v)$). Note that by Theorem 8, such a detour is a concatenation of a shortest path, an edge and a shortest path in $G$. Therefore, we can store the replacement path for weak subpaths explicitly. We call such a concatenation of any shortest path, any edge and any shortest path a proper form. (Any part can degenerate to a single vertex.)
In the incremental DSO, for every pair of vertices $u$ and $v$ and all $a,b$ that are either $0$ or integer powers of $2$, we maintain a path $D_{a,b}(u,v)$ that is either a null path or of the proper form, such that:
-
If $[u+a,v-b]$ is weak under $(u,v)$, we store the correct replacement path $P_{G\setminus[u+a,v-b]}(u,v)$ in proper form.
-
If $[u+a,v-b]$ is not weak under $(u,v)$, $D_{a,b}(u,v)$ can be either some path from $u$ to $v$ that avoids $[u+a,v-b]$ in proper form, or a null path of length $+\infty$.
Remark 13.
Note that in this structure we generally can hardly say whether a given range $[u+a,v-b]$ is weak or not under $(u,v)$, even if we know the value of $D_{a,b}(u,v)$. We also point out that even if $[u+a,v-b]$ is weak under $(u,v)$, we do not record the position of the weak point.
Consider the case that the shortest path does not change after inserting the edge $(x,y)$, that is, $P_{G'}(u,v)=P_G(u,v)$. If $P_G(u,x)$ and $P_G(y,v)$ do not intersect with the range $[u+a,v-b]$, then $P_G(u,x)\cdot(x,y)\cdot P_G(y,v)$ is a path between $u$ and $v$ in $G'\setminus[u+a,v-b]$.
However, if this path is shorter than the original $D_{a,b}(u,v)$, we can hardly check whether $[u+a,v-b]$ is weak under $(u,v)$ in $G'$, and even if it is weak, we cannot efficiently find the weak point. So in this case, we just store the shorter path; then it is the correct replacement path when $[u+a,v-b]$ is weak under $(u,v)$.
Also note that keeping all stored paths in proper form makes it easier to locate vertices and compute intersections between paths.
3.2.1 Ideas of DSO Query
The correctness of the DSO query is guaranteed by Theorem 15, showing that we can answer the shortest path between $u$ and $v$ avoiding any subpath of $P(u,v)$ that is weak under $(u,v)$. Note that in the query stage, we in fact only care about the case where the failure is a single edge (because it is a 1-failure DSO), and an interval consisting of a single edge is, of course, weak.
We sometimes need to check whether a path is in proper form, and if so, transform it to its proper form representation.
Definition 14.
Let $Q$ be a path from $u$ to $v$ in $G$ and let $R$ be a subpath of $P(u,v)$. We define a transform $T$ on $Q$ to be
-
the path in the proper form, if $Q$ can be transformed to the proper form and it avoids the edges of $R$;
-
a null path with weight $+\infty$, otherwise.
When this transform is used in this section, $Q$ is given by a concatenation of several shortest paths and edges in $G$, and we want to transform it into the proper form in $G$ or the new graph $G'$. The transform can be done in $\tilde{O}(1)$ time; the proof refers to the full version of this paper [9].
Now we state the DSO query theorem formally:
Theorem 15.
Let $[c,d]$ be a subpath of $P(u,v)$ that is weak under $(u,v)$. By the information we maintain in the DSO, we can calculate the length of $P_{G\setminus[c,d]}(u,v)$ in $O(1)$ time. W.l.o.g., suppose $c$ is closer to $u$ than $d$ is. Let $a$ be the largest integer power of 2 such that $u+a$ falls in $[u+1,c]$, and similarly let $b$ be the largest integer power of 2 such that $v-b$ falls in $[d,v-1]$. (Note that if $c=u$, we take $a=0$, and if $d=v$, we take $b=0$.) Then $|P_{G\setminus[c,d]}(u,v)|$ equals the minimum over a constant number of combinations of the maintained detours with parameters $a$ and $b$; the precise Equation (1) is given in the full version [9].
Proof refers to the full version of this paper [9].
3.2.2 Ideas of Incremental Update
Now we see how the DSO update is implemented. For the detours $D_{a,b}(u,v)$, where $a,b$ are zero or integer powers of 2, we consider two cases separately as below: either $P_{G'}(u,v)=P_G(u,v)$, or $P_{G'}(u,v)\neq P_G(u,v)$. In both cases, the algorithm further studies many subcases based on the relative positions of the divergence and convergence points of $P_G(u,v)$ and $P_{G'}(u,v)$, the range $[u+a,v-b]$, and the inserted edge $(x,y)$.
Recall that in our incremental structure, the graph before introducing the edge $(x,y)$ is called $G$, and the graph after is called $G'$. Let $p_1$ be the last vertex that $P_{G'}(u,v)$ and $P_G(u,v)$ share before diverging, and similarly let $p_2$ be the first vertex that they share after converging. The incremental update requires handling multiple distinct scenarios, determined by the relative positions of the nodes $p_1$, $p_2$, $u+a$, $v-b$, and the edge $(x,y)$. We present selected representative cases with the corresponding update mechanisms from the full version of this paper [9].
In the case discussions, by default we assume that $[u+a,v-b]$ is weak under $(u,v)$ in $G'$. This is because we only care about the stored path when the range is weak. If it is not weak, because of the transform $T$, we can always make sure that the path given by the algorithm is in proper form or a null path.
We know that in this case the new edge $(x,y)$ is on $P_{G'}(u,v)$. W.l.o.g., suppose that $x$ is nearer to $u$ than $y$. We can see that the subpath of $P_{G'}(u,v)$ from $u$ to $x$ is a shortest path in $G$, i.e. it equals $P_G(u,x)$. Recall that $p_1$ and $p_2$ are the divergence and convergence points of $P_G(u,v)$ and $P_{G'}(u,v)$, and we want to find the new detour avoiding $[u+a,v-b]$. So the points $x$ and $y$ are both on the new shortest path $P_{G'}(u,v)$.
CASE 1.
(The conditions of this case are given in the full version [9].)
In this case, the old detour still avoids the range $[u+a,v-b]$ in $G'$, so the range is weak with the same weak point, and $D_{a,b}(u,v)$ can keep its old value, which must pass the transform $T$.
CASE 2.
(The conditions of this case are given in the full version [9].)
-
If the new detour does not go through $(x,y)$, then similarly to CASE 1 it is a path in $G$, which equals $P_{G\setminus[u+a,v-b]}(u,v)$. Since $[u+a,v-b]$ is weak under $(u,v)$ in $G$, it is also weak under $(u,v)$ in $G'$, so the old value of $D_{a,b}(u,v)$ remains correct.
This means that if the candidate path through $(x,y)$ is not in proper form, the new detour does not go through $(x,y)$, so the old value suffices. Otherwise, we should choose the minimum of both. Note that if the candidate through $(x,y)$ intersects with the range, it cannot be the shortest one and also cannot pass the transform $T$.
CASE 6.
(The conditions of this case are given in the full version [9].)³ ³We keep the case numbers the same as those in the full version of this paper [9] for consistency.
Assume $[u+a,v-b]$ is weak under $(u,v)$ in $G'$ and $e^*$ is a weak point of it. The replacement path $P_{G'\setminus\{e^*\}}(u,v)$ avoids the whole interval $[u+a,v-b]$. As a result, if it does not go through $(x,y)$, it is a path in $G$, i.e. it equals $P_{G\setminus\{e^*\}}(u,v)$. If $P_{G\setminus\{e^*\}}(u,v)$ did not avoid the whole interval $[u+a,v-b]$, this would be contradictory with the fact that $e^*$ is a weak point in $G'$, so $[u+a,v-b]$ is also weak under $(u,v)$ in $G$ with weak point $e^*$, and the stored detour $D_{a,b}(u,v)$ avoids the whole interval.
If the stored detour intersects $P_{G'}(u,v)$, it can only intersect it near the endpoints. By Theorem 7, it is impossible that it intersects the middle subpath but avoids the subpaths on both sides, so it avoids the whole range, and the range is weak under $(u,v)$ with weak point $e^*$, thus $D_{a,b}(u,v)$ stores the correct replacement path.
Also, if the old value does not pass the transform $T$, then $[u+a,v-b]$ is not weak under $(u,v)$ in $G'$, so the null path is legal for $D_{a,b}(u,v)$.
As before, let $p_1$ and $p_2$ be the divergence and convergence points of $P_G(u,v)$ and $P_{G'}(u,v)$, and we want to find the new detour avoiding $[u+a,v-b]$. Also suppose that $[u+a,v-b]$ is weak under $(u,v)$ in the new graph $G'$, and $e^*$ is a weak point of it. The update algorithm goes as follows.
-
Set the new candidate $D'_{a,b}(u,v)$ to be $T(D_{a,b}(u,v))$.
-
Execute the following algorithm, then swap $u$ and $v$ and repeat it again.
-
Let $p_1$ be the last vertex that $P_{G'}(u,v)$ and $P_G(u,v)$ share, and similarly $p_2$ be the first vertex that they share; depending on the relative positions of $p_1$, $p_2$, $u+a$, $v-b$ and $(x,y)$, update $D'_{a,b}(u,v)$ by taking the minimum with a transformed candidate path. (The precise case conditions and the candidate path in each case are given in the full version [9].)
We list some lemmas we need. The proofs are given in the full version of this paper [9].
Lemma 16.
For any edge $e$, if $P_{G'\setminus\{e\}}(u,v)$ does not go through $(x,y)$, then $P_{G'\setminus\{e\}}(u,v)=P_{G\setminus\{e\}}(u,v)$.
Lemma 17.
For any edge $e$, if $P_{G'\setminus\{e\}}(u,v)$ goes through $(x,y)$ and $x$ is closer to $u$ than $y$ on it, then $P_{G'\setminus\{e\}}(u,v)=P_{G'\setminus\{e\}}(u,x)\cdot(x,y)\cdot P_{G'\setminus\{e\}}(y,v)$.
Lemma 18.
If $[u+a,v-b]$ is weak under $(u,v)$ in $G'$ and $P_{G'\setminus[u+a,v-b]}(u,v)$ does not go through $(x,y)$, then $P_{G'\setminus[u+a,v-b]}(u,v)=P_{G\setminus[u+a,v-b]}(u,v)$.
From Lemma 18 we can see that if $D_{a,b}(u,v)$ does not pass the transform $T$, then either $[u+a,v-b]$ is not weak in $G'$ or the new detour must pass through $(x,y)$. So in the algorithm we first set the candidate to be $T(D_{a,b}(u,v))$. Then we consider the cases where the new detour goes through $(x,y)$ (W.l.o.g., assume $x$ is closer to $u$ than $y$):
CASE 1.
(The conditions of this case are given in the full version [9].)
CASE 4.
(The conditions of this case are given in the full version [9].)
We can prove that in this case, if $[u+a,v-b]$ is weak then the detour cannot go through $(x,y)$.
Prove by contradiction. Suppose $[u+a,v-b]$ is weak under $(u,v)$ with weak point $e^*$ and $P_{G'\setminus\{e^*\}}(u,v)$ goes through $(x,y)$ (first $x$ then $y$); then its subpaths from $u$ to $x$ and from $y$ to $v$ are paths in $G$, since they cannot go through $(x,y)$. We know the subpath from $u$ to $x$ is a 1-fault replacement path in $G$, so by Theorem 8, it equals a concatenation of a shortest path, an edge, and a shortest path in $G$. However, the case condition means that it is different from $P_G(u,x)$, so it is a concatenation of at least two shortest paths in $G$. The same holds for the subpath from $y$ to $v$. Therefore, $P_{G'\setminus\{e^*\}}(u,v)$ equals the shortest path from $u$ to $x$ avoiding $e^*$, plus $(x,y)$, plus the shortest path from $y$ to $v$ avoiding $e^*$, which will be a concatenation of at least three shortest paths, contradicting the proper form forced by $[u+a,v-b]$ being weak.
CASE 5.
(The conditions of this case are given in the full version [9].)
3.3 Ideas of 3FRP Algorithm
As in the 2FRP algorithm in Section 3.1, we also consider how many of $e_1,e_2,e_3$ are on the original shortest path $P(s,t)$, and use different methods to solve each case.
3.3.1 Only $e_1$ is on $P(s,t)$
To solve the case that only $e_1$ is on $P(s,t)$ but $e_2,e_3$ are not, we use a similar structure as in Section 3.1.1, which starts from $G\setminus P(s,t)$ and introduces the new vertices of the graph $H$. We further note that from Lemma 9, over all possible $e_1$, the union of possible "second failures" $e_2$ such that $e_2$ is on $P_{G\setminus\{e_1\}}(s,t)$ has $O(n)$ edges.
By the incremental 1-failure DSO in Section 3.2 and the reduction from offline dynamic structures to worst-case incremental structures in Theorem 11, we can get an offline dynamic 1-failure DSO with $\tilde{O}(n^2)$ update time. Then for every possible $e_2$ we can delete it from $H$ "temporarily" and query
$d_{(H\setminus\{e_2\})\setminus\{e_3\}}(x_{e_1},y_{e_1})$
for every $e_1$ and $e_3$, which equals $d_{G\setminus\{e_1,e_2,e_3\}}(s,t)+2M$. After that, we add $e_2$ back to $H$.
3.3.2 Only $e_1$ and $e_2$ are on $P(s,t)$
To solve the case that $e_1,e_2$ are on $P(s,t)$ but $e_3$ is not, we apply the incremental DSO on the binary partition structure as in the 2-failure DSO of [18]; that is, we build a binary range tree on the path $P(s,t)$, and define two graphs $G^{odd}_{\ell}$ and $G^{even}_{\ell}$ (based on the graph $H$ in Section 3.1.1) at each level $\ell$ of the range tree, where in $G^{odd}_{\ell}$ we remove all odd ranges on level $\ell$ and in $G^{even}_{\ell}$ we remove all even ranges. Given $e_1$ and $e_2$, we find the first level $\ell$ in which $e_1$ and $e_2$ are in different ranges; then w.l.o.g. $e_1$ is in an even range and $e_2$ is in an adjacent odd range, and let $z$ be the point that separates these two ranges. Consider the following cases for the intersection of $P_{G\setminus\{e_1,e_2,e_3\}}(s,t)$ and the edges between $e_1$ and $e_2$ on $P(s,t)$:
-
If $P_{G\setminus\{e_1,e_2,e_3\}}(s,t)$ does not travel through edges between $e_1$ and $e_2$, we can use the DSO on $H$ to give the answer.
-
If it only goes through edges between $e_1$ and $z$, we can use $G^{odd}_{\ell}$, which does not contain edges between $z$ and $e_2$. So the distance between $x_{e_1}$ and $y_{e_2}$ in $G^{odd}_{\ell}$ avoiding $e_3$ can give the answer. (Note that the edges between $s$ and $e_1$ remaining in $G^{odd}_{\ell}$ will not affect the answer; similarly for the edges between $e_2$ and $t$.)
-
If it only goes through edges between $z$ and $e_2$, symmetrically we can use $G^{even}_{\ell}$.
-
The only case left is that it travels through both sides of $z$. In this case, we can capture the replacement path using the midpoint $z$. Namely, we compute its length by splitting at $z$: $d_{G^{odd}_{\ell}\setminus\{e_3\}}(x_{e_1},z)+d_{G^{even}_{\ell}\setminus\{e_3\}}(z,y_{e_2})-2M$.
Using the offline dynamic DSOs on $G^{odd}_{\ell}$ and $G^{even}_{\ell}$, we can answer these two subpaths separately.
So we need to build offline dynamic DSOs on $G^{odd}_{\ell}$ and $G^{even}_{\ell}$ at each level $\ell$. Since there are $O(\log n)$ levels, the total time is still $\tilde{O}(n^3)$.
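The binary partition used here can be sketched as follows: at level ℓ the path edges are split into 2^ℓ contiguous ranges, and we form one edge set with all odd-indexed ranges removed and one with all even-indexed ranges removed. The chunking convention is our illustrative choice.

```python
def level_edge_sets(path_edges, level):
    """Split the edges of P(s,t), given in order, into 2**level contiguous ranges.
    Return (keep_odd_removed, keep_even_removed): the path edges surviving when
    all odd-indexed (resp. even-indexed) ranges are removed."""
    k = 2 ** level
    size = -(-len(path_edges) // k)  # ceil division
    ranges = [path_edges[i * size:(i + 1) * size] for i in range(k)]
    keep_odd_removed = [e for i, r in enumerate(ranges) if i % 2 == 0 for e in r]
    keep_even_removed = [e for i, r in enumerate(ranges) if i % 2 == 1 for e in r]
    return keep_odd_removed, keep_even_removed
```

Every path edge survives in exactly one of the two sets at each level, and two edges in adjacent ranges are separated at the first level where their ranges differ.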
3.3.3 All of $e_1,e_2,e_3$ are on $P(s,t)$
We assume the order of them on $P(s,t)$ is just $e_1,e_2,e_3$; then they will cut $P(s,t)$ into four ranges $I_1,I_2,I_3,I_4$ in order. The difficulty will come from the fact that the replacement path can go through both of $I_2$ and $I_3$, in any order. To solve this case, we first build a structure in $\tilde{O}(n^3)$ time which can answer the following queries in $\tilde{O}(1)$ time:
-
Given two vertex-disjoint ranges of consecutive edges on the original shortest path ( can be single vertices), answer the shortest path that starts from the leftmost (rightmost) vertex of , diverges from , converges in , and finally ends at the leftmost (rightmost) vertex of .
-
Given an edge on and two vertex-disjoint ranges of consecutive edges on after , answer the shortest path that starts from , diverges before , converges in , diverges from , then converges in , and finally ends at the leftmost (rightmost) point of .
In the structure we store the paths for all ranges in the binary range tree on ; then, given any in the query, they can be split into ranges of the binary range tree, so the queries for can be answered in time.
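The splitting of a query range into canonical ranges of the binary range tree is the standard segment-tree decomposition; a minimal sketch (names hypothetical):

```python
def canonical_ranges(lo, hi, node_lo, node_hi):
    """Split the query range [lo, hi] into canonical ranges of a binary
    range tree built over positions [node_lo, node_hi].  At most
    O(log n) canonical ranges are returned, each an (start, end) pair."""
    if hi < node_lo or node_hi < lo:      # no overlap with this node
        return []
    if lo <= node_lo and node_hi <= hi:   # node fully inside the query
        return [(node_lo, node_hi)]
    mid = (node_lo + node_hi) // 2        # recurse into both children
    return (canonical_ranges(lo, hi, node_lo, mid)
            + canonical_ranges(lo, hi, mid + 1, node_hi))
```

Because at most two nodes are split open per level of the tree, the decomposition has logarithmic size, which is what lets the stored per-range paths answer a query for arbitrary ranges in polylogarithmic time.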
Similar to Section 3.1.2, by this structure, when given and a vertex on , it takes time to find the shortest path that goes through while avoiding . So it will take time to find the shortest path that goes through both and while avoiding , but we need the time bound to be on average. For every pair of and , the main ideas are as follows.
-
First find the middle edge on the range between and , and use time to find the shortest path .
-
For other edges between and , if is not on , then either is equal to or it goes through , so this case can be solved in time.
-
The intersection of and between and consists of at most two ranges; we then find the middle edge of the larger range, and find the shortest path avoiding and going through the two ranges between them. This can also be done in time.
-
As before, for not on the new path, we can solve it by querying the paths that go through or . So next we only need to consider the edges in the intersection of all paths found so far, and we can argue that the size of its intersection with the range between and shrinks by a constant factor every two iterations, so only iterations are needed. Thus the time for every pair of and is only .
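To make the iteration bound explicit, a short calculation (with s_k denoting the size of the remaining candidate range after the k-th iteration and c > 1 the shrinking constant, both our own notation):

```latex
% Constant-factor shrinking every two iterations, for some constant c > 1:
s_{k+2} \;\le\; \frac{s_k}{c}
\quad\Longrightarrow\quad
s_{2\lceil \log_c n \rceil} \;\le\; \frac{n}{c^{\lceil \log_c n \rceil}} \;\le\; 1 .
```

Hence the number of iterations is O(log n), as claimed.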
4 Hardness of 2FRP
In this section we show that the 2FRP problem in undirected graphs is hard, assuming the APSP hypothesis. We note that in directed graphs, 1FRP is as hard as APSP [31]. However, in undirected graphs, 1FRP can be solved in almost-optimal time [26].
Theorem 19.
If 2FRP in undirected weighted graphs can be solved in time for any , then in undirected weighted graphs, the all-pairs shortest path problem can also be solved in time.
We remark that by [31], the hardness of the all-pairs shortest path problem in undirected weighted graphs is equivalent to the hardness of APSP. Therefore, any algorithm in time for 2FRP would refute the APSP hypothesis, and we immediately get the following corollary:
Corollary 20.
Assuming the APSP hypothesis that APSP cannot be solved in time for any , the 2FRP problem in undirected weighted graphs cannot be solved in time for any .
Suppose toward a contradiction that we can solve 2FRP in time. Consider any undirected weighted all-pairs shortest path instance with nodes . We will construct an undirected weighted 2FRP instance by adding vertices and edges to , such that solving it solves all-pairs shortest paths in .
Let be a number that is larger than the sum of all weights in . Let be a path with nodes and weight-0 edges for all . Similarly, let be a path with nodes and weight-0 edges for all . Moreover, we construct two matchings: with weight for all , and with weight for all .
Let . By inspection, the shortest path from to in is precisely the path that goes from to in , from to to , and then from to in . Consider any pair of failed edges, one on and the other on . We will show the following relationship between 2FRP in and all-pairs shortest paths in :
Lemma 21.
For any , let be a set of two edge failures with one in and one in . Then regarding the failure set :
The proof is deferred to the full version of this paper [9]. With this lemma, we can prove Theorem 19. Consider any undirected weighted all-pairs shortest path instance , and construct an undirected weighted graph as above. Now we check all failure sets for all pairs , . By Lemma 21, we can obtain from , which means that by solving 2FRP in we can solve all-pairs shortest paths in .
Since we know the number of nodes in , if we have a truly subcubic algorithm solving 2FRP in time for , then using this algorithm we can solve all-pairs shortest paths in time for . Therefore, Theorem 19 holds.
5 Almost Optimal Single-Source Replacement Path under Multiple Edge Failures
For a graph and a vertex , the 2-fault single-source replacement path problem is: for every tuple of two edges and a vertex , answer the distance between and in after removing edges and . We show how to solve it in almost optimal time for undirected weighted graphs via the incremental DSO.
Let be the shortest path tree of in with edges . Our algorithm goes as follows: we maintain an offline dynamic DSO on . For each , we remove in and maintain the dynamic DSO. For every vertex in the subforest of in and every edge on the replacement path , we save the distance in by querying the dynamic DSO. At the end of each step, we add back into .
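A naive, self-contained sketch of this loop follows; the dynamic-DSO updates and queries are simulated by recomputing Dijkstra from scratch, so it illustrates only the enumeration order of the algorithm, not the claimed running time (all names hypothetical):

```python
import heapq

def dijkstra(adj, s, banned=frozenset()):
    """Dijkstra returning (dist, parent); edges in `banned` are skipped."""
    dist, parent = {s: 0}, {s: None}
    pq = [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if frozenset((u, v)) in banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def path_edges(parent, t):
    """Edges of the tree path from the source down to t."""
    out = []
    while parent.get(t) is not None:
        out.append(frozenset((t, parent[t])))
        t = parent[t]
    return out

def two_fault_ssrp(adj, s):
    """Naive sketch of the 2-fault SSRP loop: for each tree edge e1,
    work in G - e1; for each vertex t cut off by e1 and each edge e2 on
    its replacement path, record dist(s, t) avoiding {e1, e2}.  The
    dynamic-DSO queries are simulated by full recomputation."""
    _, parent = dijkstra(adj, s)
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    answers = {}
    for e1 in tree:
        dist1, par1 = dijkstra(adj, s, banned={e1})
        for t in adj:
            if e1 not in path_edges(parent, t):
                continue            # t is unaffected by e1 alone
            for e2 in path_edges(par1, t):
                d, _ = dijkstra(adj, s, banned={e1, e2})
                answers[(e1, e2, t)] = d.get(t, float("inf"))
    return answers
```

The two nested enumerations (tree edges, then replacement-path edges per affected vertex) are exactly the pairs the algorithm must answer; failure pairs outside the tree leave distances unchanged.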
Correctness.
For each tuple , if none of is in , we know the shortest path is the same after removing . W.l.o.g., suppose and is in the subtree of in . If is not in , we know that , and this value can be queried in the static DSO. If , we know the value equals from the dynamic DSO in graph , which is obtained in our algorithm.
Time analysis.
The incremental DSO can be viewed as an offline fully dynamic DSO by Theorem 10. Therefore, the update time is for each edge . The number of edges on a shortest path tree is , so the total update time is . The number of distances we query is , each with query time , so the total query time is also . We also retrieve all 1-fault single-source replacement paths from the static DSO, and this time is also bounded by .
Extending to -fault SSRP for .
As in the FRP reduction of [30], when we have an -fault SSRP algorithm in time, we can construct an -fault SSRP algorithm in time by computing the -fault SSRP in graph for every in the shortest path tree from . Thus, the undirected 2-fault SSRP algorithm in time can be extended to an undirected -fault SSRP algorithm in time for all , which is almost optimal.
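The recursion behind this extension can be sketched as follows, again with Dijkstra recomputation standing in for the efficient subroutines (names hypothetical):

```python
import heapq

def sssp(adj, s, banned):
    """Dijkstra distances and parents with edges in `banned` removed."""
    dist, parent, pq = {s: 0}, {s: None}, [(0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist[u]:
            continue
        for v, w in adj[u]:
            if frozenset((u, v)) in banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], parent[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, parent

def k_fault_ssrp(adj, s, k, banned=frozenset()):
    """Sketch of the reduction: returns a map from failure sets
    (frozensets of at most k edges) to distance tables from s.  A
    k-fault instance recurses on G - e with k - 1 allowed failures, for
    each edge e of the current shortest path tree; failure sets that
    miss the tree leave distances unchanged, so enumerating tree edges
    suffices."""
    dist, parent = sssp(adj, s, banned)
    answers = {frozenset(): dist}       # no further failure
    if k == 0:
        return answers
    tree = {frozenset((v, p)) for v, p in parent.items() if p is not None}
    for e in tree:
        sub = k_fault_ssrp(adj, s, k - 1, banned | {e})
        for fails, table in sub.items():
            answers[fails | {e}] = table
    return answers
```

Each level of the recursion multiplies the work by the size of a shortest path tree, matching the shape of the reduction described above.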
References
- [1] Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. More asymmetry yields faster matrix multiplication. In Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2005–2039, 2025. doi:10.1137/1.9781611978322.63.
- [2] Noga Alon, Shiri Chechik, and Sarel Cohen. Deterministic combinatorial replacement paths and distance sensitivity oracles. In International Colloquium on Automata, Languages and Programming, 2019. URL: https://api.semanticscholar.org/CorpusID:159041702.
- [3] Aaron Bernstein and David Karger. Improved distance sensitivity oracles via random sampling. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’08, pages 34–43, USA, 2008. Society for Industrial and Applied Mathematics. URL: https://dl.acm.org/doi/10.5555/1347082.1347087.
- [4] Aaron Bernstein and David Karger. A nearly optimal oracle for avoiding failed vertices and edges. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC ’09, pages 101–110, New York, NY, USA, 2009. Association for Computing Machinery. doi:10.1145/1536414.1536431.
- [5] Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams. Preserving Distances in Very Faulty Graphs. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), volume 80 of Leibniz International Proceedings in Informatics (LIPIcs), pages 73:1–73:14, Dagstuhl, Germany, 2017. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2017.73.
- [6] Anat Bremler-Barr, Yehuda Afek, Haim Kaplan, Edith Cohen, and Michael Merritt. Restoration by path concatenation: fast recovery of MPLS paths. In Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing, PODC ’01, pages 43–52, New York, NY, USA, 2001. Association for Computing Machinery. doi:10.1145/383962.383980.
- [7] Shiri Chechik and Sarel Cohen. Near optimal algorithms for the single source replacement paths problem. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2090–2109. SIAM, 2019. doi:10.1137/1.9781611975482.126.
- [8] Shiri Chechik and Ofer Magen. Near Optimal Algorithm for the Directed Single Source Replacement Paths Problem. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020), volume 168 of Leibniz International Proceedings in Informatics (LIPIcs), pages 81:1–81:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2020.81.
- [9] Shucheng Chi, Ran Duan, Benyu Wang, and Tianle Xie. Undirected 3-fault replacement path in nearly cubic time, 2024. doi:10.48550/arXiv.2411.18312.
- [10] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. J. ACM, 51(6):968–992, November 2004. doi:10.1145/1039488.1039492.
- [11] Camil Demetrescu, Mikkel Thorup, Rezaul Alam Chowdhury, and Vijaya Ramachandran. Oracles for distances avoiding a failed node or link. SIAM J. Comput., 37:1299–1318, 2008. doi:10.1137/S0097539705429847.
- [12] Dipan Dey and Manoj Gupta. Near Optimal Algorithm for Fault Tolerant Distance Oracle and Single Source Replacement Path Problem. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics (LIPIcs), pages 42:1–42:18, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2022.42.
- [13] Dipan Dey and Manoj Gupta. Nearly optimal fault tolerant distance oracle. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 944–955, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649697.
- [14] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. doi:10.1007/BF01386390.
- [15] R. Duan, J. Mao, X. Shu, and L. Yin. A randomized algorithm for single-source shortest path on undirected real-weighted graphs. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 484–492, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00035.
- [16] R. Duan, H. Wu, and R. Zhou. Faster matrix multiplication via asymmetric hashing. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 2129–2138, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00130.
- [17] Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu, and Longhui Yin. Breaking the sorting barrier for directed single-source shortest paths, 2025. arXiv:2504.17033.
- [18] Ran Duan and Seth Pettie. Dual-failure distance and connectivity oracles. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’09, pages 506–515, USA, 2009. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611973068.56.
- [19] Ran Duan and Hanlin Ren. Maintaining exact distances under multiple edge failures. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, pages 1093–1101, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3519935.3520002.
- [20] Ran Duan and Tianyi Zhang. Improved distance sensitivity oracles via tree partitioning. In Faith Ellen, Antonina Kolokolova, and Jörg-Rüdiger Sack, editors, Algorithms and Data Structures, pages 349–360, Cham, 2017. Springer International Publishing. doi:10.1007/978-3-319-62127-2_30.
- [21] Zvi Gotthilf and Moshe Lewenstein. Improved algorithms for the k simple shortest paths and the replacement paths problems. Information Processing Letters, 109(7):352–355, 2009. doi:10.1016/j.ipl.2008.12.015.
- [22] Fabrizio Grandoni and Virginia Vassilevska Williams. Improved distance sensitivity oracles via fast single-source replacement paths. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 748–757, 2012. doi:10.1109/FOCS.2012.17.
- [23] Fabrizio Grandoni and Virginia Vassilevska Williams. Faster replacement paths and distance sensitivity oracles. ACM Trans. Algorithms, 16(1), December 2019. doi:10.1145/3365835.
- [24] Yong Gu and Hanlin Ren. Constructing a Distance Sensitivity Oracle in Time. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of Leibniz International Proceedings in Informatics (LIPIcs), pages 76:1–76:20, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2021.76.
- [25] Yuzhou Gu, Adam Polak, Virginia Vassilevska Williams, and Yinzhan Xu. Faster Monotone Min-Plus Product, Range Mode, and Single Source Replacement Paths. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of Leibniz International Proceedings in Informatics (LIPIcs), pages 75:1–75:20, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2021.75.
- [26] Enrico Nardelli, Guido Proietti, and Peter Widmayer. A faster computation of the most vital edge of a shortest path. Information Processing Letters, 79(2):81–85, 2001. doi:10.1016/S0020-0190(00)00175-7.
- [27] Binghui Peng and Aviad Rubinstein. Fully-dynamic-to-incremental reductions with known deletion order (eg sliding window). In Symposium on Simplicity in Algorithms (SOSA), pages 261–271. SIAM, 2023. doi:10.1137/1.9781611977585.ch24.
- [28] Hanlin Ren. Improved distance sensitivity oracles with subcubic preprocessing time. Journal of Computer and System Sciences, 123:159–170, 2022. doi:10.1016/j.jcss.2021.08.005.
- [29] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’14, pages 664–673, New York, NY, USA, 2014. Association for Computing Machinery. doi:10.1145/2591796.2591811.
- [30] V. Williams, E. Woldeghebriel, and Y. Xu. Algorithms and lower bounds for replacement paths under multiple edge failure. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 907–918, Los Alamitos, CA, USA, November 2022. IEEE Computer Society. doi:10.1109/FOCS54457.2022.00090.
- [31] Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems. J. ACM, 65(5), August 2018. doi:10.1145/3186893.
- [32] Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. New bounds for matrix multiplication: from alpha to omega. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3792–3835. SIAM, 2024. doi:10.1137/1.9781611977912.134.