
Undirected 3-Fault Replacement Path in Nearly Cubic Time

Shucheng Chi, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China; Ran Duan, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China; Benyu Wang, University of Michigan, Ann Arbor, MI, USA; Tianle Xie, Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing, China
Abstract

Given a graph G=(V,E) (n=|V|, m=|E|) and two vertices s,t ∈ V, the f-fault replacement path (fFRP) problem computes, for every set F of at most f edges, the distance from s to t when the edges in F fail. A recent result shows that 2FRP in directed graphs can be solved in Õ(n^3) time [Vassilevska Williams, Woldeghebriel, Xu 2022]. In this paper, we show a 3FRP algorithm running in deterministic Õ(n^3) time for undirected weighted graphs, which almost matches the size of the output. This implies that fFRP in undirected graphs can be solved in nearly optimal Õ(n^f) time for all f ≥ 3.

To construct our 3FRP algorithm, we introduce an incremental distance sensitivity oracle (DSO) for undirected graphs with Õ(n^2) worst-case update time, while the preprocessing time, space, and query time remain Õ(n^3), Õ(n^2) and Õ(1), respectively, matching the static DSO [Bernstein and Karger 2009]. Here a DSO preprocesses a graph so that the distance between any pair of vertices under any single failed edge can be answered efficiently. From the recent result in [Peng and Rubinstein 2023], we can obtain an offline dynamic DSO from the incremental worst-case DSO, which makes the construction of our 3FRP algorithm more convenient. With the offline dynamic DSO, we can also construct a 2-fault single-source replacement path (2-fault SSRP) algorithm in Õ(n^3) time; that is, from a given vertex s, we want to find the distance to any vertex t when any pair of edges fail. Thus the Õ(n^3) time complexity for 2-fault SSRP is also nearly optimal.

We now know that in undirected graphs 1FRP can be solved in Õ(m) time [Nardelli, Proietti, Widmayer 2001], and that 2FRP and 3FRP in undirected graphs can be solved in Õ(n^3) time. In this paper, we also show that a truly subcubic algorithm for 2FRP in undirected weighted graphs does not exist under the APSP hypothesis.

Keywords and phrases:
Graph Algorithm, Shortest Path, Replacement Path
Category:
Track A: Algorithms, Complexity and Games
Copyright and License:
© Shucheng Chi, Ran Duan, Benyu Wang, and Tianle Xie; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Shortest paths; Theory of computation → Graph algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2411.18312 [9]
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

The shortest path problem is one of the most fundamental problems in computer science, and the single-source shortest path problem is known to have an almost-linear Õ(m)-time algorithm [14, 15, 17]. In a failure-prone graph G=(V,E) (n=|V|, m=|E|), some edges on a shortest path may fail, so we want to find a replacement path. In the classical replacement path (RP) problem, given a source s and a destination t, we want to find the s-t shortest path when an edge e fails, for every edge e ∈ E. Of course, if e is not on the original s-t shortest path in G, the replacement path is still the original shortest path, so in fact we only need to find an s-t replacement path for every edge e on the s-t shortest path in G, which accounts for O(n) replacement paths.

We can generalize the RP problem to any number f of failed edges, that is, finding the s-t shortest path for every failed edge set F with |F| ≤ f. This problem is called the f-fault replacement path (fFRP) problem [30]; thus 1FRP is just the original RP problem. As before, given the first f−1 failed edges and the corresponding s-t replacement shortest path, we still only need to consider the cases where the f-th failed edge lies on it. Because there are O(n) edges on each shortest path, the total number of replacement paths we need to find for the fFRP problem is O(n^f). The currently known results on fFRP algorithms for real-weighted directed and undirected graphs are summarized as follows.

  • The RP problem in directed graphs can be solved in O(mn + n^2 log log n) = O(n^3) time [21].

  • Under the APSP hypothesis (APSP abbreviates the all-pairs shortest path problem, and the APSP hypothesis [31] states that APSP cannot be solved in truly subcubic O(n^{3−ϵ}) time for any constant ϵ>0), RP in directed graphs does not have an O(n^{3−ϵ})-time algorithm for any constant ϵ>0 [31]. So the O(n^3)-time algorithm is (conditionally) nearly optimal.

  • The RP problem in undirected graphs can be solved in Õ(m) time (here Õ(·) hides polylogarithmic factors) [26], and is thus also almost optimal.

  • Recently, Vassilevska Williams, Woldeghebriel and Xu [30] gave a 2FRP algorithm with Õ(n^3) running time for directed graphs, almost matching the 1FRP case. (Although not formulated there, it also works for undirected graphs with slight modifications.)

Thus, the 1FRP and 2FRP problems both have Õ(n^3)-time algorithms, so it is natural to ask whether 3FRP still has an Õ(n^3)-time algorithm. In this paper, we give such a 3FRP algorithm for undirected graphs. (Note that all algorithms in this paper are deterministic.)

Theorem 1.

The 3FRP problem in undirected real-weighted graphs can be solved in Õ(n^3) time.

Denote the shortest path between s and t in graph G by π_G(s,t). Then in Õ(n^3) time, for every edge d1 ∈ π_G(s,t), we can find all edges of π_{G∖{d1}}(s,t); then for every edge d2 ∈ π_{G∖{d1}}(s,t), we can find all edges of π_{G∖{d1,d2}}(s,t); then for every edge d3 ∈ π_{G∖{d1,d2}}(s,t), we can find |π_{G∖{d1,d2,d3}}(s,t)|, so the algorithm outputs O(n^3) distances in total. One can see the difficulty of this problem, since each answer takes only Õ(1) time on average.

Extension to fFRP for f ≥ 3.

For any f ≥ 3, by Theorem 1 the fFRP problem can be solved in Õ(n^f) time in undirected graphs (see the reduction in [30]). Since the output of fFRP can have size Θ(n^f), the running time is almost optimal. As in [30], we use the 1-failure distance sensitivity oracle as an important subroutine, since it only takes Õ(1) query time. Here an f-failure distance sensitivity oracle (DSO) is a data structure that supports distance queries between any pair u,v ∈ V given any f failed edges. It is well known that a 1-failure DSO takes Õ(n^2) space, O(1) query time, and Õ(mn) construction time [11, 4]. Since current 2-failure DSOs are still hard to construct efficiently [18], we instead construct an incremental 1-failure DSO. (In the following, DSO means 1-failure DSO for convenience.)

Theorem 2.

For a given undirected graph G, there is an incremental DSO that can be constructed in Õ(n^3) time, such that when we insert an edge e into G, the DSO can be maintained in worst-case Õ(n^2) time. The DSO also has Õ(n^2) space and Õ(1) query time.

Recently, Peng and Rubinstein [27] gave a reduction from offline fully dynamic structures to worst-case incremental structures, where “offline” means that all updates are known in advance. So we can obtain an efficient offline dynamic DSO. (We also give a simple proof of the reduction we need by a method different from [27]; see Theorem 11.)

Theorem 3 ([27]).

Let T ≥ 1 be the total number of updates. If there exists an incremental dynamic algorithm with query time Γ_q and worst-case update time Γ_u, then there is an offline dynamic algorithm with query time Γ_q and worst-case update time O(Γ_u log^2 T).

Corollary 4.

Given an undirected graph G and a sequence of O(n) edge insertions and deletions, there is an offline dynamic DSO which can be constructed in Õ(n^3) time, and whose total update time is also Õ(n^3). It can answer the distance between any pair of vertices when any single edge fails in Õ(1) time.

We know that 1FRP is as hard as APSP in directed graphs [31], but in undirected graphs 1FRP can be solved in Õ(m) time [26], and 2FRP in undirected graphs can be solved in Õ(n^3) time. One may wonder whether 2FRP in undirected graphs has a truly subcubic time algorithm. However, we show that this is not possible under the APSP hypothesis.

Theorem 5.

Assuming the APSP hypothesis, i.e., that APSP cannot be solved in truly subcubic O(n^{3−ϵ}) time for any constant ϵ>0, the 2FRP problem in undirected weighted graphs also cannot be solved in truly subcubic time.

We can also apply the offline dynamic DSO to obtain a 2-fault single-source replacement path (2-fault SSRP) algorithm for undirected graphs: given a source s, for every other vertex t ∈ V, we find π_{G∖{d1,d2}}(s,t) for all pairs of edges d1,d2. The reduction is very simple: we remove one edge d1 of the shortest path tree from s at a time and then put it back. Using the offline dynamic DSO, we can answer the distance between s and t avoiding d2 in the current graph without d1.

Theorem 6.

The 2-fault single-source replacement path problem can be solved in Õ(n^3) time for undirected graphs.

Note that Theorem 6 can be extended to an undirected f-fault SSRP algorithm running in Õ(n^{f+1}) time for all f ≥ 2, which is almost optimal. Previously, there were 1-fault single-source replacement path algorithms with Õ(m√n + n^2) running time for unweighted undirected graphs [7, 12] and unweighted directed graphs [8].

Note that the 2FRP, 3FRP, and 2-fault SSRP algorithms in this paper obtain every distance from DSOs (which are essentially similar to the one in [4]) or from the APSP table. By the properties of the DSOs and the construction of our algorithms, we can in fact obtain an oracle of size Õ(n^3) from which a shortest path under failures can be retrieved in O(1) time per edge.

Other related work.

The current fastest running time for the directed all-pairs shortest path (APSP) problem is n^3 / 2^{Ω(√(log n))} [29]. Finding a truly subcubic time algorithm for APSP with arbitrary weights is considered a major open problem in algorithms research, and its hardness is one of the main assumptions used in proving conditional lower bounds.

For the replacement path (RP) and single-source replacement path (SSRP) problems, there are also many subcubic-time results for graphs with small integer edge weights. In unweighted directed graphs, the RP problem can be solved in Õ(m√n) time [2]. If the edge weights are integers in [−M,M], Vassilevska Williams gave an Õ(Mn^ω)-time RP algorithm [29], where ω < 2.371339 is the exponent of matrix multiplication [1, 32, 16]. Moreover, [30] gave a 2FRP algorithm for directed graphs with small integer edge weights running in Õ(M^{2/3} n^{2.9153}) time. For the SSRP problem, there is also an Õ(Mn^ω)-time algorithm for graphs with integer weights in [1,M], and algorithms with running times Õ(M^{0.7519} n^{2.5286}) and Õ(M^{0.8043} n^{2.4957}) for graphs with integer weights in [−M,M] [23, 22, 25].

After the breakthrough result of an efficient 1-failure DSO by Demetrescu, Thorup, Chowdhury and Ramachandran [11], there have been many efforts to improve the preprocessing time [3, 4, 22, 23, 7, 28, 24] and the space [20], and to extend it to more failures. However, keeping Õ(1) query time and Õ(n^2) space is difficult when generalizing to any constant number f of failures. The 2-failure DSO with Õ(n^2) space and Õ(1) query time [18] is much more complicated than the 1-failure case, so it seems very hard to generalize it to an f-failure DSO in arbitrary graphs. For undirected graphs, Dey and Gupta recently gave an f-failure DSO with O(f^4 n^2 log^2(nM)) space and O((f log(nM))^{O(f^2)}) query time [13], improving the result in [19], but its construction time is still large and thus still not suitable for efficient fFRP algorithms.

Organization.

In Section 2 we introduce basic notations, assumptions and concepts. In Section 3 we give a brief description of the 2FRP algorithm for undirected graphs, which is slightly simpler than the one for directed graphs in [30], and then give an overview of the ideas behind our incremental DSO and 3FRP algorithm. We discuss the hardness of 2FRP and single-source replacement paths under multiple edge failures in Sections 4 and 5, respectively. Due to the page limit, many details are omitted and can be found in the full version of this paper [9].

2 Preliminaries

2.1 Notations and Assumptions

We consider an undirected weighted graph G=(V,E) with n=|V|, m=|E|. For any two vertices u,v ∈ V, we use π_G(u,v) to denote the shortest path from u to v in G. If the graph is clear from the context, we write uv as shorthand for π_G(u,v). For a path P, we use |P| and ‖P‖ to denote the length of P and the number of edges of P, respectively.

If it is clear from the context that a vertex z is on the shortest path π_G(u,v) from u to v in a graph G, we use z_{+i} to denote the vertex that is i vertices after z in the direction from u to v. Similarly, we define z_{−i} to denote the vertex that is i vertices before z. When the vertices u and v and the shortest path uv are clear from the context, for two vertices x and y on uv, we write x > y to say that x is farther than y from u, i.e., |ux| > |uy|. Similarly, x < y means |ux| < |uy|, x ≥ y means |ux| ≥ |uy|, and x ≤ y means |ux| ≤ |uy|.

If x,y ∈ uv, then π_G(x,y) is the subpath from x to y on uv, and if x ≤ y, we use the interval [x,y] to denote this subpath π_G(x,y) of uv in graph G. Thus the interval between the i-th vertex and the j-th vertex after u on uv is denoted by R = [u_{+i}, u_{+j}].

Let P1,P2 be two paths. If they share an endpoint, we use P1 ∘ P2 to denote their concatenation. If they do not share any edge, we say P1 avoids P2. Sometimes, for brevity, we use min{P1,P2} to denote the shorter of the two paths, i.e., if |P1| ≤ |P2| then min{P1,P2} = P1, and otherwise min{P1,P2} = P2.

In this paper, for a path P in G, G∖P denotes the graph obtained by removing all edges of P from G. (We do not need to remove vertices in this paper.)

As in many distance oracle papers, we use the unique shortest path assumption, which states that the shortest path between any two vertices in any subgraph of G is unique. For the incremental DSO, we make the unique shortest path assumption at all times. So, for example, we can check whether an edge e=(x,y) is on st by checking whether |sx ∘ (x,y) ∘ yt| = |st| or |sy ∘ (y,x) ∘ xt| = |st|. To ensure unique shortest paths, we can break ties by adding small perturbations to the edge weights (see [10]). One way to ensure unique shortest paths deterministically is to compare two paths of the same length first by their number of edges, and then by their vertex sequences from the end node back to the start node.
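As a concrete illustration, the following small Python sketch implements such a deterministic tie-breaking rule; the representation of paths as vertex lists and the edge-weight dictionary are our own illustrative choices, not notation from the paper.

```python
def path_key(path, weight):
    """Tie-breaking key for a path given as a vertex list.

    Paths are compared first by total length, then by number of edges,
    then lexicographically by the vertex sequence read from the end node
    back to the start node, so equal-length paths are ordered consistently.
    """
    length = sum(weight[(path[i], path[i + 1])] for i in range(len(path) - 1))
    return (length, len(path) - 1, tuple(reversed(path)))

# Example: two paths of equal length from 0 to 3 are ordered deterministically.
w = {(0, 1): 1, (1, 3): 2, (0, 2): 2, (2, 3): 1}
w.update({(b, a): c for (a, b), c in list(w.items())})  # undirected weights
print(min([0, 1, 3], [0, 2, 3], key=lambda p: path_key(p, w)))
```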

2.2 Replacement Paths

We first consider the divergence and convergence points of two paths: (As before, although our graph is undirected, we may assume a path P starts from a vertex u and ends at v.)

  • For two paths P,Q both starting from u, we say they diverge at vertex a ∈ P ∩ Q if Q first leaves P at a via an edge not in P, that is, the subpaths of P and Q before a coincide; we denote this divergence point a by Δ(P,Q). If P,Q are both shortest paths, their divergence point is unique.

  • Symmetrically, for P,Q both ending at v, we say they converge at vertex b ∈ P ∩ Q if Q enters P at b, and the subpaths of P and Q after b coincide. The convergence point b is denoted by ∇(P,Q).

  • For two shortest paths P,Q which do not share endpoints, if they intersect, the intersection part is also a shortest path, that is, they must first converge and then diverge.

The following theorem is well known for replacement paths.

Theorem 7 (Equivalent to Claim 1 in [11]).

Let l be a 1-fault replacement path in G, i.e. l = π_{G∖d}(u,v) for an edge d. Let R1,R2 be two subpaths of uv with no intersection, and let R3 be a subpath between them. If l avoids both R1 and R2, then it avoids R3.

From this theorem, we can see that for any f-fault replacement path π_{G∖F}(u,v) where only one failed edge of F is on the original shortest path uv, the path π_{G∖F}(u,v) diverges from and converges to uv only once. If two failed edges d1,d2 ∈ F are on uv, they cut uv into three intervals D1,D2,D3. Because D1,D2,D3 are all shortest paths in G∖F, π_{G∖F}(u,v) may:

  • diverge at D1 and converge at D3.

  • diverge at D1, converge at some point in D2, diverge from D2, then converge at D3.

In undirected weighted graphs, a notable theorem on the structure of replacement paths says that a 1-fault replacement path π_{G∖f}(u,v) is a concatenation of a shortest path ux, an edge e=(x,y), and a shortest path yv, for some vertices x and y. (Any part may degenerate to a single vertex.) More generally:

Theorem 8 ([6]).

For any undirected weighted graph G and any set of failed edges F with |F| = f, an f-fault replacement path can be partitioned into an interleaved concatenation of at most f+1 subpaths and f edges, where each subpath is a shortest path in the graph G.

From this theorem, we have:

Lemma 9 ([5]).

For a given pair of vertices u,v, the union of all 1-fault u-v replacement paths has O(n) edges in total in undirected graphs.

Proof.

From Theorem 8, a 1-fault u-v replacement path in an undirected graph is composed of a shortest path ux, an edge (x,y), and a shortest path yv. The total number of middle edges (x,y) is bounded by O(n), since there are O(n) 1-fault replacement paths; the other edges lie on the shortest path trees rooted at u and v, so there are also at most O(n) of them.

2.3 DSOs

This paper uses the 1-fault DSO given by Bernstein and Karger [4] to initialize the incremental DSO. While [4] gives a 1-fault DSO for directed weighted graphs, it also works for undirected weighted graphs via a simple reduction. Given an undirected weighted graph G, we introduce a directed graph G′ by replacing each edge of G with a pair of directed edges of the same weight. Suppose that in the graph G the edge (x,y) on the shortest path st is removed (with x closer to s); then we remove the directed edge (x,y) on the shortest path from s to t in G′. We can see that the 1-fault replacement path π_{G′∖(x,y)}(s,t) cannot go through the edge (y,x): no edge on π_{G′}(s,x) is removed, so π_{G′∖(x,y)}(s,x) = π_{G′}(s,x) does not go through (y,x). Therefore, π_{G∖(x,y)}(s,t) = π_{G′∖(x,y)}(s,t).
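A minimal sketch of this reduction, assuming edges are stored in a dictionary keyed by ordered vertex pairs (an illustrative representation, not the paper's):

```python
def directed_copy(undirected_edges):
    """Replace each undirected weighted edge {u, v} by the two directed edges
    (u, v) and (v, u) with the same weight (the reduction of Section 2.3)."""
    directed = {}
    for (u, v), w in undirected_edges.items():
        directed[(u, v)] = w
        directed[(v, u)] = w
    return directed

# Removing the undirected edge (x, y) on the s-t shortest path (x closer to s)
# corresponds to removing only the directed edge (x, y) in the directed copy.
G = {(0, 1): 2.0, (1, 2): 1.0, (0, 2): 5.0}
D = directed_copy(G)
del D[(1, 2)]          # fail the edge (x, y) = (1, 2); the edge (2, 1) is kept
```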

There is a result in [27] to view worst-case incremental structures as offline fully dynamic structures.

Theorem 10 ([27]).

Let T ≥ 1 be the total number of updates. Suppose there exists an incremental dynamic algorithm with query time Γ_q and worst-case update time Γ_u; then there is a dynamic algorithm for the deletions-look-ahead setting with query time Γ_q and worst-case update time O(Γ_u log^2 T).

This theorem allows us to use the incremental DSO of Section 3.2 as an offline fully dynamic DSO; here we also prove the reduction we need in a self-contained way:

Theorem 11.

Starting from a graph on O(n) vertices, if we have an incremental DSO with Õ(n^2) space, Õ(n^3) preprocessing time, worst-case update time Õ(n^2) and query time Õ(1), then it can be transformed into an offline fully dynamic DSO such that, if the total number of updates is T = O(n), the total preprocessing and update time is Õ(n^3) and the query time is still Õ(1).

Proof.

In the offline fully dynamic DSO, let the initial graph be G_0 and the graph after the i-th update be G_i. We first list all graphs G_0, G_1, …, G_T. For an interval of integers [i,j], define the graph G_{[i,j]} = G_i ∩ G_{i+1} ∩ ⋯ ∩ G_j. Then for intervals [i,j] ⊆ [i′,j′], we know that G_{[i′,j′]} ⊆ G_{[i,j]} and |G_{[i,j]} ∖ G_{[i′,j′]}| ≤ j′ − i′.

Then we build a binary range tree on [0,T], in which the root contains the graph G_{[0,T]}, the children of the root contain the graphs G_{[0,T/2]} and G_{[T/2,T]}, respectively, and each leaf contains a graph G_i. We can then build the DSOs on this tree from the root down using the incremental structure, since G_{R′} ⊆ G_R whenever R is a child of R′. The total construction time is Õ(n^3), and we use the DSOs at the leaves to answer queries.
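The following schematic Python sketch illustrates this range-tree construction. The incremental-DSO interface (make_dso, clone, insert_edge) is a hypothetical stand-in for the structure of Theorem 2, and the non-overlapping split is a simplification of the overlapping split described above.

```python
def build_range_tree(graphs, make_dso):
    """Schematic sketch of the range-tree construction in the proof of
    Theorem 11.  graphs[i] is the edge set of G_i.  `make_dso` builds a static
    DSO from an edge set; the returned objects are assumed to expose clone()
    and insert_edge(e) (a hypothetical incremental-DSO interface, not a real
    library).  Returns a dict mapping each tree interval (lo, hi) to its DSO."""
    T = len(graphs) - 1
    dsos = {}

    def intersection(lo, hi):
        edges = set(graphs[lo])
        for i in range(lo + 1, hi + 1):
            edges &= set(graphs[i])
        return edges

    def build(lo, hi, parent_dso, parent_edges):
        edges = intersection(lo, hi)          # G_[lo,hi]: edges present in every version
        if parent_dso is None:
            dso = make_dso(edges)             # static preprocessing at the root
        else:
            dso = parent_dso.clone()
            for e in edges - parent_edges:    # only insertions along a root-to-leaf path
                dso.insert_edge(e)
        dsos[(lo, hi)] = dso
        if lo < hi:                           # leaves hold the individual graphs G_i
            mid = (lo + hi) // 2              # non-overlapping split (a simplification
            build(lo, mid, dso, edges)        # of the overlapping split in the proof)
            build(mid + 1, hi, dso, edges)

    build(0, T, None, set())
    return dsos
```

Queries for the graph after the i-th update are answered by the DSO stored at the leaf (i, i).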

3 Technical Overview

In this section, we first give a brief description of the 2FRP algorithm for undirected graphs, which can list all edges of all 2-fault replacement paths in Õ(n^3) time, so that we can find the third failed edge for our 3FRP algorithm. Then we discuss the high-level ideas behind the incremental DSO and the 3FRP algorithm.

3.1 2FRP Algorithm between s and t

All 1-fault replacement paths are easy to find in Õ(mn) time, since we can delete each edge on st and run Dijkstra's algorithm [14] from s. Then, for edges d1 ∈ st and d2 ∈ π_{G∖d1}(s,t), we consider whether d2 is on st or not.
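For concreteness, here is a brute-force sketch of this Õ(mn)-time step (one Dijkstra run per edge of st); it is not the near-optimal algorithm of [26], and the adjacency-list representation is an illustrative assumption.

```python
import heapq

def dijkstra(adj, s, banned=frozenset()):
    """Standard Dijkstra from s; `banned` is a set of undirected edges
    {frozenset({u, v}), ...} treated as removed."""
    dist, prev = {s: 0.0}, {s: None}
    pq = [(0.0, s)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if frozenset((u, v)) in banned:
                continue
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(pq, (d + w, v))
    return dist, prev

def all_one_fault_rps(adj, s, t):
    """Brute-force 1FRP: rerun Dijkstra once per edge of the s-t shortest path."""
    _, prev = dijkstra(adj, s)
    path, v = [], t
    while v is not None:                   # assumes t is reachable from s
        path.append(v)
        v = prev[v]
    path.reverse()
    answers = {}
    for x, y in zip(path, path[1:]):       # each failed edge d1 on st
        d1 = frozenset((x, y))
        dist_d1, _ = dijkstra(adj, s, banned={d1})
        answers[d1] = dist_d1.get(t, float("inf"))
    return answers
```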

3.1.1 Only d1 is on st

By the methods in [30], we create a graph H from Gst by adding new vertices as follows. Let N be a number that is large enough. (Namely, N can be the sum of all edge weights in G.)

  • First let H be a copy of Gst, that is, remove all edges on πG(s,t) from G.

  • For every edge d=(x,y)st and x is before y on st, introduce two new vertices d and d+.

    • For every vertex x on sx, the node d is connected with x by an edge with weight |sx|+N.

    • For every vertex y on yt, the node d+ is connected with y by an edge with weight |yt|+N.

The number of vertices in H is still O(n), so we construct a 1-failure DSO on H with O~(n3) construction time, O~(n2) space and O(1) query time by [4]. We say the edge (d,x) corresponds to the path sx since their length differs by a fixed number N, similarly (y,d+) corresponds to the path yt. (Here N guarantees that any shortest path between new vertices will not travel through other new vertices.) Since d2 is not on st, we can obtain πG{d1,d2}(s,t) by querying the DSO. We have:
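The following sketch shows one possible way to assemble H; the vertex names ('d-', i) and ('d+', i) and the frozenset edge keys are illustrative assumptions.

```python
def build_H(adj, st_path, weight):
    """Sketch of the auxiliary graph H of Section 3.1.1.
    adj: adjacency dict of G (vertex -> list of (neighbor, w));
    st_path: vertices of the s-t shortest path in order;
    weight: dict mapping frozenset({u, v}) to the edge weight.
    Returns an edge dict for H keyed by frozensets of endpoints."""
    st_edges = {frozenset(e) for e in zip(st_path, st_path[1:])}
    N = sum(weight.values()) + 1.0                      # "large enough" constant
    H = {}
    for u in adj:                                       # copy of G \ st
        for v, w in adj[u]:
            if frozenset((u, v)) not in st_edges:
                H[frozenset((u, v))] = w
    pref = {st_path[0]: 0.0}                            # |s -> v| along st
    for a, b in zip(st_path, st_path[1:]):
        pref[b] = pref[a] + weight[frozenset((a, b))]
    total = pref[st_path[-1]]
    for i, (x, y) in enumerate(zip(st_path, st_path[1:])):
        d_minus, d_plus = ("d-", i), ("d+", i)          # two new vertices per edge d=(x,y)
        for xp in st_path[: i + 1]:                     # vertices on s..x
            H[frozenset((d_minus, xp))] = pref[xp] + N
        for yp in st_path[i + 1:]:                      # vertices on y..t
            H[frozenset((d_plus, yp))] = (total - pref[yp]) + N
    return H
```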

Lemma 12.

Given G, d1 ∈ st and d2 ∉ st, in the graph H we defined,

|π_{G∖{d1,d2}}(s,t)| = |π_{H∖d2}(d1^-, d1^+)| − 2N.
Proof.

The proof is easy to see, since the first edge (d1^-, x′) and the last edge (y′, d1^+) on the path π_{H∖d2}(d1^-, d1^+) satisfy:

|(d1^-, x′)| = |sx′| + N,  |(y′, d1^+)| = |y′t| + N.

Also, x′ is before d1 and y′ is after d1, so on π_{H∖d2}(d1^-, d1^+) the first and last edges can be replaced by the original shortest paths sx′ and y′t, respectively; hence it corresponds to a path in G∖{d1,d2}, and |π_{G∖{d1,d2}}(s,t)| ≤ |π_{H∖d2}(d1^-, d1^+)| − 2N. Similarly, the subpaths of π_{G∖{d1,d2}}(s,t) before its divergence point x′ from st and after its convergence point y′ with st can be replaced by the edges (d1^-, x′) and (y′, d1^+) in H, respectively (adding 2N to the length), so |π_{G∖{d1,d2}}(s,t)| ≥ |π_{H∖d2}(d1^-, d1^+)| − 2N. Since the 1-failure DSO supports path retrieval [11, 4] in O(1) time per edge, we can recover all paths π_{G∖{d1,d2}}(s,t) in Õ(n^3) time.

3.1.2 Both d1 and d2 are on st

Our structure in this case is also similar to [30], but it is simpler since G is undirected. If we used the graph H from before, the distance π_H(d1^-, d2^+) could not capture π_{G∖{d1,d2}}(s,t), since it only models the subpaths before the divergence point and after the convergence point on st, while π_{G∖{d1,d2}}(s,t) may go through some edges between d1 and d2 on st, which are not included in H.

W.l.o.g., assume d1=(x1,y1) is before d2=(x2,y2) on st, so s ≤ x1 < y1 ≤ x2 < y2 ≤ t on the shortest path st. For all pairs d1,d2, we want to find the paths (here w,a,b,z are all on st):

P(d1,d2) = min_{w ≤ x1, y1 ≤ a ≤ b ≤ x2, z ≥ y2} { sw ∘ π_{G∖st}(w,a) ∘ ab ∘ π_{G∖st}(b,z) ∘ zt }

and also:

P′(d1,d2) = min_{w ≤ x1, y1 ≤ b ≤ a ≤ x2, z ≥ y2} { sw ∘ π_{G∖st}(w,a) ∘ ab ∘ π_{G∖st}(b,z) ∘ zt }

First we run an APSP algorithm on the graph G∖st, and then find the paths U(d1,a) = min_{w ≤ x1} { sw ∘ π_{G∖st}(w,a) } for every edge d1=(x1,y1) and every vertex a ≥ y1 on st, in O(n^3) time, since we need O(n) time to enumerate w for every pair of d1 and a. Symmetrically, we also find U′(d2,b) = min_{z ≥ y2} { π_{G∖st}(b,z) ∘ zt } for every edge d2=(x2,y2) and every vertex b ≤ x2 on st, in O(n^3) time. Then for every pair d1=(x1,y1), d2=(x2,y2) and every y1 ≤ b ≤ x2, we can find:

W(d1,d2,b) = min_{w ≤ x1, b ≤ a ≤ x2} { sw ∘ π_{G∖st}(w,a) ∘ ab }

That is, the path diverges before d1, converges with st at some point a between b and x2, and then follows the subpath of st to b. If b = x2, it is just U(d1,x2). We can then compute W(d1,d2,b) for b going from x2 backward to y1: if (b,b′) with b < b′ is an edge on st, the path W(d1,d2,b) either first goes through W(d1,d2,b′) or goes directly to b, so W(d1,d2,b) = min{ U(d1,b), W(d1,d2,b′) ∘ (b′,b) }. So the total time to compute all values W(d1,d2,b) is still O(n^3).

Thus, given d1,d2, we can obtain P(d1,d2) as min_b { W(d1,d2,b) ∘ U′(d2,b) } by enumerating all possible b between d1 and d2 in O(n) time. P′(d1,d2) can be obtained similarly. So the total time is O(n^3), and it is also easy to list all edges on every π_{G∖{d1,d2}}(s,t).
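A sketch of this dynamic program is given below; the tables U and U2 (standing for U and U′) are assumed to be precomputed as described above, and the data layout is an illustrative assumption.

```python
def compute_W_and_P(st_path, weight, U, U2):
    """Sketch of the dynamic program of Section 3.1.2.  st_path: vertices of st
    in order; weight[(u, v)]: weights of st edges (both orientations present);
    U[(i1, a)] and U2[(i2, b)]: the precomputed values U(d1, a) and U'(d2, b),
    where d1, d2 are identified by the index of their left endpoint on st.
    Returns P(d1, d2) for every pair of edges d1 before d2 on st."""
    INF = float("inf")
    P = {}
    for i1 in range(len(st_path) - 1):                 # d1 = (x1, y1)
        for i2 in range(i1 + 1, len(st_path) - 1):     # d2 = (x2, y2), after d1
            x2 = st_path[i2]
            W = {x2: U.get((i1, x2), INF)}             # W(d1, d2, x2) = U(d1, x2)
            for j in range(i2 - 1, i1, -1):            # b from x2 backward to y1
                b, b_next = st_path[j], st_path[j + 1]
                W[b] = min(U.get((i1, b), INF),
                           W[b_next] + weight[(b_next, b)])
            best = INF
            for j in range(i1 + 1, i2 + 1):            # combine with U'(d2, b)
                b = st_path[j]
                best = min(best, W[b] + U2.get((i2, b), INF))
            P[(i1, i2)] = best
    return P
```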

3.2 Ideas of Incremental DSO

In the 3FRP problem, consider the case where only d1 is on st but d2,d3 are not. The 2-failure DSO with Õ(n^2) space and Õ(1) query time [18] cannot be applied directly here, since its construction time is too large. However, since the number of possible “second failures” d2 is O(n) by Lemma 9, a dynamic 1-failure DSO with Õ(n^2) edge update time will help: we can delete each d2, query for every d1 and d3 as in Section 3.1.1, and then add d2 back.

Thus, below we first introduce the ideas behind Theorem 2, i.e., how to build an incremental DSO with Õ(n^3) construction time, worst-case Õ(n^2) update time and Õ(1) query time. By the reduction in Theorem 11, we can then construct an offline dynamic DSO with Õ(n^2) update time. In the incremental DSO, when inserting an edge e=(x,y), let G′ denote the new graph G′ = G ∪ {e}.

Inspired by APSP update.

It is not hard to maintain APSP incrementally in O(n^2) time per insertion. When inserting an edge e=(x,y), we determine whether e provides a shorter path for any pair of vertices u,v. That is, for each pair (u,v), we compute the updated shortest path:

π_{G′}(u,v) = min{ uv, ux ∘ e ∘ yv, uy ∘ e ∘ xv }.
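For reference, a minimal sketch of this standard O(n^2) incremental update on a distance matrix (distances only, without maintaining shortest path trees):

```python
def insert_edge_apsp(dist, x, y, w):
    """Update an all-pairs distance matrix `dist` (dict of dicts) after
    inserting an undirected edge (x, y) of weight w, in O(n^2) time.
    Assumes x and y are existing vertices of the graph."""
    vertices = list(dist)
    for u in vertices:
        for v in vertices:
            via_xy = dist[u][x] + w + dist[y][v]
            via_yx = dist[u][y] + w + dist[x][v]
            if min(via_xy, via_yx) < dist[u][v]:
                dist[u][v] = min(via_xy, via_yx)
    return dist
```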

Following previous works [11], our goal is to construct a DSO that maintains the following information:

  • A shortest path tree rooted at each vertex u.

  • For every vertex pair (u,v), the detour π_{G∖(u_{+i})(v_{−j})}(u,v), where i,j are either 0 or integer powers of 2, satisfying i+j < ‖uv‖.

Here, ‖uv‖ denotes the number of edges of the shortest path uv. Note that in our definition, G∖(u_{+i})(v_{−j}) represents the graph obtained by removing the edges of (u_{+i})(v_{−j}), rather than removing vertices, which differs slightly from previous works [11, 4]. However, under the weak subpath constraint (to be formally defined below), we indeed only consider the detours avoiding one edge of the range that also avoid all the edges and vertices of the whole range [u_{+i}, v_{−j}]. We want to maintain such detours after inserting a new edge e=(x,y) in Õ(1) time per detour.

Weak subpath and proper form.

As in [4], to achieve an efficient construction time, instead of maintaining π_{G∖(u_{+i})(v_{−j})}(u,v) avoiding the whole range, we can alternatively maintain the longest of the shortest detours avoiding one edge of the range, max_{f ∈ (u_{+i})(v_{−j})} π_{G∖f}(u,v). Moreover, it is easy to observe that we only need to maintain this detour when π_{G∖f}(u,v) avoids the entire range [u_{+i}, v_{−j}].

So we define a subpath ab ⊆ uv to be weak under uv if there exists an edge f ∈ ab with π_{G∖f}(u,v) = π_{G∖ab}(u,v). Here f is called a weak point of ab (under uv). Note that by Theorem 8, the detour π_{G∖f}(u,v) is a concatenation of a shortest path ux, an edge (x,y) and a shortest path yv in G. Therefore, we can store the replacement path for weak subpaths ab ⊆ uv explicitly. We call such a concatenation of a shortest path ux, an edge (x,y) and a shortest path yv a proper form. (Any part may degenerate to a single vertex.)

In the incremental DSO, for every pair of vertices u,v and a = u_{+i}, b = v_{−j}, where i,j are either 0 or integer powers of 2, we maintain ϖ_{G∖ab}(u,v), which is either a null path or a path in proper form, such that:

  • If ab is weak under uv, we store the correct replacement path ϖ_{G∖ab}(u,v) = π_{G∖ab}(u,v) in proper form.

  • If ab is not weak under uv, ϖ_{G∖ab}(u,v) can be either some path in proper form from u to v that avoids ab, or a null path of length +∞.

 Remark 13.

Note that with this structure we generally cannot tell whether a given ab is weak under uv, even if we know the value of ϖ_{G∖ab}(u,v). We also point out that even if ab is weak under uv, we do not record the position of the weak point.

Consider the case where the shortest uv path does not change after inserting the edge e=(x,y), that is, π_{G′}(u,v) = π_G(u,v). If ux and yv do not intersect the range ab ⊆ uv, then ux ∘ e ∘ yv is a path between u and v in G′∖ab.

However, if |ux ∘ e ∘ yv| is shorter than the original |ϖ_{G∖ab}(u,v)|, we can hardly check whether ab is weak under π_{G′}(u,v) in G′, and even if it is weak, we cannot efficiently find the weak point f such that π_{G′∖f}(u,v) = π_{G′∖ab}(u,v). So in this case we just store ϖ_{G′∖ab}(u,v) = ux ∘ e ∘ yv; this is the correct replacement path π_{G′∖ab}(u,v) whenever ab is weak under π_{G′}(u,v).

Also note that keeping all stored ϖ_{G∖ab}(u,v) in proper form makes it easier to locate vertices and compute intersections between paths.

3.2.1 Ideas of DSO Query

The correctness of the DSO query is guaranteed by Theorem 15, which shows that we can answer the shortest path between u and v avoiding any subpath ab of uv, provided ab is weak under uv. Note that in the query stage we in fact only care about the case where ab is a single edge (because this is a 1-failure DSO), and an interval consisting of a single edge is, of course, weak.

We sometimes need to check whether a path is in proper form, and if so, transform it to its proper form representation.

Definition 14.

Let P be a path from u to v in G and let R be a subpath of uv in G. We define the transform T_{G,R}(P) of P to be

  • the path P in proper form, if P can be transformed into proper form and avoids all edges of R;

  • a null path of weight +∞, otherwise.

When this transform is used in this section, P is given as a concatenation of several shortest paths and edges in G, and we want to transform it into proper form in G or in the new graph G′. The transform T_{G,R} can be carried out in Õ(1) time; for the proof we refer to the full version of this paper [9].

Now we state the DSO query theorem formally:

Theorem 15.

Let ab be a subpath of uv. From the information maintained in the DSO, we can calculate the length of ϖ_{G∖ab}(u,v) in Õ(1) time. W.l.o.g., suppose a is closer to u than b is. Let i be the largest integer power of 2 (or 0) such that u_{+i} falls in ua; similarly, let j be the largest integer power of 2 (or 0) such that v_{−j} falls in bv. Let a′ = a_{−i}, u′ = u_{+i}, v′ = v_{−j}, b′ = b_{+j}. (Note that if a=u then a′=u′=u, and if b=v then b′=v′=v.) Then

ϖ_{G∖ab}(u,v) = min{ T_{G,ab}(ua′ ∘ ϖ_{G∖ab}(a′,b′) ∘ b′v), ϖ_{G∖u′v′}(u,v), T_{G,ab}(ua′ ∘ ϖ_{G∖av′}(a′,v)), T_{G,ab}(ϖ_{G∖u′b}(u,b′) ∘ b′v) }.   (1)

For the proof we refer to the full version of this paper [9].
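The following sketch mirrors the combination in Equation (1). The dso object and its methods (pos, vertex_at, stored, transform, dist) are a hypothetical interface invented here for illustration; they are not part of the paper's data structure.

```python
def query_avoiding_interval(dso, u, v, a, b):
    """Sketch of the query combination of Theorem 15 (Equation (1)).
    Assumed hypothetical interface:
      dso.pos(u, v, z)        - index of z on the shortest path uv,
      dso.vertex_at(u, v, k)  - the k-th vertex on uv,
      dso.stored(p, q, i, j)  - length of the maintained detour for (p, q)
                                avoiding the range [p_{+i}, q_{-j}],
      dso.transform(u, v, a, b, length) - the T_{G,ab} check (length or inf),
      dso.dist(p, q)          - shortest-path distance."""
    ia, ib = dso.pos(u, v, a), dso.pos(u, v, b)
    n_uv = dso.pos(u, v, v)                       # number of edges of uv
    i = 1 if ia >= 1 else 0                       # largest 0/power of 2 with u_{+i} in ua
    while i > 0 and 2 * i <= ia:
        i *= 2
    j = 1 if n_uv - ib >= 1 else 0                # largest 0/power of 2 with v_{-j} in bv
    while j > 0 and 2 * j <= n_uv - ib:
        j *= 2
    a1 = dso.vertex_at(u, v, ia - i)              # a' = a_{-i}
    b1 = dso.vertex_at(u, v, ib + j)              # b' = b_{+j}
    candidates = [
        dso.transform(u, v, a, b,
                      dso.dist(u, a1) + dso.stored(a1, b1, i, j) + dso.dist(b1, v)),
        dso.stored(u, v, i, j),                   # detour avoiding the larger range [u', v']
        dso.transform(u, v, a, b, dso.dist(u, a1) + dso.stored(a1, v, i, j)),
        dso.transform(u, v, a, b, dso.stored(u, b1, i, j) + dso.dist(b1, v)),
    ]
    return min(candidates)
```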

3.2.2 Ideas of Incremental Update

Now we describe how the DSO update is implemented. For the detours ϖ_{G′∖(u_{+i})(v_{−j})}(u,v), where i,j are zero or integer powers of 2, we consider two cases separately: either π_{G′}(u,v) ≠ uv, or π_{G′}(u,v) = uv. In both cases the algorithm further distinguishes many subcases based on the relative positions of u_{+i}, v_{−j}, the divergence and convergence points of uv and π_{G′}(u,v), and e.

Recall that in our incremental structure, the graph before inserting the edge e is called G, and the graph G ∪ {e} is called G′. Let p = Δ(ux, uv) be the last point that π_G(u,x) and π_G(u,v) share, and similarly let q = ∇(yv, uv) be the first point that π_G(y,v) and π_G(u,v) share. Let a = u_{+i}, b = v_{−j}. The incremental update requires handling multiple distinct scenarios, determined by the relative positions of the nodes a, b, p, q and the edge e=(x,y). We present selected representative cases and their update rules from the full version of this paper [9].

In the case discussion, by default we assume that R = π_{G′}(u_{+i}, v_{−j}) is weak under π_{G′}(u,v), because we only care about the path π_{G′∖R}(u,v) when R is weak. If R is not weak, the transform T_{G′,R} always ensures that the path produced by the algorithm is in proper form or is a null path.

π_{G′}(u,v) ≠ π_G(u,v)

We know that in this case e=(x,y) is on π_{G′}(u,v). W.l.o.g., suppose that x is nearer to u than y. We can see that uv is a shortest uv path in G′∖e, i.e. uv = π_{G′∖e}(u,v). Recall that p = Δ(ux, uv), q = ∇(yv, uv), a = u_{+i}, b = v_{−j}; then R = π_{G′}(a,b) and we want to find ϖ_{G′∖R}(u,v). The points a, b, x, y and p, q are all on the new shortest path π_{G′}(u,v).

CASE 1.

p ≤ a ≤ x < y ≤ b ≤ q,

ϖ_{G′∖R}(u,v) = T_{G′,R}(uv).

In this case, since uv = π_{G′∖e}(u,v), e ∈ R, R = π_{G′}(a,b), and uv avoids R, the range R is weak with weak point f = e, and π_{G′∖e}(u,v) = uv, which must pass the transform T_{G′,R}.

CASE 2.

a ≤ b ≤ p ≤ x < y ≤ q,

ϖ_{G′∖R}(u,v) = min{ T_{G′,R}(ϖ_{G∖R}(u,v)), T_{G′,R}(ϖ_{G∖R}(u,x) ∘ e ∘ yv) }.
  • If π_{G′∖R}(u,v) does not go through e, then as in CASE 1 it is a path in G′∖e, which equals G. Since R is weak under π_{G′}(u,v), it is also weak under uv, so π_{G′∖R}(u,v) = ϖ_{G∖R}(u,v).

  • If π_{G′∖R}(u,v) goes through e, then by Theorems 7 and 8 it takes a shortest ux path avoiding R and then goes along π_{G′}(x,v), i.e. it equals π_{G∖R}(u,x) ∘ e ∘ yv. Suppose f ∈ ab is a weak point of R under π_{G′}(u,v); then π_{G∖f}(u,x) ∘ e ∘ yv = π_{G′∖f}(u,v) = π_{G′∖R}(u,v) = π_{G∖R}(u,x) ∘ e ∘ yv, so R is also weak under ux with weak point f, and hence ϖ_{G∖R}(u,x) = π_{G∖R}(u,x).

This means that if ϖ_{G∖R}(u,x) ∘ e ∘ yv is not in proper form, then π_{G′∖R}(u,v) does not go through e, so ϖ_{G′∖R}(u,v) = ϖ_{G∖R}(u,v); otherwise, we take the minimum of the two candidates. Note that if ϖ_{G∖R}(u,x) intersects yv, then ϖ_{G∖R}(u,x) ∘ e ∘ yv cannot be the shortest one and also cannot pass the transform T_{G′,R}.

CASE 6.

a ≤ p ≤ x < y ≤ q ≤ b. (We keep the case numbers the same as in the full version of this paper [9] for consistency.)

ϖ_{G′∖R}(u,v) = T_{G′,R}(ϖ_{G∖ab}(u,v)).

Assume R is weak under π_{G′}(u,v) and f ∈ R is a weak point of R. π_{G′∖f}(u,v) avoids the whole interval R, so it avoids e. As a result, π_{G′∖f}(u,v) is a path in G, i.e. π_{G′∖f}(u,v) = π_{G∖f}(u,v). If f is not on uv, then π_{G′∖f}(u,v) = π_{G∖f}(u,v) = uv; since uv does not avoid the whole interval R, this contradicts the fact that f is a weak point of R. So f ∈ uv and π_{G∖f}(u,v) = π_{G′∖f}(u,v) = π_{G′∖R}(u,v). As a result, f ∈ ap ∪ qb, and π_{G∖f}(u,v) avoids R.

If π_{G∖f}(u,v) intersects ab = π_G(a,b), it can only intersect ab ∖ R = pq = π_G(p,q). By Theorem 7, it is impossible for π_{G∖f}(u,v) to intersect pq while avoiding both ap and qb, so π_{G∖f}(u,v) avoids ab, and ab is weak under uv with weak point f; thus ϖ_{G′∖R}(u,v) = π_{G∖f}(u,v) = ϖ_{G∖ab}(u,v).

Also, if ϖ_{G∖ab}(u,v) does not pass the transform T_{G′,R}, then R is not weak under π_{G′}(u,v), so a null path is a legal value for ϖ_{G′∖R}(u,v).

π_{G′}(u,v) = π_G(u,v)

As before, let a = u_{+i}, b = v_{−j}, R = π_G(a,b), and we want to find ϖ_{G′∖R}(u,v). Also suppose that R is weak under π_{G′}(u,v) in the new graph G′, and that f is a weak point of it. The update algorithm goes as follows.

  • Set ϖ_{G′∖R}(u,v) ← T_{G′,R}(ϖ_{G∖R}(u,v)).

  • Execute the following steps, then swap x and y and repeat.

  • Let p = Δ(ux, uv) be the last point that π_G(u,x) and π_G(u,v) share, and similarly q = ∇(yv, uv) be the first point that π_G(y,v) and π_G(u,v) share; consider the following cases:

    • If p ≤ a < b ≤ q, set ϖ_{G′∖R}(u,v) ← min{ ϖ_{G′∖R}(u,v), T_{G′,R}(ux ∘ e ∘ yv) }.

    • If a < p ≤ b ≤ q, set ϖ_{G′∖R}(u,v) ← min{ ϖ_{G′∖R}(u,v), T_{G′,R}(ϖ_{G∖ap}(u,x) ∘ e ∘ yv) }.

    • If p ≤ a ≤ q < b, set ϖ_{G′∖R}(u,v) ← min{ ϖ_{G′∖R}(u,v), T_{G′,R}(ux ∘ e ∘ ϖ_{G∖qb}(y,v)) }.

    • If a < p ≤ q < b, do nothing.

    • If a < b ≤ p < q, set ϖ_{G′∖R}(u,v) ← min{ ϖ_{G′∖R}(u,v), T_{G′,R}(ϖ_{G∖R}(u,x) ∘ e ∘ yv) }.

    • If p < q ≤ a < b, set ϖ_{G′∖R}(u,v) ← min{ ϖ_{G′∖R}(u,v), T_{G′,R}(ux ∘ e ∘ ϖ_{G∖R}(y,v)) }.

We list some lemmas that we need; the proofs are given in the full version of this paper [9].

Lemma 16.

For any edge f, if π_{G′∖f}(u,v) does not go through e=(x,y), then π_{G′∖f}(u,v) = π_{G∖f}(u,v).

Lemma 17.

For any edge f ∈ pq, if π_{G′∖f}(u,v) goes through e=(x,y) and u is closer to x than to y in G, then π_{G′∖f}(u,v) = ux ∘ e ∘ yv.

Lemma 18.

If R is weak under π_{G′}(u,v) in G′ and π_{G′∖R}(u,v) does not go through e, then ϖ_{G′∖R}(u,v) = ϖ_{G∖R}(u,v).

From Lemma 18 we can see that if ϖ_{G∖R}(u,v) does not pass the transform T_{G′,R}, then either R is not weak or π_{G′∖R}(u,v) must pass through e. So in the algorithm we first set ϖ_{G′∖R}(u,v) to T_{G′,R}(ϖ_{G∖R}(u,v)). Then we consider the cases where π_{G′∖R}(u,v) goes through e=(x,y) (w.l.o.g., assume u is closer to x than to y):

CASE 1.

p ≤ a < b ≤ q,

By Lemma 17, we set ϖ_{G′∖R}(u,v) = T_{G′,R}(ux ∘ e ∘ yv). (See Remark 13 for a discussion.)

CASE 4.

a < p ≤ q < b,

We can prove that in this case, if R is weak, then π_{G′∖R}(u,v) cannot go through e.

We prove this by contradiction. Suppose R is weak under π_{G′}(u,v) with weak point f and π_{G′∖R}(u,v) goes through e (first x, then y); then π_{G′}(u,x) = ux and π_{G′}(y,v) = yv, since these paths cannot go through e. We know π_{G′∖f}(u,v) is a 1-fault replacement path in G′, so by Theorem 8 it equals a concatenation of a shortest path, an edge and a shortest path in G′. However, a < p means that the shortest ux path avoiding R is different from ux, so it is a concatenation of at least two shortest paths; the same holds for the shortest yv path avoiding R. Therefore, π_{G′∖R}(u,v) equals the shortest ux path avoiding R, plus e, plus the shortest yv path avoiding R, which is a concatenation of at least three shortest paths, contradicting the assumption that R is weak.

CASE 5.

a < b ≤ p < q,

ϖ_{G′∖R}(u,v) = T_{G′,R}(ϖ_{G∖R}(u,x) ∘ e ∘ yv).

Suppose that R is weak under uv with weak point f. By assumption, π_{G′∖f}(u,v) goes through e, so it equals π_{G′∖f}(u,x) ∘ e ∘ π_{G′∖f}(y,v). By Lemma 16, π_{G′∖f}(y,v) = π_{G∖f}(y,v) = yv and π_{G′∖f}(u,x) = π_{G∖f}(u,x). We can see that R is also weak under ux with weak point f in G, because π_{G∖f}(u,x) is a part of π_{G′∖f}(u,v), which avoids R. Therefore, by Lemma 18, π_{G∖f}(u,x) = π_{G∖R}(u,x) = ϖ_{G∖R}(u,x).

3.3 Ideas of 3FRP Algorithm

As in the 2FRP algorithm in Section 3.1, we also consider how many of d1,d2,d3 are on the original shortest path st, and use different methods to solve them.

3.3.1 Only d1 is on st

To solve the case where only d1 is on st but d2,d3 are not, we use a structure similar to that of Section 3.1.1, again starting from the graph H, which adds new vertices to G∖st. We further note that, by Lemma 9, over all possible d1 the union of possible “second failures” d2 with d2 ∈ π_{G∖d1}(s,t) has only O(n) edges.

By the incremental 1-failure DSO of Section 3.2 and the reduction from worst-case incremental structures to offline dynamic structures in Theorem 11, we can get an offline dynamic 1-failure DSO with Õ(n^2) update time. Then for every possible d2 we can delete it from H “temporarily” and query

|π_{(H∖d2)∖d3}(d1^-, d1^+)| − 2N

for every d1 and d3, which equals |π_{G∖{d1,d2,d3}}(s,t)|. After that, we add d2 back to H.

3.3.2 Only d1 and d2 are on st

To solve the case where d1,d2 are on st but d3 is not, we apply the incremental DSO on top of the binary partition structure of the 2-failure DSO of [18]: we build a binary range tree on the path st, and define two graphs H_{i,0} and H_{i,1} (based on the graph H of Section 3.1.1) for each level i of the range tree, where in H_{i,0} we remove all odd ranges of level i and in H_{i,1} we remove all even ranges. Given d1 and d2, we find the first level at which d1 and d2 lie in different ranges; then d1 is in an even range and d2 is in an adjacent odd range, and we let m be the point separating these two ranges. Consider the following cases for the intersection of π_{G∖{d1,d2,d3}}(s,t) with the edges between d1 and d2 on st:

  • If π_{G∖{d1,d2,d3}}(s,t) does not travel through any edge between d1 and d2, we can use the DSO on H to give the answer.

  • If π_{G∖{d1,d2,d3}}(s,t) only goes through edges between d1 and m, we can use H_{i,0}, which does not contain the edges between m and d2. So the distance between d1^- and d2^+ in H_{i,0}∖d1 avoiding d3 gives the answer. (Note that the edges between s and d1 in H_{i,0} do not affect the answer; similarly for the edges between d2 and t.)

  • If π_{G∖{d1,d2,d3}}(s,t) only goes through edges between m and d2, we can symmetrically use H_{i,1}∖d2.

  • The only case left is that π_{G∖{d1,d2,d3}}(s,t) travels through m. In this case, we can capture the replacement path using the midpoint m; namely, we compute its length as

    |π_{G∖{d1,d2,d3}}(s,m)| + |π_{G∖{d1,d2,d3}}(m,t)|.

    Using the offline dynamic DSOs on H_{i,0}∖d1 and H_{i,1}∖d2, we can answer these two subpath lengths separately.

So we need to build offline dynamic DSOs on H_{i,0} and H_{i,1} for each level i. Since there are O(log n) levels, the total time is still Õ(n^3).
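A small sketch of how the separating level and the midpoint m can be located for two failed edges, assuming the range tree is laid out over 0-based edge indices of st rounded up to a power of two (an illustrative simplification of the structure above):

```python
def separating_level(p1, p2, path_len):
    """Sketch of locating the first range-tree level at which two failed edges
    fall into different ranges.  p1 < p2 are 0-based edge indices on st.
    Returns (level, boundary), where `boundary` is the first index of the
    range containing p2 at that level, i.e. the separating point m."""
    assert p1 < p2 < path_len
    size = 1
    while size < path_len:            # round the root range up to a power of two
        size *= 2
    level = 0
    while True:
        if p1 // size != p2 // size:  # first level where the ranges differ
            return level, (p2 // size) * size
        size //= 2                    # ranges halve at the next level
        level += 1                    # terminates: at size 1 the indices differ
```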

3.3.3 All of d1, d2, d3 are on st

We assume that their order on st is d1,d2,d3; then they cut st into four ranges D1,D2,D3,D4 in order. The difficulty comes from the fact that the path π_{G∖{d1,d2,d3}}(s,t) can go through both D2 and D3, in either order. To solve this case, we first build a structure in Õ(n^3) time which can answer the following queries in Õ(1) time:

  • Given two vertex-disjoint ranges R1,R2 of consecutive edges on the original shortest path st (R1,R2 can be single vertices), answer the shortest path that starts from the leftmost (rightmost) vertex of R1, diverges from R1, converges in R2, and finally ends at the leftmost (rightmost) vertex of R2.

  • Given an edge d1 on st and two vertex-disjoint ranges R1,R2 of consecutive edges on st after d1, answer the shortest path that starts from s, diverges before d1, converges in R1, and diverges from R1, then converges in R2, and finally ends at the leftmost (rightmost) point of R2.

In this structure we store the paths for all ranges R1,R2 of the binary range tree on st; then, given any R1,R2 in a query, each can be split into O(log n) ranges of the binary range tree, so the queries for R1,R2 can be answered in Õ(1) time.

Similarly to Section 3.1.2, with this structure, given d1,d2,d3 and a vertex b on st, it takes Õ(1) time to find the shortest path through b avoiding d1,d2,d3. So it takes Õ(n) time to find the shortest path avoiding d1,d2,d3 that goes through both D2 and D3, but we need the time bound to be Õ(1) on average. For every pair of d1 and d3, the main ideas are as follows.

  • First find the middle edge e1 of the range between d1 and d3, and use Õ(n) time to find the shortest path π_{G∖{d1,d3,e1}}(s,t).

  • For every other edge d2 between d1 and d3, if d2 is not on π_{G∖{d1,d3,e1}}(s,t), then either π_{G∖{d1,d2,d3}}(s,t) is equal to π_{G∖{d1,d3,e1}}(s,t) or it goes through e1, so this case can be solved in Õ(1) time.

  • The intersection of π_{G∖{d1,d3,e1}}(s,t) with st between d1 and d3 consists of at most two ranges; we then find the middle edge e2 of the larger range, and find the shortest path avoiding d1,d3,e1,e2 that goes through the two ranges between them. This can also be done in Õ(n) time.

  • As before, for d2 not on the new path we can answer the query using the paths that go through e1 or e2, so next we only need to consider the edges in the intersection of all paths found so far. We can argue that the size of this intersection with the range between d1 and d3 shrinks by a constant factor every two iterations, so only O(log n) iterations are needed. Thus the time for every pair of d1 and d3 is only Õ(n).

4 Hardness of 2FRP

In this section we show that the 2FRP problem in undirected graphs is hard, assuming the APSP hypothesis. Recall that in directed graphs, 1FRP is as hard as APSP [31]; in undirected graphs, however, 1FRP can be solved in almost-optimal Õ(m) time [26].

Theorem 19.

If 2FRP in undirected weighted graphs can be solved in O(n^{3−ϵ}) time for some constant ϵ>0, then the all-pairs shortest path problem in undirected weighted graphs can also be solved in O(n^{3−ϵ}) time.

We remark that, by [31], the hardness of the all-pairs shortest path problem in undirected weighted graphs is equivalent to the hardness of APSP. Therefore, any O(n^{3−ϵ})-time algorithm for 2FRP would refute the APSP hypothesis, so we immediately get the corollary:

Corollary 20.

Assuming the APSP hypothesis, i.e., that APSP cannot be solved in O(n^{3−ϵ}) time for any constant ϵ>0, the 2FRP problem in undirected weighted graphs cannot be solved in O(n^{3−ϵ}) time for any constant ϵ>0.

Suppose toward a contradiction that we can solve 2FRP in O(n^{3−ϵ}) time. Given any undirected weighted all-pairs shortest path instance G with n nodes v1,v2,…,vn, we construct an undirected weighted 2FRP instance H by adding vertices and edges to G, such that solving 2FRP on H also solves all-pairs shortest paths on G.

Let N be a number larger than the sum of all edge weights in G. Let S be a path with n+2 nodes s0,s1,s2,…,sn,s_{n+1}=s and weight-0 edges (s_i,s_{i+1}) for all 0 ≤ i ≤ n. Similarly, let T be a path with n+2 nodes t0,t1,t2,…,tn,t_{n+1}=t and weight-0 edges (t_i,t_{i+1}) for all 0 ≤ i ≤ n. Moreover, we construct two matchings: E1 = {(s_i,v_i)} with weight w(s_i,v_i) = iN for all 1 ≤ i ≤ n, and E2 = {(v_i,t_i)} with weight w(v_i,t_i) = iN for all 1 ≤ i ≤ n.
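A sketch of this construction, combining the pieces into the graph H defined in the next paragraph (the vertex names such as ('s', i) and the frozenset edge keys are illustrative assumptions):

```python
def build_hard_instance(n, g_edges):
    """Sketch of the reduction of Theorem 19.  g_edges: dict mapping
    frozenset({i, j}) (1-based APSP vertices) to a weight.  Returns the edge
    dict of H over vertices ('s', i), ('t', i), ('v', i)."""
    N = sum(g_edges.values()) + 1.0
    H = {}
    for i in range(n + 1):                               # weight-0 paths S and T
        H[frozenset({('s', i), ('s', i + 1)})] = 0.0
        H[frozenset({('t', i), ('t', i + 1)})] = 0.0
    for e, w in g_edges.items():                         # a copy of G
        i, j = tuple(e)
        H[frozenset({('v', i), ('v', j)})] = w
    for i in range(1, n + 1):                            # matchings E1 and E2
        H[frozenset({('s', i), ('v', i)})] = i * N
        H[frozenset({('v', i), ('t', i)})] = i * N
    return H

# s = ('s', n + 1), t = ('t', n + 1); failing {(s_{i-1}, s_i), (t_{j-1}, t_j)}
# makes the s-t replacement distance d_G(v_i, v_j) + (i + j) * N (Lemma 21).
```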

Let H = S ∪ E1 ∪ G ∪ E2 ∪ T. By inspection, the s-to-t shortest path in H is precisely the path that goes from s to s1 along S, from s1 to v1 to t1, and then from t1 to t along T. Consider any pair of failed edges, one on π_H(s,s1) and the other on π_H(t1,t). We will show the following relationship between 2FRP in H and all-pairs shortest paths in G:

Lemma 21.

For any 1 ≤ i,j ≤ n, let D = {(s_{i−1},s_i), (t_{j−1},t_j)} be a set of two edge failures, one in S and one in T. Then for the failure set D:

|π_{H∖D}(s,t)| = |π_G(v_i,v_j)| + (i+j)N.

The proof is given in the full version of this paper [9]. With this lemma we can prove Theorem 19. Given any undirected weighted all-pairs shortest path instance G, we construct the undirected weighted graph H as above. Now we check all failure sets D = {(s_{i−1},s_i), (t_{j−1},t_j)} for all pairs (i,j) with 1 ≤ i,j ≤ n. By Lemma 21, we can obtain |π_G(v_i,v_j)| from |π_{H∖D}(s,t)|, which means that by solving 2FRP on H we solve all-pairs shortest paths on G.

Since H has n′ = 2(n+2)+n = O(n) nodes, if we had a truly subcubic algorithm solving 2FRP on H in O((n′)^{3−ϵ}) = O(n^{3−ϵ}) time, then using this algorithm we could solve all-pairs shortest paths on G in O(n^{3−ϵ}) time. Therefore, Theorem 19 holds.

5 Almost Optimal Single-Source Replacement Path under Multiple Edge Failures

For a graph G and a vertex s, the 2-fault single-source replacement path problem asks, for every tuple of two edges d1,d2 and a vertex t, for the distance between s and t in G after removing the edges d1 and d2. We show how to solve it in almost optimal Õ(n^3) time for undirected weighted graphs via the incremental DSO.

Let T be the shortest path tree of s in G, with edges e1,e2,…,e_{n−1}. Our algorithm goes as follows: we maintain an offline dynamic DSO on G. For each 1 ≤ i < n, we remove e_i from G and update the dynamic DSO. For every vertex t_j in the subtree hanging below e_i in T and every edge d_k on the replacement path π_{G∖e_i}(s,t_j), we save the distance |π_{(G∖e_i)∖d_k}(s,t_j)| in G∖e_i by querying the dynamic DSO. At the end of each step, we add e_i back into G.
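The main loop can be sketched as follows; the offline dynamic DSO interface (make_offline_dso, remove, insert, query) and the precomputed inputs subtree and rp_edges are hypothetical stand-ins for the components described above.

```python
def two_fault_ssrp(G_edges, s, tree_edges, subtree, rp_edges, make_offline_dso):
    """Schematic sketch of the 2-fault SSRP loop of Section 5.
    tree_edges: edges e_1..e_{n-1} of the shortest path tree of s;
    subtree[e]: vertices hanging below e in the tree;
    rp_edges[(e, t)]: edges of the 1-fault replacement path pi_{G \\ e}(s, t);
    make_offline_dso: factory for a hypothetical offline dynamic DSO supporting
    remove/insert/query.  All of these are assumptions for illustration."""
    updates = []
    for e in tree_edges:              # the full update sequence is known in advance
        updates += [("remove", e), ("insert", e)]
    dso = make_offline_dso(G_edges, updates)
    answers = {}
    for e in tree_edges:
        dso.remove(e)                 # current graph is G \ e
        for t in subtree[e]:
            for d in rp_edges[(e, t)]:
                # distance from s to t avoiding both e (removed) and d (queried)
                answers[(e, d, t)] = dso.query(s, t, failed_edge=d)
        dso.insert(e)                 # restore G before the next step
    return answers
```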

Correctness.

For each tuple (d1,d2,t): if neither d1 nor d2 is in T, we know the shortest path from s to t is unchanged after removing d1,d2. Otherwise, w.l.o.g. suppose d1 ∈ T and t is in the subtree below d1 in T. If d2 is not on π_{G∖d1}(s,t), we know that π_{G∖{d1,d2}}(s,t) = π_{G∖d1}(s,t), and this value can be queried from the static DSO. If d2 ∈ π_{G∖d1}(s,t), the value equals |π_{(G∖d1)∖d2}(s,t)|, obtained from the dynamic DSO on the graph G∖d1 in our algorithm.

Time analysis.

The incremental DSO can be viewed as an offline fully dynamic DSO by Theorem 10, so the update time is Õ(n^2) per edge e_i. The number of edges of a shortest path tree is O(n), so the total update time is Õ(n^3). The number of distances we query is O(n^3), each with Õ(1) query time, so the total query time is also Õ(n^3). We also retrieve all 1-fault single-source replacement paths π_{G∖e_i}(s,t_j) from the static DSO, which is also bounded by Õ(n^3) time.

Extension to f-fault SSRP for f ≥ 2.

As in the fFRP reduction of [30], given an f-fault SSRP algorithm running in O(n^s) time, we can construct an (f+1)-fault SSRP algorithm running in O(n^{s+1}) time by computing f-fault SSRP in the graph G∖e for every edge e of the shortest path tree from s. Thus, the undirected 2-fault SSRP algorithm in Õ(n^3) time can be extended to an undirected f-fault SSRP algorithm in Õ(n^{f+1}) time for all f ≥ 2, which is almost optimal.
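A rough sketch of this recursive reduction, assuming a base-case 2-fault SSRP solver and a subroutine returning the shortest-path-tree edges (both hypothetical):

```python
def f_fault_ssrp(f, G_edges, s, tree_edges_of, ssrp_2fault):
    """Rough sketch of the (f+1)-from-f SSRP reduction.  G_edges: edge dict;
    tree_edges_of(E): edges of the shortest path tree of s in the graph with
    edge set E; ssrp_2fault(E, s): base-case 2-fault SSRP solver returning a
    dict keyed by (frozenset_of_failed_edges, t).  Both subroutines are
    assumed.  Failure sets that avoid the shortest path tree leave all
    single-source distances unchanged and are handled separately (omitted)."""
    if f == 2:
        return ssrp_2fault(G_edges, s)
    answers = {}
    for e in tree_edges_of(G_edges):          # O(n) choices for one failed tree edge
        sub = {k: w for k, w in G_edges.items() if k != e}
        rec = f_fault_ssrp(f - 1, sub, s, tree_edges_of, ssrp_2fault)
        for (fails, t), dist in rec.items():
            answers[(fails | {e}, t)] = dist  # add e as the extra failure
    return answers
```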

References

  • [1] Josh Alman, Ran Duan, Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. More asymmetry yields faster matrix multiplication. In Proceedings of the 2025 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2005–2039, 2025. doi:10.1137/1.9781611978322.63.
  • [2] Noga Alon, Shiri Chechik, and Sarel Cohen. Deterministic combinatorial replacement paths and distance sensitivity oracles. In International Colloquium on Automata, Languages and Programming, 2019. URL: https://api.semanticscholar.org/CorpusID:159041702.
  • [3] Aaron Bernstein and David Karger. Improved distance sensitivity oracles via random sampling. In Proceedings of the Nineteenth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’08, pages 34–43, USA, 2008. Society for Industrial and Applied Mathematics. URL: https://dl.acm.org/doi/10.5555/1347082.1347087.
  • [4] Aaron Bernstein and David Karger. A nearly optimal oracle for avoiding failed vertices and edges. In Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing, STOC ’09, pages 101–110, New York, NY, USA, 2009. Association for Computing Machinery. doi:10.1145/1536414.1536431.
  • [5] Greg Bodwin, Fabrizio Grandoni, Merav Parter, and Virginia Vassilevska Williams. Preserving Distances in Very Faulty Graphs. In Ioannis Chatzigiannakis, Piotr Indyk, Fabian Kuhn, and Anca Muscholl, editors, 44th International Colloquium on Automata, Languages, and Programming (ICALP 2017), volume 80 of Leibniz International Proceedings in Informatics (LIPIcs), pages 73:1–73:14, Dagstuhl, Germany, 2017. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2017.73.
  • [6] Anat Bremler-Barr, Yehuda Afek, Haim Kaplan, Edith Cohen, and Michael Merritt. Restoration by path concatenation: fast recovery of MPLS paths. In Proceedings of the Twentieth Annual ACM Symposium on Principles of Distributed Computing, PODC ’01, pages 43–52, New York, NY, USA, 2001. Association for Computing Machinery. doi:10.1145/383962.383980.
  • [7] Shiri Chechik and Sarel Cohen. Near optimal algorithms for the single source replacement paths problem. In Proceedings of the Thirtieth Annual ACM-SIAM Symposium on Discrete Algorithms, pages 2090–2109. SIAM, 2019. doi:10.1137/1.9781611975482.126.
  • [8] Shiri Chechik and Ofer Magen. Near Optimal Algorithm for the Directed Single Source Replacement Paths Problem. In Artur Czumaj, Anuj Dawar, and Emanuela Merelli, editors, 47th International Colloquium on Automata, Languages, and Programming (ICALP 2020), volume 168 of Leibniz International Proceedings in Informatics (LIPIcs), pages 81:1–81:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2020.81.
  • [9] Shucheng Chi, Ran Duan, Benyu Wang, and Tianle Xie. Undirected 3-fault replacement path in nearly cubic time, 2024. doi:10.48550/arXiv.2411.18312.
  • [10] Camil Demetrescu and Giuseppe F. Italiano. A new approach to dynamic all pairs shortest paths. J. ACM, 51(6):968–992, November 2004. doi:10.1145/1039488.1039492.
  • [11] Camil Demetrescu, Mikkel Thorup, Rezaul Alam Chowdhury, and Vijaya Ramachandran. Oracles for distances avoiding a failed node or link. SIAM J. Comput., 37:1299–1318, 2008. doi:10.1137/S0097539705429847.
  • [12] Dipan Dey and Manoj Gupta. Near Optimal Algorithm for Fault Tolerant Distance Oracle and Single Source Replacement Path Problem. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms (ESA 2022), volume 244 of Leibniz International Proceedings in Informatics (LIPIcs), pages 42:1–42:18, Dagstuhl, Germany, 2022. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2022.42.
  • [13] Dipan Dey and Manoj Gupta. Nearly optimal fault tolerant distance oracle. In Proceedings of the 56th Annual ACM Symposium on Theory of Computing, STOC 2024, pages 944–955, New York, NY, USA, 2024. Association for Computing Machinery. doi:10.1145/3618260.3649697.
  • [14] E. W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. doi:10.1007/BF01386390.
  • [15] R. Duan, J. Mao, X. Shu, and L. Yin. A randomized algorithm for single-source shortest path on undirected real-weighted graphs. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 484–492, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00035.
  • [16] R. Duan, H. Wu, and R. Zhou. Faster matrix multiplication via asymmetric hashing. In 2023 IEEE 64th Annual Symposium on Foundations of Computer Science (FOCS), pages 2129–2138, Los Alamitos, CA, USA, November 2023. IEEE Computer Society. doi:10.1109/FOCS57990.2023.00130.
  • [17] Ran Duan, Jiayi Mao, Xiao Mao, Xinkai Shu, and Longhui Yin. Breaking the sorting barrier for directed single-source shortest paths, 2025. arXiv:2504.17033.
  • [18] Ran Duan and Seth Pettie. Dual-failure distance and connectivity oracles. In Proceedings of the Twentieth Annual ACM-SIAM Symposium on Discrete Algorithms, SODA ’09, pages 506–515, USA, 2009. Society for Industrial and Applied Mathematics. doi:10.1137/1.9781611973068.56.
  • [19] Ran Duan and Hanlin Ren. Maintaining exact distances under multiple edge failures. In Proceedings of the 54th Annual ACM SIGACT Symposium on Theory of Computing, STOC 2022, pages 1093–1101, New York, NY, USA, 2022. Association for Computing Machinery. doi:10.1145/3519935.3520002.
  • [20] Ran Duan and Tianyi Zhang. Improved distance sensitivity oracles via tree partitioning. In Faith Ellen, Antonina Kolokolova, and Jörg-Rüdiger Sack, editors, Algorithms and Data Structures, pages 349–360, Cham, 2017. Springer International Publishing. doi:10.1007/978-3-319-62127-2_30.
  • [21] Zvi Gotthilf and Moshe Lewenstein. Improved algorithms for the k simple shortest paths and the replacement paths problems. Information Processing Letters, 109(7):352–355, 2009. doi:10.1016/j.ipl.2008.12.015.
  • [22] Fabrizio Grandoni and Virginia Vassilevska Williams. Improved distance sensitivity oracles via fast single-source replacement paths. In 2012 IEEE 53rd Annual Symposium on Foundations of Computer Science, pages 748–757, 2012. doi:10.1109/FOCS.2012.17.
  • [23] Fabrizio Grandoni and Virginia Vassilevska Williams. Faster replacement paths and distance sensitivity oracles. ACM Trans. Algorithms, 16(1), December 2019. doi:10.1145/3365835.
  • [24] Yong Gu and Hanlin Ren. Constructing a Distance Sensitivity Oracle in O(n2.5794M) Time. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of Leibniz International Proceedings in Informatics (LIPIcs), pages 76:1–76:20, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2021.76.
  • [25] Yuzhou Gu, Adam Polak, Virginia Vassilevska Williams, and Yinzhan Xu. Faster Monotone Min-Plus Product, Range Mode, and Single Source Replacement Paths. In Nikhil Bansal, Emanuela Merelli, and James Worrell, editors, 48th International Colloquium on Automata, Languages, and Programming (ICALP 2021), volume 198 of Leibniz International Proceedings in Informatics (LIPIcs), pages 75:1–75:20, Dagstuhl, Germany, 2021. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2021.75.
  • [26] Enrico Nardelli, Guido Proietti, and Peter Widmayer. A faster computation of the most vital edge of a shortest path. Information Processing Letters, 79(2):81–85, 2001. doi:10.1016/S0020-0190(00)00175-7.
  • [27] Binghui Peng and Aviad Rubinstein. Fully-dynamic-to-incremental reductions with known deletion order (eg sliding window). In Symposium on Simplicity in Algorithms (SOSA), pages 261–271. SIAM, 2023. doi:10.1137/1.9781611977585.ch24.
  • [28] Hanlin Ren. Improved distance sensitivity oracles with subcubic preprocessing time. Journal of Computer and System Sciences, 123:159–170, 2022. doi:10.1016/j.jcss.2021.08.005.
  • [29] Ryan Williams. Faster all-pairs shortest paths via circuit complexity. In Proceedings of the Forty-Sixth Annual ACM Symposium on Theory of Computing, STOC ’14, pages 664–673, New York, NY, USA, 2014. Association for Computing Machinery. doi:10.1145/2591796.2591811.
  • [30] V. Williams, E. Woldeghebriel, and Y. Xu. Algorithms and lower bounds for replacement paths under multiple edge failure. In 2022 IEEE 63rd Annual Symposium on Foundations of Computer Science (FOCS), pages 907–918, Los Alamitos, CA, USA, November 2022. IEEE Computer Society. doi:10.1109/FOCS54457.2022.00090.
  • [31] Virginia Vassilevska Williams and R. Ryan Williams. Subcubic equivalences between path, matrix, and triangle problems. J. ACM, 65(5), August 2018. doi:10.1145/3186893.
  • [32] Virginia Vassilevska Williams, Yinzhan Xu, Zixuan Xu, and Renfei Zhou. New bounds for matrix multiplication: from alpha to omega. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3792–3835. SIAM, 2024. doi:10.1137/1.9781611977912.134.