
Near-Optimal Directed Low-Diameter Decompositions

Karl Bringmann ORCID Saarland University and Max Planck Institute for Informatics, Saarland Informatics Campus, Saarbrücken, Germany Nick Fischer INSAIT, Sofia University “St. Kliment Ohridski”, Bulgaria Bernhard Haeupler ORCID INSAIT, Sofia University “St. Kliment Ohridski”, Bulgaria
ETH Zürich, Switzerland
Rustam Latypov ORCID Aalto University, Finland
Abstract

Low Diameter Decompositions (LDDs) are invaluable tools in the design of combinatorial graph algorithms. While historically they have been applied mainly to undirected graphs, in the recent breakthrough for the negative-length Single Source Shortest Path problem, Bernstein, Nanongkai, and Wulff-Nilsen [FOCS ’22] extended the use of LDDs to directed graphs for the first time. Specifically, their LDD deletes each edge with probability at most $O(\frac{1}{D}\log^2 n)$, while ensuring that each strongly connected component in the remaining graph has a (weak) diameter of at most $D$.

In this work, we make further advancements in the study of directed LDDs. We reveal a natural and intuitive (in hindsight) connection to Expander Decompositions, and leveraging this connection along with additional techniques, we establish the existence of an LDD with an edge-cutting probability of $O(\frac{1}{D}\log n\log\log n)$. This improves the previous bound by nearly a logarithmic factor and closely approaches the lower bound of $\Omega(\frac{1}{D}\log n)$. With significantly more technical effort, we also develop two efficient algorithms for computing our LDDs: a deterministic algorithm that runs in time $\widetilde{O}(m\,\mathrm{poly}(D))$ and a randomized algorithm that runs in near-linear time $\widetilde{O}(m)$.

We believe that our work provides a solid conceptual and technical foundation for future research relying on directed LDDs, which will undoubtedly follow soon.

Keywords and phrases:
Low Diameter Decompositions, Expander Decompositions, Directed Graphs
Category:
Track A: Algorithms, Complexity and Games
Funding:
Karl Bringmann: This work is part of the project TIPEA that has received funding from the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No. 850979).
Nick Fischer: Partially funded by the Ministry of Education and Science of Bulgaria’s support for INSAIT as part of the Bulgarian National Roadmap for Research Infrastructure.
Bernhard Haeupler: Partially funded by the Ministry of Education and Science of Bulgaria’s support for INSAIT as part of the Bulgarian National Roadmap for Research Infrastructure and through the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation program (ERC grant agreement 949272).
Rustam Latypov: Supported by the Research Council of Finland (grant 334238), the Finnish Foundation for Technology Promotion, and the Nokia Foundation.
Copyright and License:
© Karl Bringmann, Nick Fischer, Bernhard Haeupler, and Rustam Latypov; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Graph algorithms analysis
Related Version:
Full Version: https://arxiv.org/abs/2502.05687
Editors:
Keren Censor-Hillel, Fabrizio Grandoni, Joël Ouaknine, and Gabriele Puppis

1 Introduction

In the design of combinatorial graph algorithms, decomposing graphs into smaller regions where problems become naturally easy to solve has emerged as a remarkably powerful paradigm. Two such graph decompositions stand out as particularly successful: Low-Diameter Decompositions (LDDs) and Expander Decompositions (EDs) (defined formally in Definition 1 and Section 3, respectively). Both decompositions remove some edges from the graph so that the remaining components either have a low diameter and thus behave nicely with respect to distance-type problems (in the case of LDDs), or are expanders which are well-suited for flow-type problems (in the case of EDs). This paradigm is particularly appealing as it typically leads to natural and efficient implementations in parallel, distributed and dynamic models, and often also to deterministic algorithms.

The vast majority of algorithmic results following this broader framework have been established for undirected graphs. However, a recently increasing number of papers have successfully applied this paradigm also to directed graphs [14, 16, 29, 22, 13], despite the serious technical difficulties that typically arise compared to the undirected setting. One celebrated example is the breakthrough near-linear time algorithm for negative-length Single-Source Shortest Paths by Bernstein, Nanongkai and Wulff-Nilsen [16]. In fact, their work was the first to define and apply Low-Diameter Decompositions in directed graphs.

Given these rapid and recent developments, it is to be expected that many new results will follow in this line of research. Consequently, anticipating a wave of researchers in need of directed Low-Diameter Decompositions, we initiate the systematic study of directed LDDs. Our main technical contribution is a near-optimal LDD – near-optimal with respect to the loss parameter and the near-linear running time. Beyond improving the state of the art, we also unveil a compelling and novel connection between directed Low-Diameter Decompositions and Expander Decompositions, bringing together two previously studied concepts that are independently well-established. We believe this connection to be the main take-away of our paper, and are confident that it will inspire further applications.

1.1 Low-Diameter Decompositions

Low-Diameter Decompositions in undirected graphs were introduced almost 40 years ago by Awerbuch [7] and have since evolved into an irreplaceable tool in the design of combinatorial algorithms [9, 10, 8, 32, 11, 17, 33, 34, 26, 21, 15, 27, 16]. More generally, LDDs have been utilized in the development of diverse graph-theoretic structures such as oblivious routings [37], hopsets, neighborhood covers, and notably probabilistic tree embeddings [11, 12, 25] (where the rough goal is to approximate an arbitrary (graph) metric by a simpler tree metric with polylogarithmic stretch) and low-stretch spanning trees [4, 24, 1, 31, 3, 2]. These structures in turn have led to further applications in approximation algorithms, online algorithms and network design problems [11, 18, 28].

While the precise definitions of LDDs differed at first, all of them had in common that the graph was decomposed by removing few edges (on average) so that the remaining connected components have bounded diameter. More precisely, the modern definition is that an LDD is a probability distribution over edge cuts that cuts each individual edge with probability at most $L/D$ (for some small loss factor $L$) so that the resulting connected components have diameter at most $D$. For undirected graphs, the best-possible loss factor turns out to be $L = \Theta(\log n)$ with natural matching upper and lower bounds (see e.g. [11, 25] and Footnote 1).

Only very recently, low-diameter decompositions debuted in directed graphs. They played a key role in the near-linear time algorithm for the Single-Source Shortest Paths problem in graphs with negative edge lengths by Bernstein, Nanongkai and Wulff-Nilsen [16] (following the paradigm outlined before). Curiously, before their seminal work, low-diameter decompositions for directed graphs were mostly unexplored – to the extent that it was even unclear what the definition should be. An even more recent application of directed LDDs is in the context of restricted (i.e., bicriteria) shortest paths [6].

Definition 1 (Directed Low-Diameter Decomposition).

A directed low-diameter decomposition with loss $L$ for a directed edge-weighted graph $G=(V,E,\ell)$ and a parameter $D \ge 1$ is a probability distribution over edge sets $S \subseteq E$ that satisfies the following two properties:

  • For any two nodes $u,v \in V$ that are part of the same strongly connected component in $G \setminus S$, we have $d_G(u,v) \le D$ and $d_G(v,u) \le D$.

  • For all edges $e \in E$, it holds that $\Pr(e \in S) \le \frac{\ell(e)}{D} \cdot L$.

That is, for directed graphs we require that all remaining strongly connected components have diameter at most $D$ in the sense that each pair of nodes is at distance at most $D$ in the original uncut graph (this property is also called a “weak” diameter guarantee; see more details in the paragraph towards the end of Section 1.1).

In light of this definition, it is natural to study directed LDDs with respect to two objectives: Minimizing the loss factor L, and computing the LDD efficiently. Both of these requirements typically translate immediately to algorithmic improvements (e.g., the loss factor L typically becomes a factor in the running time of algorithmic applications).

Quest 1: Minimizing the Loss L

It is easy to see that a loss factor of $L \ge 1$ is necessary: Consider a graph consisting of disjoint copies of $(D+1)$-cycles (of unit length, say). Any LDD is forced to cut at least one edge from every cycle, and thus some edges are deleted with probability at least $\frac{1}{D+1}$. In fact, from the same lower bound as for undirected graphs one can show that $L \ge \Omega(\log n)$ is necessary¹ for some directed graphs.

¹ Specifically, using e.g. the probabilistic method, one can construct an undirected $n$-node graph $G$ with the following two properties: (i) the graph contains $cn$ edges (for some constant $c > 1$), and (ii) the girth of $G$ (i.e., the length of the shortest cycle) is at least $g = \Omega(\log n)$. Such a graph constitutes an obstruction against undirected LDDs with parameter $D = \frac{g}{2} - 1$, say. Indeed, all connected components that remain after the LDD cuts edges cannot contain cycles (as any cycle contains two nodes at distance at least $\frac{g}{2} > D$). This means that all connected components are trees, and thus the remaining graph is a forest containing at most $n-1$ edges. Therefore, the LDD has cut at least $cn - n = \Omega(n)$ edges. In particular, the per-edge cutting probability of some edges must be $\Omega(1) = \Omega(\log n / D)$. The same argument applies for directed LDDs by taking the same undirected graph and making all edges bidirected.

Conversely, Bernstein, Nanongkai and Wulff-Nilsen [16] showed that a loss of $O(\log^2 n)$ can be achieved. Their work leaves open whether this factor can be improved, possibly to $O(\log n)$ which would match the existing unconditional lower bound, or whether $\Omega(\log^2 n)$ is necessary.

Quest 2: Efficient Algorithms

To be useful in algorithmic applications, it is necessary to be able to compute the LDD (or, formally speaking, to be able to sample from the LDD efficiently). This is a non-negligible problem as many graph decompositions are much simpler to prove existentially than constructively and efficiently – for instance, for Expander Decompositions there is a stark contrast between the extremely simple existential proof and the involved efficient algorithms based on cut-matching games [35, 30]. The $O(\log^2 n)$-loss LDD due to [16] is of course implementable by an efficient algorithm.

Side Quest: Weak versus Strong

Another dimension is the distinction between “weak” and “strong” diameter guarantees. Specifically, Definition 1 requires the weak guarantee by requiring that any two nodes in the same strongly connected component are at distance at most $D$ in the original graph $G$. The strong version instead requires that each strongly connected component in $G \setminus S$ has diameter at most $D$ in the graph $G \setminus S$. While strong LDDs give a theoretically more appealing guarantee, for most algorithmic applications it turns out that weak LDDs suffice. The LDD developed by Bernstein, Nanongkai and Wulff-Nilsen [16] has a weak guarantee, but follow-up work [19] later extended their results to a strong LDD with cubic loss $O(\log^3 n)$.

In this paper, we will mostly ignore the distinction between weak and strong and follow Definition 1 as it is. We remark however that all of our existential results do in fact have the strong diameter guarantee.

2 Results and Technical Overview

In a nutshell, our result is that we establish a directed LDD with near-optimal loss $O(\log n \log\log n)$. This comes tantalizingly close to the unconditional lower bound of $\Omega(\log n)$. It resembles a similar milestone in the development of probabilistic tree embeddings [12], and also the current state of the art for low-stretch spanning trees [3]. In fact, similar $\log\log n$ barriers show up also in different contexts for directed graphs [20]. In a sense, all of these results with $\log\log n$ overhead (including ours) apply a careful recursive approach that can be traced back to work by Seymour [36] (though with strongly varying implementations depending on the setting).

We state our results in three theorems (Theorems 2, 8, and 9): the first is purely existential, the latter two are algorithmic, and the last has near-linear running time.

2.1 Near-Optimal LDDs via Expander Decompositions

The main conceptual contribution of our paper is that Low-Diameter Decompositions are closely and in a very practical way related to Expander Decompositions. Based on this insight, we prove the following theorem.

Theorem 2.

For every directed graph there exists a directed LDD with loss $O(\log n \log\log n)$.

For the remainder of Section 2.1 we will elaborate on the proof of Theorem 2. It involves two separate steps: reducing the problem to cost minimization using the Multiplicative Weight Update method, and then showing that a so-called lopsided expander decomposition is the desired cost-minimizer. We emphasize that in Section 2.1 we only focus on Quest 1 from the introduction, which is minimizing the loss $L$ of an LDD. Besides, the statement of Theorem 2 applies in fact to strong LDDs, though for simplicity we will not focus on this distinction in the overview.

Reduction to Unit Lengths

Let us assume throughout that we only deal with unit-length graphs (i.e., where $\ell(e) = 1$ for all edges). This assumption is in fact without loss of generality, as we can simply replace each edge $e$ of length $\ell(e)$ by a path of $\ell(e)$ unit-length edges. This transformation may blow up the graph, but as we focus on the existential result first this shall not concern us.² Whenever the LDD cuts at least one of the path edges in the transformed graph, we imagine that the entire edge in the original graph is cut. This way, when we cut each edge with probability at most $\frac{L}{D}$, in the original graph we cut each edge with probability at most $\frac{\ell(e) \cdot L}{D}$ (by a union bound) as required.

² We are assuming throughout that all edge weights are bounded by $\mathrm{poly}(n)$, therefore this transformation leads to a graph on at most $n_0 \le \mathrm{poly}(n)$ nodes. In particular, to achieve a loss of $L = O(\log n \log\log n)$ on the original graph it suffices to achieve a loss of $O(\log n_0 \log\log n_0)$ on the transformed graph.
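To make this reduction concrete, here is a minimal Python sketch of the subdivision and of lifting a cut back to the original graph (the edge-list representation and the helper names subdivide/lift_cut are our own assumptions, not taken from the paper):

def subdivide(n, edges):
    """Replace each edge e of integer length l(e) >= 1 by a path of l(e) unit-length
    edges. Returns the new node count, the unit-length edge list, and a map from
    each new edge index back to the index of the original edge it came from."""
    new_edges, origin = [], []
    next_node = n
    for idx, (u, v, length) in enumerate(edges):
        prev = u
        for _ in range(length - 1):        # introduce length-1 fresh path nodes
            new_edges.append((prev, next_node))
            origin.append(idx)
            prev, next_node = next_node, next_node + 1
        new_edges.append((prev, v))        # the final unit edge reaches v
        origin.append(idx)
    return next_node, new_edges, origin

def lift_cut(cut_indices, origin):
    """A cut in the subdivided graph is lifted by cutting every original edge
    whose path contains at least one cut edge (as described above)."""
    return {origin[i] for i in cut_indices}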

Reduction to Cost-Minimization

Perhaps it appears surprising that we claim a connection between LDDs – which are inherently probabilistic objects – and EDs – which rather have a deterministic flavor. As a first step of bringing these notions together consider the following lemma based on the well-studied Multiplicative Weight Update [5] method.

Lemma 3 (Multiplicative Weight Update).

Let $G=(V,E)$ be a directed graph, and suppose that for all cost functions $c \colon E \to [|V|^{10}]$ there is a set of cut edges $S \subseteq E$ satisfying the following two properties:

  • For any two nodes $u,v \in V$ that are part of the same strongly connected component in $G \setminus S$, we have $d_G(u,v) \le D$ and $d_G(v,u) \le D$.

  • $c(S) \le c(E) \cdot \frac{L}{D}$.

Then there exists a directed LDD for G with loss O(L).

Intuitively, the lemma states that in order to construct an LDD that cuts each edge with small probability L/D, it suffices to instead find a cut which globally minimizes the total cost of the cut edges while ensuring that every remaining strongly connected component has diameter at most D. This however comes with the price of introducing a cost function c to the graph, and the goal becomes to cut edges which collectively have at most an L/D fraction of the total cost. For the reader’s convenience, we include a quick proof sketch of Lemma 3.

Proof Sketch.

Assign a unit cost to every edge in the graph, and consider the following iterative process. In each iteration we find a set of cut edges $S_i$ with the desired guarantees, and adjust the costs of the edges multiplicatively by doubling the cost of every edge $e \in S_i$. Repeat this process for $R = 100 \ln|E| \cdot D/L$ iterations. We claim that the uniform distribution over the cut sets $S_i$ encountered throughout is an LDD as required.

Clearly the diameter condition holds for each set $S_i$, but it remains to bound the edge-cutting probability. Each iteration doubles the cost of every cut edge, and hence the total cost increases by at most $c(S_i) \le c(E) \cdot \frac{L}{D}$, i.e., by a factor of $1 + \frac{L}{D}$. After all $R$ repetitions the total cost is at most $|E| \cdot (1 + L/D)^R < |E| \cdot |E|^{100} = |E|^{101}$. It follows that each edge is cut in at most $101 \log|E|$ iterations as otherwise its cost alone would already be more than $|E|^{101}$. Thus, the probability of cutting any fixed edge in a randomly sampled cut set $S_i$ is at most $101 \log|E| / R = O(L/D)$.

Henceforth, we refer to the process of finding a cut with the properties stated in Lemma 3 as a cost-minimizer, which we aim to construct in the following paragraphs. That is, we now also consider a cost function $c$, and our goal is to cut edges $S$ of total cost $c(S) \le c(E) \cdot \frac{L}{D}$ (without worrying about a per-edge guarantee), while ensuring that the strongly connected components in $G \setminus S$ have diameter at most $D$. We remark that this cost-minimization framework is quite standard.
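The reduction of Lemma 3 is easy to phrase in code. The following Python sketch assumes access to a cost_minimizer oracle (a hypothetical callable supplied by the caller) that, given the current costs, returns a cut $S$ with $c(S) \le c(E)\cdot L/D$ such that all strongly connected components of $G\setminus S$ have weak diameter at most $D$; the LDD is then the uniform distribution over the returned cut sets.

import math

def mwu_ldd(edges, D, L, cost_minimizer):
    """Multiplicative Weight Update loop from the proof sketch of Lemma 3.
    `edges` is a list of edge identifiers; `cost_minimizer(costs)` is the
    assumed oracle described above. Returns the list of cut sets."""
    costs = {e: 1.0 for e in edges}                      # start with unit costs
    R = math.ceil(100 * math.log(len(edges)) * D / L)    # number of rounds
    cut_sets = []
    for _ in range(R):
        S = cost_minimizer(costs)                        # cost-minimizing cut for current costs
        for e in S:
            costs[e] *= 2                                # double the cost of every cut edge
        cut_sets.append(S)
    return cut_sets                                      # sample one uniformly to draw from the LDD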

Directed Expander Decomposition

In a leap forward, let us rename the costs c to capacities c. Our idea is simple: We want to use an Expander Decomposition to remove edges of small total capacity so that all strongly connected components in the graph become expanders and thus, in particular, have small diameter.

To make this more precise, we first introduce some (standard) notation. A node set $U \subseteq V$ naturally induces a cut (between $U$ and $\bar U = V \setminus U$). We write $c(U, \bar U)$ for the total capacity of edges crossing the cut from $U$ to $\bar U$, and define the volume of $U$ as

\[ \mathrm{vol}(U) = c(U, V) = \sum_{e \in E \cap (U \times V)} c(e), \]

and also write $\mathrm{minvol}(U) = \min\{\mathrm{vol}(U), \mathrm{vol}(\bar U)\}$. The sparsity of the cut $U$ is defined by

\[ \phi(U) = \frac{c(U, \bar U)}{\mathrm{minvol}(U)}. \]

We say that $U$ is $\phi$-sparse if $\phi(U) \le \phi$, and we call a graph without $\phi$-sparse cuts a $\phi$-expander. The standard Expander Decomposition can be stated as follows.

Lemma 4 (Directed Expander Decomposition).

Let $\phi > 0$. For any directed graph $G = (V, E, c)$ there is an edge set $S \subseteq E$ of total capacity $c(S) \le c(E) \cdot \phi \cdot \log c(E)$ such that every strongly connected component in the remaining graph $G \setminus S$ is a $\phi$-expander.

Moreover, an important property of a ϕ-expander decomposition is that the diameter of its strongly connected components depends on ϕ in the following manner.

Lemma 5.

Any $\phi$-expander has diameter $O(\phi^{-1} \log \mathrm{vol}(V))$.

To understand the intuition behind Lemma 5, imagine that we grow a ball (i.e., a breadth-first search tree) around some node. With each step, the expansion property (related to the absence of $\phi$-sparse cuts) guarantees that we increase the explored capacity by a factor of $(1 + \phi)$; thus after $O(\phi^{-1} \log \mathrm{vol}(V))$ steps we have explored essentially the entire graph.
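Making this ball-growing intuition explicit (a back-of-the-envelope calculation of ours, assuming integer capacities so that the initial ball has volume at least 1, and valid as long as the ball still contains at most half of the total volume):

\[
  \mathrm{vol}(B^+(v,t)) \;\ge\; (1+\phi)^t,
  \qquad\text{so } \mathrm{vol}(B^+(v,t)) \ge \mathrm{vol}(V) \text{ already for }
  t \;=\; \frac{\ln \mathrm{vol}(V)}{\ln(1+\phi)} \;=\; O\bigl(\phi^{-1}\log \mathrm{vol}(V)\bigr).
\]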

Already at this point we have made significant progress and can recover the $O(\log^2 n)$-loss LDD: Apply the Expander Decomposition in Lemma 4 with parameter $\phi = \log \mathrm{vol}(V) / D$. This removes edges with total capacity at most an $O(\log^2 \mathrm{vol}(V) / D)$-fraction of the total capacity. In the remaining graph each strongly connected component is a $\phi$-expander and thus, by Lemma 5, has diameter at most $O(\phi^{-1} \log \mathrm{vol}(V)) = O(D)$ (by choosing the constants appropriately, this bound can be adjusted to $D$). Finally, Lemma 3 turns this into an LDD with loss $O(\log^2 \mathrm{vol}(V)) = O(\log^2 n)$.

Unfortunately, both log-factors in the Expander Decomposition (fraction of the cut capacity and diameter) are tight. Nevertheless, with some innovation we manage to bypass these bounds and improve the $O(\log^2 n)$ loss. To this end we propose a refined notion of expanders called lopsided expanders.

Lopsided Expander Decomposition

The notion of $\phi$-sparsity defined above is oblivious to the ratio between $\mathrm{vol}(V)$ and $\mathrm{minvol}(U)$. For example, two cuts $U$ and $W$ can have the same sparsity, even though $\mathrm{vol}(U) \approx \mathrm{vol}(\bar U)$ while $\mathrm{vol}(W) \ll \mathrm{vol}(\bar W)$. It turns out that this leaves some unused potential, and that we should incentivize cutting cuts with large volume on both sides compared to more lopsided cuts with large volume only on one side.

Formally, we define ψ-lopsided sparsity of a cut U as

\[ \psi(U) = \frac{c(U, \bar U)}{\mathrm{minvol}(U) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{minvol}(U)}}, \]

where we include the ratio $\frac{\mathrm{vol}(V)}{\mathrm{minvol}(U)}$ in the denominator. Since $\mathrm{minvol}(U) \le \frac{1}{2}\mathrm{vol}(V)$, the logarithmic factor is at least $1$, and hence a cut can only have smaller lopsided sparsity than regular sparsity; in particular, a graph with no sparse cuts may still have lopsided sparse cuts. A $\psi$-lopsided expander is defined as a graph with no $\psi$-lopsided sparse cuts, and can be thought of as a subclass of expanders which in addition to having no sparse cuts also has no cuts that are both sufficiently lopsided and sufficiently sparse.
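The two sparsity notions are easy to compute directly from the definitions. The following Python sketch does so for a small capacitated graph (assumptions on our part: the graph is a dict cap mapping directed edges (u, v) to positive capacities, nodes is the node set, and the logarithm is taken in base 2; the helper names are ours):

import math

def vol(cap, U):
    """vol(U) = c(U, V): total capacity of edges whose tail lies in U."""
    return sum(c for (u, _), c in cap.items() if u in U)

def cut_capacity(cap, U, W):
    """c(U, W): total capacity of edges going from U to W."""
    return sum(c for (u, v), c in cap.items() if u in U and v in W)

def sparsity(cap, nodes, U):
    """phi(U) = c(U, U-bar) / minvol(U), guarding the degenerate cases as in Section 3."""
    if not U:
        return 1.0
    minvol = min(vol(cap, U), vol(cap, nodes - U))
    return 0.0 if minvol == 0 else cut_capacity(cap, U, nodes - U) / minvol

def lopsided_sparsity(cap, nodes, U):
    """psi(U) = c(U, U-bar) / (minvol(U) * log(vol(V) / minvol(U)))."""
    if not U:
        return 1.0
    minvol = min(vol(cap, U), vol(cap, nodes - U))
    if minvol == 0:
        return 0.0
    return cut_capacity(cap, U, nodes - U) / (minvol * math.log2(vol(cap, nodes) / minvol))

For any cut, lopsided_sparsity never exceeds sparsity, matching the discussion above.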

The lopsided expander decomposition is otherwise identical to the standard expander decomposition defined previously, except that every strongly connected component is required to be a ψ-lopsided expander instead of a ϕ-expander. Our Lopsided Expander Decomposition has the same global “total capacity cut” guarantee as the standard expander decomposition, as stated in the following lemma coupled with a proof sketch.

Lemma 6 (Lopsided Expander Decomposition).

Let $\psi > 0$. For any directed graph $G = (V, E, c)$ there is an edge set $S \subseteq E$ of total capacity $c(S) \le c(E) \cdot \psi \cdot \log c(E)$ such that every strongly connected component in the remaining graph $G \setminus S$ is a $\psi$-lopsided expander.

Proof Sketch.

Consider the following algorithm: If there are no $\psi$-lopsided sparse cuts, then the graph is already a $\psi$-lopsided expander and we can stop. Otherwise, we cut a $\psi$-lopsided sparse cut (add the cut edges to $S$), and recurse on the remaining strongly connected components. It is clear that this eventually produces a $\psi$-lopsided expander decomposition. In order to prove that the cut edges have capacity at most $c(E) \cdot \psi \cdot \log c(E)$, we use a potential argument. We assign to each edge $e$ a potential of $c(e) \cdot \log c(E)$. Throughout the procedure we maintain the invariant that each edge holds a potential of at least $c(e) \cdot \log \mathrm{vol}(C)$, where $C$ is the strongly connected component containing edge $e$. When cutting a $\psi$-lopsided sparse cut $(U, \bar U)$ in a component $C$, an edge $e$ on the smaller side by volume (say $U$) suddenly needs to hold a potential of only $c(e) \cdot \log \mathrm{vol}(U)$ instead of $c(e) \cdot \log \mathrm{vol}(C)$. In particular, the amount of excess potential we have freed is

\[ \sum_{e \in E \cap (U \times V)} c(e) \cdot \bigl(\log \mathrm{vol}(C) - \log \mathrm{vol}(U)\bigr) = \sum_{e \in E \cap (U \times V)} c(e) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)} = \mathrm{vol}(U) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)}. \]

Since the cut is ψ-lopsided sparse it follows that the total capacity of the cut edges is at most

\[ c(U, \bar U) \le \psi \cdot \mathrm{vol}(U) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)}. \]

Thus, we can afford to donate to each cut edge $e \in S$ a potential of $c(e)/\psi$ while maintaining our invariant. Since the total potential in the graph is $c(E) \cdot \log c(E)$ and every cut edge $e \in S$ receives a potential of at least $c(e)/\psi$, we finally have that $c(E) \cdot \log c(E) \ge c(S)/\psi$ and the claim follows.
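The recursive procedure from this proof sketch can be written down schematically as follows (a sketch under our own assumptions: find_lopsided_sparse_cut(cap, comp, psi) is a hypothetical oracle returning a node set U inside comp with lopsided sparsity at most psi, or None if comp is a ψ-lopsided expander; sccs(comp, edges) returns the strongly connected components of comp with respect to the given edges; vol is the helper sketched above):

def lopsided_expander_decomposition(cap, nodes, psi, find_lopsided_sparse_cut, sccs):
    """Repeatedly cut a psi-lopsided sparse cut and recurse on the remaining SCCs."""
    cut_edges = set()
    stack = [frozenset(nodes)]
    while stack:
        comp = stack.pop()
        U = find_lopsided_sparse_cut(cap, comp, psi)
        if U is None:
            continue                                  # comp is already a psi-lopsided expander
        # remove the edges leaving the side of smaller volume, as in the proof
        src = U if vol(cap, U) <= vol(cap, comp - U) else comp - U
        dst = comp - src
        cut_edges |= {(u, v) for (u, v) in cap if u in src and v in dst}
        remaining = [(u, v) for (u, v) in cap
                     if u in comp and v in comp and (u, v) not in cut_edges]
        for scc in sccs(comp, remaining):             # recurse on every remaining SCC
            stack.append(frozenset(scc))
    return cut_edges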

The main motivation for defining $\psi$-lopsided sparsity and taking the ratio $\frac{\mathrm{vol}(V)}{\mathrm{minvol}(U)}$ into account lies in the following lemma: Compared to standard expanders, lopsided expanders only suffer a $\log\log$-factor in the diameter.

Lemma 7.

Any $\psi$-lopsided expander has diameter $O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V))$.

Proof Sketch.

The proof is similar in spirit to the proof for standard expanders in Lemma 5, for which we restate the intuition for clarity. Imagine growing a ball (i.e., a breadth-first search tree) around some node. With each step, the expansion property (related to the absence of $\phi$-sparse cuts) guarantees that we increase the explored capacity by a factor of $(1 + \phi)$; thus after $O(\phi^{-1} \log \mathrm{vol}(V))$ steps we have explored essentially the entire graph.

The difficulty in adopting the same proof for $\psi$-lopsided expanders stems from the fact that the $\psi$-lopsided sparsity of a cut $U \subseteq V$ depends on the volume of $U$, so the expansion of the ball around a node differs in every step; e.g., if the volume of the ball around some node is constant, after one ball-growing step it becomes $O(\psi \log \mathrm{vol}(V))$. We resolve this challenge by analyzing the number of steps needed for the volume of the ball to grow from $\mathrm{vol}(V)/2^{i+1}$ to $\mathrm{vol}(V)/2^i$. This turns out to be roughly $(\psi \cdot i)^{-1}$, implying that after

\[ \sum_{i=1}^{\log \mathrm{vol}(V)} \bigl\lceil (\psi \cdot i)^{-1} \bigr\rceil = O\bigl(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V)\bigr) \]

steps we have explored essentially the entire graph.

Putting these two lemmas for lopsided expanders together, we indeed obtain the low-loss LDD in Theorem 2. Specifically, we apply the Lopsided Expander Decomposition from Lemma 6 with parameter $\psi = \log\log \mathrm{vol}(V) / D$. We cut only an $O(\log \mathrm{vol}(V) \log\log \mathrm{vol}(V) / D)$-fraction of the total capacity, and end up with a graph in which every strongly connected component is a $\psi$-lopsided expander. By Lemma 7 said components have diameter $O(\psi^{-1} \log\log \mathrm{vol}(V)) = O(D)$ (which, again, can be made $D$ by adjusting constants). Plugging this procedure into Lemma 3 we conclude that there is an LDD with loss $O(\log \mathrm{vol}(V) \log\log \mathrm{vol}(V)) = O(\log n \log\log n)$, completing the proof sketch of Theorem 2.
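For concreteness, the parameter bookkeeping behind this paragraph in a single display (just restating Lemmas 6 and 7 with the chosen $\psi$ and with constants suppressed; note that $c(E) = \mathrm{vol}(V)$, so the $\log c(E)$ factor of Lemma 6 equals $\log \mathrm{vol}(V)$):

\[
  \psi = \frac{\log\log \mathrm{vol}(V)}{D}
  \quad\Longrightarrow\quad
  c(S) \le c(E)\cdot\psi\cdot\log \mathrm{vol}(V)
       = c(E)\cdot\frac{\log \mathrm{vol}(V)\cdot\log\log \mathrm{vol}(V)}{D},
  \qquad
  \psi^{-1}\cdot\log\log \mathrm{vol}(V) = D.
\]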

2.2 A Deterministic Algorithm

So far we have neglected Quest 2, i.e., the design of efficient algorithms. But how far from algorithmic is this approach of Section 2.1 really? It turns out that implementing this framework with some simple tricks leads to the following algorithmic result.

Theorem 8.

For every directed graph there exists a directed LDD with loss $O(\log n \log\log n)$ and support size $O(D \log n)$ that can be computed in time $\widetilde{O}(m D^2)$ by a deterministic algorithm.

This theorem comes with a strength and a weakness – the strength is that the algorithm is deterministic. Note that the algorithm produces the explicit support of an LDD (of size $O(D \log n)$); in fact, the probability distribution is simply a uniform distribution over a given list of cut sets $S$. For this reason, our algorithm might lead to some derandomized applications down the road by executing an LDD-based algorithm one-by-one for all $O(D \log n)$ cut sets. Derandomizations of this sort are common and sought after, especially in distributed and parallel domains. The weakness is that the running time has an overhead of $\mathrm{poly}(D)$. We remark that almost all algorithmic applications of LDDs (with the notable exceptions of [16, 6]) set $D = \mathrm{polylog}(n)$ or $D = n^{o(1)}$, in which case the overhead is not dramatic, and the runtime is possibly near-linear.

Proof Idea

It can be quickly verified that implementing the Multiplicative Weights Update method from Lemma 3 is not a problem algorithmically, and requires $O(R \cdot T)$ time, where $R = O(\log|E| \cdot D/L)$ and $T$ is the time required by the cost-minimizer. Hence, only computing the Lopsided Expander Decomposition could be costly. Since this is a strictly stronger notion than the standard Expander Decomposition, at first glance it appears hopeless that we could achieve a simple algorithm that avoids the cut-matching games machinery. Luckily, we find yet another simple insight that allows us to find a simple and intuitive algorithm after all: While it may generally be hard to find a sparse cut (even NP-hard!), in our case we only ever need to find a sparse cut in a graph with diameter more than $D$. Indeed, if the graph already has diameter at most $D$, we can stop immediately (recall that we ultimately only care about the diameter and not the expander guarantee). One way to view the algorithm is that we construct a truncated $\psi$-lopsided expander decomposition, where we terminate prematurely if the diameter is small.

Fueled by this insight, consider the following two-phase process for computing a (truncated) ψ-lopsided expander decomposition:

  • Phase (I): We repeatedly take an arbitrary node $v$ and grow a ball around $v$ in the hope of finding a $\psi$-lopsided sparse cut. By choosing $\psi = \Theta(\log\log \mathrm{vol}(V) / D)$ we can guarantee two possible outcomes:

    (a) We find a sparse cut at radius at most $\frac{D}{4}$, say, in which case we cut the edges crossing the cut and recur on both sides (as in the Lopsided Expander Decomposition).

    (b) The problematic case happens: the volume of the ball reaches $\frac{3}{4}\mathrm{vol}(V)$. In this case we have spent linear time but made no progress in splitting off the graph, so we remember $v$ and move to Phase (II).

  • Phase (II): We compute a buffer zone around $v$, which consists of all nodes with distance at most $\frac{D}{2}$ to and from $v$. We repeatedly take an arbitrary node $u$ from outside this buffer zone and grow a ball around $u$ in the hope of finding a $\psi$-lopsided sparse cut. Because we know that the $\frac{D}{4}$-radius ball around $v$ contains most of the volume of the graph, and node $u$ is picked outside of the $\frac{D}{2}$-radius buffer zone, we can prove (using similar techniques to the proof sketch of Lemma 7) that we will always find a $\psi$-lopsided sparse cut $U$ around $u$. Upon finding it, we cut the edges crossing the cut $(U, \bar U)$ and start Phase (I) for the graph induced by nodes in $U$, while continuing Phase (II) in the graph $G \setminus U$.

    When eventually there are no nodes left outside the buffer zone, we stop, since the remaining graph has diameter at most $D$.

The correctness of the procedure above is straightforward to show: we are essentially constructing a $\psi$-lopsided expander decomposition as in Lemma 6, but terminating prematurely if the diameter is small. Hence, the total capacity of the cut edges is at most the total capacity of the cut edges in a “complete” $\psi$-lopsided expander decomposition, which is an $O(\log \mathrm{vol}(V) \log\log \mathrm{vol}(V) / D)$-fraction of the total capacity. The diameter guarantee follows directly from the stopping condition. By plugging this procedure into the algorithmic version of Lemma 3 we arrive at an LDD with loss $O(\log \mathrm{vol}(V) \log\log \mathrm{vol}(V)) = O(\log n \log\log n)$ in time $\widetilde{O}(m \cdot \mathrm{poly}(D))$, completing the proof sketch of Theorem 8.
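A schematic Python sketch of this two-phase process, in the same cap/nodes representation as the earlier sketches (our assumptions: grow_ball(cap, comp, v, psi, D) is a hypothetical routine that either returns ("cut", U) for a ψ-lopsided sparse cut U of radius at most D/4 around v, or ("big", None) once the ball around v holds at least 3/4 of comp's volume; buffer_zone(cap, comp, v, D) returns the nodes of comp within distance D/2 both to and from v):

def truncated_decomposition(cap, nodes, D, psi, grow_ball, buffer_zone):
    """Two-phase truncated psi-lopsided expander decomposition (schematic)."""
    cut_edges = set()
    pending = [frozenset(nodes)]                     # components awaiting Phase (I)
    while pending:
        comp = pending.pop()
        if not comp:
            continue
        v = next(iter(comp))                         # Phase (I): arbitrary start node
        kind, U = grow_ball(cap, comp, v, psi, D)
        if kind == "cut":                            # outcome (a): cut and recurse on both sides
            cut_edges |= {(x, y) for (x, y) in cap if x in U and y in comp - U}
            pending += [frozenset(U), frozenset(comp - U)]
            continue
        while True:                                  # outcome (b): Phase (II) around the heavy node v
            outside = comp - buffer_zone(cap, comp, v, D)
            if not outside:
                break                                # remaining component has weak diameter <= D
            u = next(iter(outside))
            kind, U = grow_ball(cap, comp, u, psi, D)
            assert kind == "cut"                     # the analysis shows Phase (II) always finds a cut
            cut_edges |= {(x, y) for (x, y) in cap if x in U and y in comp - U}
            pending.append(frozenset(U))             # restart Phase (I) inside U
            comp = comp - U                          # continue Phase (II) on the rest
    return cut_edges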

2.3 A Near-Optimal Randomized Algorithm

Our third and final contribution is achieving a near-linear running time, regardless of the magnitude of the diameter, by a randomized algorithm:

Theorem 9.

For every directed graph there exists a directed LDD with loss $O(\log n \log\log n)$ which can be computed (i.e., sampled from) in expected time $O(m \log^5 n \log\log n) = \widetilde{O}(m)$.

In contrast to Theorems 2 and 8, our approach to Theorem 9 is rather technical. We try more directly to extend the ideas of Bernstein, Nanongkai and Wulff-Nilsen [16] (which in turn borrow ideas from [15]). On a very high level, their algorithm classifies a node $v$ as heavy if it reaches more than $\frac{n}{2}$ nodes within distance $\frac{D}{2}$, and as light otherwise. We can afford to cut around light nodes $v$ (with a specifically sampled radius $r \le D$), and recur on both sides of the cut – since $v$ is light, the inside of the cut reduces by a constant fraction which is helpful in bounding the recursion depth. And if there are only heavy nodes in the graph then we can stop as the diameter is already at most $D$ – indeed, the radius-$\frac{D}{2}$ balls around heavy nodes must necessarily intersect. The radius is sampled from a geometric distribution with rate $O(\log n / D)$ inspired by classical LDDs in undirected graphs [11]. The two logarithmic factors stem from (i) this $\log n$ overhead in the sampling rate, and (ii) the logarithmic recursion depth.

Our approach refines these ideas by classifying nodes not into two classes – heavy or light – but into $\log\log n$ different levels. We essentially say that a node $v$ is at level $\ell$ if it reaches roughly $n/2^{2^\ell}$ nodes within some distance roughly $D$. We can take advantage of this in two ways: At the one extreme, we can only cut around few nodes of small level $\ell$. Indeed, when we cut around a node at level $\ell$ we remove roughly $n/2^{2^\ell}$ nodes from the graph, so this can be repeated at most $2^{2^\ell} \ll n$ times. For these levels we can afford to sample the radius in the geometric distribution with a significantly smaller rate, $O(2^\ell / D)$, thereby improving (i). At the other extreme, whenever we cut around nodes with large level $\ell$ then in the recursive call on the inside of the cut there are only $n/2^{2^\ell} \ll n$ nodes. This effectively reduces the recursion depth of these levels to much less than $\log n$, improving (ii). In our algorithm, we find a balance between these two extreme cases.

Unfortunately, while this idea works out existentially, there are several issues when attempting to implement this approach in near-linear time. The first issue – initially classifying which nodes are at what level in time $\widetilde{O}(m)$ – can be solved by an algorithm due to Cohen [23]. However, over the course of the algorithm, when we repeatedly remove nodes and edges from the graph, the level of a node might change. Our solution involves a cute trick: First we prove that during the execution of our algorithm the level of a node can only increase (and never decrease). Second, instead of cutting around an arbitrary node $v$, we always pick a random node. The argument is as follows: If the classification is still correct for at least half of the nodes, then in each step we make progress with probability $\frac{1}{2}$. And otherwise we can in fact afford to reclassify by Cohen’s algorithm, as the level of a node can only increase a small number of times, leading to only few repetitions of Cohen’s algorithm in total.
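The following toy Python sketch illustrates the two ingredients described above: a level classification by reach counts, and the level-dependent geometric radius sampling. It is a simplification under our own assumptions (exact reach counts are taken as given, whereas the actual algorithm estimates them with Cohen's size-estimation framework [23], and the constants are arbitrary):

import math
import random

def level_of(reach_count, n):
    """Smallest level l >= 1 with reach_count >= n / 2**(2**l), capped at ~loglog n."""
    max_level = max(1, math.ceil(math.log2(max(2.0, math.log2(max(2, n))))))
    for l in range(1, max_level + 1):
        if reach_count >= n / 2 ** (2 ** l):
            return l
    return max_level

def sample_radius(level, D):
    """Geometric radius with a level-dependent rate ~ 2**level / D, capped at D.
    This refines the single rate ~ log(n)/D used for the O(log^2 n)-loss LDD."""
    rate = min(1.0, 2 ** level / D)
    r = 1
    while r < D and random.random() >= rate:
        r += 1
    return r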

We omit further details here and refer to the technical sections of the full paper version.

3 Preliminaries

Before diving into the technical results, we state the basic graph notations used throughout the paper and recap the new non-standard definitions we have introduced throughout Section 2.

Graphs

Throughout we consider directed simple graphs $G = (V, E)$, where $E \subseteq V^2$, with $n = |V|$ nodes and $m = |E|$ edges. The edges of the graph can be associated with some value: a length $\ell(e)$ or a capacity/cost $c(e)$, all of which we require to be positive. For any $U \subseteq V$, we write $\bar U = V \setminus U$. Let $G[U]$ be the subgraph induced by $U$. We denote with $\delta^+(U)$ the set of edges that have their starting point in $U$ and endpoint in $\bar U$. We define $\delta^-(U)$ symmetrically. We also sometimes write $c(S) = \sum_{e \in S} c(e)$ (for a set of edges $S$) or $c(U, W) = \sum_{e \in E \cap (U \times W)} c(e)$ and $c(U) = c(U, U)$ (for sets of nodes $U, W$).

The distance between two nodes $v$ and $u$ is written $d_G(v, u)$ (throughout we consider only the length functions to be relevant for distances). We may omit the subscript if it is clear from the context. The diameter of the graph is the maximum distance between any pair of nodes. For a subgraph $G'$ of $G$ we occasionally say that $G'$ has weak diameter $D$ if for all pairs of nodes $u, v$ in $G'$, we have $d_G(u, v), d_G(v, u) \le D$. A strongly connected component in a directed graph $G$ is a subgraph where for every pair of nodes $v, u$ there is a path from $v$ to $u$ and vice versa. Finally, for a radius $r \ge 0$ we write $B^+(v, r) = \{x \in V : d_G(v, x) \le r\}$ and $B^-(v, r) = \{y \in V : d_G(y, v) \le r\}$.
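A small Python sketch computing the balls $B^+(v, r)$ and $B^-(v, r)$ directly from these definitions (assuming, as our own convention, that the graph is an adjacency dict {u: [(w, length), ...]} with positive lengths):

import heapq

def ball_out(adj, v, r):
    """B+(v, r) = { x : d_G(v, x) <= r }, via Dijkstra truncated at radius r."""
    dist = {v: 0}
    heap = [(0, v)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for w, length in adj.get(u, []):
            nd = d + length
            if nd <= r and nd < dist.get(w, float("inf")):
                dist[w] = nd
                heapq.heappush(heap, (nd, w))
    return set(dist)

def ball_in(adj, v, r):
    """B-(v, r) = { y : d_G(y, v) <= r }: simply run ball_out on the reversed graph."""
    rev = {}
    for u, nbrs in adj.items():
        for w, length in nbrs:
            rev.setdefault(w, []).append((u, length))
    return ball_out(rev, v, r)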

Polynomial Bounds

For graphs with edge lengths (or capacities), we assume that they are positive and that the maximum edge length is bounded by $\mathrm{poly}(n)$. This is only for the sake of simplicity in Section 4 (in the more general case where all edge lengths are bounded by some threshold $W$, some logarithmic factors in $n$ become $\log(nW)$ instead), and it is not necessary for our strongest LDD developed in the full paper version.

Expander Graphs

Let $G = (V, E, \ell, c)$ be a directed graph with positive edge capacities $c$ and unit edge lengths $\ell$. We define the volume $\mathrm{vol}(U)$ by

\[ \mathrm{vol}(U) = c(U, V) = \sum_{e \in E \cap (U \times V)} c(e), \]

and set $\mathrm{minvol}(U) = \min\{\mathrm{vol}(U), \mathrm{vol}(\bar U)\}$ where $\bar U = V \setminus U$. A node set $U$ naturally corresponds to a cut $(U, \bar U)$. The sparsity (or conductance) of $U$ is defined by

\[ \phi(U) = \frac{c(U, \bar U)}{\mathrm{minvol}(U)}. \]

In the special case that $U = \emptyset$ we set $\phi(U) = 1$, and in the special case that $U \neq \emptyset$ but $\mathrm{vol}(U) = 0$ we set $\phi(U) = 0$. We say that $U$ is $\phi$-sparse if $\phi(U) \le \phi$. We say that a directed graph is a $\phi$-expander if it does not contain a $\phi$-sparse cut $U \subseteq V$. We define the lopsided sparsity of $U$ as

\[ \psi(U) = \frac{c(U, \bar U)}{\mathrm{minvol}(U) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{minvol}(U)}}, \]

(with similar special cases), and we similarly say that $U$ is $\psi$-lopsided sparse if $\psi(U) \le \psi$. Finally, we call a graph a $\psi$-lopsided expander if it does not contain a $\psi$-lopsided sparse cut $U \subseteq V$.

4 Near-Optimal LDDs via Expander Decompositions

In this section, we show the existence of a near-optimal LDD, thereby proving our first main theorem:

Theorem 2 (restated). For every directed graph there exists a directed LDD with loss $O(\log n \log\log n)$.

We introduce some technical lemmas in order to build up the framework for the proof of Theorem 2, which can be found at the end of the section.

4.1 Reduction to Cost-Minimizers

Lemma 3 (restated). Let $G = (V, E)$ be a directed graph, and suppose that for all cost functions $c \colon E \to [|V|^{10}]$ there is a set of cut edges $S \subseteq E$ such that (i) for any two nodes $u, v \in V$ in the same strongly connected component of $G \setminus S$ we have $d_G(u,v) \le D$ and $d_G(v,u) \le D$, and (ii) $c(S) \le c(E) \cdot \frac{L}{D}$. Then there exists a directed LDD for $G$ with loss $O(L)$.

Proof.

Let $G = (V, E)$ denote the graph for which we are supposed to design the LDD. Let us also introduce edge costs that are initially defined as $c(e) = 1$ for all edges. We will now repeatedly call the cost-minimizer, obtain a set of cut edges $S \subseteq E$, and then update the edge costs by $c(e) \leftarrow 2 c(e)$ for all $e \in S$. We stop the process after $R = \log|E| \cdot D/L$ iterations, and let $\mathcal{S}$ denote the collection of all $R$ sets $S$ that we have obtained throughout. We claim that the uniform distribution on $\mathcal{S}$ is the desired LDD for $G$.

It is clear that for all sets $S \in \mathcal{S}$ the diameter condition is satisfied. We show that additionally for all edges $e \in E$ we have that

\[ \Pr(e \in S) \le 10 \cdot \frac{L}{D}, \]

for $S$ sampled uniformly from $\mathcal{S}$. Suppose otherwise; then in particular we have increased the cost of $e$ to at least

\[ c(e) \ge 2^{10 \cdot \frac{L}{D} \cdot R} = 2^{10 \log|E|} = |E|^{10}. \]

On the other hand, let $c'$ denote the adapted costs after running the process for one iteration. Then the total cost increase is

\[ \sum_{e \in E} c'(e) - c(e) = \sum_{e \in S} c'(e) - c(e) = \sum_{e \in S} c(e) = c(S) \le c(E) \cdot \frac{L}{D}. \]

That is, with every step of the process the total cost increases by a factor of at most $(1 + \frac{L}{D})$ and thus the total cost when the process stops is bounded by

\[ |E| \cdot \Bigl(1 + \frac{L}{D}\Bigr)^R \le |E| \cdot e^{\frac{L}{D} \cdot R} = |E| \cdot e^{\log|E|} \le |E|^3, \]

leading to a contradiction. The same argument shows that all costs are bounded by $|E|^3 \le |V|^6$ throughout.

Lemma 6 (restated). Let $\psi > 0$. For any directed graph $G = (V, E, c)$ there is an edge set $S \subseteq E$ of total capacity $c(S) \le c(E) \cdot \psi \cdot \log c(E)$ such that every strongly connected component in the remaining graph $G \setminus S$ is a $\psi$-lopsided expander.

Proof.

Consider the following algorithm: If there is no $\psi$-lopsided sparse cut then the graph is a $\psi$-lopsided expander by definition and we stop. Otherwise, there exists a $\psi$-lopsided sparse cut $(U, \bar U)$. We then distinguish two cases: If $\mathrm{vol}(U) \le \mathrm{vol}(\bar U)$ then we remove all edges from $U$ to $\bar U$, and otherwise we remove all edges from $\bar U$ to $U$ (in both cases placing these edges in $S$). Then we recursively continue on all strongly connected components in the remaining graph $G \setminus S$.

It is clear that all strongly connected components in the remaining graph $G \setminus S$ are $\psi$-lopsided expanders, but it remains to show that we cut edges with total capacity at most $c(E) \cdot \psi \cdot \log c(E)$. Imagine that initially we associate to each edge $e$ a potential of $c(e) \cdot \log c(E)$. The total initial potential is thus $\sum_e c(e) \cdot \log c(E) = c(E) \cdot \log c(E)$. Throughout the procedure we maintain the invariant that each edge holds a potential of at least $c(e) \cdot \log \mathrm{vol}(C)$, where $C$ is the strongly connected component containing edge $e$. Focus on any recursion step and its current strongly connected component $C$, and let $C = U \cup \bar U$ denote the current $\psi$-lopsided sparse cut. Assume first that $\mathrm{vol}(U) \le \mathrm{vol}(\bar U)$. Observe that an edge $e$ with tail in $U$ suddenly needs to hold a potential of only $c(e) \cdot \log \mathrm{vol}(U)$ instead of $c(e) \cdot \log \mathrm{vol}(C)$. Hence, the amount of freed potential in $U$ is at least

\[ \sum_{e \in E \cap (U \times V)} c(e) \cdot \bigl(\log \mathrm{vol}(C) - \log \mathrm{vol}(U)\bigr) = \sum_{e \in E \cap (U \times V)} c(e) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)} = \mathrm{vol}(U) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)}. \]

On the other hand, since (U,U¯) is a ψ-lopsided sparse cut we have that

\[ \psi \ge \psi(U) = \frac{c(U, \bar U)}{\mathrm{minvol}(U) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{minvol}(U)}} = \frac{c(U, \bar U)}{\mathrm{vol}(U) \cdot \log\frac{\mathrm{vol}(C)}{\mathrm{vol}(U)}}. \]

Putting these together, this means any cut edge $e$ from $U$ to $\bar U$ can get “paid” a potential of $c(e) \cdot \psi^{-1}$ while still maintaining the potential invariant. (Note that here we only exploit the potential freed by the smaller side of the cut $U$, and forget about the overshoot potential in the larger side $\bar U$.) A symmetric argument applies when $\mathrm{vol}(U) > \mathrm{vol}(\bar U)$.

All in all, we start with a total potential of $c(E) \cdot \log c(E)$ and pay for each cut edge $e \in S$ with a potential of at least $c(e) \cdot \psi^{-1}$. This implies that $c(E) \cdot \log c(E) \ge c(S) \cdot \psi^{-1}$ and the claim follows.

To prove that lopsided expanders have small diameter, we first establish the following technical lemma.

Lemma 10.

Let $G = (V, E, c)$ be a directed graph and let $\psi > 0$. For any node $v \in V$ there is some radius $R = O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V))$ such that one of the following two properties holds:

  • $\mathrm{vol}(B^+(v, R)) \ge \frac{1}{2} \mathrm{vol}(V)$, or

  • $\psi(B^+(v, r)) \le \psi$ for some $0 \le r \le R$.

Proof.

We write $\Delta_i = \lceil \frac{1}{i \psi} \rceil$ and define the radii $1 = r_{\log \mathrm{vol}(V)} \le \dots \le r_1$ by $r_i = r_{i+1} + \Delta_i$. We prove by induction that $\mathrm{vol}(B^+(v, r_i)) \ge 2^{-i} \mathrm{vol}(V)$, or alternatively that we find a sparse cut. This is clearly true in the base case for $i = \log \mathrm{vol}(V)$: Either $\mathrm{vol}(B^+(v, 1)) \ge 1 \ge 2^{-i} \mathrm{vol}(V)$, or $v$ is an isolated node and therefore $\psi(B^+(v, r)) = 0$.

For the inductive case, suppose for the sake of contradiction that $\mathrm{vol}(B^+(v, r_i)) < 2^{-i} \mathrm{vol}(V)$. By induction we know however that $\mathrm{vol}(B^+(v, r_{i+1})) \ge 2^{-(i+1)} \mathrm{vol}(V)$. It follows that there is some radius $r_{i+1} \le r < r_i = r_{i+1} + \Delta_i$ such that

\[ \frac{\mathrm{vol}(B^+(v, r+1))}{\mathrm{vol}(B^+(v, r))} \le 2^{1/\Delta_i} \le 1 + \frac{1}{\Delta_i}. \]

It follows that

\[ c\bigl(B^+(v, r), \overline{B^+(v, r)}\bigr) = \mathrm{vol}(B^+(v, r+1)) - \mathrm{vol}(B^+(v, r)) \le \frac{\mathrm{vol}(B^+(v, r))}{\Delta_i}. \]

Therefore the cut induced by $B^+(v, r)$ has lopsided sparsity

\begin{align*}
\psi(B^+(v, r)) &= \frac{c\bigl(B^+(v, r), \overline{B^+(v, r)}\bigr)}{\mathrm{minvol}(B^+(v, r)) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{minvol}(B^+(v, r))}} \\
&\le \frac{c\bigl(B^+(v, r), \overline{B^+(v, r)}\bigr)}{\mathrm{vol}(B^+(v, r)) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{vol}(B^+(v, r))}} \\
&\le \frac{1}{\Delta_i \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{vol}(B^+(v, r))}} \\
&\le \frac{1}{\Delta_i \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{vol}(B^+(v, r_i))}} \\
&\le \frac{1}{\Delta_i \cdot \log(2^i)} \le \psi.
\end{align*}

Here, in the second step we have used that $\mathrm{minvol}(B^+(v, r)) = \mathrm{vol}(B^+(v, r))$, as in the opposite case we have $\mathrm{vol}(B^+(v, r)) \ge \frac{1}{2} \mathrm{vol}(V)$ which also proves the claim. This finally leads to a contradiction, since we may assume that the graph does not contain $\psi$-lopsided sparse cuts of the form $B^+(v, r)$ (otherwise the second property holds and we are done).

In summary, the induction shows that $\mathrm{vol}(B^+(v, r_1)) \ge \frac{1}{2} \mathrm{vol}(V)$ and thus we may choose $R = r_1$. To prove that $R$ is as claimed, consider the following calculation:

\begin{align*}
R &= 2 + \sum_{i=1}^{\log \mathrm{vol}(V)} \Delta_i \\
  &= 2 + \sum_{i=1}^{\log \mathrm{vol}(V)} \Bigl\lceil \frac{1}{i \psi} \Bigr\rceil \\
  &\le 2 + \log \mathrm{vol}(V) + \sum_{i=1}^{\log \mathrm{vol}(V)} \frac{1}{i \psi} \\
  &\le O\bigl(\log \mathrm{vol}(V) + \psi^{-1} \log\log \mathrm{vol}(V)\bigr),
\end{align*}

using the well-known fact that the harmonic numbers are bounded by $\sum_{k=1}^{n} 1/k = O(\log n)$.
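The search underlying Lemma 10 can also be phrased as a simple (purely illustrative, quadratic-time) Python sketch that grows a ball radius by radius until it either covers half of the volume or witnesses a ψ-lopsided sparse cut. It reuses the vol / lopsided_sparsity helpers and the ball_out routine sketched earlier and assumes unit edge lengths; cap maps directed edges (u, v) to capacities.

def find_big_ball_or_sparse_cut(cap, nodes, v, psi):
    """Return ("big", r) once vol(B+(v, r)) >= vol(V)/2, or ("cut", r) once
    B+(v, r) is a psi-lopsided sparse cut; Lemma 10 guarantees that one of the
    two happens within radius O(1/psi * loglog vol(V) + log vol(V))."""
    adj = {}
    for (u, w), _ in cap.items():
        adj.setdefault(u, []).append((w, 1))       # unit lengths
    total = vol(cap, nodes)
    r = 0
    while True:
        ball = ball_out(adj, v, r)
        if vol(cap, ball) >= total / 2:
            return ("big", r)
        if lopsided_sparsity(cap, nodes, ball) <= psi:
            return ("cut", r)
        r += 1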

One can easily strengthen the lemma as follows. This insight will play a role in the full paper version where we construct the deterministic algorithm.

Lemma 11.

Let $G = (V, E, c)$ be a directed graph and let $\psi > 0$ and $0 < \alpha < 1$. For any node $v \in V$ there is some radius $R = O(\psi^{-1} \log\log \mathrm{vol}(V) + \psi^{-1} \alpha^{-1} + \log \mathrm{vol}(V))$ such that one of the following two properties holds:

  • $\mathrm{vol}(B^+(v, R)) \ge (1 - \alpha) \mathrm{vol}(V)$, or

  • $\psi(B^+(v, r)) \le \psi$ for some $0 \le r \le R$.

Proof.

Applying the previous lemma with parameter $\psi$ yields $R' = O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V))$ such that either $\mathrm{vol}(B^+(v, R')) \ge \frac{1}{2} \mathrm{vol}(V)$, or $\psi(B^+(v, r)) \le \psi$ for some $0 \le r \le R'$. In the latter case we are immediately done, so suppose that we are in the former case.

Let $\Delta = 2 \alpha^{-1} \psi^{-1}$ and let $R = R' + \Delta$. If $\mathrm{vol}(B^+(v, R)) \ge (1 - \alpha) \mathrm{vol}(V)$ then we have shown the first case and are done. So suppose that otherwise $\mathrm{vol}(B^+(v, R)) \le (1 - \alpha) \mathrm{vol}(V)$. Then, due to the trivial bound $\mathrm{vol}(B^+(v, R)) \le \mathrm{vol}(V)$, there is some radius $R' \le r < R = R' + \Delta$ with

\[ \frac{\mathrm{vol}(B^+(v, r+1))}{\mathrm{vol}(B^+(v, r))} \le 2^{1/\Delta} \le 1 + \frac{1}{\Delta}, \]

and hence,

\[ c\bigl(B^+(v, r), \overline{B^+(v, r)}\bigr) = \mathrm{vol}(B^+(v, r+1)) - \mathrm{vol}(B^+(v, r)) \le \frac{\mathrm{vol}(B^+(v, r))}{\Delta}. \]

For this radius $r$ it further holds that $\mathrm{vol}(B^+(v, r)) \le (1 - \alpha) \mathrm{vol}(V)$ and thus $\mathrm{vol}\bigl(\overline{B^+(v, r)}\bigr) \ge \alpha \cdot \mathrm{vol}(V)$. In particular, we have that $\mathrm{minvol}(B^+(v, r)) \ge \frac{\alpha}{2} \mathrm{vol}(V)$. Putting these statements together, we have that

\begin{align*}
\psi(B^+(v, r)) &= \frac{c\bigl(B^+(v, r), \overline{B^+(v, r)}\bigr)}{\mathrm{minvol}(B^+(v, r)) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{minvol}(B^+(v, r))}} \\
&\le \frac{\mathrm{vol}(B^+(v, r)) / \Delta}{\frac{\alpha}{2} \mathrm{vol}(B^+(v, r)) \cdot \log\frac{\mathrm{vol}(V)}{\mathrm{minvol}(B^+(v, r))}} \\
&\le \frac{1}{\Delta \cdot \frac{\alpha}{2}} \\
&\le \psi,
\end{align*}

witnessing indeed the desired lopsided sparse cut.

Lemma 7 (restated). Any $\psi$-lopsided expander has diameter $O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V))$.

Proof.

Take an arbitrary pair of nodes $v, u$. Applying Lemma 11 with parameters $\psi$ and $\alpha = \frac{1}{4}$, say, yields a radius $R = O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V))$ such that

\[ \mathrm{vol}(B^+(v, R)) \ge \frac{3}{4} \mathrm{vol}(V), \]

and symmetrically,

\[ \mathrm{vol}(B^-(u, R)) \ge \frac{3}{4} \mathrm{vol}(V). \]

Therefore, there is some edge $e = (x, y)$ contributing to both of these volumes. Thus $x \in B^+(v, R)$ and $y \in B^-(u, R)$. It follows that

\[ d_G(v, u) \le d_G(v, x) + d_G(x, y) + d_G(y, u) \le R + 1 + R = O(\psi^{-1} \log\log \mathrm{vol}(V) + \log \mathrm{vol}(V)). \]

Since the nodes u,v were chosen arbitrarily this establishes the claimed diameter bound.

Proof of Theorem 2.

Let $G = (V, E, \ell)$ be a directed graph with positive edge lengths. We show that there is an LDD with loss $O(\log n \log\log n)$ for $G$. We first deal with two trivial cases: First, if $D \le \log n / \gamma$ (for some constant $\gamma > 0$ to be determined later) then we simply remove all edges and stop. Second, we remove all edges with length more than $D$ from the graph. In both cases edges can be deleted with probability 1 without harm.

Next, we transform the graph into $G'$ by replacing each edge $e$ by a path of $\ell(e)$ unit-length edges. In the following it suffices to design an LDD for the augmented graph; if that LDD cuts any of the edges along the path corresponding to an original edge $e$ we will cut $e$ entirely. An LDD with loss $L$ in the augmented graph will thus delete an original edge with probability at most $\frac{\ell(e) \cdot L}{D}$ by a union bound. All in all, this transformation blows up the number of nodes and edges in the graph by a factor of at most $D$ (since we removed edges with larger length). Recall that we throughout assume that $D \le n^c$, for some constant $c$, and thus $|V'| \le n^{O(1)}$.

By Lemma 3 we further reduce the existence of an LDD of $G'$ to the following cost-minimizer task: View $G'$ as an edge-capacitated graph $G' = (V', E', c)$ for some capacities $c \colon E' \to [|V'|^{10}]$. In particular, under this capacity function $G'$ has volume $\mathrm{vol}(V') \le |V'|^2 \cdot |V'|^{10} = |V'|^{12} = n^{O(1)}$. The goal is to delete edges $S \subseteq E'$ in $G'$ so that all remaining strongly connected components have (weak) diameter at most $D$, and the total cost of all deleted edges is only $c(S) \le c(E') \cdot \frac{L}{D}$.

Finally, we apply the Lopsided Expander Decomposition from Lemma 6 on $G'$. Specifically, we define

\[ \psi = \frac{\log\log \mathrm{vol}(V')}{\epsilon \cdot D} \]

for some constant $\epsilon > 0$ to be determined later. The Expander Decomposition then cuts edges $S \subseteq E'$ so that each remaining strongly connected component is a $\psi$-lopsided expander. Thus, by Lemma 7 each strongly connected component has diameter

\[ O(\psi^{-1} \log\log \mathrm{vol}(V') + \log \mathrm{vol}(V')) = O(\epsilon D + \gamma D). \]

By choosing the constants $\epsilon$ and $\gamma$ to be sufficiently small, the diameter bound becomes $D$ as desired. Moreover, Lemma 6 guarantees that we cut edges of total capacity

\[ c(S) \le c(E') \cdot \psi \cdot \log \mathrm{vol}(V') \le c(E') \cdot \frac{\log \mathrm{vol}(V') \cdot \log\log \mathrm{vol}(V')}{\epsilon \cdot D}, \]

which becomes at most $c(E') \cdot \frac{L}{D}$ by choosing

\[ L = \frac{\log \mathrm{vol}(V') \cdot \log\log \mathrm{vol}(V')}{\epsilon} \le \frac{\log |V'|^{12} \cdot \log\log |V'|^{12}}{\epsilon} = O(\log n \log\log n) \]

as planned.

References

  • [1] Ittai Abraham, Yair Bartal, and Ofer Neiman. Nearly tight low stretch spanning trees. In 49th Annual Symposium on Foundations of Computer Science (FOCS 2008), pages 781–790. IEEE Computer Society, 2008. doi:10.1109/FOCS.2008.62.
  • [2] Ittai Abraham, Shiri Chechik, Michael Elkin, Arnold Filtser, and Ofer Neiman. Ramsey spanning trees and their applications. ACM Trans. Algorithms, 16(2):19:1–19:21, 2020. doi:10.1145/3371039.
  • [3] Ittai Abraham and Ofer Neiman. Using petal-decompositions to build a low stretch spanning tree. SIAM J. Comput., 48(2):227–248, 2019. doi:10.1137/17M1115575.
  • [4] Noga Alon, Richard M. Karp, David Peleg, and Douglas B. West. A graph-theoretic game and its application to the k-server problem. SIAM J. Comput., 24(1):78–100, 1995. doi:10.1137/S0097539792224474.
  • [5] Sanjeev Arora, Elad Hazan, and Satyen Kale. The multiplicative weights update method: a meta-algorithm and applications. Theory of Computing, 8(6):121–164, 2012. doi:10.4086/toc.2012.v008a006.
  • [6] Vikrant Ashvinkumar, Aaron Bernstein, and Adam Karczmarz. Faster approximation algorithms for restricted shortest paths in directed graphs. In Yossi Azar and Debmalya Panigrahi, editors, 36th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2025), pages 5263–5277. SIAM, 2025. doi:10.1137/1.9781611978322.180.
  • [7] Baruch Awerbuch. Complexity of network synchronization. J. ACM, 32(4):804–823, 1985. doi:10.1145/4221.4227.
  • [8] Baruch Awerbuch, Bonnie Berger, Lenore Cowen, and David Peleg. Fast network decomposition (extended abstract). In Norman C. Hutchinson, editor, 11th Annual ACM Symposium on Principles of Distributed Computing (PODC 1992), pages 169–177. ACM, 1992. doi:10.1145/135419.135456.
  • [9] Baruch Awerbuch, Andrew V. Goldberg, Michael Luby, and Serge A. Plotkin. Network decomposition and locality in distributed computation. In 30th Annual Symposium on Foundations of Computer Science (FOCS 1989), pages 364–369. IEEE Computer Society, 1989. doi:10.1109/SFCS.1989.63504.
  • [10] Baruch Awerbuch and David Peleg. Routing with polynomial communication-space trade-off. SIAM J. Discret. Math., 5(2):151–162, 1992. doi:10.1137/0405013.
  • [11] Yair Bartal. Probabilistic approximations of metric spaces and its algorithmic applications. In 37th Annual Symposium on Foundations of Computer Science (FOCS 1996), pages 184–193. IEEE Computer Society, 1996. doi:10.1109/SFCS.1996.548477.
  • [12] Yair Bartal. On approximating arbitrary metrices by tree metrics. In Jeffrey Scott Vitter, editor, 30th Annual ACM Symposium on Theory of Computing (STOC 1998), pages 161–168. ACM, 1998. doi:10.1145/276698.276725.
  • [13] Aaron Bernstein, Joakim Blikstad, Thatchaphol Saranurak, and Ta-Wei Tu. Maximum flow by augmenting paths in $n^{2+o(1)}$ time. In 65th Annual Symposium on Foundations of Computer Science (FOCS 2024). IEEE, 2024. To appear, see https://arxiv.org/abs/2406.03648. doi:10.48550/arXiv.2406.03648.
  • [14] Aaron Bernstein, Maximilian Probst Gutenberg, and Thatchaphol Saranurak. Deterministic decremental reachability, scc, and shortest paths via directed expanders and congestion balancing. In Sandy Irani, editor, 61st Annual Symposium on Foundations of Computer Science (FOCS 2020), pages 1123–1134. IEEE, 2020. doi:10.1109/FOCS46700.2020.00108.
  • [15] Aaron Bernstein, Maximilian Probst Gutenberg, and Christian Wulff-Nilsen. Near-optimal decremental SSSP in dense weighted digraphs. In Sandy Irani, editor, 61st Annual Symposium on Foundations of Computer Science (FOCS 2020), pages 1112–1122. IEEE, 2020. doi:10.1109/FOCS46700.2020.00107.
  • [16] Aaron Bernstein, Danupon Nanongkai, and Christian Wulff-Nilsen. Negative-weight single-source shortest paths in near-linear time. In 63rd Annual Symposium on Foundations of Computer Science (FOCS 2022), pages 600–611. IEEE, 2022. doi:10.1109/FOCS54457.2022.00063.
  • [17] Guy E. Blelloch, Anupam Gupta, Ioannis Koutis, Gary L. Miller, Richard Peng, and Kanat Tangwongsan. Nearly-linear work parallel SDD solvers, low-diameter decomposition, and low-stretch subgraphs. Theory Comput. Syst., 55(3):521–554, 2014. doi:10.1007/s00224-013-9444-5.
  • [18] Allan Borodin and Ran El-Yaniv. Online computation and competitive analysis. Cambridge University Press, 1998.
  • [19] Karl Bringmann, Alejandro Cassis, and Nick Fischer. Negative-weight single-source shortest paths in near-linear time: Now faster! In 64th Annual Symposium on Foundations of Computer Science (FOCS 2023), pages 515–538. IEEE, 2023. doi:10.1109/FOCS57990.2023.00038.
  • [20] Shiri Chechik, Yang P. Liu, Omer Rotem, and Aaron Sidford. Constant girth approximation for directed graphs in subquadratic time. In Konstantin Makarychev, Yury Makarychev, Madhur Tulsiani, Gautam Kamath, and Julia Chuzhoy, editors, 52nd Annual ACM Symposium on Theory of Computing (STOC 2020), pages 1010–1023. ACM, 2020. doi:10.1145/3357713.3384330.
  • [21] Shiri Chechik and Tianyi Zhang. Dynamic low-stretch spanning trees in subpolynomial time. In Shuchi Chawla, editor, 31st Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2020), pages 463–475. SIAM, 2020. doi:10.1137/1.9781611975994.28.
  • [22] Julia Chuzhoy and Sanjeev Khanna. Maximum bipartite matching in $n^{2+o(1)}$ time via a combinatorial algorithm. In Bojan Mohar, Igor Shinkar, and Ryan O’Donnell, editors, 56th Annual ACM Symposium on Theory of Computing (STOC 2024), pages 83–94. ACM, 2024. doi:10.1145/3618260.3649725.
  • [23] Edith Cohen. Size-estimation framework with applications to transitive closure and reachability. J. Comput. Syst. Sci., 55(3):441–453, 1997. doi:10.1006/JCSS.1997.1534.
  • [24] Michael Elkin, Yuval Emek, Daniel A. Spielman, and Shang-Hua Teng. Lower-stretch spanning trees. SIAM J. Comput., 38(2):608–628, 2008. doi:10.1137/050641661.
  • [25] Jittat Fakcharoenphol, Satish Rao, and Kunal Talwar. A tight bound on approximating arbitrary metrics by tree metrics. J. Comput. Syst. Sci., 69(3):485–497, 2004. doi:10.1016/J.JCSS.2004.04.011.
  • [26] Sebastian Forster and Gramoz Goranci. Dynamic low-stretch trees via dynamic low-diameter decompositions. In Moses Charikar and Edith Cohen, editors, 51st Annual ACM Symposium on Theory of Computing (STOC 2019), pages 377–388. ACM, 2019. doi:10.1145/3313276.3316381.
  • [27] Sebastian Forster, Martin Grösbacher, and Tijn de Vos. An improved random shift algorithm for spanners and low diameter decompositions. In Quentin Bramas, Vincent Gramoli, and Alessia Milani, editors, 25th International Conference on Principles of Distributed Systems (OPODIS 2021), volume 217 of LIPIcs, pages 16:1–16:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2021. doi:10.4230/LIPIcs.OPODIS.2021.16.
  • [28] Bernhard Haeupler, D. Ellis Hershkowitz, and Goran Zuzic. Tree embeddings for hop-constrained network design. In Samir Khuller and Virginia Vassilevska Williams, editors, 53rd Annual ACM Symposium on Theory of Computing (STOC 2021), pages 356–369. ACM, 2021. doi:10.1145/3406325.3451053.
  • [29] Yiding Hua, Rasmus Kyng, Maximilian Probst Gutenberg, and Zihang Wu. Maintaining expander decompositions via sparse cuts. In Nikhil Bansal and Viswanath Nagarajan, editors, 34th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2023), pages 48–69. SIAM, 2023. doi:10.1137/1.9781611977554.CH2.
  • [30] Rohit Khandekar, Satish Rao, and Umesh Vazirani. Graph partitioning using single commodity flows. J. ACM, 56(4), July 2009. doi:10.1145/1538902.1538903.
  • [31] Ioannis Koutis, Gary L. Miller, and Richard Peng. A nearly-m log n time solver for SDD linear systems. In Rafail Ostrovsky, editor, 52th Annual Symposium on Foundations of Computer Science (FOCS 2011), pages 590–598. IEEE Computer Society, 2011. doi:10.1109/FOCS.2011.85.
  • [32] Nathan Linial and Michael E. Saks. Low diameter graph decompositions. Comb., 13(4):441–454, 1993. doi:10.1007/BF01303516.
  • [33] Gary L. Miller, Richard Peng, and Shen Chen Xu. Parallel graph decompositions using random shifts. In Guy E. Blelloch and Berthold Vöcking, editors, 25th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2013), pages 196–203. ACM, 2013. doi:10.1145/2486159.2486180.
  • [34] Jakub Pachocki, Liam Roditty, Aaron Sidford, Roei Tov, and Virginia Vassilevska Williams. Approximating cycles in directed graphs: Fast algorithms for girth and roundtrip spanners. In Artur Czumaj, editor, 29th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2018), pages 1374–1392. SIAM, 2018. doi:10.1137/1.9781611975031.91.
  • [35] Harald Räcke, Chintan Shah, and Hanjo Täubig. Computing cut-based hierarchical decompositions in almost linear time. In Chandra Chekuri, editor, 25th Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2014), pages 227–238. SIAM, 2014. doi:10.1137/1.9781611973402.17.
  • [36] Paul D. Seymour. Packing directed circuits fractionally. Comb., 15(2):281–288, 1995. doi:10.1007/BF01200760.
  • [37] Goran Zuzic, Gramoz Goranci, Mingquan Ye, Bernhard Haeupler, and Xiaorui Sun. Universally-optimal distributed shortest paths and transshipment via graph-based $\ell_1$-oblivious routing. In Joseph (Seffi) Naor and Niv Buchbinder, editors, 33rd Annual ACM-SIAM Symposium on Discrete Algorithms (SODA 2022), pages 2549–2579. SIAM, 2022. doi:10.1137/1.9781611977073.100.