
Simpler Universally Optimal Dijkstra

Ivor van der Hoog, IT University of Copenhagen, Denmark; Eva Rotenberg, IT University of Copenhagen, Denmark; Daniel Rutschmann, IT University of Copenhagen, Denmark
Abstract

Let G be a weighted (directed) graph with n vertices and m edges. Given a source vertex s, Dijkstra’s algorithm computes the shortest path lengths from s to all other vertices in $O(m+n\log n)$ time. This bound is known to be worst-case optimal via a reduction to sorting. Theoretical computer science has developed numerous fine-grained frameworks for analyzing algorithmic performance beyond standard worst-case analysis, such as instance optimality and output sensitivity. Haeupler, Hladík, Rozhoň, Tarjan, and Tětek [FOCS ’24] consider the notion of universal optimality, a refined complexity measure that accounts for both the graph topology and the edge weights. For a fixed graph topology, the universal running time of a weighted graph algorithm is defined as its worst-case running time over all possible edge weightings of G. An algorithm is universally optimal if no other algorithm achieves a better asymptotic universal running time on any particular graph topology.

Haeupler, Hladík, Rozhoň, Tarjan, and Tětek show that Dijkstra’s algorithm can be made universally optimal by replacing the heap with a custom data structure. Their approach builds on Iacono’s [SWAT ’00] working-set bound $\phi(x)$. This is a technical definition that, intuitively, for a heap element x, counts the maximum number of simultaneously-present elements y that were pushed onto the heap whilst x was in the heap. They design a new heap data structure that can pop an element x in $O(1+\log\phi(x))$ time. They show that Dijkstra’s algorithm with their heap data structure is universally optimal.

In this work, we revisit their result. We use a simpler heap property that we will call timestamp optimality, where the cost of popping an element x is logarithmic in the number of elements inserted between pushing and popping x. We show that timestamp optimal heaps are not only easier to define but also easier to implement. Using these timestamps, we provide a significantly simpler proof that Dijkstra’s algorithm, with the right kind of heap, is universally optimal.

Keywords and phrases:
Graph algorithms, instance optimality, Fibonacci heaps, simplification
Copyright and License:
© Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Design and analysis of algorithms; Theory of computation → Shortest paths
Related Version:
Full Version: https://arxiv.org/abs/2504.17327
Funding:
Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann are grateful to the Carlsberg Foundation for supporting this research via Eva Rotenberg’s Young Researcher Fellowship CF21-0302 “Graph Algorithms with Geometric Applications”. This work was supported by the VILLUM Foundation grant (VIL37507) “Efficient Recomputations for Changeful Problems” and the European Union’s Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant agreement No 899987.
Editors:
Anne Benoit, Haim Kaplan, Sebastian Wild, and Grzegorz Herman

1 Introduction

Let G=(V,E) be a (di)graph with edge weights w, n vertices, and m edges. Let s be a designated source vertex. In the single-source shortest path problem, the goal is to sort all vertices $v \in V$ by the length of the shortest path from s to v. Dijkstra’s algorithm with a Fibonacci heap solves this problem in $O(m+n\log n)$ time, which is worst-case optimal.

However, worst-case optimality can be overly coarse as a performance measure. A stronger notion, called universal optimality, requires an algorithm to run as fast as possible on every graph topology. Recently, Haeupler, Hladík, Rozhoň, Tarjan, and Tětek [6] proved that Dijkstra’s algorithm, when paired with a suitably designed heap, achieves universal optimality. The paper was well received, winning the best paper award at FOCS ’24. Their contribution is threefold. First, they observe that Dijkstra’s efficiency relies on a heap supporting Push and DecreaseKey in amortized constant time, and Pop in $O(\log n)$ time. They refine this analysis using the working-set bound by Iacono [9] (see Definition 1 below). Intuitively, for a heap element x, the working-set size $\phi(x)$ is the maximum number of elements y that are simultaneously present in the heap while x resides in it (excluding all y that were already in the heap when x was pushed). Haeupler et al. [6] design a new heap data structure, in the word-RAM model, in which the Pop operation for an element x takes $O(1+\log\phi(x))$ time.

Second, they show, for any fixed graph topology G=(V,E) and designated source s, a universal lower bound of $\Omega(m+n+\sum_{x_i\in V\setminus\{s\}}\log\phi(x_i))$. This lower bound matches the running time of Dijkstra’s algorithm when it is equipped with their heap. They thereby prove that Dijkstra’s algorithm is universally optimal.

Third, they refine their analysis by distinguishing between edge weight comparisons and all other operations. This argument is technical, and we refer to it only at a high level; for completeness, we explain this result in great detail in the full version. Imagine $m_G \le m$ to be some technical parameter that only depends on G. Note that, for any fixed weighted graph, Dijkstra’s algorithm always performs the same sequence of push and pop operations on the heap. Therefore, the weighted graph uniquely determines the working-set size of each element. For a fixed graph topology G, for any weighting w of G with corresponding working-set sizes $\phi(x_i)$, they prove a universal lower bound of $\Omega(m_G+\sum_{x_i\in V\setminus\{s\}}(1+\log\phi(x_i)))$ on the number of weight comparisons needed to solve the single-source shortest path problem. They adapt Dijkstra’s algorithm so that the number of edge weight comparisons performed is also asymptotically universally optimal. A recent paper by Haeupler et al. [7] shows a simple instance-optimality result for the bidirectional problem: compute, given G and two vertices (s,t), the shortest $st$-path. Although the problems are related, the techniques and bounds are different, and thus we do not consider the $st$-path problem in the paper at hand.

Contribution.

We offer a significantly more concise proof of Dijkstra’s universal optimality. Our main insight is to replace the working-set bound with a conceptually simpler measure called timestamps. (We note that the original work by Iacono [10] included several notions of working-set bounds. One of these notions, not present in [6, 9] but used in [5], matches our notion of timestamp optimality. By working set, we explicitly refer to the definition used in [6, 9]. We use the more distinct term timestamps to properly distinguish between the concepts in this work and in [6]. This also highlights that our notion yields a family of intervals $[a_i,b_i]$, which is more convenient than having just the working-set sizes $\phi(x_i)$.) We maintain a heap H and a global time counter that increments after each H.Push. For each heap element $x_i$, let $a_i$ and $b_i$ denote the times at which it was pushed and popped, respectively. We define H to be timestamp optimal if popping $x_i$ takes $O(1+\log(b_i-a_i))$ time. Timestamps are not only conceptually simpler than working-set sizes; timestamp optimality is also considerably simpler to implement.

We design a heap data structure that supports Push and DecreaseKey in amortized constant time, and Pop in amortized $O(1+\log(b_i-a_i))$ time. Using timestamps, we show a universal lower bound of $\Omega(m+\sum_{x_i\in V\setminus\{s\}}(1+\log(b_i-a_i)))$ for any fixed graph topology G=(V,E) and source s. We consider this to be our main contribution, as it offers a significantly simpler proof that Dijkstra’s algorithm can be made universally optimal. Finally, our formulation of timestamps allows us to apply recent techniques for universally optimal sorting [5, 11] to show that Dijkstra’s algorithm, with our timestamp optimal heap, matches our universal lower bound. Additionally, we show in the full version that our algorithm is equally general as that of [6], as it can be made universally optimal with respect to edge weight comparisons. While this proof is indeed new, different, and perhaps simpler, it is not self-contained; therefore, we relegate it to the full version for the sake of completeness.

2 Preliminaries

Let G=(V,E) be a (directed) graph and let w assign to each edge a positive weight. For ease of notation, we say that G has n+1 vertices and m edges. In the single-source shortest path problem (SSSP), the input consists of G, w, and a designated source vertex s. The goal is to order all n vertices $v \in V\setminus\{s\}$ by the length of the shortest path from s to v. We follow [6] and assume that the designated source s can reach every vertex in $V\setminus\{s\}$ and that no two vertices are equidistant from s.

For a weighted graph algorithm A that solves SSSP, the running time $\mathrm{Runtime}(A,G,s,w)$ depends on the graph topology G=(V,E), the designated source vertex s, and the edge weights w. Let $\mathbb{G}_n$ denote the set of all (directed) graph topologies G=(V,E) with $|V|=n+1$. For a fixed graph topology G, let $\mathbb{W}(G)$ denote the (infinite) set of all possible edge weightings of G. Then, for a fixed input size n, the worst-case running time of an algorithm A that solves the single-source shortest path problem is a triple maximum:

$$\mathrm{Worst\text{-}case}(A,n) := \max_{G\in\mathbb{G}_n}\,\max_{s\in V(G)}\,\max_{w\in\mathbb{W}(G)} \mathrm{Runtime}(A,G,s,w).$$

An algorithm A is worst-case optimal if there exists a constant c such that, for sufficiently large n, there exists no algorithm $A'$ with $\mathrm{Worst\text{-}case}(A,n) > c\cdot\mathrm{Worst\text{-}case}(A',n)$.

Haeupler, Wajc, and Zuzic [8] introduce a more fine-grained notion of algorithmic optimality for weighted graphs called universal optimality. Universal optimality requires an algorithm to perform optimally for every fixed graph topology. Formally, in the single-source shortest path problem, it removes the outer two maximizations: for a fixed algorithm A, graph topology G=(V,E), and source s, the universal running time is

$$\mathrm{Universal}(A,G,s) := \max_{w\in\mathbb{W}(G)} \mathrm{Runtime}(A,G,s,w).$$

An algorithm A is universally optimal if there exists a constant c such that, for every fixed topology G and every $s\in V$, no algorithm $A'$ satisfies $\mathrm{Universal}(A,G,s) > c\cdot\mathrm{Universal}(A',G,s)$.

Algorithm 1 Dijkstra’s algorithm with a black-box heap H.

2.1 Dijkstra’s algorithm

Dijkstra [2] presented an algorithm for the single-source shortest path problem in 1959. Since the invention of Fibonacci heaps [3], Dijkstra’s algorithm can be implemented in a worst-case optimal manner by utilising these heaps. A heap H maintains a set of elements where each $x\in H$ is associated with a unique key $\gamma(x)$. A heap H typically supports the following five operations:

  • Push(x, y): Insert x into H where the key $\gamma(x)$ is y.

  • DecreaseKey(x, y): For $x\in H$ and $y\le\gamma(x)$, update $\gamma(x)$ to y.

  • Pop(): Remove and return the element $x\in H$ with the smallest key.

  • Peek(): Return an element $x\in H$ with the smallest key.

  • Meld($H'$): Return a heap $H''$ on the elements in $H\cup H'$.

Fibonacci heaps support Push and DecreaseKey in O(1) amortized time, Pop in amortized $O(\log n)$ time, and Peek and Meld in worst-case O(1) time [3]. Dijkstra’s algorithm only uses the first three heap operations. Using Fibonacci heaps, it runs in $O(m+n\log n)$ time, which is worst-case optimal. Dijkstra’s algorithm (Algorithm 1) is conceptually simple: it initializes the heap with all vertices v that s can directly reach, where the key $\gamma(v)$ is the edge weight of (s,v). Then, it repeatedly pops the vertex u with the smallest key and appends u to the output. Each time a vertex u is popped, its neighbors v are examined. For each neighbor v, the tentative distance $d_v$ is computed as the distance from s to u plus the edge weight w(u,v). If v is not in the heap, it is pushed with key $\gamma(v)=d_v$. If v is already in the heap and $\gamma(v)>d_v$, then the key $\gamma(v)$ is decreased to $d_v$.
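
To make this description concrete, here is a minimal Python sketch of Dijkstra’s algorithm against a black-box heap, in the spirit of Algorithm 1. The adjacency-list format, the function names, and the heap interface (push/decrease_key/pop) are our own assumptions; any heap supporting these three operations can be plugged in.

```python
# Dijkstra's algorithm with a black-box heap (sketch of Algorithm 1).
# adj[u] is a list of (v, w_uv) pairs; make_heap() returns a fresh heap.
def dijkstra(adj, s, make_heap):
    heap = make_heap()
    dist, done, in_heap = {s: 0}, {s}, set()
    order = []                          # vertices sorted by distance from s
    for v, w_sv in adj[s]:              # initialize with s's out-neighbors
        dist[v] = w_sv
        heap.push(v, w_sv)
        in_heap.add(v)
    while in_heap:
        u = heap.pop()                  # vertex with the smallest key
        in_heap.discard(u)
        done.add(u)
        order.append(u)
        for v, w_uv in adj[u]:
            if v in done:
                continue
            d_v = dist[u] + w_uv        # tentative distance via u
            if v not in in_heap:
                dist[v] = d_v
                heap.push(v, d_v)
                in_heap.add(v)
            elif d_v < dist[v]:
                dist[v] = d_v
                heap.decrease_key(v, d_v)
    return order, dist
```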

2.2 Universally optimal Dijkstra

Iacono [9] introduced a refined measure for the running time of heap operations via the concept of working-set size:

Definition 1 ([9]).

Consider a heap supporting Push and Pop and let $x_i$ be an element in the heap. For each time step t between pushing $x_i$ and popping $x_i$, we define the working set of $x_i$ at time t as the set $W(x_i,t)$ of elements inserted after $x_i$ and still present at time t. We include $x_i$ itself in $W(x_i,t)$. Fix any time $t_0$ that maximizes the value of $|W(x_i,t)|$; we call the set $W(x_i,t_0)$ the working set of $x_i$ and $\phi(x_i):=|W(x_i,t_0)|$ the working-set size.
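
As a concrete illustration of Definition 1 (not part of the original), the following quadratic-time snippet computes working-set sizes from a trace of push/pop events; the trace format is ours.

```python
# Working-set sizes phi(x) (Definition 1) from a trace of ("push", x) and
# ("pop", x) events. |W(x, t)| only grows when something is pushed, so it
# suffices to re-measure all working sets after each push. O(n^2) time.
def working_set_sizes(trace):
    order, present, phi = {}, set(), {}
    for i, (event, x) in enumerate(trace):
        if event == "push":
            order[x] = i
            present.add(x)
            for y in present:   # |W(y, t)| counts y and later-pushed elements
                size = sum(1 for z in present if order[z] >= order[y])
                phi[y] = max(phi.get(y, 1), size)
        else:
            present.remove(x)
    return phi

# Example: prints {"x": 2, "y": 1, "z": 1} -- y and z never share the heap
# with anything pushed after them, while x coexists with y (and later z).
print(working_set_sizes([("push", "x"), ("push", "y"), ("pop", "y"),
                         ("push", "z"), ("pop", "z"), ("pop", "x")]))
```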

Haeupler, Hladík, Rozhoň, Tarjan, and Tětek [6] design a heap H that supports Push and DecreaseKey in amortized O(1) time, and supports Pop for an element $x\in H$ in $O(1+\log\phi(x))$ time. The basis of their data structure is the Fibonacci heap, which supports Push and DecreaseKey in amortized constant time and Pop in amortized $O(\log N)$ time, where N is the number of elements in the heap.

The core idea of their data structure is to partition elements into buckets $B_j$ of doubly exponentially increasing size. Each bucket $B_j$ contains a Fibonacci heap $F_j$. Denote by $m_j$ the minimum key of $F_j$. The set of $O(\log\log n)$ values $m_j$ is stored in a fusion tree [4]. Fusion trees of size at most $O(\log\log n)$ have constant update time in the word-RAM model. By carefully merging and managing buckets, they ensure that each element $x_i$ is stored in a Fibonacci heap $F_j$ whose size is proportional to $\phi(x_i)$. We refer to [6] for full details; we only note here that ensuring that Push and DecreaseKey take amortized constant time whilst Pop takes $O(1+\log\phi(x_i))$ time requires non-trivial maintenance and analysis.

Using this heap, Dijkstra’s algorithm takes $O(m+\sum_{x_i\in V\setminus\{s\}}(1+\log\phi(x_i)))$ time. They [6] also prove a universal lower bound: for any fixed (directed) graph topology G=(V,E) and source $s\in V$, no algorithm A achieves runtime $o(m+n+\sum_{x_i\in V\setminus\{s\}}\log\phi(x_i))$ for all edge weightings $w\in\mathbb{W}(G)$. Dijkstra’s algorithm with this heap is therefore universally optimal.

3 Timestamp optimal heaps

Instead of designing heaps with a running time proportional to the working-set size of a heap element, we use a simple timestamp approach:

Definition 2.

For any heap H, we define the time t as a counter that increments after each H.Push(x, y). For any element $x_i\in H$, we denote by $a_i$ the time when $x_i$ was pushed onto the heap, and by $b_i$ the time when $x_i$ was popped from the heap. Observe that $b_i > a_i$.

Definition 3.

A heap H is timestamp optimal if it supports Push and DecreaseKey in amortized constant time, and Pop with output $x_i$ in amortized $O(1+\log(b_i-a_i))$ time.
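
For contrast with the working-set computation above, the timestamp bookkeeping of Definitions 2 and 3 is a single pass over the trace; this sketch and its trace format are again our own.

```python
# Timestamps (Definition 2): the counter t increments after each push;
# a[x] and b[x] are the push and pop times of x, so b[x] > a[x] always.
def timestamps(trace):
    t, a, b = 0, {}, {}
    for event, x in trace:
        if event == "push":
            a[x] = t
            t += 1
        else:
            b[x] = t
    return a, b   # a timestamp optimal heap pops x in O(1 + log(b[x] - a[x]))

# On the trace from the previous example this yields
# a = {"x": 0, "y": 1, "z": 2} and b = {"y": 2, "z": 3, "x": 3}.
```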

Our Data Structure.

Let H denote our abstract heap data structure, supporting operations H.Push(x,y), H.DecreaseKey(x,y), and H.Pop(). We implement H as follows:

Invariant 1 (Figure 1).

We partition the elements of H into an array of buckets B. Each bucket B[j] contains one or two Fibonacci heaps. Every element of H lies in exactly one Fibonacci heap. Along with each Fibonacci heap F we store a half-open interval $I_F$. We maintain the following invariants for all times t:

  (a) An element $x_i\in H$ is in F if and only if $a_i\in I_F$;

  (b) the interval $I_F$ for $F\in B[j]$ has size $2^j$;

  (c) all intervals of $B[j-1]$ lie to the right of the intervals in $B[j]$;

  (d) all intervals collectively partition $[0,t)$.

Figure 1: We illustrate our invariant. Each bucket B[j] contains one or two Fibonacci heaps. Each heap F is associated with an interval on the number line in $[0,t)$. The elements $x_i$ stored in a Fibonacci heap F are visualised by placing them at positions $a_i$ along the number line. Importantly, when the buckets in B are indexed from left to right (i.e., from low to high indices), the corresponding intervals of the Fibonacci heaps appear in the reverse order along the number line.

By Invariant 1, for all elements $x_i\in B[j]$, $(t-a_i)$ is at least $2^{j-1}$ but less than $2^{j+2}$. We define M as an array of size $\log n$, where M[j] stores the minimum key in B[j]. Furthermore, we maintain a bitstring SM where SM[j]=1 if, for all k>j, we have $M[j]\le M[k]$.
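
To make the construction concrete, here is a hedged Python skeleton of this structure. `LazyHeap` is a binary-heap stand-in for a Fibonacci heap (same interface, weaker time bounds), `_rebuild_index` recomputes M and SM in O(log n) per call where the text maintains them in amortized constant time, and all names are our own.

```python
import heapq, math

class LazyHeap:
    """heapq-backed stand-in for a Fibonacci heap. Same interface
    (push / decrease_key / min_key / pop / meld), weaker time bounds."""
    def __init__(self):
        self.h, self.key, self.popped = [], {}, set()
    def push(self, x, k):
        self.key[x] = k
        heapq.heappush(self.h, (k, x))
    decrease_key = push                  # lazy: stale entries are skipped
    def _settle(self):                   # drop stale or already-popped entries
        while self.h and (self.h[0][1] in self.popped
                          or self.h[0][0] > self.key[self.h[0][1]]):
            heapq.heappop(self.h)
    def min_key(self):
        self._settle()
        return self.h[0][0] if self.h else math.inf
    def pop(self):
        self._settle()
        k, x = heapq.heappop(self.h)
        self.popped.add(x)
        return x
    def meld(self, other):               # linear here, unlike a real O(1) Meld
        for x, k in other.key.items():
            if x not in other.popped:
                self.push(x, k)

class TimestampHeap:
    """Sketch of Invariant 1: an array of buckets, each holding at most
    two (half-open interval, Fibonacci heap) pairs, oldest first."""
    def __init__(self):
        self.t = 0                       # global time counter (Definition 2)
        self.B = [[]]                    # B[j]: list of ((lo, hi), heap)
        self.M = [math.inf]              # M[j]: minimum key in bucket B[j]
        self.SM = 0                      # bit j set iff M[j] <= M[k] for k > j
        self.a = {}                      # a[x]: push time of x

    def _rebuild_index(self):
        """Recompute M and SM from scratch in O(log n). The text instead
        updates only the affected entries, in amortized constant time."""
        suffix_min = math.inf
        self.SM = 0
        for j in range(len(self.B) - 1, -1, -1):
            self.M[j] = min((h.min_key() for _, h in self.B[j]),
                            default=math.inf)
            if self.M[j] <= suffix_min:
                self.SM |= 1 << j
                suffix_min = self.M[j]
```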

Lemma 4.

For any element $x_i\in H$ we can, given $a_i$ and t, compute the bucket containing $x_i$ and the Fibonacci heap that contains $x_i$ in constant time.

Proof.

Let $a_i$ be the insertion time of $x_i$. We want to determine the possible values j such that B[j] contains $x_i$. From Invariant 1.b we get $t-a_i \ge 2^{j-1}$, or equivalently, $j \le \log(t-a_i)+1$; and $t-a_i < 2^{j+2}$, or equivalently, $j > \log(t-a_i)-2$. In particular, for $\ell = \lfloor\log(t-a_i)\rfloor$, the Fibonacci heap containing $x_i$ lies in one of the buckets $B[\ell-1]$, $B[\ell]$, $B[\ell+1]$. These three buckets contain at most six Fibonacci heaps; for each of the corresponding at most six intervals, we test in constant time whether $a_i$ is contained in the interval. This procedure returns the interval containing $a_i$, and thus the heap (and bucket) containing $x_i$.
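
Continuing the sketch above, the lookup of Lemma 4 inspects only buckets $B[\ell-1]$, $B[\ell]$, $B[\ell+1]$ and their at most six intervals; the method name is ours.

```python
    # Method of the TimestampHeap sketch above (Lemma 4).
    def _find(self, x):
        """Locate the bucket index j and the heap containing x, given its
        push time a[x], by checking at most six intervals."""
        ai = self.a[x]
        l = (self.t - ai).bit_length() - 1   # floor(log2(t - a_i)); t > a_i
        for j in (l - 1, l, l + 1):
            if 0 <= j < len(self.B):
                for (lo, hi), heap in self.B[j]:
                    if lo <= ai < hi:
                        return j, heap
        raise AssertionError("Invariant 1 violated")
```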

Theorem 5.

The above data structure is a timestamp optimal heap H.

Proof.

We prove that the data structure supports all three heap operations within the specified time bounds while maintaining Invariant 1.

Push($x_i$, $y$).

To push $x_i$ onto H, we set $a_i = t$ and increment t. We then add a new Fibonacci heap F to B[0] with $I_F = [a_i, a_i+1)$. Whilst a bucket B[j] contains three Fibonacci heaps, we select its oldest two Fibonacci heaps $F_1, F_2$. We Meld $F_1$ and $F_2$ into a new Fibonacci heap $F'$ in worst-case constant time, and we move $F'$ to B[j+1]. We define the corresponding interval $I_{F'}$ as the union of the consecutive intervals $I_{F_1}$ and $I_{F_2}$. Since Property (a) held for both $F_1$ and $F_2$, it now holds for $F'$. Moreover, $|I_{F'}| = 2^{j+1}$ and so Property (b) is maintained. Since, before the update, all intervals collectively partitioned $[0,t)$ and we only added the interval $[t,t+1)$, we maintain Properties (c) and (d). Every Meld operation decreases the number of Fibonacci heaps and every push operation creates only one heap. Hence, each push operation performs amortized O(1) Meld operations.

Finally, we update M and the bitstring SM. After we have merged two heaps in a bucket B[j], only M[j] and M[j+1] change. Moreover, M[j] can only increase in value, and M[j+1] can only decrease, to at least the original value of M[j]. Therefore, all bits SM[k] for $k \notin \{j, j+1\}$ remain unchanged, and we update SM in amortized constant time. Thus, a push operation takes amortized constant time.
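
A sketch of Push over the same skeleton: the cascade behaves like a binary counter, melding the two oldest heaps of any bucket that overflows to three. As noted above, the final index update is simplified to an O(log n) rebuild.

```python
    # Method of the TimestampHeap sketch above (Push of Theorem 5).
    def push(self, x, key):
        self.a[x] = self.t
        f = LazyHeap()
        f.push(x, key)
        self.B[0].append(((self.t, self.t + 1), f))   # newest interval
        self.t += 1
        j = 0
        while len(self.B[j]) == 3:                    # overflow: meld oldest two
            (lo1, _), f1 = self.B[j].pop(0)
            (_, hi2), f2 = self.B[j].pop(0)           # consecutive intervals
            f1.meld(f2)                               # union has size 2**(j+1)
            if j + 1 == len(self.B):                  # grow the bucket array
                self.B.append([])
                self.M.append(math.inf)
            self.B[j + 1].append(((lo1, hi2), f1))
            j += 1
        self._rebuild_index()   # text: only M[j], M[j+1] and two SM bits change
```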

DecreaseKey($x_i$, $y$).

By Lemma 4, we find the Fibonacci heap F and the bucket B[j] that contain $x_i$ in constant time. We perform F.DecreaseKey($x_i$, y) in amortized constant time. We update M[j] and identify the index k of the leftmost 1-bit in SM after index j in constant time using bitstring operations [4]. Note that SM[j]=1 if and only if $M[j] \le M[k]$.

If SM[j]=1, then we recursively find the next $j' < j$ for which $SM[j']=1$, using the same bitstring operation. If $M[j] < M[j']$, then we set $SM[j']=0$ and we recurse. If $M[j] \ge M[j']$, then for all $j'' < j'$ with $SM[j'']=1$ we have $M[j''] \le M[j]$, and our update terminates. The recursion takes amortized constant time since it can only decrease the number of 1-bits in SM.
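
When SM fits in a machine word, the "bitstring operations" amount to masking and isolating set bits. Below, SM is modelled as a Python int with bit j playing the role of SM[j]; the DecreaseKey method again defers to the simplified O(log n) rebuild, whereas a faithful version would clear the stale 1-bits via the helper.

```python
    # Methods of the TimestampHeap sketch above (DecreaseKey of Theorem 5).
    @staticmethod
    def _leftmost_one_after(SM, j):
        """Least-significant set bit of SM above position j (the paper's
        'leftmost 1-bit after index j'), or -1 if there is none."""
        masked = SM & ~((1 << (j + 1)) - 1)   # clear bits 0..j
        return (masked & -masked).bit_length() - 1 if masked else -1

    def decrease_key(self, x, key):
        j, heap = self._find(x)               # Lemma 4: constant time
        heap.decrease_key(x, key)
        self._rebuild_index()                 # text: walk the 1-bits left of j
                                              # and clear stale ones, amortized O(1)
```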

Pop().

We obtain the index j of the leftmost 1-bit of SM in constant time using bitstring operations. By definition, the minimum key of H is in B[j]. We perform the Peek operation on both heaps in B[j]. For the heap F containing the minimum key, we perform F.Pop() in $O(1+\log 2^j)=O(j)$ time to return the element $x_i\in H$ with the minimum key. Finally, we update M[j] and SM. We update M[j] in constant time and iterate from j to 1. For each iteration $\ell$, let $k>\ell$ be the minimum index where SM[k]=1. We obtain k in constant time using bitstring operations and update $SM[\ell]$ by comparing $M[\ell]$ to M[k]. Thus, this update takes O(j) time. By Invariant 1, $\log(b_i-a_i)\in O(j)$ and thus H.Pop() takes amortized $O(1+\log(b_i-a_i))$ time.
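
A matching sketch of Pop: locate the bucket of the global minimum via the leftmost 1-bit of SM, peek the at most two heaps there, and pop from the one holding the minimum. The sketch assumes the heap is non-empty and, for simplicity, lets empty Fibonacci heaps keep their intervals.

```python
    # Method of the TimestampHeap sketch above (Pop of Theorem 5).
    def pop(self):
        j = (self.SM & -self.SM).bit_length() - 1   # leftmost 1-bit of SM
        _, heap = min(self.B[j], key=lambda ih: ih[1].min_key())
        x = heap.pop()            # O(1 + log 2^j) = O(j) in a Fibonacci heap
        self._rebuild_index()     # text: targeted O(j) update of M and SM
        return x
```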

Corollary 6.

Algorithm 1 can be adapted to run in $O(m+\sum_{x_i\in V\setminus\{s\}}(1+\log(b_i-a_i)))$ time.
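
Assuming the Dijkstra and TimestampHeap sketches from above, Corollary 6 corresponds to the following hypothetical glue code.

```python
# Hypothetical: the Dijkstra sketch of Section 2.1 driven by the
# TimestampHeap sketch of this section.
order, dist = dijkstra(adj, s, make_heap=TimestampHeap)
```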

4 Universal Optimality

We show that categorizing the running time of Dijkstra’s algorithm by timestamps substantially simplifies the proof of its universal optimality. For a given graph G=(V,E) and a designated source vertex s, we define a linearization L of (G,s) as any permutation of $V\setminus\{s\}$ such that there exists some choice of edge weights $w\in\mathbb{W}(G)$ for which L is the solution to the single-source shortest path problem on (G,s,w). Let $\ell(G,s)$ denote the number of distinct linearizations of (G,s). Observe that $\Omega(\log\ell(G,s))$ is a comparison-based universal lower bound for the single-source shortest path problem. Indeed, any comparison-based algorithm has a corresponding binary decision tree with leaves representing possible outputs. With $\ell(G,s)$ distinct outputs, there must exist a root-to-leaf path of length $\Omega(\log\ell(G,s))$.

Our approach.

We fix the input (G=(V,E),s,w) and execute Dijkstra’s algorithm, which defines a set of n timestamp intervals $\{[a_i,b_i]\}_{i\in[n]}$. Each interval $[a_i,b_i]$ has size at least 1 and corresponds to a unique vertex $x_i\in V\setminus\{s\}$. Dijkstra’s algorithm runs in $O(m+\sum_{x_i\in V\setminus\{s\}}(1+\log(b_i-a_i)))$ time. Since any algorithm must spend $\Omega(n+m)$ time to read the input, it remains to show that $\sum_{x_i\in V\setminus\{s\}}(1+\log(b_i-a_i)) \in O(n+\log\ell(G,s))$.

Lemma 7.

For each vertex $x_i\in V\setminus\{s\}$, choose a real value $r_i\in[a_i,b_i]$. If all $r_i$ are distinct, then the sequence L that orders all $x_i$ by $r_i$ is a linearization of (G,s).

Proof.

Consider the initial execution of Dijkstra’s algorithm on (G,s,w). For each vertex $x_i\in V\setminus\{s\}$, there is a unique edge $(u,x_i)$ that, upon inspection, pushes the element $x_i$ onto the heap H. The set of these edges forms a spanning tree T rooted at s.

Denote by $\mathrm{rank}(r_i)$ the rank of $r_i$ in the ordered sequence L, and set $\mathrm{rank}(s)=0$. We construct a new integer edge weighting $w'$ such that running Dijkstra’s algorithm on $(G,s,w')$ outputs L. We set $w'(e)=\infty$ for all edges $e\in E\setminus T$. We then traverse T in a breadth-first manner. For each edge $(x_j,x_i)\in T$, we observe that in our original run of Dijkstra’s algorithm, the algorithm popped the vertex $x_j$ before it pushed the vertex $x_i$ onto the heap. It follows that for the intervals $[a_j,b_j]$ and $[a_i,b_i]$ we have $b_j\le a_i$. Since $r_j\in[a_j,b_j]$, $r_i\in[a_i,b_i]$, and $r_j\ne r_i$, we may conclude that $\mathrm{rank}(r_i)>\mathrm{rank}(r_j)$. We assign to each edge $(x_j,x_i)\in T$ the positive edge weight $w'((x_j,x_i))=\mathrm{rank}(r_i)-\mathrm{rank}(r_j)$. It follows that the distance in $(V,E,w')$ from s to any vertex $x_i$ equals $\mathrm{rank}(r_i)$, and so Dijkstra’s algorithm on $(G,s,w')$ outputs L.
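
The reweighting step of this proof is mechanical; here is a short sketch under our own encoding, where `parent` encodes the tree T and `rank` the positions in L (with rank[s] = 0).

```python
# Lemma 7's reweighting: tree edges get the (positive) rank difference,
# all non-tree edges get weight "infinity".
def reweight(edges, parent, rank):
    w_new = {}
    for u, v in edges:
        if parent.get(v) == u:                 # (u, v) is a tree edge
            w_new[(u, v)] = rank[v] - rank[u]  # positive: rank[v] > rank[u]
        else:
            w_new[(u, v)] = float("inf")
    return w_new       # distance from s to x under w_new equals rank[x]
```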

What remains is to show that the number of ways to select distinct $r_i\in[a_i,b_i]$ correlates with the size of the timestamp intervals. This result was first demonstrated by Cardinal, Fiorini, Joret, Jungers, and Munro [1, the first two paragraphs of page 8 of the arXiv version]. It was later paraphrased as Lemma 4.3 in [5].

Lemma 8 (Originally in [1], paraphrased by Lemma 4.3 in [5]).

Let $\{[a_i,b_i]\}_{i\in[n]}$ be a set of n integer intervals where each interval has size at least 1. Let Z be the set of linear orders realizable by real numbers $r_i\in[a_i,b_i]$. Then $\sum_{i\in[n]}\log(b_i-a_i) \in O(n+\log|Z|)$.

Proof.

To illustrate the simplicity of this lemma, we quote the proof in [5]: For each i between 1 and n inclusive, choose a real number $r_i$ uniformly at random from the real interval $[0,n]$, independently for each i. With probability 1, the $r_i$ are distinct. Let L be the permutation of [n] obtained by sorting [n] by $r_i$. Each possible permutation is equally likely. If each $r_i\in[a_i,b_i]$, then L is in Z. The probability of this happening is $\prod_{i=1}^{n}(b_i-a_i)/n$. It follows that $|Z| \ge n!\,\prod_{i=1}^{n}(b_i-a_i)/n$. Taking logarithms gives $\log|Z| \ge \sum_{i=1}^{n}\log(b_i-a_i)+\log n!-n\log n$. By Stirling’s approximation of the factorial, $\log n! \ge n\log n - n\log e$. The lemma follows.
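
The sampling argument is easy to check numerically; the following snippet (ours, purely illustrative) estimates the probability that all $r_i$ land in their intervals, which converges to $\prod_{i=1}^{n}(b_i-a_i)/n$.

```python
import random

# Estimate the probability that independent r_i ~ Uniform[0, n] all land in
# their intervals [a_i, b_i]; it converges to prod((b - a) / n).
def hit_probability(intervals, trials=100_000):
    n = len(intervals)
    hits = sum(all(a <= random.uniform(0, n) <= b for a, b in intervals)
               for _ in range(trials))
    return hits / trials
```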

It follows that we can recreate the main theorem from [6]:

Theorem 9 (Theorem 1.2 in [6]).

Dijkstra’s algorithm, when using a sufficiently efficient heap data structure, has a universally optimal running time.

Proof.

The running time of Dijkstra’s algorithm on input (G,s,w) is O(m) plus the time needed for the push and pop operations. With a timestamp optimal heap, by Corollary 6, the algorithm thus takes $O(m+\sum_{x_i\in V\setminus\{s\}}(1+\log(b_i-a_i)))$ total time. By Lemma 8, the total running time is then $O(m+n+\log|Z|)$, where |Z| denotes the number of distinct linear orders created by choosing distinct real values $r_i\in[a_i,b_i]$. By Lemma 7, $|Z| \le \ell(G,s)$, and so Dijkstra’s algorithm with any timestamp optimal heap runs in $O(m+n+\log\ell(G,s))$ time. Conversely, any algorithm that solves SSSP needs $\Omega(m+n)$ time to read the input, and $\Omega(\log\ell(G,s))$ is an information-theoretic universal lower bound.

5 Conclusion

We study Dijkstra’s algorithm for the single-source shortest path problem and give an alternative proof that Dijkstra’s algorithm is universally optimal. We consider our construction to be considerably simpler than the one in [6]. We find timestamps easier to define than the working-set size of an element, and regard our timestamp optimal heap as simpler in both its construction and analysis than the heap used in [5]. As an added benefit, we rely only on Fibonacci heaps and bitstring manipulation, and on no involved data structures (e.g., the fusion trees used in [6]). We regard Section 4 as our main contribution, as it gives a very concise proof of universal optimality. We understand that algorithmic simplicity is subjective, and we leave it to the reader to judge the simplicity of our approach. We note, however, that our proofs are unarguably shorter, since the full version of [6] runs to over 50 pages. We observe that both our paper and [6] must assume a word RAM, and we consider it an interesting open problem to show universal optimality on a pointer machine.

Finally, we note that our data structure is as general as that of [6]. In the full version, we show that our data structure, just as the one in [6], is also universally optimal if algorithmic running time is measured exclusively by the number of edge weight comparisons that the algorithm performs. While our proof is indeed new, different, and perhaps simpler than the previous one, it is not self-contained; for this reason, we relegate it to our full version.

References

  • [1] Jean Cardinal, Samuel Fiorini, Gwenaël Joret, Raphaël M. Jungers, and J. Ian Munro. Sorting under partial information (without the ellipsoid algorithm). In ACM Symposium on Theory of Computing (STOC), pages 359–368, 2010. doi:10.1145/1806689.1806740.
  • [2] Edsger W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269–271, 1959. doi:10.1007/BF01386390.
  • [3] Michael L. Fredman and Robert E. Tarjan. Fibonacci heaps and their uses in improved network optimization algorithms. Journal of the ACM, 34(3):596–615, 1987. doi:10.1145/28869.28874.
  • [4] Michael L. Fredman and Dan E. Willard. Surpassing the information theoretic bound with fusion trees. Journal of Computer and System Sciences, 47(3):424–436, 1993. doi:10.1016/0022-0000(93)90040-4.
  • [5] Bernhard Haeupler, Richard Hladík, John Iacono, Václav Rozhoň, Robert E. Tarjan, and Jakub Tětek. Fast and simple sorting using partial information. In ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 3953–3973. SIAM, 2025. doi:10.1137/1.9781611978322.134.
  • [6] Bernhard Haeupler, Richard Hladík, Václav Rozhoň, Robert E. Tarjan, and Jakub Tětek. Universal optimality of Dijkstra via beyond-worst-case heaps. In Symposium on Foundations of Computer Science (FOCS). IEEE, 2024. doi:10.1109/FOCS61266.2024.00125.
  • [7] Bernhard Haeupler, Richard Hladík, Václav Rozhoň, Robert E. Tarjan, and Jakub Tětek. Bidirectional Dijkstra’s algorithm is instance-optimal. In Symposium on Simplicity in Algorithms (SOSA), pages 202–215. SIAM, 2025. doi:10.1137/1.9781611978315.16.
  • [8] Bernhard Haeupler, David Wajc, and Goran Zuzic. Universally-optimal distributed algorithms for known topologies. In ACM Symposium on Theory of Computing (STOC), pages 1166–1179, 2021. doi:10.1145/3406325.3451081.
  • [9] John Iacono. Improved upper bounds for pairing heaps. In Scandinavian Workshop on Algorithm Theory (SWAT), pages 32–45. Springer, 2000. doi:10.5555/645900.672600.
  • [10] John Iacono. Distribution-sensitive data structures. PhD thesis, Rutgers University, New Jersey, 2001.
  • [11] Ivor van der Hoog, Eva Rotenberg, and Daniel Rutschmann. Simpler optimal sorting from a directed acyclic graph. In SIAM Symposium on Simplicity in Algorithms (SOSA), 2025. doi:10.1137/1.9781611978315.26.