Faster Algorithms for Largest Empty Rectangles and Boxes

We revisit a classical problem in computational geometry: finding the largest-volume axis-aligned empty box (inside a given bounding box) amidst n given points in d dimensions. Previously, the best algorithms known have running time O(n log^2 n) for d = 2 (Aggarwal and Suri 1987) and near n^d for d ≥ 3. We describe faster algorithms with the following running times (where ε > 0 is an arbitrarily small constant and Õ hides polylogarithmic factors): n 2^{O(log* n)} log n for d = 2, O(n^{2.5+ε}) for d = 3, and Õ(n^{(5d+2)/6}) for any constant d ≥ 4. To obtain the higher-dimensional result, we adapt and extend previous techniques for Klee's measure problem to optimize certain objective functions over the complement of a union of orthants.


Introduction
Two dimensions. In the first part of this paper, we tackle the largest empty rectangle problem: Given a set P of n points in the plane and a fixed rectangle B_0, find the largest rectangle B ⊂ B_0 such that B does not contain any points of P in its interior. Here and throughout this paper, a "rectangle" refers to an axis-parallel rectangle; and unless stated otherwise, "largest" refers to maximizing the area. The problem has been studied since the early years of computational geometry. While similar basic problems such as largest empty circle or largest empty square can be solved efficiently using Voronoi diagrams, the largest empty rectangle problem seems more challenging. The earliest reference on the 2D problem appears to be by Naamad, Lee, and Hsu in 1984 [26], who gave a quadratic-time algorithm. In 1986, Chazelle, Drysdale, and Lee [15] obtained an O(n log^3 n)-time algorithm. Subsequently, at SoCG'87, Aggarwal and Suri [3] presented another algorithm requiring O(n log^3 n) time, followed by a more complicated second algorithm requiring O(n log^2 n) time. The O(n log^2 n) worst-case bound has not been improved since. A few results on related questions have been given. Dumitrescu and Jiang [20] examined the combinatorial problem of determining the worst-case number of maximum-area empty rectangles and proved an O(n 2^{α(n)} log n) upper bound; their proof, however, does not appear to yield an efficient algorithm. Dumitrescu and Jiang also attempted to give a subcubic algorithm for the 3D problem, but their conditional solution required a sublinear-time dynamic data structure for finding the 2D maximum empty rectangles containing a query point; currently, the existence of such a data structure is not known.
On the lower bound side, Giannopoulos, Knauer, Wahlström, and Werner [23] proved that the largest empty box problem is W[1]-hard with respect to the dimension. This implies a conditional lower bound of Ω(n^{βd}) for some absolute constant β > 0, assuming a popular conjecture on the hardness of the clique problem.
We answer the above question affirmatively. For d = 3, we give an O(n^{5/2+ε})-time algorithm, where ε > 0 is an arbitrarily small constant. For higher constant d ≥ 4, we obtain an algorithm with an intriguing time bound that improves over n^d even more dramatically: Õ(n^{(5d+2)/6}). For example, the bound is O(n^{3.667}) for d = 4, O(n^{4.5}) for d = 5, and O(n^{8.667}) for d = 10.
Not too surprisingly, our 3D algorithm achieves subcubic complexity by applying standard range searching data structures (though the application may not be immediately obvious). Dynamic data structures are not used.
The techniques for our higher-dimensional algorithm are perhaps more original and significant, with potential impact on other problems. We first transform the largest empty box problem into a problem about a union of n orthants in D = 2d dimensions (the transformation is simple and has been exploited before, such as in [5]). The union of orthants is known to have worst-case combinatorial complexity O(n^{⌊D/2⌋}) [7]. Interestingly, we show that it is possible to maximize certain types of objective functions over the complement of the union, in time significantly smaller than the worst-case combinatorial complexity.
We accomplish this by adapting known techniques on Klee's measure problem [27,10,8,11]. Specifically, we build on a remarkable method by Bringmann [8] for computing the volume of a union of n orthants in D dimensions in O(n^{D/3+O(1)}) time (the O(1) term in the exponent was 2/3 but was later removed by the author [11]). However, maximizing an objective function over the complement of the union is different from summing or integrating a function, and Bringmann's method does not immediately generalize to the former (for example, it exploits subtraction). We introduce extra ideas to extend the method, which results in a bigger time bound than n^{D/3} = n^{2d/3} but nevertheless beats n^{D/2} = n^d. In particular, we use some simple graph-theoretical arguments, applied to graphs with O(D) vertices.
Organization. We present our 2D algorithm in Sec. 2, our 3D algorithm in the full paper, and our higher-dimensional algorithms in Sec. 3-4 (all these parts may be read independently).

Largest empty rectangle in 2D
As in previous work [15,3], we focus on solving a line-restricted version of the 2D largest empty rectangle problem: given a set P of n points below a fixed horizontal line ℓ_0 and a set Q of n points above ℓ_0, where the x-coordinates of all points have been pre-sorted, and given a rectangle B_0, find the largest-area rectangle B ⊂ B_0 that intersects ℓ_0 and is empty of points of P ∪ Q. By standard divide-and-conquer, an O(T(n))-time algorithm for the line-restricted problem immediately yields an O(T(n) log n)-time algorithm for the original largest empty rectangle problem, assuming that T(n)/n is nondecreasing. We begin by reformulating the line-restricted problem as a problem about horizontal line segments. In the subsequent subsections, we will work with this reformulation.
For each point p ∈ P, let s(p) be the longest horizontal line segment inside B_0 such that s(p) passes through p and there are no points of P above s(p). See Figure 1(a). We can compute s(p) for all p ∈ P in O(n) time: this step is equivalent to the construction of the standard Cartesian tree [31,22], for which there are simple linear-time algorithms (for example, by inserting points from left to right and maintaining a stack, as in Graham's scan; this is also re-described in previous papers [15,3]). Similarly, for each q ∈ Q, let t(q) be the longest horizontal line segment inside B_0 such that t(q) passes through q and there are no points of Q below t(q). We can also compute t(q) for all q ∈ Q in O(n) time.
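For concreteness, the stack-based computation of the segments s(p) can be sketched as follows (an illustrative Python sketch with names of our choosing, not part of the formal description; each s(p) extends from the nearest strictly higher point on its left to the nearest strictly higher point on its right, clipped to the x-range of B_0):

```python
def horizontal_extents(points, x_left, x_right):
    # points: list of (x, y) pairs, pre-sorted by x-coordinate.
    # Returns (left, right): for each point p, the x-range of the longest
    # horizontal segment through p with no input point strictly above it,
    # clipped to the bounding interval [x_left, x_right].  O(n) total:
    # each index is pushed and popped at most once per pass.
    n = len(points)
    left, right = [x_left] * n, [x_right] * n
    stack = []
    for i in range(n):                      # nearest strictly higher point on the left
        while stack and points[stack[-1]][1] <= points[i][1]:
            stack.pop()
        if stack:
            left[i] = points[stack[-1]][0]
        stack.append(i)
    stack = []
    for i in range(n - 1, -1, -1):          # nearest strictly higher point on the right
        while stack and points[stack[-1]][1] <= points[i][1]:
            stack.pop()
        if stack:
            right[i] = points[stack[-1]][0]
        stack.append(i)
    return left, right
```

The two passes mirror the left-to-right stack insertion described above; the same routine, with the y-comparisons flipped, yields the segments t(q).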
For a horizontal segment s, let x_s^- and x_s^+ denote the x-coordinates of its left and right endpoints respectively, and let y_s denote its y-coordinate. We say that a set S of horizontal segments is laminar if for every s, s′ ∈ S, either the two intervals [x_s^-, x_s^+] and [x_{s′}^-, x_{s′}^+] are disjoint, or one interval is contained in the other (in other words, the intervals form a "balanced parentheses" or tree structure). It is easy to see that for the segments defined above, {s(p) : p ∈ P} is laminar and {t(q) : q ∈ Q} is laminar.
The optimal rectangle must have some point p* ∈ P on its bottom side and some point q* ∈ Q on its top side (except when the optimal rectangle touches the bottom or top side of B_0, a case that can be easily dismissed in linear time). Chazelle, Drysdale, and Lee [15] already noted that one case can be handled in O(n) time (in their terminology, this is the case of "three supports in one half, one in the other"). The key remaining case is the one to which all other cases are symmetric; the problem is thus reduced to the following (see Figure 1). We find it more convenient to work with the corresponding decision problem, as stated below. By the author's randomized optimization technique [9], an O(T(n))-time algorithm for Problem 2 yields an O(T(n))-expected-time algorithm for Problem 1, assuming that T(n)/n is nondecreasing:

▶ Problem 2. Given a laminar set S of n horizontal segments and a laminar set T of n horizontal segments, where all x-coordinates have been pre-sorted, and given a value A, decide whether there is a pair (s, t) ∈ S × T whose corresponding rectangle has area at least A, and if so, report one such pair. We call such a pair good.

Preliminaries
To help solve Problem 2, we define a curve γ_s for each s ∈ S, for a sufficiently small δ > 0 and a sufficiently large M = M(δ). (The first part of the curve is a hyperbola.) The condition for a pair (s, t) to be good is that the right endpoint of t lies above the curve γ_s, assuming x_t^+ ≥ x_s^- + δ. Note that these curves form a family of pseudo-lines, i.e., every pair of curves intersects at most once. Define the curve segment ←γ_s to be the part of γ_s restricted to x ≤ x_s^+. (See Figure 1(c).) These curve segments form a family of pseudo-rays. The lower envelope of n pseudo-rays has at most 2n edges, by known combinatorial bounds on order-2 Davenport-Schinzel sequences [29]. The following lemma summarizes known subroutines we need on the computation of lower envelopes (proofs are briefly sketched).

Proof. Part (a) follows by a straightforward variant of Graham's scan [17] (originally for computing planar convex hulls, or, by duality, lower envelopes of lines). We insert pseudo-rays in decreasing order of their right endpoints' x-values, while maintaining the portion of the lower envelope to the left of the right endpoint of the current pseudo-ray. In each iteration, by the monotonicity assumption, a prefix or suffix of the lower envelope gets deleted (i.e., popped from a stack). For part (b), the main case is when both the left and right endpoints are monotonically increasing in the pseudo-slopes (the case when both are monotonically decreasing is symmetric, and the case when they are monotone in different directions easily reduces to two instances of the pseudo-ray case). Greedily construct a minimal set of vertical lines that stab all the pseudo-segments: namely, draw a vertical line at the leftmost right endpoint, remove all pseudo-segments stabbed, and repeat. This process can be done in O(n) time by a linear scan. These vertical lines divide the plane into slabs. (See Figure 2.)
In each slab, the pseudo-segments behave like pseudo-rays, so we can compute the lower envelope inside the slab in linear time by applying part (a) twice, once for the leftward rays and once for the rightward rays (the two envelopes can be merged in linear time). Since each pseudo-segment participates in at most two slabs, the total time is linear. ◀

As an application of Lemma 1(b), we mention an efficient algorithm for a special case of Problem 2, which will be useful later.
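The greedy stabbing step in the proof of Lemma 1(b) can be sketched as follows (an illustrative Python sketch under the assumption that each pseudo-segment is represented by its x-projection; when the endpoints are pre-sorted, the sort below becomes a linear scan, matching the O(n) bound):

```python
def greedy_stabbing_lines(segments):
    # segments: list of (x_minus, x_plus) x-projections of the pseudo-segments.
    # Returns a minimal set of x-coordinates of vertical lines stabbing every
    # segment: repeatedly place a line at the leftmost remaining right endpoint
    # and discard all segments it stabs.
    lines = []
    last = float('-inf')
    for x_minus, x_plus in sorted(segments, key=lambda s: s[1]):
        if x_minus > last:          # not stabbed by any line placed so far
            last = x_plus
            lines.append(last)
    return lines
```

Each returned coordinate is the boundary of one slab; every segment then crosses or ends at a slab boundary, so within a slab it behaves like a pseudo-ray.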

Proof. Since S and T are laminar, the x-projected intervals in each set are nested. The problem reduces to finding a pair (s_i, t_j) such that max{a(i), b(i)} ≤ j ≤ c(i) and the right endpoint of t_j is above γ_{s_i}. Define the curve segment to be the part of γ_{s_i} restricted to the relevant x-range. The problem then reduces to finding a t_j whose right endpoint is above some such curve segment, i.e., above the lower envelope of these curve segments. We can compute this lower envelope in O(n) time by Lemma 1(b) (more precisely, by two invocations of the lemma, as max{x^+_{t_{a(i)}}, x^+_{t_{b(i)}}} consists of a monotonically increasing and a monotonically decreasing part). The problem can then be solved by a linear scan over the envelope and the endpoints of t_j. ◀

Algorithm
We are now ready to describe our new algorithm for solving Problem 2, using interval trees and an interesting recursion with O(log* n) depth.
Proof. As a first step, we build the standard interval tree for the given horizontal segments in S ∪ T. This is a perfectly balanced binary tree with O(log n) levels, where each node corresponds to a vertical slab. The root slab is the entire plane, the slab at a node is the union of the slabs of its two children, and each leaf slab contains no endpoints in its interior. Each segment is stored in the lowest node v whose slab contains the segment (i.e., the segment is contained in v's slab but is not contained in either child's subslab). Note that each segment is stored only once (unlike in another standard structure, the "segment tree"). We can determine the slab containing each segment in O(1) time by an LCA query [6] (which is easier in the case of a perfectly balanced binary tree). For each node v, let S_v (resp. T_v) be the set of all segments of S (resp. T) stored in v. Define the level of a segment to be the level of the node it is stored in.

Case 1. There exists a good pair (s*, t*) where s* and t* have the same level. Here, s* and t* must be stored in the same node v of the interval tree. Thus, a good pair can be found by solving the problem separately for each pair (S_v, T_v) over all nodes v. Note that all segments in S_v ∪ T_v indeed intersect a fixed vertical line (the dividing line at v).
The total running time of this step is O(n), since each segment is in only one S_v or T_v.
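The interval-tree assignment above can be sketched as follows (an illustrative Python sketch with hypothetical names; for simplicity it locates each segment's node by an O(log n) descent rather than the O(1) LCA method mentioned above):

```python
from collections import defaultdict

def interval_tree_assign(segments, xs):
    # xs: sorted endpoint x-coordinates; consecutive values bound the leaf slabs.
    # Each segment (l, r) is stored at the lowest node whose slab contains it,
    # i.e. the highest node whose dividing line the segment crosses.
    # Nodes are identified by (lo, hi) index ranges into xs.
    nodes = defaultdict(list)
    for l, r in segments:
        lo, hi = 0, len(xs) - 1
        while hi - lo > 1:
            mid = (lo + hi) // 2
            if r <= xs[mid]:        # segment lies in the left child's subslab
                hi = mid
            elif l >= xs[mid]:      # segment lies in the right child's subslab
                lo = mid
            else:                   # segment crosses the dividing line: store here
                break
        nodes[(lo, hi)].append((l, r))
    return dict(nodes)
```

Each segment is stored exactly once, so the sets S_v and T_v partition S and T as required.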

Case 2.
There exists a good pair (s*, t*) where s* is on a strictly lower level than t*. To deal with this case, we perform the following steps, for some choice of parameter b ≥ log n:

2a. For each node v, compute the lower envelope E_v of the curve segments ←γ_s over all s ∈ S_v.
2b. Divide the plane into a set Σ of n/b vertical slabs, each containing b right endpoints of T.
2c. For each slab σ ∈ Σ, let T_σ be the set of all segments t ∈ T with right endpoints in σ, and let S_σ be the set of all segments s ∈ S such that ←γ_s appears on E_v ∩ σ for some node v. Divide S_σ (arbitrarily) into blocks of size b and recursively solve the problem for T_σ and each block of S_σ.
Correctness. Consider a good pair (s*, t*) with s* on a strictly lower level than t*. Let σ be the slab in Σ containing the right endpoint of t*, i.e., t* ∈ T_σ. Let v be the node s* is stored in. Then t* intersects the left wall of the slab at v (since t* must be stored in a proper ancestor of v). Now, the right endpoint of t* is above ←γ_{s*} and is thus above E_v. Let ←γ_s be the curve on E_v that the right endpoint of t* is above, with s ∈ S_v. Then ←γ_s appears on E_v ∩ σ, and so s ∈ S_σ. Since the right endpoint of t* is above ←γ_s, the pair (s, t*) is good, and the recursive call for T_σ and some block of S_σ will find a good pair.

Largest empty anchored box in higher dimensions (warm-up)
To prepare for our solution to the largest empty box problem in higher constant dimensions, we first investigate a simpler variant, the largest empty anchored box problem: given a set P of n points in R^d and a fixed box B_0, find the largest-volume anchored box in B_0 that does not contain any points of P in its interior, where an anchored box has the form B = (0, x_1) × ⋯ × (0, x_d) (having the origin as one of its vertices). Let ⋃S denote the union of a set S of objects. By mapping a box B = (0, x_1) × ⋯ × (0, x_d) to the point (x_1, …, x_d), and mapping each input point (p_1, …, p_d) to the orthant (p_1, ∞) × ⋯ × (p_d, ∞), the largest empty anchored box problem reduces to the following. By known results [7], the union of n orthants in R^d has worst-case combinatorial complexity O(n^{⌊d/2⌋}) and can be constructed in O(n^{⌊d/2⌋}) time. We will show that Problem 3 can be solved faster than explicitly constructing the union.
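The reduction can be made concrete with a brute-force reference implementation (an illustrative Python sketch, exponential in d and far slower than the algorithms in this paper; it assumes positive coordinates and B_0 = (0, b_1) × ⋯ × (0, b_d); the emptiness test is exactly non-membership of (x_1, …, x_d) in the union of orthants):

```python
from itertools import product

def largest_empty_anchored_box(points, b0):
    # A box (0, x_1) x ... x (0, x_d) is empty iff the point (x_1, ..., x_d)
    # avoids every orthant (p_1, inf) x ... x (p_d, inf), i.e. no input point
    # p satisfies p_i < x_i for all i.  An optimal box has each x_i equal to
    # some input coordinate or the boundary of B_0, so we try all candidates.
    d = len(b0)
    cands = [sorted({p[i] for p in points} | {b0[i]}) for i in range(d)]
    best, best_x = 0.0, None
    for x in product(*cands):
        if any(all(p[i] < x[i] for i in range(d)) for p in points):
            continue                # (x_1, ..., x_d) lies inside the union
        vol = 1.0
        for xi in x:
            vol *= xi
        if vol > best:
            best, best_x = vol, x
    return best, best_x
```

Such a reference is useful only for validating faster implementations on tiny inputs.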

Preliminaries
A key tool we need is a spatial partitioning scheme due to Overmars and Yap [27] (originally developed for solving Klee's measure problem in O(n^{d/2}) time). The version stated below is taken from [11, Lemma 4.6]; see that paper for a short proof. (The partitioning scheme is also related to "orthogonal BSP trees" [21,14].) A simple function is a function of the form H(x_1, …, x_d) = h_1(x_1) ⋯ h_d(x_d), where each h_i is a univariate step function. The complexity of H refers to the total complexity (number of steps) of these step functions. As an illustration of the usefulness of Lemma 5, we first show how to maximize simple functions over the complement of a union of orthants in O(n^{d/2}) time:

Algorithm
To improve over n^{d/2}, we adapt an approach by Bringmann [8]. We first observe a few simple rules for rewriting expressions, applicable when the step functions involved are both increasing or both decreasing. Note that the lower envelope min{f, g} is still a monotone step function with O(n) complexity. A similar rule applies for ≥.
The plan is to decrease the dimension by repeatedly eliminating variables: We maintain a simple function H. Initially, H(x_1, …, x_d) = σ(x_1) ⋯ σ(x_d), where σ(x) denotes the successor of x among the O(n) input coordinate values (σ is a step function). We call an index i free if the variable x_i appears exactly once in H and is "unaltered", i.e., h_i(x_i) = σ(x_i). All indices are initially free.
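For illustration, the step function σ can be realized as follows (a hypothetical Python sketch; we take the successor of x to mean the smallest input coordinate value ≥ x, an assumption on our part):

```python
import bisect

def make_successor(coords):
    # Returns sigma, where sigma(x) is the smallest input coordinate >= x:
    # a monotone step function with O(n) steps over the sorted coordinates.
    vals = sorted(coords)
    def sigma(x):
        i = bisect.bisect_left(vals, x)
        return vals[i] if i < len(vals) else float('inf')
    return sigma
```

Because σ only takes O(n) distinct values, products and envelopes of such functions retain O(n) complexity, which is what the elimination steps below rely on.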
In each iteration, we pick a free index i. Whenever x_i appears more than twice in E, we can apply rule 4 (in combination with rules 1-3) to obtain a disjunction of 2 subexpressions, where in each subexpression the number of occurrences of x_i is decreased. By repeating this process O(1) times (recall that d is a constant), we obtain a disjunction of O(1) subexpressions, where in each subexpression only at most two occurrences of x_i remain: at most one predicate of the form [x_i ≤ f(x_j)], and at most one predicate of the form [x_i ≥ g(x_k)]. We branch off to maximize H over each of these subexpressions separately. In such a subexpression, to eliminate the variable x_i while maximizing H, we replace the two predicates [x_i ≤ f(x_j)] and [x_i ≥ g(x_k)] with the predicate [f(x_j) ≥ g(x_k)], and replace x_i with f(x_j) in H (which is still a step function with O(n) complexity). Now, i and j are not free.
We stop a branch when there are no free indices left. At the end, we get a large but O(1) number of subproblems, where in each subproblem at least ⌈d/2⌉ variables have been eliminated, i.e., the dimension is decreased to d′ ≤ ⌊d/2⌋. We solve each subproblem by maximizing the resulting simple function over the complement of the remaining union of orthants, using the method from the preliminaries.

Largest empty box in higher dimensions
We now adapt the approach from Section 3 to solve the original largest empty box problem in higher dimensions. The resulting objective function H_new-vol is a bit more complicated than the one from Section 3, and so further ideas are needed.

Preliminaries
For a multigraph G with vertex set {1, …, d} (without self-loops), define a G-function H : R^d → R to be a function of the form H(x_1, …, x_d) = ∏_{i=1}^{d} h_i(x_i) · ∏_{e=ij ∈ G} (h′_e(x_i) + h″_e(x_j)), where h_i, h′_e, and h″_e are univariate step functions. The complexity of H refers to the total complexity of these step functions. A pseudo-forest is a graph where each component is either a tree, or a tree plus an edge; in the latter case, the component is called a 1-tree (and we allow the extra edge to be a duplicate of an edge in the tree).
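The pseudo-forest condition is equivalent to requiring that every connected component has at most as many edges as vertices. A union-find sketch (illustrative Python, with names of our choosing) makes this check concrete:

```python
def is_pseudo_forest(d, edges):
    # Multigraph on vertices 1..d (parallel edges allowed, no self-loops).
    # Pseudo-forest iff each component has #edges <= #vertices, i.e. each
    # component is a tree (extra == 0) or a tree plus one edge (extra == 1).
    parent = list(range(d + 1))
    extra = [0] * (d + 1)           # extra[root] = #edges - #vertices + 1
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]   # path halving
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru == rv:
            extra[ru] += 1          # edge closes a cycle in its component
        else:
            parent[ru] = rv
            extra[rv] += extra[ru]
    return all(extra[find(v)] <= 1 for v in range(1, d + 1))
```

In the algorithm below, this invariant is what bounds how many variables can become "locked" per component.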
Consider a G-function of the form H = h(x_i) · (h′(x_i) + h″(x_ℓ)) · (⋯), where h, h′, h″ are step functions and x_i does not appear in "⋯". Define F(ξ) := max_{x∈R} h(x) · (h′(x) + ξ). Then F is the upper envelope of O(n) linear functions in the single variable ξ, and can be constructed in O(n) time by the dual of a planar convex hull algorithm [17]. We can eliminate the variable x_i by replacing the factor h(x_i) · (h′(x_i) + h″(x_ℓ)) with F(h″(x_ℓ)), which is again a step function of x_ℓ with O(n) complexity.
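The envelope construction can be sketched as follows (an illustrative Python sketch of the standard slope-sorted stack sweep, i.e., the dual of Graham's scan; with pre-sorted input the sort becomes a linear scan, matching the O(n) bound):

```python
def upper_envelope(lines):
    # lines: (a, b) pairs representing y = a*x + b.  Returns the lines that
    # appear on the upper envelope, ordered by increasing slope.
    by_slope = {}
    for a, b in lines:                      # slope ties: keep the highest line
        by_slope[a] = max(by_slope.get(a, float('-inf')), b)
    def bad(l1, l2, l3):
        # l2 is never strictly on top: lines l1 and l3 cross at or to the
        # left of where l1 and l2 cross (cross-multiplied to avoid division).
        (a1, b1), (a2, b2), (a3, b3) = l1, l2, l3
        return (b3 - b1) * (a2 - a1) >= (b2 - b1) * (a3 - a1)
    hull = []
    for ln in sorted(by_slope.items()):
        while len(hull) >= 2 and bad(hull[-2], hull[-1], ln):
            hull.pop()
        hull.append(ln)
    return hull
```

Evaluating F then amounts to a binary search for the envelope piece containing a query ξ, so composing F with the step function h″ keeps O(n) total complexity.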

Algorithm
We now modify the proof of Theorem 7 to solve Problem 4 for the 2-sided orthant case. Initially, H is a G-function with G being a matching with d/2 edges, where σ(x) denotes the successor of x among all O(n) input coordinate values. We call an index i free if x_i appears exactly once in H and is "unaltered" (i.e., H is of the form (σ(x_i) + h(x_ℓ)) ⋯ where x_i does not appear in "⋯"). All indices are initially free. We maintain the following invariants: at any time, (i) G is a pseudo-forest with at most d/2 edges, and (ii) for each component T of G which is a tree (not a 1-tree), T has at least two free leaves.
In each iteration, we pick a free leaf i in some component T of G which is a tree. As before, we rewrite the expression E as a disjunction of O(1) subexpressions, where in each subexpression only two occurrences of x_i remain: one in a predicate of the form [x_i ≤ f(x_j)], and one in a predicate of the form [x_i ≥ g(x_k)].
We branch off to maximize H for each of these subexpressions separately. In such a subexpression, to eliminate the variable x_i while maximizing H, we replace the two predicates [x_i ≤ f(x_j)] and [x_i ≥ g(x_k)] with [f(x_j) ≥ g(x_k)], and replace x_i with f(x_j) in H (since x_i is free). Now, i and j are not free. Also, in the graph G, the unique edge iℓ incident to i is replaced by jℓ (unless j = ℓ). If j is in the same component T as i, then T becomes a 1-tree; otherwise, two components are merged and the new component is either a tree with at least two free leaves, or a 1-tree. (See

Remarks
On the 2D algorithm. The 2^{O(log* n)} factor can be analyzed more precisely (an upper bound of 2^{3 log* n} can be shown with minor changes to the algorithm). A question remains whether the extra factor could be further lowered to inverse-Ackermann, or eliminated completely. The previous algorithm by Aggarwal and Suri [3] used matrix searching techniques, namely, for finding row minima in certain types of partial Monge matrices. We are able to bypass such subroutines because we have focused our effort on solving the decision problem (due to the author's randomized optimization technique [9]). Generally, the row minima problem is equivalent to the computation of lower envelopes of pseudo-rays and pseudo-segments, not necessarily of constant complexity [12]. However, to solve the decision problem, we only need lower envelopes of pseudo-rays and pseudo-segments of constant complexity (formed by hyperbolas), for which there are simpler direct methods, as we have noted in Lemma 1. (Incidentally, the proof we gave for reducing Lemma 1(b) to (a) is essentially equivalent to Aggarwal and Klawe's reduction of row minima in double-staircase to staircase matrices [2]; a similar idea has also been used in dynamic data structures with "FIFO updates" [13].)