
Incremental Planar Nearest Neighbor Queries with Optimal Query Time

John Iacono, Université libre de Bruxelles, Belgium; Yakov Nekrich, Michigan Technological University, Houghton, MI, USA
Abstract

In this paper we show that two-dimensional nearest neighbor queries can be answered in optimal O(log n) time while supporting insertions in O(log^{1+ε} n) time. No previous data structure was known that supports O(log n)-time queries and polylogarithmic-time insertions. In order to achieve logarithmic query time, our data structure uses a new technique related to fractional cascading that leverages the inherent geometry of this problem. Our method can also be used in other semi-dynamic scenarios.

Keywords and phrases:
Data Structures, Dynamic Data Structures, Nearest Neighbor Queries
Funding:
John Iacono: Research supported by the Fonds de la Recherche Scientifique – FNRS.
Yakov Nekrich: Supported by the National Science Foundation under NSF grant 2203278.
Copyright and License:
© John Iacono and Yakov Nekrich; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Computational geometry; Theory of computation → Data structures design and analysis
Related Version:
Full Version: https://arxiv.org/abs/2504.07366 [15]
Editors:
Oswin Aichholzer and Haitao Wang

1 Introduction

In the nearest neighbor problem, a set of points S is stored in a data structure so that for a query point q the point p ∈ S that is closest to q can be found efficiently. The nearest neighbor problem and its variants are among the most fundamental and extensively studied problems in computational geometry; we refer to [19] for a survey. In this paper we study dynamic data structures for the Euclidean nearest neighbor problem in two dimensions. We show that the optimal O(log n) query time for this problem can be achieved while allowing insertions in O(log^{1+ε} n) time.

Previous Work

See Table 1. In the static scenario the planar nearest neighbor problem can be solved in O(log n) time by point location in a Voronoi diagram. However, the dynamic variant of this problem is significantly harder because Voronoi diagrams cannot be dynamized efficiently: it was shown by Allen et al. [2] that a sequence of insertions can lead to Ω(n) amortized combinatorial changes per insertion in the Voronoi diagram. A static nearest-neighbor data structure can be easily transformed into an insertion-only data structure using the logarithmic method of Bentley and Saxe [3], at the cost of increasing the query time to O(log^2 n). Several researchers [11, 9, 20] studied the dynamic nearest neighbor problem in the situation when the sequence of updates is random in some sense (e.g., the deletion of any element in the data structure is equally likely). However, their results cannot be extended to the case when the complexity of a specific sequence of updates must be analyzed.

Using a lifting transformation [10], 2-d nearest neighbor queries can be reduced to extreme point queries on a 3-d convex hull. Hence data structures for the dynamic convex hull in 3-d can be used to answer 2-d nearest neighbor queries. The first such data structure (without assumptions about the update sequence) was presented by Agarwal and Matoušek [1]. Their data structure supports queries in O(log n) time and updates in O(n^ε) time; another variant of their data structure supports queries in O(n^ε) time and updates in O(log n) time. A major improvement was achieved in a seminal paper by Chan [4], where he presented a structure that supports queries in O(log^2 n) time, insertions in O(log^3 n) expected time and deletions in O(log^6 n) expected time. The update procedure can be made deterministic using the result of Chan and Tsakalidis [6]. The deletion time was further reduced to O(log^5 n) [16] and then to O(log^4 n) [5]. This sequence of papers makes use of shallow cuttings, a general and powerful technique, but, alas, all uses of it for the point location problem in 2-d have resulted in O(log^2 n) query times.
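Concretely, the lifting map sends a point p=(a,b) to p̂=(a,b,a^2+b^2) on the paraboloid, and for a query q=(x,y),

‖q−p‖^2 = (x−a)^2 + (y−b)^2 = (x^2+y^2) − (2ax + 2by − (a^2+b^2)),

so, for fixed q, minimizing the distance over p ∈ S is the same as maximizing the linear function ⟨(2x,2y,−1), p̂⟩ over the lifted points: an extreme point query on the 3-d convex hull.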

Even in the insertion-only scenario, the direct application of the 45-year-old classic technique of Bentley and Saxe [3] remained, before this work, the best method with polylogarithmic update time: no data structure with o(log^2 n) query time and polylogarithmic update time had been described previously.

Table 1: Known results. Insertion and deletion times are amortized; † denotes in expectation.

                                  Query         Insert           Delete
Bentley and Saxe 1980 [3]         O(log^2 n)    O(log^2 n)       Not supported
Agarwal and Matoušek 1995 [1]     O(log n)      O(n^ε)           O(n^ε)
    ''                            O(n^ε)        O(log n)         O(log n)
Chan 2010 [4]                     O(log^2 n)    O(log^3 n)†      O(log^6 n)†
Chan and Tsakalidis 2016 [6]      O(log^2 n)    O(log^3 n)       O(log^6 n)
Kaplan et al. 2020 [16]           O(log^2 n)    O(log^3 n)       O(log^5 n)
Chan 2020 [5]                     O(log^2 n)    O(log^3 n)       O(log^4 n)
Here                              O(log n)      O(log^{1+ε} n)   Not supported
Our Results

We demonstrate that optimal O(log n) query time and polylogarithmic update time can be achieved in some dynamic settings. The following scenarios are considered in this paper:

  1. We describe a semi-dynamic insertion-only data structure that uses O(n) space, supports insertions in O(log^{1+ε} n) amortized time and answers queries in O(log n) time.

  2. In the semi-online scenario, introduced by Dobkin and Suri [12], the deletion time of a point p is known when p is inserted, i.e., we know at insertion time how long a point will remain in the data structure. We describe a semi-online fully-dynamic data structure that answers queries in O(log n) time and supports updates in O(log^{1+ε} n) amortized time. The same result is also valid in the offline scenario, when the entire sequence of updates is known in advance.

  3. In the offline partially persistent scenario, the sequence of updates is known and every update creates a new version of the data structure. Queries can be asked of any version of the data structure. We describe an offline partially persistent data structure that uses O(n log^{1+ε} n) space, can be constructed in O(n log^{1+ε} n) time and answers queries in O(log n) time.

All three problems considered in this paper can be reduced to answering point location queries in (static) Voronoi diagrams of O(log n) different point sets. For example, we can obtain an insertion-only data structure by using the logarithmic method of Bentley and Saxe [3], which we now briefly describe. The input set S is partitioned into a logarithmic number of subsets S_1, …, S_f of exponentially increasing sizes. In order to find the nearest neighbor of a query point q, we locate q in the Voronoi diagram of each set S_i and report the point closest to q among these nearest neighbors. Since each point location query takes O(log n) time, answering a logarithmic number of queries takes O(log^2 n) time.
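To make this baseline concrete, here is a minimal sketch of the logarithmic method (our own illustrative code: a brute-force scan over each subset stands in for point location in that subset's Voronoi diagram):

    import math

    class LogMethodNN:
        """Bentley-Saxe logarithmic method: a partition of S into O(log n)
        static structures of exponentially increasing size."""

        def __init__(self):
            self.levels = []  # levels[i] is None or a frozen subset of size 2^i

        def insert(self, p):
            carry = [p]
            i = 0
            # Merge equal-sized subsets like binary addition; each merge models
            # a rebuild of a static structure (here a plain list; in the real
            # structure, a Voronoi diagram plus a point location structure).
            while i < len(self.levels) and self.levels[i] is not None:
                carry += self.levels[i]
                self.levels[i] = None
                i += 1
            if i == len(self.levels):
                self.levels.append(None)
            self.levels[i] = carry

        def query(self, q):
            # One nearest neighbor query per level; with an O(log n)-time
            # static structure per level this costs O(log^2 n) overall.
            best = None
            for level in self.levels:
                if level is None:
                    continue
                p = min(level, key=lambda r: math.dist(r, q))
                if best is None or math.dist(p, q) < math.dist(best, q):
                    best = p
            return best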

The fractional cascading technique [7], applied to this problem in one dimension, decreases the query cost to logarithmic by sampling elements of each S_i and storing copies of the sampled elements in other sets S_j, j<i. Unfortunately, it was shown by Chazelle and Liu [8] that fractional cascading does not work well for two-dimensional non-orthogonal problems, such as point location: in order to answer O(log n) point location queries in O(log n) time, we would need Ω̃(n^2) space, even in the static scenario.

To summarize, there are two obvious approaches to the insertion-only problem: the first is to maintain a single search structure and update it with each insertion; the second is to maintain a collection of static Voronoi diagrams of exponentially increasing size and to execute nearest neighbor queries by finding the closest point in all structures, perhaps aided by some kind of fractional cascading. The first approach cannot achieve polylogarithmic insertion time due to the lower bound on the complexity change in Voronoi diagrams caused by insertions [2], and the second approach cannot achieve O(log n) search time due to Chazelle and Liu's lower bound [8]. Our main intellectual contribution is showing that the lower bound of Chazelle and Liu [8] can be circumvented for the case of point location in Voronoi diagrams. Specifically, a strict fractional cascading approach requires finding the closest point to a query point in each of the subsets S_i; we loosen this requirement: in each S_i, we either find the closest point or provide a certificate that the closest point in S_i is not the closest point in S. This new, powerful and more flexible form of fractional cascading is made possible by a number of novel observations about the geometry of the problem. We imagine our general technique may be applicable to speeding up search in other dynamic search problems. Our method employs planar separators to sample point sets and uses properties of Voronoi diagrams to speed up queries. We explain our method and show how it can be applied to the insertion-only nearest neighbor problem in Section 2.

A further modification of our method, described in the full version of this paper [15], improves the insertion time to O(log^{1+ε} n) and the space usage to O(n). The description of the partially persistent and semi-online variants is also in the full version.

2 Basic Insertion-Only Structure

We present our basic insertion-only structure in several parts. In the first part, the overview (§2.1), we present the structure with one needed function, jump, treated as a black box. With the jump function abstracted, our structure is a combination of known techniques, notably the logarithmic method of Bentley and Saxe [3] and sampling. In §2.2 we present the implementation of the jump function and the needed geometric preliminaries. In contrast to the combination of standard techniques presented in the overview, which requires little use of geometry, our implementation of the jump function is novel and requires thorough geometric arguments. We then fully describe the underlying data structures needed in §2.3. Note that all arguments presented here are made with the goal of obtaining O(log n) queries and polylogarithmic insertion; we opt for clarity of presentation over reducing the number of logarithmic factors in the insertion time, as the arguments are already complex.

2.1 Overview

We let S denote the set of points currently stored in the structure, and use n to denote |S|. In this section we set a constant d to 2. Even though for this section d is fixed, we will express everything as a function of d.

Let 𝒮={S_1,S_2,…,S_f} denote a partition of S into sets of exponentially increasing size, where f ≝ |𝒮| = Θ(log_d n) and |S_i| = Θ(d^i). Note that the partition of S into the sets S_i is not unique. Let NN(P,q) be the nearest neighbor of q in a point set P, which we assume to be unique. Given a point q, the computation of NN(S,q) is the query that our data structure will support.

We now define a sequence of point sets T_1,…,T_f. The intuition is that, as in classical fractional cascading [7], the set T_i contains all elements of S_i and a sample of elements from the sets T_j with j>i; this implies the last sets are equal: T_f=S_f. This sampling will be provided by the function Sample_j(k), which returns a subset of T_j of size O(|T_j|/d^{2k}); while it will have other important properties, for now only the size matters.

We can now formally define T_i: T_i ≝ S_i ∪ ⋃_{j=i+1}^{f} Sample_j(j−i). From this definition we have several observations, which we group into a lemma; the proof can be found in the full version of this paper:

Lemma 1.

Facts about T_i:

  1. T_f = S_f.

  2. T_i is a function of the S_j, for j ≥ i.

  3. S = ⋃_{i=1}^{f} T_i.

  4. NN(S,q) ∈ ⋃_{i=1}^{f} {NN(T_i,q)}.

  5. |T_i| = Θ(d^i).

  6. For any i, ∑_{j=i+1}^{f} |Sample_j(j−i)| = Θ(|T_i|).

A note on notation

We assume the partition of S into the sets 𝒮={S_1,S_2,…,S_f}. Any further notation that includes a subscript, such as T_i, is a function of the S_j, for j ≥ i. This compact notation makes explicit the dependence of anything_i only on the S_j, for j ≥ i. This dependence in one direction only (i.e., on non-smaller indexes only) is crucial to the straightforward application of the standard Bentley and Saxe [3] rebuilding technique in the implementation of the data structure and the insertion operation described in §2.3–2.4.

Voronoi and Delaunay

Let Vor(P) be the Voronoi diagram of point set P, and let Cell(P,p) be the cell of a point p in Vor(P), that is, the locus of points in the plane whose closest element in P is p. Thus q ∈ Cell(P,p) is equivalent to NN(P,q)=p. Let |Cell(P,p)| be the complexity of the cell, that is, the number of edges on its boundary. Let G(P) refer to the Delaunay graph of P, the dual graph of the Voronoi diagram of P; the degree of p in G(P) is thus |Cell(P,p)|, and each point in P corresponds to a unique vertex in G(P). Delaunay graphs are planar. To simplify the description, we will not distinguish between points in a planar set P and vertices of G(P). For example, we will sometimes say that a point p has degree x, or that a point p′ is a neighbor of p. We will find it useful to have a compact notation for expressing the union of Voronoi cells; thus for a set of points P′ ⊆ P, let Cells(P,P′) denote ⋃_{p∈P′} Cell(P,p).
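Since every one of these notions reduces to distance comparisons, the following brute-force helpers (our own, purely to pin down the definitions, with none of the efficiency of the actual structures) express them directly:

    import math

    def NN(P, q):
        # nearest neighbor of q in the point set P, assumed unique
        return min(P, key=lambda p: math.dist(p, q))

    def in_cell(P, p, q):
        # q is in Cell(P, p) if and only if NN(P, q) = p
        return NN(P, q) == p

    def in_cells(P, P_sub, q):
        # q is in Cells(P, P') = union over p in P' of Cell(P, p)
        return NN(P, q) in set(P_sub)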

Pieces and Fringes

Given a graph G=(V,E) and a set of vertices V′ ⊆ V, the fringe of V′ (with respect to G) is the subset of V′ incident to edges whose other endpoint is in V ∖ V′. Let G=(V,E) be a planar graph. For any ρ, Frederickson [13] showed that the vertices of G can be decomposed¹ into Θ(|V|/ρ) pieces, so that: (i) each vertex is in at least one piece; (ii) each piece has at most ρ vertices in total and only O(√ρ) vertices on its fringe; (iii) if a vertex is a non-fringe vertex of a piece (with respect to G), then it is not in any other piece; (iv) the total size of all pieces is Θ(|V|). Intuitively, the pieces are almost a partition of V, where the vertices on the fringe of a piece may appear in multiple pieces. Such a decomposition of G can be computed in time O(|V|) [14, 18]. We will apply this decomposition to T_i, which is both a point set and the vertex set of G(T_i), for exponentially increasing values of ρ. (¹ We use the word decomposed to mean a division of a set into a collection of sets, the decomposition, whose union is the original set; unlike with a partition, elements may belong to multiple sets.)

Given integers 1 ≤ k < j ≤ f, let

Pieces_j(k) ≝ {Piece_j^1(k), Piece_j^2(k), …, Piece_j^r(k)}

be a decomposition of T_j into r=Θ(|T_j|/d^{4k}) subsets such that each subset Piece_j^l(k) has size O(d^{4k}) and a fringe of size O(d^{2k}) with respect to G(T_j). We let

Seps_j(k) ≝ {Sep_j^1(k), Sep_j^2(k), …, Sep_j^r(k)}

be defined such that Sep_j^l(k) denotes the fringe of Piece_j^l(k), and let Sep¯_j^l(k) be Piece_j^l(k) ∖ Sep_j^l(k). Thus each Piece_j^l(k) is partitioned into its fringe vertices, Sep_j^l(k), and its interior non-fringe vertices Sep¯_j^l(k); note that Sep¯_j^l(k) may be empty if all elements of Piece_j^l(k) are on its fringe.

Finally, we define Sample_j(k) to be the union of all the fringe vertices:

Sample_j(k) ≝ ⋃_{Sep ∈ Seps_j(k)} Sep;

thus Seps_j(k) is a decomposition of Sample_j(k).
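A short sketch of this sampling step, assuming the Delaunay graph G(T_j) is given as an adjacency dictionary and the pieces have already been computed (the pieces themselves come from the separator decomposition of [13, 18]):

    def fringe(graph, piece):
        """Vertices of `piece` with at least one Delaunay neighbor outside it:
        this is Sep for the piece."""
        piece_set = set(piece)
        return {v for v in piece_set if any(u not in piece_set for u in graph[v])}

    def seps_and_sample(graph, pieces):
        """Compute the fringes Seps, the interior parts Sep-bar, and Sample."""
        seps = [fringe(graph, piece) for piece in pieces]
        interiors = [set(piece) - sep for piece, sep in zip(pieces, seps)]
        sample = set().union(*seps) if seps else set()
        return seps, interiors, sample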

For any k ∈ [1..j−1], the decomposition of T_j into Pieces_j(k), the partition of each Piece_j^l(k) into Sep_j^l(k) and Sep¯_j^l(k), and the set Seps_j(k) can all be computed in time O(|T_j|) using [18] if the Delaunay triangulation is available; if not, they can be computed in time O(|T_j| log |T_j|). Thus computing these for all valid k takes time and space O(|T_j| log |T_j| log_d |T_j|), as k < j = O(log_d |T_j|).

Figure 1: Part of a Voronoi diagram for a point set T_j. Two elements of Pieces_j(k) have been highlighted, one in striped blue, call it Piece_j^1(k), and one in striped green, call it Piece_j^2(k). For each piece, the cells of fringe vertices are shaded red. Thus, the set Sample_j(k) is the set of red vertices, and the region Cells(T_j, Sample_j(k)) is shaded red. The green-and-red shaded region is Cells(T_j, Sep_j^2(k)) and the green-but-not-red shaded region is Cells(T_j, Sep¯_j^2(k)).

Figure 2: High complexity cells can occur in Voronoi diagrams. Such cells must be included in the fringe vertices Sample_j(k), illustrated in red for some point set T_j. As a result, the boundaries of the interior regions Cells(T_j, Sep¯_j^l(k)), the connected components of white Voronoi cells, have complexity O(d^{4(j−i)}) by Lemma 3.

One property of this sampling technique is that points of T_j whose Voronoi cells in Vor(T_j) have complexity at least k are included in T_i if j>i and j−i=O(log_d k). By the complexity of a region, we mean the number of edges that bound the region.

Lemma 2.

Given i<j, if p ∉ T_i and p ∈ T_j, then the complexity of Cell(T_j,p) is O(d^{4(j−i)}).

Proof.

Suppose that |Cell(T_j,p)| > c·d^{4(j−i)}, for a constant c chosen later; we will show this implies p ∈ T_i. The degree of p in G(T_j) is then greater than c·d^{4(j−i)}. Consider the piece Piece_j^l(j−i) in Pieces_j(j−i) that contains p. This piece has size O(d^{4(j−i)}), which is at most c·d^{4(j−i)} for a suitable constant c (this is where we choose c). Thus in G(T_j), p must have neighbors which are not in Piece_j^l(j−i). By definition, p is thus in the fringe of Piece_j^l(j−i), which implies p ∈ Sep_j^l(j−i). From the definition of T_i, T_i ⊇ Sample_j(j−i) ⊇ Sep_j^l(j−i), which gives p ∈ T_i.

While we cannot bound the complexity of any single Voronoi cell of a fringe vertex, we can bound the complexity of Cells(T_j, Sep¯_j^l(j−i)), the cells inside a fringe. Intuitively, each piece has the fringe cells on its exterior and the non-fringe cells on its interior; imagining the fringe cells of a piece as an annulus gives two boundaries: the exterior boundary of the fringe, which is the boundary between the cells of this piece and other pieces, and the interior boundary of the fringe, which is the boundary between the fringe cells and the interior cells of this piece. Crucially, while the exterior boundary could have high complexity, the interior boundary does not, which we now formalize:

Lemma 3.

The complexity of Cells(T_j, Sep¯_j^l(j−i)) is O(d^{4(j−i)}).

Proof.

See Figure 2. Each cell in Cells(T_j, Sep¯_j^l(j−i)) is adjacent only to other cells in Cells(T_j, Sep¯_j^l(j−i)) or to cells in Cells(T_j, Sep_j^l(j−i)). The adjacency graph of these Voronoi regions is planar; as Sep¯_j^l(j−i) ∪ Sep_j^l(j−i) = Piece_j^l(j−i), and recalling that |Piece_j^l(j−i)| = O(d^{4(j−i)}), the lemma follows.

The Jump function: definition

At the core of our nearest neighbor algorithm is the function Jump, defined as follows. We will find it helpful to use NN_R(q), for a range R=[l,r], to denote NN(⋃_{i∈[l,r]} T_i, q); for example, NN_[1,k](q) is the nearest neighbor of q in T_1 ∪ T_2 ∪ … ∪ T_k.

Intuitively, a call to Jump(i,j,q,p_i,e_i) is used when trying to find the nearest neighbor of q: assuming we know the nearest neighbor of q in T_1, T_2, …, T_{(i+j)/2}, it determines whether there are any points that could be the nearest neighbor of q in T_{(i+j)/2+1}, …, T_j. This information could be either a simple no, or it could provide the nearest neighbor of q for some prefix of these sets. Additionally, the edge of the Voronoi cell of the currently known nearest neighbor in the direction of the query point is always passed and returned, to aid the search using the combinatorial bounds from Lemma 8, point 4.

  • Input to Jump(i,j,q,p_i,e_i):

    • Integers i and j, where j−i is required to be a power of 2. We use m to refer to (i+j)/2, the midpoint.

    • Query point q.

    • Point p_i, where p_i = NN(T_i,q).

    • The edge e_i on the boundary of Cell(T_i,p_i) that the ray p_iq intersects.

  • Output: one of two results, Failure or a triple (j′,p_{j′},e_{j′}):

    • If Failure, this is a certificate that NN_(m,min(j,f)](q) ≠ NN_[1,min(j,f)](q).

    • If a triple (j′,p_{j′},e_{j′}) is returned, it has the following properties:

      • The integer j′ is in the range (m,j] and NN_(m,j′)(q) ≠ NN_[1,j′)(q).

      • The point p_{j′} is NN(T_{j′},q).

      • The edge e_{j′} is the edge on the boundary of Cell(T_{j′},p_{j′}) that the ray p_{j′}q intersects.

We will show later that Jump runs in O(j−i) time. Implementation details are deferred to Section 2.2.

The nearest neighbor procedure

A nearest neighbor query can be answered through a series of calls to the Jump function:

  • Initialize i=1, j=2, p_1 to be NN(T_1,q), and e_1 to be the edge of Cell(T_1,p_1) crossed by the ray p_1q; all of these can be found in constant time as |T_1|=Θ(1). Initialize p_nearest to p_1.

  • Repeat the following while (i+j)/2 ≤ f:

    • Run Jump(i,j,q,p_i,e_i). If the result is Failure:

      • Set j = j+(j−i).

    • Else a triple (j′,p_{j′},e_{j′}) is returned:

      • If d(p_{j′},q) < d(p_nearest,q), set p_nearest = p_{j′}.

      • Set i = j′ and j = j′+1; p_{j′} and e_{j′} serve as p_i and e_i in the next iteration.

  • Return p_nearest.

We will show in the rest of this section that, given this jump function as a black box, we can correctly answer a nearest neighbor query in O(log n) time.
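In code, the query loop reads as follows (a sketch under stated assumptions: `jump` is the black box specified above, returning None on Failure, and `nn_T1` answers the constant-size base case by brute force; both names are ours):

    import math

    def nearest_neighbor(q, f, nn_T1, jump):
        p, e = nn_T1(q)           # p = NN(T_1, q); e = edge of Cell(T_1, p) hit by ray pq
        p_nearest = p
        i, j = 1, 2
        while i + j <= 2 * f:     # i.e., (i+j)/2 <= f
            result = jump(i, j, q, p, e)
            if result is None:    # Failure: double the length of the jump
                j = j + (j - i)
            else:                 # success: the triple (j', p_{j'}, e_{j'})
                j2, p2, e2 = result
                if math.dist(p2, q) < math.dist(p_nearest, q):
                    p_nearest = p2
                i, j = j2, j2 + 1
                p, e = p2, e2     # p_{j'} and e_{j'} become p_i and e_i
            # invariant: p_nearest = NN_[1, min(f, (i+j)/2)](q)
        return p_nearest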

Figure 3: Two iterations of the Jump procedure. The query point q is shown in red. Points p_i = NN(T_i,q) and p_{i+3} = NN(T_{i+3},q), and edges e_i and e_{i+3}, are shown in blue.
Correctness

The loop in the nearest neighbor procedure will maintain the invariants that p_nearest is NN_[1,min(f,(i+j)/2)](q), that i<j, and that j−i is a power of two. The latter is because j−i is set to either 1 or is doubled with each iteration of the loop. As to the first invariant, before the loop runs, the invariant requires p_nearest = NN(T_1,q), which is what p_1 and p_nearest are initialized to. During the loop we subscript i and j with old and new to indicate the values of the variables at the beginning and at the end of an iteration. Thus p_nearest = NN_[1,min(f,(i_old+j_old)/2)](q). Also, if the loop runs, we know (i_old+j_old)/2 ≤ f, that is, min(f,(i_old+j_old)/2) = (i_old+j_old)/2. We distinguish between two cases depending on whether the jump function returns Failure or a triple.

If the jump function returns failure, we know that

NN_((i_old+j_old)/2, min(j_old,f)](q) ≠ NN_[1, min(j_old,f)](q).

Using this fact we can go from the invariant in terms of the old variables to the new ones:

p_nearest = NN_[1, min(f, (i_old+j_old)/2)](q)    (invariant)
          = NN_[1, (i_old+j_old)/2](q)    ((i_old+j_old)/2 ≤ f)
          = NN_[1, min(j_old,f)](q)    (failure certificate above)
          = NN_[1, min(f, (i_old+j_old+(j_old−i_old))/2)](q)    (math)
          = NN_[1, min(f, (i_new+j_new)/2)](q)    (i_new = i_old; j_new = j_old+(j_old−i_old))

Now we consider the case when a triple (j′,p_{j′},e_{j′}) is returned by the Jump function. We know that NN_(m,j′)(q) ≠ NN_[1,j′)(q), and using the same logic as in the failure case we can conclude that the old p_nearest is NN_[1,j′)(q). The code sets p_nearest to whichever is closer to q of the old p_nearest, which equals NN_[1,j′)(q), and p_{j′}, which is NN(T_{j′},q). Thus the new p_nearest is equal to NN_[1,j′](q) (note the closed interval), which we can rewrite to obtain the invariant as follows, using the fact that the subscript of NN depends only on the integers in the given range:

NN_[1,j′](q) = NN_[1,i_new](q) = NN_[1,(i_new+(i_new+1))/2](q) = NN_[1,(i_new+j_new)/2](q)

When the loop finishes, we have (i+j)/2 > f and thus

p_nearest = NN_[1,min(f,(i+j)/2)](q) = NN_[1,f](q) = argmin_{p ∈ ⋃_{k∈[1,f]} T_k} d(p,q) = NN(S,q).
Running time
Lemma 4.

Given the Jump function, the running time of the nearest neighbor search procedure is O(log n).

Proof.

We use a potential argument. Let i_t and j_t denote the values of i and j after the loop has run t times, let a_t denote the runtime of the t-th iteration of the loop, and let T denote the number of times the loop runs. The total runtime is thus O(1) + ∑_{t=1}^{T} a_t. Define Φ_t ≝ c(2i_t + j_t), for some constant c chosen so that a_t ≤ (c/2)(j_{t−1} − i_{t−1}); as i_0=1 and j_0=2, Φ_0 = 4c. (Recall that the runtime of Jump is O(j−i) for constant d.)

We will argue that, for all t, a_t ≤ Φ_t − Φ_{t−1} + O(1). Summing, this gives ∑_{t=1}^{T} a_t ≤ Φ_T − Φ_0 + O(T). Since i, j and T never exceed O(f), this bounds ∑_{t=1}^{T} a_t by O(cf) + O(f) = O(log n). Thus all that remains is to argue that a_t ≤ Φ_t − Φ_{t−1} in the two cases where the t-th run of Jump ends in failure or returns a triple.

Case 1, Jump returns failure.

In the case of failure, i remains the same, thus i_t = i_{t−1}, but j increases by j−i, thus j_t = 2j_{t−1} − i_{t−1}. The potential therefore increases by c(j_{t−1} − i_{t−1}):

Φ_t − Φ_{t−1} = c(2i_t + j_t) − c(2i_{t−1} + j_{t−1})
             = c(2i_{t−1} + 2j_{t−1} − i_{t−1}) − c(2i_{t−1} + j_{t−1})
             = c(j_{t−1} − i_{t−1})
             ≥ a_t
Case 2, Jump returns a triple.

i_t is set to j′ and j_t to j′+1. The key observation is that, due to the invariant in our correctness argument, j′ ≥ (i_{t−1}+j_{t−1})/2. Thus i_t = j′ ≥ (i_{t−1}+j_{t−1})/2 and j_t = j′+1 ≥ (i_{t−1}+j_{t−1})/2 + 1; the potential change is

Φ_t − Φ_{t−1} = c(2i_t + j_t) − c(2i_{t−1} + j_{t−1})
             ≥ c(2·(i_{t−1}+j_{t−1})/2 + (i_{t−1}+j_{t−1})/2 + 1) − c(2i_{t−1} + j_{t−1})
             = (c/2)(j_{t−1} − i_{t−1}) + c
             ≥ a_t
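Both cases are easy to sanity-check numerically. The following sketch is ours: with the illustrative choice c = 2, the modeled cost a_t = j − i satisfies a_t ≤ (c/2)(j_{t−1} − i_{t−1}), and we assert the per-iteration inequality over arbitrary Jump outcomes:

    import random

    def check_potential(f=64, trials=1000):
        c = 2
        for _ in range(trials):
            i, j = 1, 2
            while i + j <= 2 * f:                     # same loop condition as the query
                a = j - i                             # modeled cost of this iteration
                phi_old = c * (2 * i + j)
                if random.random() < 0.5:             # Jump fails: j - i doubles
                    j = j + (j - i)
                else:                                 # Jump returns a triple, j' > (i+j)/2
                    j2 = random.randint((i + j) // 2 + 1, j)
                    i, j = j2, j2 + 1
                assert a <= c * (2 * i + j) - phi_old, (i, j)
        print("potential inequality held in every iteration")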

2.2 The jump function

2.2.1 Basic geometric facts

We begin with our most crucial geometric lemma, the one that we build upon to make our algorithm work. Informally: given point sets A and B, which possibly have elements in common, and a query point q, if the closest point to q in B is also in A, then the closest point to q in A ∪ B is in A, and not in B ∖ A.

Lemma 5.

Given i and j with i<j, suppose that there is a point q such that q ∈ Cell(T_i,p_i) ∩ Cell(T_j,p_j) for some p_j ∈ Sample_j(j−i). Then q ∈ Cell(T_i ∪ T_j, p_i), or equivalently NN(T_i ∪ T_j, q) = p_i.

Proof.

Since p_j ∈ Sample_j(j−i), we have p_j ∈ T_i by the definition of T_i. Since q ∈ Cell(T_i,p_i), dist(q,p_i) ≤ dist(q,p_j). However, q ∈ Cell(T_j,p_j), so p_j is the closest point to q in T_j: dist(q,p_j) ≤ dist(q,p′_j) for all p′_j ∈ T_j. Combining dist(q,p_i) ≤ dist(q,p_j) with dist(q,p_j) ≤ dist(q,p′_j) for all p′_j ∈ T_j gives the lemma.
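Lemma 5, in its informal A/B form above, is easy to validate by brute force; the following check is ours and purely illustrative:

    import math, random

    def check_lemma5(trials=1000):
        rnd = lambda: (random.random(), random.random())
        for _ in range(trials):
            A = [rnd() for _ in range(20)]
            B = [rnd() for _ in range(20)] + random.sample(A, 5)  # force overlap
            q = rnd()
            nn = lambda P: min(P, key=lambda p: math.dist(p, q))
            if nn(B) in A:
                # closest point of B lies in A => closest point of A+B is NN(A, q)
                assert nn(A + B) == nn(A)
        print("ok")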

Lemma 6.

If all elements of a point set P are in Cell(S,p), for some point set S and p ∈ S, then all elements of the convex hull of P are in Cell(S,p) as well.

Proof.

Immediate, as Cell(S,p) is convex.

Lemmata 5 and 6 give us a tool to determine which parts of the Voronoi cell of some p_i in Vor(T_i) must also be part of the Voronoi cell of p_i in Vor(T_i ∪ T_j). We define this region as Hull_i(j,p_i), and then prove its properties.

Let Hull_i(j,p_i), for some p_i ∈ T_i, denote the convex hull of

{p_i} ∪ (Cell(T_i,p_i) ∩ Cells(T_j, Sample_j(j−i))).

See Figure 4 for an example.

Lemma 7.

Hull_i(j,p_i) ⊆ Cell(T_i ∪ T_j, p_i), and thus if q ∈ Hull_i(j,p_i), then NN(S,q) ∉ T_j ∖ T_i.

Proof.

The point p_i is in its own cell Cell(T_i ∪ T_j, p_i), and by Lemma 5, all points of Cell(T_i,p_i) ∩ Cells(T_j, Sample_j(j−i)) are also in Cell(T_i ∪ T_j, p_i). Thus the convex hull of these points is a subset of Cell(T_i ∪ T_j, p_i) by Lemma 6. Consequently, if a query point q is in Hull_i(j,p_i), then the set T_j can be ignored, because dist(q,p_i) ≤ dist(q,p) for any p ∈ T_j ∖ T_i.

Geometrically, determining whether a point in Cell(T_i,p_i) is in Hull_i(j,p_i), and thus whether a full search in Vor(T_j) can be skipped, is what our jump function does. We now turn to the combinatorial issues surrounding Hull_i(j,p_i) and its interaction with Cell(T_i,p_i), as we need the complexity of the regions examined to be bounded in a way that allows efficiently testing whether a point is in Hull_i(j,p_i). We begin by defining the part of Cell(T_i,p_i) that is not in Hull_i(j,p_i) as Hull¯_i(j,p_i), and proving a number of properties of this possibly disconnected region.

Figure 4: Illustration of the computation of Hull_i(j,p_i) and Hull¯_i(j,p_i). Observe that Hull_i(j,p_i) is the convex hull of those parts of Cells(T_j, Sample_j(j−i)) (shaded pink) that are inside Cell(T_i,p_i) (shaded tan). Hull¯_i(j,p_i) is simply the remainder of Cell(T_i,p_i), and has two connected components, including a small one at the bottom. By Lemma 7, the closest point in T_i ∪ T_j to all points in Hull_i(j,p_i) is p_i.
Lemma 8.

Consider the region

Hull¯_i(j,p_i) ≝ Cell(T_i,p_i) ∖ Hull_i(j,p_i).

  1. Each connected component of Hull¯_i(j,p_i) is a subset of the union of the Voronoi cells of one element of Pieces_j(j−i); that is, each connected component of Hull¯_i(j,p_i) is a subset of Cells(T_j, Piece_j^l(j−i)) for some Piece_j^l(j−i) ∈ Pieces_j(j−i).

  2. Hull¯_i(j,p_i) intersects each bounding edge of Cell(T_i,p_i) in at most two connected components, each of which includes a vertex of Cell(T_i,p_i).

  3. Any line segment p_iq, where q is on the boundary of Cell(T_i,p_i), intersects Hull¯_i(j,p_i) in at most one connected component which, if it exists, includes q.

  4. Let qr be a boundary edge of Cell(T_i,p_i). The solid triangle p_iqr intersects at most d^{O(j−i)} edges of the boundary separating Hull_i(j,p_i) from Hull¯_i(j,p_i).

Proof.
  1. Suppose one connected component of Hull¯_i(j,p_i) contains points p′ and p″ lying in Cells(T_j, Piece_j^l(j−i)) and Cells(T_j, Piece_j^{l′}(j−i)), respectively, where Piece_j^l(j−i) and Piece_j^{l′}(j−i) are different elements of Pieces_j(j−i). Consider a polyline that connects p′ and p″ while remaining in the same connected component of Hull¯_i(j,p_i). Such a polyline must cross, at some point, a cell Cell(T_j,p_j) of some p_j ∈ Piece_j^l(j−i) such that Cell(T_j,p_j) is adjacent to at least one cell of Vor(T_j) whose point is not in Piece_j^l(j−i). Thus p_j is by definition in the fringe Sep_j^l(j−i), and hence in Sample_j(j−i); so this crossing point lies in Hull_i(j,p_i) by the definition of Hull_i(j,p_i). But this contradicts the fact that the polyline stays in Hull¯_i(j,p_i).

  2. If this does not hold, then some bounding edge of Cell(T_i,p_i) contains points q_1, q_2, q_3, in order, such that q_1, q_3 ∈ Hull_i(j,p_i) and q_2 ∈ Hull¯_i(j,p_i). But this cannot happen: if q_1 and q_3 are in Hull_i(j,p_i), then q_2 must be as well, because the hull is a convex set. Thus every connected component of the intersection contains an endpoint of the edge, and an edge has only two endpoints.

  3. By a similar argument as the last point: if this did not hold, there would be points q_1, q_2, in order on the segment p_iq, such that q_1 ∈ Hull¯_i(j,p_i) and q_2 ∈ Hull_i(j,p_i). But this cannot happen: we know p_i ∈ Hull_i(j,p_i) by construction, and if p_i and q_2 are in Hull_i(j,p_i), then q_1 must be as well, because the hull is a convex set.

  4. The complexity of Hull_i(j,p_i) ∩ △p_iqr is at most the complexity of Cells(T_j, Sep¯_j^l(j−i)) and Cells(T_j, Sep¯_j^{l′}(j−i)), where q ∈ Cells(T_j, Sep¯_j^l(j−i)) and r ∈ Cells(T_j, Sep¯_j^{l′}(j−i)). By Lemma 3, the complexity of these regions (within the triangle p_iqr) is at most d^{O(j−i)}. Taking the convex hull only decreases the complexity of the objects on which the hull is defined.

We can use these geometric facts to establish the following corollary, which shows how we can use inclusion in Hull_i(j,p_i) either to determine where to find the nearest neighbor of q in T_j, or to determine that NN(S,q), q's nearest neighbor in S, is not in T_j, without needing to find the nearest neighbor of q in T_j. This is our main novel idea since, as discussed in the introduction, finding the nearest neighbors in every T_j in logarithmic time is not possible.

Corollary 9.

Given:

  • i and j, i<j,

  • a query point q,

  • a point p_i where p_i = NN(T_i,q),

  • the edge v_1v_2 on the boundary of Cell(T_i,p_i) that the ray p_iq intersects,

then, by testing q against the part of Hull_i(j,p_i) that is inside the triangle p_iv_1v_2, which has complexity d^{O(j−i)}, one of the following is established:

  • If q is inside Hull_i(j,p_i), then q is in Cell(T_i ∪ T_j, p_i) and NN(S,q) is not in T_j ∖ T_i.

  • If q is outside Hull_i(j,p_i) (and thus inside Hull¯_i(j,p_i)), then NN(T_j,q) is in the interior part Sep¯_j^l(j−i) of the element Piece_j^l(j−i) of Pieces_j(j−i) whose cells contain v_1 or v_2.

2.2.2 Implementing the jump function

We now use one additional idea to speed up the jump function. While testing whether q is inside or outside of the part of Hull_i(j,p_i) that intersects the triangle p_iv_1v_2 can be done in time O((j−i) log d) (since the complexity of this part of the hull is d^{O(j−i)} by Lemma 8, point 4), we can in fact do something stronger: we can test whether q is inside or outside of the part of Hull_i(j′,p_i) that intersects the triangle p_iv_1v_2, for all j′ with (i+j)/2 < j′ ≤ j, in the same time O((j−i) log d). This is because the subdivision of the plane induced by these (j−i)/2 hulls of size d^{O(j−i)} has complexity d^{O(j−i)}·O((j−i)^2) = d^{O(j−i)}, and thus we can determine in time O((j−i) log d) the smallest j′ in the range (i+j)/2 < j′ ≤ j for which q is not inside Hull_i(j′,p_i), or determine that q is inside Hull_i(j′,p_i) for all j′ in that range.

Thus we obtain the final jump procedure:
Method to compute Jump(i,j,q,p_i,e_i):

  1. Find the smallest j′, (i+j)/2 < j′ ≤ j, such that q is outside of the part of Hull_i(j′,p_i) that intersects the triangle p_iv_1v_2, where v_1 and v_2 are the endpoints of e_i. This is done in time O((j−i) log d) with the convex hull search structure described in Section 2.3. If q is inside all such hulls, then NN(S,q) ∉ T_{j′} ∖ T_i for all j′ such that (i+j)/2 < j′ ≤ j, and Failure is returned. Otherwise:

  2. Let Piece_{v_1} and Piece_{v_2} denote the elements of Pieces_{j′}(j′−i) whose cells contain v_1 and v_2. These are precomputed and accessible in constant time with the piece lookup structure described in Section 2.3.

  3. Search in Vor(Piece_{v_1}) and Vor(Piece_{v_2}) to find NN(Piece_{v_1} ∪ Piece_{v_2}, q); call it p_{j′}. Note that, by Corollary 9, p_{j′} is NN(T_{j′},q). As both Voronoi diagrams have complexity d^{O(j′−i)}, this can be done in time O((j′−i) log d) with the piece interior search structure described in Section 2.3.

  4. Find the edge e_{j′} bounding Cell(T_{j′},p_{j′}) that the ray p_{j′}q intersects. As Cell(T_{j′},p_{j′}) has complexity d^{O(j′−i)}, this can be done in time O((j′−i) log d) with binary search.
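Put together, the jump function is a thin layer over the structures of Section 2.3. The sketch below is ours, with `hull_search`, `piece_lookup`, `piece_nn` and `ray_edge` as assumed interfaces to the convex hull search structure, the piece lookup structure, the piece interior search structure, and the binary search of step 4:

    import math

    def jump(i, j, q, pi, ei, hull_search, piece_lookup, piece_nn, ray_edge):
        # Step 1: smallest j2 in ((i+j)/2, j] with q outside Hull_i(j2, pi), else None.
        j2 = hull_search(i, j, pi, ei, q)
        if j2 is None:
            return None                      # Failure certificate
        v1, v2 = ei                          # endpoints of the edge e_i
        # Step 2: pieces of Pieces_{j2}(j2 - i) whose cells contain v1 and v2.
        piece1 = piece_lookup(i, j2, v1)
        piece2 = piece_lookup(i, j2, v2)
        # Step 3: by Corollary 9, NN(T_{j2}, q) lies in one of the two pieces.
        p2 = min(piece_nn(piece1, q), piece_nn(piece2, q),
                 key=lambda p: math.dist(p, q))
        # Step 4: edge of Cell(T_{j2}, p2) crossed by the ray from p2 through q.
        e2 = ray_edge(j2, p2, q)
        return (j2, p2, e2)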

2.3 Data structures needed

The data structure is split into levels, where level i consists of:

  I. S_i,

  II. T_i,

  III. the Voronoi diagram of T_i, Vor(T_i), and a point location search structure for the cells of Vor(T_i),

  IV. the Delaunay triangulation of T_i, G(T_i),

  V. additionally, we keep, for each k, 1 ≤ k < i:

    i. the decomposition of T_i into Pieces_i(k) ≝ {Piece_i^1(k), Piece_i^2(k), …, Piece_i^{|Pieces_i(k)|}(k)},

    ii. the partition of each Piece_i^l(k) into Sep_i^l(k) and Sep¯_i^l(k),

    iii. the set Sample_i(k) ≝ ⋃_{Sep ∈ Seps_i(k)} Sep.

For any level i, this information can be computed from T_i in time O(|T_i| log |T_i|), using [18, Theorem 3] to compute the decomposition into pieces, and standard results on Delaunay/Voronoi construction.

Additionally, less elementary data structures are needed for each level, which we describe separately: the convex hull search structure, the piece lookup structure, and the piece interior search structure.

Convex hull search structure

For level i, a convex hull search structure is built for every combination of:

  • a point p_i in T_i,

  • an edge e_i of Cell(T_i,p_i),

  • an index j where i<j, (i+j)/2 ≤ f and j−i is a power of 2.

A convex hull search structure answers queries of the form: given a point q in Cell(T_i,p_i), return the smallest j′, (i+j)/2 < j′ ≤ j, such that q is outside of the part of Hull_i(j′,p_i) that intersects the triangle p_iv_1v_2, where v_1 and v_2 are the endpoints of e_i. There are O(|T_i| log log_d |T_i|) such structures, as there are O(log log_d n) choices of j (j−i is a power of two and j is at most f=Θ(log_d n)), and the number of (p_i,e_i) pairs is linear in |T_i| because the complexity of a Voronoi diagram Vor(T_i) is linear in the number of points it is defined on.

The method used is to simply store a point location structure for the subdivision within the triangle p_iv_1v_2 formed by the overlay of the boundaries of all the hulls Hull_i(j′,p_i), for (i+j)/2 < j′ ≤ j. As previously mentioned, the complexity of this overlay is d^{O(j−i)}·O((j−i)^2), and thus point location can be done in time logarithmic in this, which is O((j−i) log d).

As noted earlier, there are O(|T_i| log log_d n) structures, each of which takes space at most O(j−i)=O(log_d n) plus the number of intersections in the point location structure within the triangle. The O(j−i) term accounts for the at most two edges of each hull that can pass through the triangle without an intersection. For a given j′, the sum of the complexities of Hull_i(j′,p_i) over all p_i ∈ T_i is O(|T_i|). As each hull edge can intersect at most O(j−i)=O(log_d n) other hull edges, this bounds the total space needed over one j′ by O(n log_d n). The overall space usage is O(n log_d^2 n).

Piece lookup structure

Level i of the piece lookup structure contains, for each j with i<j≤min(2i,f) and for each vertex v of Vor(T_i), the index of the piece Piece_j^l(j−i) ∈ Pieces_j(j−i) such that v ∈ Cells(T_j, Piece_j^l(j−i)). This can be precomputed using the point location search structure for Vor(T_j) in time O(log d^j)=O(j log d)=O(i)=O(log n) for fixed d, for each of the O(|T_i|) vertices of Vor(T_i). Summing over the choices of j gives a total runtime of O(|T_i| log^2 n) to precompute all answers. The space usage is O(|T_i| log n).

Piece interior search structure

For each 1 ≤ i < j ≤ f we store a point location structure that supports point location in time O(j−i) in the Voronoi diagram of each set of points in Pieces_j(j−i). Any standard linear-size point location structure, such as that of Kirkpatrick [17], suffices, since each element of Pieces_j(j−i) has O(d^{4(j−i)}) elements. For any fixed j, there are j−1 choices of i, and the sets in Pieces_j(j−i) decompose T_j with total size O(|T_j|). Thus the total size of all these structures is O(|T_j| log n). The construction cost, given Pieces_j(j−i), incurs another logarithmic factor due to the need to construct the Voronoi diagram and the point location structure (we do not assume each piece is connected). Thus the piece interior search structure for level j is constructed in O(|T_j| log^2 n) time.

2.4 Insertion time

Our description of the data structures needed can be summarized as follows:

Lemma 10.

Level i of the data structure can be built in time and space O(|T_i| log^2 n), given all levels j>i.

Insertion is thus handled by the classic logarithmic method of Bentley and Saxe [3], which transforms a static construction into a dynamic structure, and which we briefly summarize. To insert, we put the new point into S_1 and rebuild level 1. Every time a set S_i exceeds its upper size limit of Θ(d^i), half of the items are moved from S_i to S_{i+1}, and all levels from i+1 down to 1 are rebuilt. So long as the upper and lower constants in the big-Theta are at least a constant factor apart, the amortized insertion cost is O(log n) times the per-item cost of rebuilding a level, thus obtaining:
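A sketch of this overflow rule (our own illustrative code: `ds.S` holds the sets S_1,…,S_f as lists, 0-indexed here, `ds.rebuild_level(i)` is the static construction of Lemma 10, and the overflow constant 2 is arbitrary):

    def insert(ds, p, d=2):
        S = ds.S
        S[0].append(p)                               # the new point goes into S_1
        top = 0
        while len(S[top]) > 2 * d ** (top + 1):      # S_{top+1} over its Theta(d^{top+1}) limit
            if top + 1 == len(S):
                S.append([])
            half = len(S[top]) // 2
            S[top + 1].extend(S[top][:half])         # move half of the items one level up
            del S[top][:half]
            top += 1
        for level in range(top, -1, -1):             # rebuild highest changed level down to S_1
            ds.rebuild_level(level)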

Lemma 11.

Insertion can be performed with an amortized running time of O(log^3 n).

The performance of our data structure can be summarized as follows:

Theorem 12.

There exists a semi-dynamic insertion-only data structure that answers two-dimensional nearest neighbor queries in O(log n) time and supports insertions in O(log^3 n) amortized time. The data structure uses O(n log^2 n) space.

References

  • [1] Pankaj K. Agarwal and Jiří Matoušek. Dynamic half-space range reporting and its applications. Algorithmica, 13(4):325–345, 1995. doi:10.1007/BF01293483.
  • [2] Sarah R. Allen, Luis Barba, John Iacono, and Stefan Langerman. Incremental Voronoi diagrams. Discrete and Computational Geometry, 58(4):822–848, 2017. doi:10.1007/s00454-017-9943-2.
  • [3] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: static-to-dynamic transformation. Journal of Algorithms, 1(4):301–358, 1980. doi:10.1016/0196-6774(80)90015-2.
  • [4] Timothy M. Chan. A dynamic data structure for 3-d convex hulls and 2-d nearest neighbor queries. Journal of the ACM, 57(3):16:1–16:15, 2010. doi:10.1145/1706591.1706596.
  • [5] Timothy M. Chan. Dynamic geometric data structures via shallow cuttings. Discrete and Computational Geometry, 64(4):1235–1252, 2020. doi:10.1007/s00454-020-00229-5.
  • [6] Timothy M. Chan and Konstantinos Tsakalidis. Optimal deterministic algorithms for 2-d and 3-d shallow cuttings. Discrete and Computational Geometry, 56(4):866–881, 2016. doi:10.1007/s00454-016-9784-4.
  • [7] Bernard Chazelle and Leonidas J. Guibas. Fractional cascading: I. A data structuring technique. Algorithmica, 1(2):133–162, 1986. doi:10.1007/BF01840440.
  • [8] Bernard Chazelle and Ding Liu. Lower bounds for intersection searching and fractional cascading in higher dimension. Journal of Computer and System Sciences, 68(2):269–284, 2004. doi:10.1016/j.jcss.2003.07.003.
  • [9] Kenneth L. Clarkson, Kurt Mehlhorn, and Raimund Seidel. Four results on randomized incremental constructions. Computational Geometry, 3:185–212, 1993. doi:10.1016/0925-7721(93)90009-U.
  • [10] Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars. Computational geometry: algorithms and applications, 3rd Edition. Springer, 2008. doi:10.1007/978-3-540-77974-2.
  • [11] Olivier Devillers, Stefan Meiser, and Monique Teillaud. Fully dynamic Delaunay triangulation in logarithmic expected time per operation. Computational Geometry, 2:55–80, 1992. doi:10.1016/0925-7721(92)90025-N.
  • [12] David P. Dobkin and Subhash Suri. Maintenance of geometric extrema. Journal of the ACM, 38(2):275–298, 1991. doi:10.1145/103516.103518.
  • [13] Greg N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM Journal on Computing, 16(6):1004–1022, 1987. doi:10.1137/0216064.
  • [14] Michael T. Goodrich. Planar separators and parallel polygon triangulation. Journal of Computer and System Sciences, 51(3):374–389, 1995. doi:10.1006/jcss.1995.1076.
  • [15] John Iacono and Yakov Nekrich. Incremental planar nearest neighbor queries with optimal query time, 2025. arXiv:2504.07366.
  • [16] Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Dynamic planar Voronoi diagrams for general distance functions and their algorithmic applications. Discrete and Computational Geometry, 64(3):838–904, 2020. doi:10.1007/s00454-020-00243-7.
  • [17] David G. Kirkpatrick. Optimal search in planar subdivisions. SIAM Journal on Computing, 12(1):28–35, 1983. doi:10.1137/0212002.
  • [18] Philip N. Klein, Shay Mozes, and Christian Sommer. Structured recursive separator decompositions for planar graphs in linear time. In Symposium on Theory of Computing (STOC), pages 505–514, 2013. doi:10.1145/2488608.2488672.
  • [19] Joseph S.B. Mitchell and Wolfgang Mulzer. Proximity algorithms. In Jacob E. Goodman, Joseph O’Rourke, and Csaba D. Tóth, editors, Handbook of Discrete and Computational Geometry, chapter 32, pages 849–874. CRC Press, 2017.
  • [20] Ketan Mulmuley. Computational geometry – An introduction through randomized algorithms. Prentice Hall, 1994.