Incremental Planar Nearest Neighbor Queries with Optimal Query Time
Abstract
In this paper we show that two-dimensional nearest neighbor queries can be answered in optimal O(log n) time while supporting insertions in polylogarithmic time. No previous data structure was known that supports O(log n)-time queries and polylog-time insertions. In order to achieve logarithmic query time our data structure uses a new technique related to fractional cascading that leverages the inherent geometry of this problem. Our method can also be used in other semi-dynamic scenarios.
Keywords and phrases: Data Structures, Dynamic Data Structures, Nearest Neighbor Queries
Funding: John Iacono: Research supported by the Fonds de la Recherche Scientifique – FNRS.
2012 ACM Subject Classification: Theory of computation → Computational geometry; Theory of computation → Data structures design and analysis
Editors: Oswin Aichholzer and Haitao Wang
1 Introduction
In the nearest neighbor problem a set S of points is stored in a data structure so that for a query point q the point of S closest to q can be found efficiently. The nearest neighbor problem and its variants are among the most fundamental and extensively studied problems in computational geometry; we refer to [19] for a survey. In this paper we study dynamic data structures for the Euclidean nearest neighbor problem in two dimensions. We show that the optimal O(log n) query time for this problem can be achieved while allowing insertions in polylogarithmic time.
Previous Work
See Table 1. In the static scenario the planar nearest neighbor problem can be solved with O(log n) query time by point location in Voronoi diagrams. However, the dynamic variant of this problem is significantly harder because Voronoi diagrams cannot be dynamized efficiently: it was shown by Allen et al. [2] that a sequence of insertions can lead to Ω(√n) amortized combinatorial changes per insertion in the Voronoi diagram. A static nearest-neighbor data structure can easily be transformed into an insertion-only data structure using the logarithmic method of Bentley and Saxe [3] at the cost of increasing the query time to O(log^2 n). Several researchers [11, 9, 20] studied the dynamic nearest neighbor problem in the situation where the sequence of updates is random in some sense (e.g. the deletion of any element in the data structure is equally likely). However, their results cannot be extended to the case when the complexity of a specific sequence of updates must be analyzed.
Using a lifting transformation [10], 2-d nearest neighbor queries can be reduced to extreme point queries on 3-d convex hulls. Hence data structures for the dynamic convex hull in 3-d can be used to answer 2-d nearest neighbor queries. The first such data structure (without assumptions about the update sequence) was presented by Agarwal and Matoušek [1]. Their data structure supports queries in O(log n) time and updates in O(n^ε) time; another variant of their data structure supports queries in O(n^ε) time and updates in polylogarithmic time. A major improvement was achieved in a seminal paper by Chan [4], where he presented a structure that supports queries in O(log^2 n) time, insertions in O(log^3 n) expected amortized time and deletions in O(log^6 n) expected amortized time. The update procedure can be made deterministic using the result of Chan and Tsakalidis [6]. The deletion time was further reduced to O(log^5 n) [16] and then to O(log^4 n) [5]. This sequence of papers makes use of shallow cuttings, a powerful general technique, but, alas, all uses of it for the point location problem in 2-d have resulted in Ω(log^2 n) query times.
Even in the insertion-only scenario, the direct application of the 45-year-old classic technique of Bentley and Saxe [3] remained, before this work, the best method with polylogarithmic update time; no data structure with o(log^2 n) query time and polylogarithmic update time was described previously.
Our Results
We demonstrate that optimal query time and poly-logarithmic update time can be achieved in some dynamic settings. The following scenarios are considered in this paper:
1. We describe a semi-dynamic insertion-only data structure that answers queries in optimal O(log n) time and supports insertions in amortized polylogarithmic time.
2. In the semi-online scenario, introduced by Dobkin and Suri [12], we know the deletion time of a point when the point is inserted, i.e., we know at insertion time how long a point will remain in the data structure. We describe a semi-online fully-dynamic data structure that answers queries in O(log n) time and supports updates in amortized polylogarithmic time. The same result also holds in the offline scenario, when the entire sequence of updates is known in advance.
3. In the offline partially persistent scenario, the sequence of updates is known in advance and every update creates a new version of the data structure. Queries can be asked of any version of the data structure. We describe an offline partially persistent data structure that answers queries in O(log n) time; its space and construction cost are analyzed in the full version of this paper [15].
All three problems considered in this paper can be reduced to answering point location queries in (static) Voronoi diagrams of different point sets. For example, we can obtain an insertion-only data structure by using the logarithmic method of Bentley and Saxe [3], which we now briefly describe. The input set S is partitioned into a logarithmic number of subsets S_1, S_2, …, of exponentially increasing sizes. In order to find the nearest neighbor of some query point q we locate q in the Voronoi diagram of each subset and report the point closest to q among these nearest neighbors. Since each point location query takes O(log n) time, answering a logarithmic number of queries takes O(log^2 n) time.
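As a concrete illustration, the Bentley–Saxe query pattern can be sketched as follows; the brute-force `nearest` function stands in for O(log n) point location in a static Voronoi diagram, and the subsets shown are hypothetical.

```python
import math

def nearest(subset, q):
    # Stand-in for O(log n) point location in the Voronoi diagram of
    # `subset`: return the point of the subset closest to q.
    return min(subset, key=lambda p: math.dist(p, q))

def bentley_saxe_query(subsets, q):
    # Query each of the O(log n) static structures and keep the best
    # answer; with O(log n) point location per subset the total cost
    # is O(log^2 n).
    candidates = [nearest(s, q) for s in subsets if s]
    return min(candidates, key=lambda p: math.dist(p, q))

# Subsets of exponentially increasing size, as in the logarithmic method.
subsets = [[(0.0, 0.0)],
           [(5.0, 0.0), (0.0, 5.0)],
           [(2.0, 2.0), (9.0, 9.0), (1.0, 8.0), (7.0, 1.0)]]
```

For example, `bentley_saxe_query(subsets, (1.9, 2.2))` returns `(2.0, 2.0)`, the closest point over the union of the subsets.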
The fractional cascading technique [7], applied to this problem in one dimension, decreases the query cost to logarithmic by sampling elements of each subset and storing copies of the sampled elements in the other subsets. Unfortunately, it was shown by Chazelle and Liu [8] that fractional cascading does not work well for two-dimensional non-orthogonal problems, such as point location: in order to answer point location queries in O(log n) time, we would need superlinear space, even in the static scenario.
To summarize, there are two obvious approaches to the insertion-only problem: the first is to maintain a single search structure and update it with each insertion; the second is to maintain a collection of static Voronoi diagrams of exponentially-increasing size and to execute nearest neighbor queries by finding the closest point in all structures, perhaps aided by some kind of fractional cascading. The first approach cannot obtain polylogarithmic insertion time due to the lower bound on the combinatorial change in Voronoi diagrams caused by insertions [2], and the second approach cannot obtain O(log n) search time due to Chazelle and Liu's lower bound [8]. Our main intellectual contribution is showing that the lower bound of Chazelle and Liu [8] can be circumvented for the case of point location in Voronoi diagrams. Specifically, a strict fractional cascading approach requires finding the closest point to a query point in each of the subsets; we loosen this requirement: in each subset, we either find the closest point or provide a certificate that the closest point in the subset is not the closest point in S. This new, powerful and more flexible form of fractional cascading is made possible by a number of novel observations about the geometry of the problem. We imagine our general technique may be applicable to speeding up search in other dynamic search problems. Our method employs planar separators to sample point sets and uses properties of Voronoi diagrams to speed up queries. We explain our method and show how it can be applied to the insertion-only nearest neighbor problem in Section 2.
A further modification of our method, described in the full version of this paper [15], improves both the insertion time and the space usage. The description of the partially persistent and semi-online variants is also in the full version.
2 Basic Insertion-Only Structure
We present our basic insertion-only structure in several parts. In the first part, the overview (§2.1), we present the structure with one needed function, jump, treated as a black box. With the jump function abstracted, our structure is a combination of known techniques, notably the logarithmic method of Bentley and Saxe [3] and sampling. In §2.2 we present the implementation of the jump function and the needed geometric preliminaries. In contrast to the combination of standard techniques presented in the overview, which requires little use of geometry, our implementation of the jump function is novel and requires thorough geometric arguments. We then fully describe the underlying data structures needed in §2.3. Note that all arguments presented here are made with the goal of obtaining O(log n) queries and polylogarithmic insertion. We opt here for clarity of presentation over reducing the number of logarithmic factors in the insertion time, as the arguments are already complex.
2.1 Overview
We let S denote the set of points currently stored in the structure, and use n to denote |S|. In this section we set a parameter ρ to a constant value. Even though ρ is fixed for this section, we will express everything as a function of ρ.
Let S_1, …, S_m denote a partition of S into sets of exponentially increasing size. Note that the partition of S into the S_i is not unique. Let NN_X(q) be the nearest neighbor of a point q in a point set X, which we assume to be unique. Given a point q, the computation of NN_S(q) is the query that our data structure will support.
We now define a sequence of point sets T_1, …, T_m. The intuition is that, as in classical fractional cascading [7], the set T_i contains all elements of S_i and a sample of elements from the sets S_j where j > i; this implies the last sets are equal: T_m = S_m. This sampling will be provided by a sampling function which returns a subset of its argument; while it will have other important properties, for now only its size matters.
We can now formally define T_i. From this definition we have several observations, which we group into a lemma; the proof can be found in the full version of this paper:
Lemma 1.
Facts about the T_i:
1.
2. T_i is a function of the S_j, for j ≥ i.
3.
4.
5.
6. For any
A note on notation
We assume a fixed partition of S into the sets S_i. Any further notation that includes a subscript, such as T_i, is a function of the S_j, for j ≥ i. This compact notation makes explicit the dependence of T_i only on the S_j, for j ≥ i. This dependence in one direction only (i.e., on non-smaller indexes only) is crucial to the straightforward application of the standard Bentley and Saxe [3] rebuilding technique in the implementation of the data structure and the insertion operation described in §2.3–2.4.
Voronoi and Delaunay
Let Vor(X) be the Voronoi diagram of a point set X, and let cell(p, X) be the cell of a point p in Vor(X), that is, the locus of points in the plane whose closest element in X is p. Thus q ∈ cell(p, X) is equivalent to p = NN_X(q). Let the complexity of a cell be the number of edges on its boundary. Let DG(X) refer to the Delaunay graph of X, the dual graph of the Voronoi diagram of X; the degree of p in DG(X) is thus the complexity of cell(p, X), and each point in X corresponds to a unique vertex in DG(X). Delaunay graphs are planar. To simplify the description, we will not distinguish between points in a planar point set X and vertices of DG(X). For example, we will sometimes say that a point has degree d or say that a point is a neighbor of another. We will find it useful to have a compact notation for expressing the union of Voronoi cells; thus for a set of points Y ⊆ X, let cell(Y, X) denote the union of cell(p, X) over all p in Y.
Pieces and Fringes
Given a graph G and a set of vertices U, the fringe of U (with respect to G) is the subset of U incident to edges whose other endpoint is not in U. Let G be a planar graph. For any r, Frederickson [13] showed the vertices of G can be decomposed¹ into a collection of sets, the pieces, so that: (i) Each vertex is in at least one piece. (ii) Each piece has at most O(r) vertices in total and only O(√r) vertices on its fringe. (iii) If a vertex is a non-fringe vertex of a piece (with respect to G), then it is not in any other piece. (iv) The total size of all pieces is linear in the size of G. Intuitively, the pieces are almost a partition of the vertices, where those vertices on the fringe of each piece may appear in multiple pieces. Such a decomposition can be computed in linear time [14, 18]. We will apply this decomposition to T_i, which is both a point set and the vertex set of its Delaunay graph, for exponentially increasing piece sizes. (¹ We use the word decomposed to mean a division of a set into a collection of sets, the decomposition, whose union is the original set, but, unlike with a partition, elements may belong to multiple sets.)
Given integers , let
be a decomposition of into subsets such that each subset has size and a fringe of size with respect to . We let
be defined such that denotes the fringe of , and let be . Thus each is partitioned into its fringe vertices, , and its interior non-fringe vertices ; note that may be empty if all elements of are on its fringe.
Finally, we define to be the union of all the fringe vertices:
thus is a decomposition of .
For any piece size, the decomposition of T_i into pieces, the partition of each piece into its fringe and interior vertices, and the set of all fringe vertices can all be computed in linear time using [18] if the Delaunay triangulation is available; if not, the triangulation can first be computed in O(|T_i| log |T_i|) time. Thus computing these for all valid piece sizes takes O(|T_i| log |T_i|) time and space.
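A minimal sketch of the fringe computation, assuming the pieces are already given (a real implementation would obtain them from a separator-based decomposition of the Delaunay graph [13, 18]); the toy graph and pieces below are hypothetical.

```python
def fringe(adj, piece):
    # The fringe of `piece` with respect to the graph `adj`: vertices of
    # the piece incident to an edge whose other endpoint is outside it.
    members = set(piece)
    return {v for v in members if any(u not in members for u in adj[v])}

# A toy planar graph: the path 0-1-2-3-4-5, decomposed into two pieces
# that overlap in vertex 2 (fringe vertices may appear in both pieces).
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
pieces = [[0, 1, 2], [2, 3, 4, 5]]

interiors = [set(p) - fringe(adj, p) for p in pieces]
```

Here both pieces have fringe {2}; vertices 0 and 1 are interior to the first piece and so appear in no other piece, matching property (iii) of the decomposition.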
One property of this sampling technique is that points in with Voronoi cells in of complexity at least are included in if and . By complexity of a region, we mean the number of edges that bound this region.
Lemma 2.
Given , if and then the complexity of is .
Proof.
Suppose that , for a constant chosen later; we will show this implies . Thus the degree of in is greater than . Consider the piece in that contains . This piece has size at most , which is at most for some (here we choose ). Thus in , must have neighbors which are not in . By definition, is thus in the fringe , which implies . From the definition of , and , which gives .
While we cannot bound the complexity of any Voronoi cell in a fringe, we can bound the complexity of , the cells inside a fringe. Intuitively, each piece has the fringe cells on its exterior and non-fringe cells on its interior; imagining the fringe cells of a piece as an annulus gives two boundaries, the exterior boundary of the fringes, which is the boundary between the cells of this and other pieces, and the interior boundary of the fringes, which is the boundary between the fringe cells and the interior cells in this piece. Crucially, while the exterior boundary could have high complexity, the interior boundary does not, which we now formalize:
Lemma 3.
The complexity of is .
Proof.
See Figure 2. Each cell in is adjacent to either other cells in or cells in . The adjacency graph of these Voronoi regions is planar, and as , and recalling that gives the lemma.
The Jump function: definition
At the core of our nearest neighbor algorithm is the function Jump, defined as follows. We will find it helpful to use a range subscript to denote a union of consecutive sets; for example, the nearest neighbor of q over a range of indexes is the nearest neighbor of q in the union of the corresponding sets.
Intuitively, a call to Jump is used when trying to find the nearest neighbor of q: assuming we know the nearest neighbor of q in the sets processed so far, it determines whether there are any points that could be the nearest neighbor of q in the sets that follow. This information could be either a simple no, or it could provide the nearest neighbor of q for some prefix of these sets. Additionally, the edge of the Voronoi cell of the currently known nearest neighbor in the direction of the query point is always passed and returned, to aid the search using the combinatorial bounds from Lemma 8, point 4.
Input to Jump:
– Integers i and r, where r is required to be a power of 2. We use m to refer to the midpoint of the range.
– Query point q.
– Point p, the currently known nearest neighbor of q.
– The edge on the boundary of the cell of p that the ray from p toward q intersects.
Output: either one of two results, Failure or a triple.
– If Failure, this is a certificate that no point in the range covered by the call is the nearest neighbor of q.
– If a triple is returned, it has the following properties:
  * The integer in the triple is in the range covered by the call.
  * The point in the triple is the nearest neighbor of q in the corresponding sets.
  * The edge in the triple is on the boundary of the cell of that point that the ray toward q intersects.
We will show later that Jump runs in the stated time. Implementation details are deferred to Section 2.2.
The nearest neighbor procedure
A nearest neighbor query can be answered through a series of calls to the function:
– Initialize i, p, and the edge e of the current cell crossed by the ray toward q; all of these can be found in constant time. Initialize r to 1.
– Repeat the following while some sets remain to be processed:
  – Run Jump. If the result is Failure:
    * Set r to 2r.
  – Else a triple is returned:
    * If the returned point is closer to q than p, set p to the returned point.
    * Set i and e according to the returned triple, and set r to 1.
– Return p.
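The doubling pattern of this control flow can be sketched with Jump treated as a black box; here `jump` is a stand-in that scans a window of per-level candidate points (`level_points[j]` playing the role of the nearest neighbor contributed by level j), and the Voronoi-edge bookkeeping is omitted. All names are illustrative assumptions, not the paper's implementation.

```python
import math

def search(level_points, q):
    # Find the overall nearest neighbor of q given one candidate point
    # per level, mimicking the doubling control flow of the query.
    def jump(i, r, p):
        # Stand-in for the geometric Jump: examine the window of levels
        # below i; returning None certifies no closer point there.
        for j in range(i - 1, max(0, i - r) - 1, -1):
            if math.dist(level_points[j], q) < math.dist(p, q):
                return j, level_points[j]
        return None

    i = len(level_points) - 1      # start at the last level
    p = level_points[i]            # nearest neighbor found so far
    r = 1
    while i - r >= 0:              # some levels remain uncertified
        result = jump(i, r, p)
        if result is None:
            r *= 2                 # Failure: double the window
        else:
            i, p = result          # closer point found: restart doubling
            r = 1
    return p
```

For example, `search([(0, 0), (10, 10), (5, 5), (8, 8)], (0, 1))` returns `(0, 0)`: one successful jump, one failure that doubles r, then a second success.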
We will show in the rest of this section that, given this jump function as a black box, we can correctly answer a nearest neighbor query in O(log n) time.
Correctness
The loop in the nearest neighbor procedure maintains the invariants that p is the nearest neighbor of q among the sets processed so far, and that r is a power of two. The latter holds because r is set to either 1 or is doubled with each iteration of the loop. As to the first invariant, it holds before the loop runs because of how p and i are initialized. During the loop we subscript variables by new and old to indicate the value of variables at the beginning and at the end of an iteration. Also, if the loop runs, we know the range it covers is nonempty. We distinguish between two cases depending on whether the jump function returns Failure or a triple.
If the jump function returns Failure, we know that no point in the range covered by the call is closer to q than the current p. Using this fact we can go from the invariant in terms of the old variables to the invariant in terms of the new ones.
Now we consider the case when a triple is returned by the jump function. Using the same logic as in the failure case, we can characterize the old value of p. The code sets p to the closer to q of the old p and the returned point. Thus the new p is the nearest neighbor of q over the corresponding closed interval of indexes, which we can rewrite to get the invariant, using the fact that a subscripted quantity depends only on the integers in the given range:
When the loop finishes, we have and thus
Running time
Lemma 4.
Given the Jump function, the running time of the nearest neighbor search function is O(log n).
Proof.
We use a potential argument. Let i_t and r_t denote the values of i and r after the loop has run t times, let w_t denote the runtime of the t-th iteration of the loop, and let k denote the number of times the loop runs. The total runtime is thus the sum of the w_t over t = 1, …, k. Define the potential Φ_t, for some constant chosen so that the potential accounts for the cost of an iteration (recall the bound on the runtime of Jump).
We will argue that for all , . Summing, this gives . Since , and are in the range , this bounds by . Thus all that remains is to argue that for the two cases where the th run of Jump ends in failure or returns a triple.
Case 1, Jump returns failure.
In the case of failure, i remains the same but r is doubled, so log r increases by 1. Thus the potential increases by at most a constant.
Case 2, Jump returns a triple.
is set to and changes to . The key observation is that, due to the invariant in our correctness argument, . Thus and ; the potential change is
2.2 The jump function
2.2.1 Basic geometric facts
We begin with our most crucial geometric lemma, the one that we build upon to make our algorithm work. Informally, given point sets X and Y which possibly have elements in common, and a query point q, if the closest point to q in X is also in Y, then the closest point to q in X ∪ Y is the closest point to q in Y.
Lemma 5.
Given point sets X and Y and a query point q, suppose that NN_X(q) ∈ Y. Then NN_{X∪Y}(q) ∈ Y, or equivalently NN_{X∪Y}(q) = NN_Y(q).
Proof.
Let y = NN_Y(q). Since NN_X(q) ∈ Y, we have d(q, y) ≤ d(q, NN_X(q)) by the definition of NN_Y. Since NN_X(q) is the closest point to q in X, d(q, NN_X(q)) ≤ d(q, x) for all x in X. Combining the two inequalities gives d(q, y) ≤ d(q, x) for all x in X; together with d(q, y) ≤ d(q, y′) for all y′ in Y, this gives the lemma.
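Lemma 5 can be sanity-checked numerically; the point sets below are arbitrary, and the brute-force `nn` stands in for an exact nearest-neighbor query (ties are avoided by construction).

```python
import math

def nn(pts, q):
    # Brute-force nearest neighbor, assumed unique for these inputs.
    return min(pts, key=lambda p: math.dist(p, q))

X = [(0.0, 0.0), (4.0, 0.0), (2.0, 3.0)]
Y = [(2.0, 3.0), (9.0, 9.0)]   # X and Y share the point (2, 3)
q = (2.0, 2.5)

# The nearest neighbor of q in X is (2, 3), which also lies in Y ...
assert nn(X, q) == (2.0, 3.0) and nn(X, q) in Y
# ... so, as Lemma 5 asserts, the nearest neighbor of q in the union
# is exactly the nearest neighbor of q in Y.
assert nn(X + Y, q) == nn(Y, q) == (2.0, 3.0)
```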
Lemma 6.
If all elements of a point set Y are in the Voronoi cell of a point p with respect to a point set X, then all elements of the convex hull of Y are in that cell as well.
Proof.
Immediate, as a Voronoi cell is convex.
Lemmata 5 and 6 give us a tool to determine which parts of the Voronoi cell of some point in a subset must also be part of the Voronoi cell of that point in a larger set. We define this region below, and then prove its properties.
Lemma 7.
and thus if ,
Proof.
The point is in its own cell, , and by Lemma 5, all elements of are also in . Thus the convex hull of these points is a subset of by Lemma 6. Thus, if a query point is in , then, by Lemma 7, the set can be ignored because for any .
Geometrically, our jump function determines whether the query point lies in such a region, so that a full search in the corresponding set can be skipped. We now turn to examining the combinatorial issues surrounding this region and its interaction with the decomposition, as we need the complexity of the regions examined to be bounded in a way that allows efficient searching. We begin by defining the part of the cell that is not in the region, and proving a number of properties of this possibly disconnected region.
Lemma 8.
Consider the region
1. Each connected component of is a subset of the union of Voronoi cells in one element of ; that is, each connected component of is a subset of for some .
2. intersects each bounding edge of in at most two connected components, each of which includes a vertex of .
3. Any line segment , where is on the boundary of , intersects in at most one connected component that, if it exists, includes .
4. Let be a boundary edge of . The solid triangle intersects at most edges on the boundary separating from .
Proof.
1. Suppose one connected component contains points in both and where and are different elements of . Consider a polyline that connects and while remaining in the same connected component of . Such a polyline must cross, at some point, some cell of some where but where the cell is adjacent to at least one other cell in that is not in . Thus is by definition in ; thus is in by its definition in Lemma 7. But this contradicts being in .
2. If this does not hold, there are points and on , with closer to than , such that and . But this cannot happen: we know by construction, and if and are in , must be as well because the hull is a convex set.
3. By a similar argument as the last point, if this did not hold, there would be points in order on such that and . But this cannot happen: if and are in , must be as well because the hull is a convex set.
4. The complexity of is at most the complexity of and , where and . By Lemma 3, the complexity of these regions (within the triangle ) is at most . Taking the convex hull only decreases the complexity of the objects on which a hull is defined.
We can use these geometric facts to present the following corollary, which shows how we can use inclusion in the region to determine where to find the nearest neighbor of q, or to determine that the nearest neighbor in a given subset is not the overall nearest neighbor, without needing to find the nearest neighbor of q in that subset. This is our main novel idea since, as discussed in the introduction, finding the nearest neighbors in every subset in logarithmic time is not possible.
Corollary 9.
Given:
– and ,
– a query point ,
– a point where ,
– the edge on the boundary of that the ray intersects,
then, by testing against the part of that is inside and has complexity , one of the following is true:
– If is inside , then is in and is not in .
– If is outside (and thus inside ): then is in the same element of as either or .
2.2.2 Implementing the jump function
We now use one additional idea to speed up the jump function. While testing whether q is inside or outside of the part of one hull that intersects the triangle can be done in logarithmic time (since the complexity of this part of the hull is bounded by Lemma 8, point 4), we can in fact do something stronger: we can test q against the parts of all the relevant hulls that intersect the triangle, for all indexes in the range, in the same asymptotic time. This is because the subdivision of the plane induced by the overlay of these hulls has bounded complexity, and thus with a single point location query we can determine the smallest index in the range for which q is not inside the corresponding hull, or determine that q is inside all hulls in the range.
Thus we obtain the final jump procedure:
Method to compute Jump:
1. Test to see what is the smallest , such that is outside of the part of that intersects . This will be done in time with the convex hull search structure described in Section 2.3. If it is inside all such hulls, then for all such that and Failure is returned. Otherwise:
2. Let and denote the elements of that contain and . These will be precomputed and accessible in constant time with the piece lookup structure described in Section 2.3.
3. Search in and to find , call it . Note that is . As both and have complexity , this can be done in time with the piece interior search structure described in Section 2.3.
4. Find the edge bounding that the ray intersects. As has complexity , this can be done in time with binary search.
2.3 Data structures needed
The data structure is split into levels, where level consists of:
I. ,
II. ,
III. The Voronoi diagram of , , and a point location search structure for the cells of ,
IV. The Delaunay triangulation of , .
V. Additionally, we keep for each , :
  i. The partition of into ,
  ii. The partition of each into and ,
  iii. The set .
For any level, this information can be computed from in time using [18, Theorem 3] to compute the partition into pieces, and standard results on Delaunay/Voronoi construction.
Additionally, less elementary data structures are needed for each level, which we describe separately: the convex hull search structure, the piece lookup structure, and the piece interior search structure.
Convex hull search structure
For level , a convex hull structure is built for every combination of:
– A point in ,
– An edge of ,
– An index where , and is a power of 2.
A convex hull structure answers queries of the form: given a point in , return the smallest , such that is outside of the part of that intersects , where and are the endpoints of . There are such structures as is at most , and the complexity of a Voronoi diagram is linear in the number of points it is defined on.
The method used is to simply store a point location structure for the subdivision within the triangle formed by the overlay of all boundaries of the relevant hulls. As previously mentioned, the complexity of this overlay is bounded, and thus point location can be done in time logarithmic in this complexity.
As noted earlier there are structures, each of which takes space at most plus the number of intersections in the point location structure within the triangle. The comes from the at most 2 edges from each hull that can pass through the triangle without intersection. For a given , the sum of the complexities of over all is . As each hull edge can intersect at most other hull edges, that bounds the total space needed over one to be . The overall space usage is .
Piece lookup structure
Level of the piece lookup structure stores, for each relevant piece size and each vertex, the index of the piece that contains that vertex as a non-fringe vertex. This can be precomputed using the point location search structure for the level, for each of the vertices, at a fixed piece size; summing over the choices of piece size gives the total runtime to pre-compute all answers. The space usage is proportional to the number of stored answers.
Piece interior search structure
For each we store a point location structure that supports point location in time in the Voronoi diagram for each set of points in . Any standard linear-sized point location structure, such as that of Kirkpatrick [17], suffices since each element of has elements. For any fixed , there are choices for , and the sets in partition . Thus the total size of all these structures is . The construction cost, given the , incurs another logarithmic factor due to the need to construct the Voronoi diagram and the point location structure (we do not assume each piece is connected). Thus the piece interior search structure for level is constructed in time.
2.4 Insertion time
Our description of the data structures needed can be summarized as follows:
Lemma 10.
Level of the data structure can be built in time and space given all levels .
Insertion is thus handled by the classic logarithmic method of Bentley and Saxe [3], which transforms a static construction into a dynamic structure, and which we briefly summarize. To insert, we put the new point into S_1 and rebuild level 1. Every time a set S_i exceeds the upper limit of its size range, half of its items are moved from S_i to S_{i+1} and all levels from i+1 down to 1 are rebuilt. So long as the upper and lower constants in the big-Theta size bounds are at least a constant factor apart, the amortized insertion cost is O(log n) times the per-item cost of rebuilding a level, thus obtaining:
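The insertion cascade can be sketched as follows; the capacity 2^(i+1) for level i and the brute-force `query` are illustrative stand-ins for the paper's level construction (Lemma 10), not the actual rebuild of the T_i and their search structures.

```python
import math

class LogMethod:
    # Logarithmic method: level i plays the role of S_{i+1} and holds at
    # most 2**(i+1) points; an overflowing level spills half its points
    # into the next level, triggering rebuilds of all affected levels.
    def __init__(self):
        self.levels = []

    def insert(self, p):
        if not self.levels:
            self.levels.append([])
        self.levels[0].append(p)
        i = 0
        while len(self.levels[i]) > 2 ** (i + 1):
            if i + 1 == len(self.levels):
                self.levels.append([])
            half = len(self.levels[i]) // 2
            # Move half of the items one level up; a real implementation
            # would now rebuild the static structures of these levels.
            self.levels[i + 1].extend(self.levels[i][half:])
            del self.levels[i][half:]
            i += 1

    def query(self, q):
        # Stand-in for querying each level's static structure.
        return min((p for lvl in self.levels for p in lvl),
                   key=lambda p: math.dist(p, q))
```

Each point participates in O(log n) spills, which is the source of the logarithmic factor in the amortized insertion cost.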
Lemma 11.
Insertion can be performed with an amortized running time of .
The performance of our data structure can be summarized as follows:
Theorem 12.
There exists a semi-dynamic insertion-only data structure that answers two-dimensional nearest neighbor queries in optimal O(log n) time and supports insertions in amortized polylogarithmic time. The data structure uses near-linear space.
References
- [1] Pankaj K. Agarwal and Jiří Matoušek. Dynamic half-space range reporting and its applications. Algorithmica, 13(4):325–345, 1995. doi:10.1007/BF01293483.
- [2] Sarah R. Allen, Luis Barba, John Iacono, and Stefan Langerman. Incremental Voronoi diagrams. Discrete and Computational Geometry, 58(4):822–848, 2017. doi:10.1007/s00454-017-9943-2.
- [3] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: static-to-dynamic transformation. Journal of Algorithms, 1(4):301–358, 1980. doi:10.1016/0196-6774(80)90015-2.
- [4] Timothy M. Chan. A dynamic data structure for 3-d convex hulls and 2-d nearest neighbor queries. Journal of the ACM, 57(3):16:1–16:15, 2010. doi:10.1145/1706591.1706596.
- [5] Timothy M. Chan. Dynamic geometric data structures via shallow cuttings. Discrete and Computational Geometry, 64(4):1235–1252, 2020. doi:10.1007/s00454-020-00229-5.
- [6] Timothy M. Chan and Konstantinos Tsakalidis. Optimal deterministic algorithms for 2-d and 3-d shallow cuttings. Discrete and Computational Geometry, 56(4):866–881, 2016. doi:10.1007/s00454-016-9784-4.
- [7] Bernard Chazelle and Leonidas J. Guibas. Fractional cascading: I. A data structuring technique. Algorithmica, 1(2):133–162, 1986. doi:10.1007/BF01840440.
- [8] Bernard Chazelle and Ding Liu. Lower bounds for intersection searching and fractional cascading in higher dimension. Journal of Computer and System Sciences, 68(2):269–284, 2004. doi:10.1016/j.jcss.2003.07.003.
- [9] Kenneth L. Clarkson, Kurt Mehlhorn, and Raimund Seidel. Four results on randomized incremental constructions. Computational Geometry, 3:185–212, 1993. doi:10.1016/0925-7721(93)90009-U.
- [10] Mark de Berg, Otfried Cheong, Marc J. van Kreveld, and Mark H. Overmars. Computational geometry: algorithms and applications, 3rd Edition. Springer, 2008. doi:10.1007/978-3-540-77974-2.
- [11] Olivier Devillers, Stefan Meiser, and Monique Teillaud. Fully dynamic Delaunay triangulation in logarithmic expected time per operation. Computational Geometry, 2:55–80, 1992. doi:10.1016/0925-7721(92)90025-N.
- [12] David P. Dobkin and Subhash Suri. Maintenance of geometric extrema. Journal of the ACM, 38(2):275–298, 1991. doi:10.1145/103516.103518.
- [13] Greg N. Frederickson. Fast algorithms for shortest paths in planar graphs, with applications. SIAM Journal on Computing, 16(6):1004–1022, 1987. doi:10.1137/0216064.
- [14] Michael T. Goodrich. Planar separators and parallel polygon triangulation. Journal of Computer and System Sciences, 51(3):374–389, 1995. doi:10.1006/jcss.1995.1076.
- [15] John Iacono and Yakov Nekrich. Incremental planar nearest neighbor queries with optimal query time, 2025. arXiv:2504.07366.
- [16] Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Dynamic planar Voronoi diagrams for general distance functions and their algorithmic applications. Discrete and Computational Geometry, 64(3):838–904, 2020. doi:10.1007/s00454-020-00243-7.
- [17] David G. Kirkpatrick. Optimal search in planar subdivisions. SIAM Journal on Computing, 12(1):28–35, 1983. doi:10.1137/0212002.
- [18] Philip N. Klein, Shay Mozes, and Christian Sommer. Structured recursive separator decompositions for planar graphs in linear time. In Symposium on Theory of Computing (STOC), pages 505–514, 2013. doi:10.1145/2488608.2488672.
- [19] Joseph S.B. Mitchell and Wolfgang Mulzer. Proximity algorithms. In Jacob E. Goodman, Joseph O’Rourke, and Csaba D. Tóth, editors, Handbook of Discrete and Computational Geometry, chapter 32, pages 849–874. CRC Press, 2017.
- [20] Ketan Mulmuley. Computational geometry – An introduction through randomized algorithms. Prentice Hall, 1994.