Quickest Visibility Queries in Polygonal Domains

Let $s$ be a point in a polygonal domain $\mathcal{P}$ of $h-1$ holes and $n$ vertices. We consider a quickest visibility query problem. Given a query point $q$ in $\mathcal{P}$, the goal is to find a shortest path in $\mathcal{P}$ to move from $s$ to see $q$ as quickly as possible. Previously, Arkin et al. (SoCG 2015) built a data structure of size $O(n^22^{\alpha(n)}\log n)$ that can answer each query in $O(K\log^2 n)$ time, where $\alpha(n)$ is the inverse Ackermann function and $K$ is the size of the visibility polygon of $q$ in $\mathcal{P}$ (and $K$ can be $\Theta(n)$ in the worst case). In this paper, we present a new data structure of size $O(n\log h + h^2)$ that can answer each query in $O(h\log h\log n)$ time. Our result improves the previous work when $h$ is relatively small. In particular, if $h$ is a constant, then our result even matches the best result for the simple polygon case (i.e., $h=1$), which is optimal. As a by-product, we also have a new algorithm for a shortest-path-to-segment query problem. Given a query line segment $\tau$ in $\mathcal{P}$, the query seeks a shortest path from $s$ to all points of $\tau$. Previously, Arkin et al. gave a data structure of size $O(n^22^{\alpha(n)}\log n)$ that can answer each query in $O(\log^2 n)$ time, and another data structure of size $O(n^3\log n)$ with $O(\log n)$ query time. We present a data structure of size $O(n)$ with query time $O(h\log \frac{n}{h})$, which also favors small values of $h$ and is optimal when $h=O(1)$.


Introduction
Let P be a polygonal domain with h − 1 holes and a total of n vertices, i.e., there is an outer simple polygon containing h − 1 pairwise disjoint holes and each hole itself is a simple polygon. If h = 1, then P becomes a simple polygon. For any two points s and t in P, a shortest path from s to t is a path in P connecting s and t with the minimum Euclidean length. Two points p and q are visible to each other if the line segment pq is in P. For any point q in P, its visibility polygon, denoted by Vis(q), consists of all points of P visible to q.
We consider the following quickest visibility query problem. Let s be a source point in P. Given any point q in P, the query asks for a path to move from s to see q as quickly as possible. Such a "quickest path" is actually a shortest path from s to any point of Vis(q). The problem was recently studied by Arkin et al. [1], who built a data structure of size O(n^2 2^{α(n)} log n) that can answer each query in O(K log^2 n) time, where K is the size of Vis(q). In this paper, we present a new data structure of O(n log h + h^2) size with O(h log h log n) query time. Our result improves the previous work when h is relatively small. Interestingly, the query time is independent of K, which can be Θ(n) in the worst case. Our result is also interesting in that when h = O(1), the data structure has O(n) size and O(log n) query time, which even matches the best result for the simple polygon case [1] and is optimal.
As in [1], in order to solve the quickest visibility queries, we also solve a shortest-path-to-segment query problem (or segment query for short), which may be of independent interest. Given any line segment τ in P, the segment query asks for a shortest path from s to all points of τ. Arkin et al. [1] gave a data structure of size O(n^2 2^{α(n)} log n) that can answer each query in O(log^2 n) time, and another data structure of size O(n^3 log n) with O(log n) query time. We present a new data structure of O(n) size with O(h log (n/h)) query time. Our result again favors small values of h and attains optimality when h = O(1), which also matches the best result for the simple polygon case [1,13]. Given the shortest path map of s, our quickest visibility query data structure can be built in O(n log h + h^2 log h) time and our segment query data structure can be built in O(n) time. Arkin et al.'s quickest visibility query data structure and their first segment query data structure can both be built in O(n^2 2^{α(n)} log n) time, and their second segment query data structure can be built in O(n^3 log n) time [1].
Throughout the paper, whenever we talk about a query related to paths in P, the query time always refers to the time for computing the path length; the actual path can be output in additional time linear in the number of its edges by standard techniques (we omit the details).

Related Work
The traditional shortest path query problem has been studied extensively: compute a shortest path to move from s to "reach" a query point. Each shortest path query can be answered in O(log n) time by using the shortest path map of s, denoted by SPM(s), which is of O(n) size. To build SPM(s), Mitchell [28] gave an algorithm of O(n^{3/2+ε}) time for any ε > 0 and O(n) space, and later Hershberger and Suri [22] presented an algorithm of O(n log n) time and space. If P is a simple polygon (i.e., h = 1), SPM(s) can be built in O(n) time, e.g., see [17].
For the quickest visibility queries, Arkin et al. [1] also built a "quickest visibility map" of O(n^7) size in O(n^8 log n) time, which can answer each query in O(log n) time. In addition, Arkin et al. [1] gave a conditional lower bound on the problem by showing that the 3SUM problem on n numbers can be solved in O(τ_1 + n · τ_2) time, where τ_1 is the preprocessing time and τ_2 is the query time of any data structure for the problem. Therefore, a data structure of o(n^2) preprocessing time and o(n) query time would lead to an o(n^2) time algorithm for 3SUM.
In the simple polygon case (i.e., h = 1), better results are possible for both the quickest visibility queries and the segment queries. For the quickest visibility queries, Khosravi and Ghodsi [24] first proposed a data structure of O(n^2) size that can answer each query in O(log n) time. Arkin et al. [1] gave an improved result: they built a data structure of O(n) size in O(n) time, with O(log n) query time. For the segment queries, Arkin et al. [1] built a data structure of O(n) size in O(n) time, with O(log n) query time. Chiang and Tamassia [13] achieved the same result for the segment queries and they also gave some more general results (e.g., when the query is a convex polygon).
Similar in spirit to the "point-to-segment" shortest path problem, Cheung and Daescu [12] considered a "point-to-face" shortest path problem in 3D and approximation algorithms were given for the problem.

Our Techniques
We first propose a decomposition D of P by O(h) shortest paths from s to certain vertices of SPM(s). The decomposition D, whose size is O(n), has O(n) cells with the following three key properties. First, any segment τ in P can intersect at most O(h) cells of D. Second, for each cell ∆ of D, τ ∩ ∆ consists of at most two sub-segments of τ. Third, after O(n) time preprocessing, for each sub-segment τ′ of τ in any cell of D, the shortest path from s to τ′ can be computed in O(log n) time. With D, we can easily answer each segment query in O(h log (n/h)) time by a "pedestrian" algorithm.
To solve the quickest visibility queries, an observation is that the shortest path from s to see q is a shortest path from s to a window of Vis(q), i.e., an extension of the segment qu for some reflex vertex u of P. Hence, the query can be answered by calling segment queries on all O(K) windows of Vis(q) and returning the shortest path found. This leads to the O(K log^2 n) time query algorithm in [1].
If we followed the same algorithmic scheme and used our new segment query algorithm, we would obtain an algorithm of O(K · h · log (n/h)) time for the quickest visibility queries. We instead present a "smarter" algorithm. We propose a "pruning algorithm" that prunes "unnecessary" portions of the windows such that it suffices to consider the remaining parts of the windows. Further, with the help of the decomposition D, we show that a shortest path from s to the remaining windows can be found in O((K + h) log h log n) time. We refer to this as our preliminary result. To achieve it, we solve several other problems, which may be of independent interest. For example, we build a data structure of O(n log h) size such that given any query point t and line segment τ in P, we can compute in O(log h log n) time the intersection between τ and the shortest path from s to t in P (or report none if they do not intersect). Our pruning algorithm is based on a new and interesting technique of using "bundles".
To further reduce the query time to O(h log h log n), the key idea is that by using the extended corridor structure of P [8,11], we show that there exists a set S(q) of O(h) candidate windows such that a shortest path from s to see the query point q must be a shortest path from s to a window in S(q). This is actually quite consistent with the result in the simple polygon case, where only one window is needed for answering each quickest visibility query [1]. Once the set S(q) is computed, we can apply our pruning algorithm discussed above to S(q) to answer the quickest visibility query in additional O(h log h log n) time. To compute S(q), we give an algorithm of O(h log n) time, without having to explicitly compute Vis(q). The algorithm is based on a modification of the algorithm in [9] that can compute Vis(q) in O(K log n) time for any point q, after O(n + h^2) space and O(n + h^2 log h) time preprocessing.
The rest of the paper is organized as follows. In Section 2, we introduce notation and review some concepts. In Section 3, we introduce the decomposition D of P, and present our algorithm for the segment queries. We present our preliminary result for the quickest visibility queries in Section 4 and give the improved result in Section 5. Section 6 concludes the paper.

Preliminaries
For any subset A of P, we say that a point p is (weakly) visible to A if p is visible to at least one point of A. For any point t ∈ P, we use π(s, t) to denote a shortest path from s to t in P, and in the case where the shortest path is not unique, π(s, t) may refer to an arbitrary such path. With a little abuse of notation, for any subset A of P, we use π(s, A) to denote a shortest path from s to all points of A; we use d(s, A) to denote the length of π(s, A), i.e., d(s, A) = min t∈A d(s, t).
Let V denote the set of all vertices of P.
The shortest path map SPM (s). SPM (s) is a decomposition of P into regions (or cells) such that in each cell σ, the sequence of obstacle vertices along π(s, t) is fixed for all t in σ [22,28]. Further, the root of σ, denoted by r(σ), is the last vertex of V ∪ {s} in π(s, t) for any point t ∈ σ (hence π(s, t) = π(s, r(σ)) ∪ r(σ)t; note that r(σ) is s if s is visible to t). We classify each edge of a cell σ into three types: a portion of an edge of P, an extension segment, which is a line segment extended from r(σ) along the opposite direction from r(σ) to the vertex of π(s, t) preceding r(σ), and a bisector curve/edge that is a hyperbolic arc. For each point t on a bisector edge of SPM (s), t is on the common boundary of two cells and there are two different shortest paths from s to t through the roots of the two cells, respectively. The vertices of SPM (s) include V ∪ {s} and all intersections of edges of SPM (s). The intersection of two bisector edges is called a triple point, which has more than two shortest paths from s. The map SPM (s) has O(n) vertices, edges, and cells [22,28].
For differentiation, we call the vertices and edges of the polygonal domain P the obstacle vertices and the obstacle edges, respectively. The holes and the outer polygon of P are also called obstacles.
The shortest path tree SPT(s) is the union of shortest paths from s to all obstacle vertices of P. SPT(s) has O(n) edges [22,28]. Given SPM(s), SPT(s) can be obtained in linear time. We sometimes consider a further decomposition of SPM(s) by adding all edges of SPT(s) to it.
For ease of exposition, we make a general position assumption that no obstacle vertex has more than one shortest path from s and no point of P has more than three shortest paths from s. Hence, no bisector edge of SPM (s) intersects an obstacle vertex and no three bisector edges intersect at the same point.
For any polygon P , we use |P | to denote the number of vertices of P and use ∂P to denote the boundary of P .
Ray-shooting queries in simple polygons. Let P be a simple polygon. With O(|P |) time and space preprocessing, each ray-shooting query in P (i.e., given a ray in P , find the first point on ∂P hit by the ray) can be answered in O(log |P |) time [6,21]. The result can be extended to curved simple polygons or splinegons [26].
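To make the primitive concrete, here is a minimal sketch of a naive O(|P|)-time ray shoot, which the O(log |P|) structures of [6,21] accelerate; the function name, the vertex-list representation, and the numeric tolerance are our own illustrative choices:

```python
import math

def ray_shoot(polygon, origin, direction):
    """Naive O(n) ray shooting in a simple polygon: return the first
    boundary point hit by the ray from `origin` along `direction`.
    `polygon` is a counterclockwise list of vertices."""
    ox, oy = origin
    dx, dy = direction
    best_t = math.inf
    best_pt = None
    m = len(polygon)
    for i in range(m):
        ax, ay = polygon[i]
        bx, by = polygon[(i + 1) % m]
        ex, ey = bx - ax, by - ay          # edge vector
        denom = dx * ey - dy * ex          # cross(direction, edge)
        if abs(denom) < 1e-12:
            continue                        # ray parallel to this edge
        # Solve origin + t*direction = a + u*edge for t >= 0, 0 <= u <= 1.
        t = ((ax - ox) * ey - (ay - oy) * ex) / denom
        u = ((ax - ox) * dy - (ay - oy) * dx) / denom
        if t > 1e-12 and -1e-12 <= u <= 1 + 1e-12 and t < best_t:
            best_t = t
            best_pt = (ox + t * dx, oy + t * dy)
    return best_pt
```

The logarithmic-time structures answer the same query after linear-time preprocessing; the linear scan above only fixes the input/output convention.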
The canonical lists and cycles of planar trees. We will often talk about certain planar trees in P (e.g., SPT(s)). Consider a tree T with root r. A leaf v is called a base leaf if it is the leftmost leaf of a subtree rooted at a child of r (e.g., see Fig. 1). Denote by L(T, v) the post-order traversal list of T starting from such a base leaf v, and we call it a canonical list of T. The root r must be the last node in L(T, v). We remove r from L(T, v) and make the remaining list a cycle by connecting its rear to its front; let C(T) denote the resulting circular list. Although T may have multiple base leaves, C(T) is unique and we call it the canonical cycle of T. We further use L_l(T, v) (e.g., see Fig. 1) to denote the list of the leaves of T following their relative order in L(T, v) and use C_l(T) to denote the circular list of L_l(T, v). One reason we introduce this notation is the following. Let e be any edge of T. All nodes of T whose paths to r in T contain e must be consecutive in L(T, v) and C(T). Similarly, all leaves of T whose paths to r in T contain e must be consecutive in L_l(T, v) and C_l(T).
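The canonical list can be sketched as a post-order traversal whose first subtree is rotated to the one containing the base leaf; the dictionary representation of the tree and the function name below are our own illustrative assumptions:

```python
def canonical_list(tree, root, base_leaf):
    """Canonical list L(T, v): post-order traversal of a planar tree T,
    where the root's children (in planar left-to-right order) are
    rotated so that the subtree containing the base leaf comes first.
    `tree` maps each node to its ordered list of children."""
    def contains(node, target):
        return node == target or any(contains(c, target) for c in tree.get(node, []))

    kids = tree[root]
    shift = 0
    for shift in range(len(kids)):          # find the child whose subtree
        if contains(kids[shift], base_leaf):  # holds the base leaf
            break
    ordered = kids[shift:] + kids[:shift]   # rotate children

    out = []
    def post(u):                            # standard post-order visit
        for c in tree.get(u, []):
            post(c)
        out.append(u)
    for c in ordered:
        post(c)
    out.append(root)                        # root is last in L(T, v)
    return out
```

For the tree with root r, children a (with leaves x, y) and b (with leaf z), and base leaf z, the list is z, b, x, y, a, r; the nodes whose paths to r contain the edge (r, a), namely x, y, a, indeed appear consecutively.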
The following observation on shortest paths will be frequently referred to in the paper.
Observation 1.
1. Suppose π_1 and π_2 are two shortest paths from s to two points in P, respectively; then π_1 and π_2 do not cross each other.
2. Suppose π_1 is a shortest path from s to a point in P and τ is a line segment in P; then the intersection of π_1 and τ is a sub-segment of τ (which may be a single point or empty).

The Decomposition D and the Segment Queries
In this section, we introduce a decomposition D of P and use it to solve the segment query problem.
The decomposition D will also be useful for solving the quickest visibility queries. We first define a set V of points. Let p be an intersection between a bisector edge of SPM (s) and an obstacle edge. Since p is on a bisector edge, it is in two cells of SPM (s) and has two shortest paths from s. We make two copies of p in the way that each copy belongs to only one cell (and thus corresponds to only one shortest path from s). We add the two copies of p to V . We do this for all intersections between bisector edges and obstacle edges. Consider a triple point p, which is in three cells of SPM (s) and has three shortest paths from s. Similarly, we make three copies of p that belong to the three cells, respectively. We add the three copies of p to V . We do this for all triple points. This finishes the definition of V .
By definition, each point of V has exactly one shortest path from s. Let Π_V denote the set of shortest paths from s to all points of V. Let T_V be the union of all shortest paths of Π_V. We consider points of V distinct although some of them are copies of the same physical point. In this way, we can consider T_V as a "physical" tree rooted at s.

Definition 1. Define D to be the decomposition of P by the edges of T_V.
In the following, we assume the shortest path map SPM (s) has already been computed. We have the following lemma about the decomposition D.
Lemma 1. The decomposition D has the following properties.
1. |V| = O(h).
2. The combinatorial size of D is O(n).
3. Each cell of D is simply connected.
4. For any segment τ in P, τ can intersect at most O(h) cells of D. Further, for each cell ∆ of D, the intersection of τ and ∆ consists of at most two (maximal) sub-segments of τ.
5. After O(n) time preprocessing, for any segment τ′ in a cell ∆ of D, the shortest path from s to τ′ can be computed in O(log |∆|) time, where |∆| is the combinatorial size of ∆.
6. Each cell ∆ of D has at most two vertices r_1 and r_2 (both in V ∪ {s}), called "super-roots", such that for any point t ∈ ∆, π(s, t) is the concatenation of π(s, r) and the shortest path from r to t in ∆, for a super-root r in {r_1, r_2}.
7. Given the shortest path map SPM(s), D can be computed in O(n) time.
We will prove Lemma 1 later in Section 3.2. Below we first give our data structure for answering segment queries by using Lemma 1.

The Segment Queries
As preprocessing, we first compute the decomposition D. Then, we build a point location data structure on D [14,25], which can be done in O(n) time and O(n) space since the size of D is O(n) by Lemma 1 (2); the data structure can answer each point location query in O(log n) time.
In addition, for each cell ∆ of D, by Lemma 1(3), ∆ is a simple polygon; we build a ray-shooting data structure on ∆ [6,21]. Since the total size of all cells of D is O(n) by Lemma 1(2), the total preprocessing time and space for the ray-shooting queries on all cells of D is O(n).
Finally, we do the preprocessing in Lemma 1(5). Hence, given SPM(s), the total preprocessing time and space is O(n). The following lemma gives our query algorithm.

Lemma 2. Given any segment τ in P, we can compute a shortest path from s to τ in O(h log (n/h)) time.
Proof. Let a and b be the two endpoints of τ. Our algorithm works in a "pedestrian" way, as follows.
By using a point location query, we find the cell ∆ a of D that contains a. Then, we check whether τ is contained in ∆ a . This can be done by using a ray-shooting query as follows. We shoot a ray ρ from a towards b and compute the first point p of ∂∆ a hit by the ray. The segment τ is in ∆ a if and only if b is before p on the ray.
If τ is in ∆ a , then we can immediately compute the shortest path π(s, τ ) from s to τ in O(log |∆ a |) time by Lemma 1 (5).
Otherwise, we compute the shortest path π(s, ap) from s to the sub-segment ap of τ in O(log |∆_a|) time by Lemma 1(5). Next, based on the edge of D containing p, we can determine in constant time the next cell ∆ of P that the ray ρ enters. We process the cell ∆ in a similar way as above for ∆_a. The algorithm finishes once we process a cell that contains b.
The above computes π(s, τ ′ ) for multiple sub-segments τ ′ of τ such that these sub-segments constitute exactly τ and each sub-segment is in a single cell of D. Clearly, among all shortest paths from s to these sub-segments, the one with the minimum length is the shortest path from s to τ .
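The cell-walking loop can be sketched on a toy decomposition, with vertical strips standing in for the cells of D; `dist_to_subsegment` is a hypothetical stand-in for the O(log |∆|) query of Lemma 1(5), and the strip arithmetic replaces the point-location and ray-shooting oracles:

```python
import math

def walk_segment(a, b, strip_width, dist_to_subsegment):
    """Pedestrian walk of segment ab through a toy decomposition of the
    plane into vertical strips of width `strip_width`.  For each maximal
    sub-segment of ab inside a strip, dist_to_subsegment(p, q) is
    queried; the minimum over all pieces is returned."""
    (ax, ay), (bx, by) = a, b
    if ax > bx:                       # walk left-to-right
        (ax, ay), (bx, by) = (bx, by), (ax, ay)
    w = strip_width
    best = math.inf
    x0, y0 = ax, ay                   # left endpoint of current piece
    strip = int(ax // w)              # "point location" of the start
    while True:
        xr = (strip + 1) * w          # right wall of the current strip
        if bx <= xr:                  # b lies in this strip: last piece
            return min(best, dist_to_subsegment((x0, y0), (bx, by)))
        # "Ray shooting": intersect ab with the strip's right wall.
        t = (xr - ax) / (bx - ax)
        y = ay + t * (by - ay)
        best = min(best, dist_to_subsegment((x0, y0), (xr, y)))
        x0, y0 = xr, y
        strip += 1
```

On the segment from (0.1, 0) to (0.9, 0.8) with strips of width 0.25, the walk generates four sub-segments, one per strip crossed, mirroring the k = O(h) pieces in the analysis of Lemma 2.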
To analyze the running time of the above algorithm, let k be the number of the above sub-segments τ′ of τ. Suppose τ′_1, τ′_2, . . . , τ′_k are these sub-segments ordered from a to b. For each 1 ≤ i ≤ k, let ∆_i be the cell of D that contains τ′_i. First of all, the point location query for a takes O(log n) time. For each 1 ≤ i ≤ k, determining each sub-segment τ′_i needs a ray-shooting query in ∆_i, which takes O(log |∆_i|) time; computing the length of π(s, τ′_i) also takes O(log |∆_i|) time by Lemma 1(5). Hence, the total time of the algorithm is O(log n + Σ_{i=1}^{k} log |∆_i|). By Lemma 1(4), k = O(h). Also, by Lemma 1(4), each cell may contain two of the above k sub-segments of τ, and thus it is possible that ∆_i and ∆_j refer to the same cell for i ≠ j. Let S be the set of the distinct cells among ∆_1, . . . , ∆_k. Since each cell contains at most two of the above k sub-segments of τ, Σ_{i=1}^{k} log |∆_i| ≤ 2 · Σ_{∆∈S} log |∆|. Further, since the cells of S are distinct, we have Σ_{∆∈S} |∆| = O(n) and |S| = O(h); by the concavity of the logarithm, Σ_{∆∈S} log |∆| = O(h log (n/h)). Therefore, the total time of the algorithm is bounded by O(h log (n/h)). ⊓⊔
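The concavity step at the end of the proof (k distinct cells of total size n give Σ log |∆| ≤ k log(n/k), which is O(h log(n/h)) for k = O(h)) can be checked numerically; `log_sum_bound` is an illustrative helper, not part of the data structure:

```python
import math

def log_sum_bound(sizes):
    """Return (sum of log(size), k * log(n/k)) for a list of cell sizes
    with total size n and k cells.  Jensen's inequality applied to the
    concave log function guarantees the first value never exceeds the
    second."""
    n = sum(sizes)
    k = len(sizes)
    lhs = sum(math.log(sz) for sz in sizes)
    rhs = k * math.log(n / k)
    return lhs, rhs

# The bound is tight exactly when all cells have equal size n/k.
lhs, rhs = log_sum_bound([16, 16, 16, 16])
assert abs(lhs - rhs) < 1e-9
lhs, rhs = log_sum_bound([3, 5, 8, 13, 100])
assert lhs <= rhs
```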
We summarize our result for segment queries in the following theorem.
Theorem 1. Given the shortest path map SPM(s), we can build a data structure of O(n) size in O(n) time, such that each segment query can be answered in O(h log (n/h)) time.

The Decomposition D and Proving Lemma 1
In this section we provide the details for D and prove Lemma 1.
Let O denote the obstacle space, which is the complement of the free space of P. More specifically, O consists of the h − 1 simple polygonal holes of P and the (unbounded) region outside the outer boundary of P. Let B denote the union of all bisector edges of SPM(s). Mitchell [27] proved that O ∪ B is simply connected and P \ B is also simply connected (e.g., see Fig. 2). We consider O ∪ B as a planar graph G, defined as follows.

[Fig. 2: Illustrating the bisector edges of the shortest path map (the black area is the obstacle space): the green point is the source s and the red curves are the bisector edges. The figure is generated by the applet in [20].]
The vertex set of G consists of all obstacles of O and all triple points of SPM (s). For any two vertices of G, if they are connected by a chain of bisector edges in SPM (s) such that the chain does not contain any other vertex of G, then G has an edge connecting the two vertices, and further, we call the above chain of bisector edges a bisector super-curve (e.g., in Fig. 2, each red curve is a bisector super-curve). We have the following observation about G.
Observation 2. G is a simple graph, i.e., G has no self-loops and no two vertices are connected by more than one edge. G has O(h) vertices, edges, and faces.
Proof. The first part of the observation can be proved easily from Mitchell's observation in [27] that P \ B is simply connected, as follows.
Indeed, assume to the contrary that G has a self-loop at a vertex v. According to our definition, the self-loop corresponds to a bisector super-curve that connects the vertex v (either a triple point or an obstacle) to itself. Let R be the region bounded by this bisector super-curve and v. Hence, R is closed, which contradicts the fact that P \ B is simply connected.
Similarly, assume to the contrary that two vertices u and v are connected by two edges. Then, the two edges correspond to two bisector super-curves, and the region bounded by the two bisector super-curves and the two vertices is closed, a contradiction again.
To prove the second part of the observation, note that G is a planar graph. First, it is known that the number of triple points is O(h) [15]. Since there are h obstacles in O, the number of vertices of G is O(h).
Second, the faces of G correspond exactly to the faces of the (≤1)-SPM of P defined in [15], whose total number is proved to be O(h) [15] (see Lemma 4.3 with k = 1). Therefore, the number of faces of G is O(h).
Finally, since both the number of vertices and the number of faces of G are O(h), the number of edges of G is also O(h).
⊓⊔

Let V_1 be the set of all triple points. It is known that |V_1| = O(h) [15]. Let V_2 be the set of intersections between obstacle edges and bisector edges of SPM(s). It is not difficult to see that each point of V_2 corresponds to an intersection between an obstacle and a bisector super-curve. Since G has O(h) edges, there are O(h) bisector super-curves. Thus, |V_2| = O(h). Recall that V consists of three copies of each point of V_1 and two copies of each point of V_2. Since both |V_1| and |V_2| are O(h), we have |V| = O(h). This proves Lemma 1(1).
Since |V | = O(h), Π V is the set of O(h) shortest paths. Note that each edge of any path of Π V except the last edge (i.e., the one connecting a point of V ) is an edge of the shortest path tree SPT (s). Hence, the total number of edges of the tree T V is O(n). Since D is the decomposition of P by the edges of T V , the combinatorial size of D is O(n). This proves Lemma 1 (2).
To prove the rest of Lemma 1, we introduce another decomposition D ′ as follows.
Definition 2. Define D ′ to be the decomposition of P by the edges of T V ∪ B.
By definition, D can be obtained from D ′ by removing all bisector edges of B.
Lemma 3. Each cell of D ′ is simply connected.
Proof. Let Q_0 be the decomposition of P by the edges of B. Note that Q_0 is exactly P \ B, which is simply connected [27]. Let the points of V be v_1, v_2, . . . , v_{h*}, ordered arbitrarily. Consider the decomposition Q_1 of Q_0 by the shortest path π(s, v_1); note that Q_1 may have more than one connected cell. Recall that v_1 is on a bisector edge of B. Since Q_0 is simply connected, π(s, v_1) does not cross any bisector edge of SPM(s), and π(s, v_1) itself does not form a cycle; hence each cell of Q_1 is simply connected.
Similarly, consider the decomposition Q 2 of Q 1 by the shortest path π(s, v 2 ). Again, π(s, v 2 ) does not cross any bisector edge of B. Further, by Observation 1(1), π(s, v 2 ) and π(s, v 1 ) do not cross each other. Hence, π(s, v 2 ) does not cross any edge of Q 1 . Since each cell of Q 1 is simply connected, each cell of Q 2 is also simply connected.
We keep considering the rest of the paths π(s, v_i) for i = 3, 4, . . . , h* one by one in the same way as above. By a similar argument, we obtain that each cell of Q_{h*}, which is D′, is simply connected.
⊓⊔

It is known that P \ B is simply connected and π(s, t) is in P \ B for any point t ∈ P [27]. To simplify the discussion, together with the copies of the points of V, we consider P′ = P \ B as a simple polygon (with some curved edges) by making two copies of each interior point of every bisector super-curve such that they respectively belong to the two sides of the curve. In this way, each point t ∈ P′ has a unique shortest path π(s, t) from s in P′, which is also a shortest path in P, and D′ becomes a decomposition of P′ by the tree T_V.
Consider any cell ∆′ of D′. We consider the obstacle vertices of P and the points of V ∪ {s} on the boundary ∂∆′ of ∆′ as vertices of ∆′. Then, the boundary portion between any two adjacent vertices of ∆′ is an obstacle edge, an edge of T_V, or a bisector super-curve. Let p be any point of ∆′ and let r_{∆′} be the point of ∆′ ∩ π(s, p) closest to s. We call r_{∆′} the super-root of ∆′; it is unique (i.e., independent of p) due to the following lemma.
Lemma 4.
1. The super-root r_{∆′} is either s or an obstacle vertex.
2. π(s, r_{∆′}) is a sub-path of a shortest path in Π_V.
3. For any point t ∈ ∆′, the concatenation of π(s, r_{∆′}) and the shortest path from r_{∆′} to t in ∆′ is the shortest path π(s, t) from s to t in P′.
Proof. We prove the lemma by induction in a similar way as in Lemma 3, using the same terminology as in its proof. Let the points of V be v_1, v_2, . . . , v_{h*}, ordered arbitrarily, and for each 0 ≤ i ≤ h*, let Π_i denote the set of the first i shortest paths π(s, v_1), . . . , π(s, v_i).

Initially, consider the decomposition Q_0. There is only one cell ∆′ in Q_0. Clearly, r_{∆′} = s and all three statements hold for Q_0 and Π_0. We assume the lemma statements hold for Q_{i−1} and Π_{i−1}; our goal is to prove that they hold for Q_i and Π_i.
Let ∆′ be the cell of Q_{i−1} containing v_i. By induction, π(s, v_i) is the concatenation of π(s, r_{∆′}) and the shortest path π(r_{∆′}, v_i) from r_{∆′} to v_i in ∆′. Also by induction, π(s, r_{∆′}) is a sub-path of a path of Π_{i−1}. Hence, π(s, v_i) does not partition any cell of Q_{i−1} other than ∆′. In other words, for any cell ∆′′ of Q_{i−1}, if ∆′′ ≠ ∆′, then ∆′′ is still in Q_i, and thus the lemma statements still hold on ∆′′ and Π_i.
For the cell ∆ ′ , π(r ∆ ′ , v i ) partitions ∆ ′ into multiple sub-cells. Consider any sub-cell δ of ∆ ′ . Our goal is to show that the lemma statements hold on δ and Π i . Depending on whether δ contains r ∆ ′ , there are two cases.
The case r ∆ ′ ∈ δ. We first consider the case where δ contains r ∆ ′ . Consider any point p in δ. Since δ ⊆ ∆ ′ , r ∆ ′ ∈ δ, and the point of ∆ ′ ∩ π(s, p) closest to s is r ∆ ′ , the point of δ ∩ π(s, p) closest to s is also r ∆ ′ . Hence, r δ = r ∆ ′ . By induction, the first and second statements of the lemma hold for δ and Π i .
For the third statement, consider any point t ∈ δ. Since t ∈ ∆′, π(s, t) is a concatenation of π(s, r_{∆′}) and π(r_{∆′}, t), and the latter path is in ∆′. To prove the third statement, it suffices to show that π(r_{∆′}, t) is in δ. Indeed, assume to the contrary that π(r_{∆′}, t) is not in δ. Then, since δ is a cell of the decomposition of ∆′ by π(r_{∆′}, v_i), π(r_{∆′}, t) must cross π(r_{∆′}, v_i). However, this is not possible due to Observation 1(1). Hence, π(r_{∆′}, t) must be in δ.
The case r_{∆′} ∉ δ. Suppose δ does not contain r_{∆′}. Let a be the point of π(r_{∆′}, v_i) ∩ δ closest to r_{∆′}. We first show that for any point p ∈ δ, a is the point of π(s, p) ∩ δ closest to s.
Indeed, since p ∈ ∆ ′ , π(s, p) contains r ∆ ′ and π(r ∆ ′ , p) is in ∆ ′ . Since r ∆ ′ is not in δ, let b be the first point in δ we encounter if we traverse on π(r ∆ ′ , p) from r ∆ ′ to p. Clearly, b is not r ∆ ′ since otherwise r ∆ ′ would be in δ. Since δ is a cell of the decomposition of ∆ ′ by π(r ∆ ′ , v i ), b must be on π(r ∆ ′ , v i ). In other words, b ∈ δ ∩ π(r ∆ ′ , v i ).
Since b is on both π(r_{∆′}, v_i) and π(r_{∆′}, p), b is also the first point in δ that we encounter if we traverse π(r_{∆′}, v_i) from r_{∆′} to v_i. Thus, b is the point of π(r_{∆′}, v_i) ∩ δ closest to r_{∆′}. Hence, we obtain b = a.
On the other hand, the definition of b implies that b is the point of π(s, p) ∩ δ closest to s. Therefore, a is the point of π(s, p) ∩ δ closest to s. This implies that r δ = a.
Note that a is a vertex of π(r_{∆′}, v_i) and a cannot be v_i. Thus, a must be either s or an obstacle vertex (in fact, a cannot be s either, since a ≠ r_{∆′}), which proves the first statement of the lemma.
Since a is on π(r ∆ ′ , v i ) and thus is on π(s, v i ), π(s, a) is a sub-path of π(s, v i ) ∈ Π i . This proves the second statement of the lemma.
For the third statement, consider any point t ∈ δ. Since t ∈ ∆′, by induction, π(s, t) is the concatenation of π(s, r_{∆′}) and π(r_{∆′}, t), and π(r_{∆′}, t) is in ∆′. Using the same analysis as above, we can show that π(r_{∆′}, t) must contain a. Further, the portion of π(r_{∆′}, t) between a and t must be in δ, since otherwise π(r_{∆′}, t) would cross π(r_{∆′}, v_i), a contradiction. Hence, the portion of π(r_{∆′}, t) between a and t is the shortest path from a to t in δ. Thus, π(s, t) is the concatenation of π(s, a) and the shortest path from a to t in δ. This proves the third statement.
This proves that all lemma statements hold for δ and Π_i, and thus hold for Q_i and Π_i. The lemma thus follows. ⊓⊔

Observation 3. Each cell ∆′ of D′ has at most one bisector super-curve on its boundary.
Proof. Assume to the contrary that there are two bisector super-curves on the boundary of ∆′. Then, there must exist an endpoint p of one of these two bisector super-curves such that the shortest path π(s, p) partitions ∆′ into two cells that contain the two bisector super-curves, respectively. This implies that π(s, p) is not in Π_V. However, since the two endpoints of every bisector super-curve are in V, we have p ∈ V and thus π(s, p) ∈ Π_V, a contradiction.

⊓ ⊔
Since T_V is a planar tree, we can define its canonical lists as discussed in Section 2. Let v_1 be an arbitrary base leaf of T_V, which can be found in O(n) time. Let the leaf list L_l(T_V, v_1) be v_1, v_2, . . . , v_{h*}, whose points follow the counterclockwise order along ∂P′.
For each 1 ≤ i ≤ h*, let α_i denote the portion of ∂P′ counterclockwise from v_i to v_{i+1} (where v_{h*+1} refers to v_1). Note that α_i is either a bisector super-curve or a chain of obstacle edges. Suppose we move a point t on α_i from v_i to v_{i+1}. The shortest path π(s, t) changes continuously with the same topology since π(s, t) is always in P′ (which is simply connected). Let R_i be the region of P′ that is "swept" by π(s, t) during the above movement of t. More specifically, let p_i be the common point on π(s, v_i) ∩ π(s, v_{i+1}) that is farthest from s. Then, R_i is bounded by π(p_i, v_i), π(p_i, v_{i+1}), and α_i. For convenience of discussion, we let R_i also contain the common sub-path π(s, p_i) = π(s, v_i) ∩ π(s, v_{i+1}), which we call the tail of R_i. We call the region bounded by π(p_i, v_i), π(p_i, v_{i+1}), and α_i the cell of R_i. We consider π(s, v_i), π(s, v_{i+1}), and α_i as the three portions of the boundary ∂R_i of R_i. The definition implies that for any point t in R_i, π(s, t) is in R_i. In fact, if t is in the cell of R_i, then π(s, t) is the concatenation of π(s, p_i) and the shortest path from p_i to t in the cell. Clearly, the regions R_1, . . . , R_{h*}, denoted collectively by R, together cover P′.

The next lemma is proved with the help of the regions of R. The set R will also be quite useful in Section 4. Recall that each edge of ∂∆′ is either an obstacle edge, a bisector super-curve, or an edge of T_V (also called a shortest path edge).

Lemma 5. For any cell ∆′ of D′, each shortest path edge of ∂∆′ belongs to π(s, v_i) or π(s, v_{i+1}) for some index i.

Proof. By the definitions of the regions of R, ∆′ is contained in the cell of a region R_i of R. Therefore, each shortest path edge of ∂∆′ belongs to either π(s, v_i) or π(s, v_{i+1}). ⊓⊔
Observe that the decomposition D can be obtained from D ′ by removing all bisector super-curves. For any bisector super-curve α, the two cells of D ′ incident to α are merged into one cell of D. Due to Observation 3, a cell of D ′ can be merged into at most one cell of D. Therefore, for each cell ∆ of D, either ∆ is also in D ′ or ∆ is a merged cell formed from exactly two cells of D ′ . Since every cell of D ′ is simply connected, each cell of D is also simply connected. This proves Lemma 1(3).
Consider any line segment τ in P. By Observation 1(2), τ can cross any shortest path of Π V at most once. Hence, τ can cross the shortest paths of Π V at most O(h) times in total. Whenever τ crosses the boundary of a cell of D, it must cross a shortest path of Π V . Thus, τ can intersect O(h) cells of D. This proves the first part of Lemma 1(4). For the second part, consider any cell ∆. By Lemma 5, if ∆ is not a merged cell, then τ can cross the boundary of ∆ at most twice; otherwise, τ can cross the boundary of ∆ at most four times. Therefore, the intersection τ ∩ ∆ consists of at most two (maximal) sub-segments of τ . This proves the second part of Lemma 1(4).
In the sequel, we prove Lemma 1(5). Consider any cell ∆ of D. According to our discussion above, ∆ is either in D ′ or a merged cell of two cells ∆ 1 and ∆ 2 of D ′ . In the former case, we call r ∆ the super-root of ∆; in the latter case, we call r ∆ 1 and r ∆ 2 the two super-roots of ∆. Lemma 4 leads to the following lemma, which proves Lemma 1(6).

Lemma 6. For any cell ∆ of D, the following hold.
1. Its super-roots are in V ∪ {s}.
2. For each super-root r of ∆, π(s, r) is a sub-path of a shortest path in Π V .
3. For any point t ∈ ∆, π(s, t) is the concatenation of π(s, r) and the shortest path from r to t in ∆, for a super-root r of ∆.
Proof. By Lemma 4, the proof is straightforward because ∆ is either a cell of D ′ or a merge of two cells of D ′ . ⊓ ⊔

Recall that for any simple polygon P and a fixed source point, each segment query can be answered in O(log |P |) time after O(|P |) time preprocessing [1]. As preprocessing, for each cell ∆ of D, since it is a simple polygon, we compute the above segment query data structure with respect to each super-root of ∆. This takes O(n) time and space in total by Lemma 1(2).
Consider any segment τ ′ in a cell ∆ of D. By Lemma 6, π(s, τ ′ ) is the concatenation of π(s, r) from s to a super-root r of ∆ and the shortest path π(r, τ ′ ) from r to τ ′ in ∆. As r is in V ∪ {s} by Lemma 6(1), π(s, r) is available from SPM (s), and π(r, τ ′ ) can be found in O(log |∆|) time. Hence, our query algorithm works as follows. For each super-root r of ∆, we compute π(s, r) and π(r, τ ′ ) to obtain a "candidate" shortest path from s to τ ′ . Then, we return the shorter of the at most two candidate paths as the solution. The total time is O(log |∆|). This proves Lemma 1(5).
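The two-candidate query above can be sketched as follows, treating the shortest path map and the in-cell segment-query structure of [1] as black boxes. All helper names (`d_s`, `seg_dist`, `query_segment_in_cell`) are hypothetical, not from the paper.

```python
# A minimal sketch of the query of Lemma 1(5), assuming two black-box
# helpers (hypothetical names): d_s(r) returns d(s, r), available from
# SPM(s) for r in V ∪ {s}; seg_dist(r, seg) returns the length of the
# shortest path from super-root r to the segment seg inside the cell.

def query_segment_in_cell(super_roots, d_s, seg_dist, seg):
    """Return the length of a shortest path from s to seg within a cell,
    as the shorter of the (at most two) candidate paths."""
    return min(d_s(r) + seg_dist(r, seg) for r in super_roots)

# Toy usage with stub distance functions:
print(query_segment_in_cell(
    ['r1', 'r2'],
    {'r1': 1.0, 'r2': 2.0}.get,                 # d(s, r) from SPM(s)
    lambda r, seg: {'r1': 5.0, 'r2': 3.0}[r],   # in-cell distance to seg
    'tau'))  # -> 5.0, the shorter candidate
```

With an O(log |∆|)-time `seg_dist`, the minimum over the at most two super-roots preserves the O(log |∆|) query bound.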
Remark. One may wonder why we do not use D ′ instead of D to answer the segment queries. The reason is that the boundaries of cells of D ′ contain bisector super-curves and the query segment τ may intersect a bisector super-curve multiple times, and thus a similar observation as Lemma 1(4) cannot be guaranteed on D ′ .

Finally, we prove Lemma 1(7) in the following lemma.

Lemma 7. The decomposition D can be computed in O(n) time.

Proof. Let D 1 be the decomposition of SPM (s) by the edges of SPT (s). As discussed before, we can easily obtain SPT (s) from SPM (s) and thus obtain D 1 in O(n) time. Further, for each point v ∈ V , we add to D 1 the last edge of the shortest path π(s, v), which is also the edge connecting v to the root of the cell of SPM (s) containing v. Let D 2 be the resulting decomposition, which can be obtained in O(n) time. Note that each edge of T V is also an edge of D 2 .
Since D is a decomposition of P by the edges of T V , D can be obtained from D 2 by removing those edges that are not in D. To this end, we first remove all bisector edges from D 2 . Then, we remove the edges of SPT (s) that are not in T V . This can be done by first marking all edges of T V in D 2 and then removing all unmarked edges of SPT (s) from D 2 . Below we only discuss how to mark all edges of T V in O(n) time since the latter step is trivial.
For each vertex v of V , we mark the edges of π(s, v) in D 2 as follows. We start from v and traverse along π(s, v) from v to s, marking every edge that has not been marked yet; we stop the traversal either when we encounter s or we encounter an edge that has been marked. In this way, every edge of T V is marked exactly once. Since T V has O(n) edges, the above marking algorithm runs in O(n) time.
Thus, the decomposition D can be computed in O(n) time. ⊓ ⊔
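The marking step above can be sketched as follows, assuming the shortest path tree SPT (s) is given by parent pointers. The names `parent`, `V`, and `mark_tree_edges` are illustrative only.

```python
# A minimal sketch of the marking step, assuming SPT(s) is given by
# parent pointers (parent[s] is None). Hypothetical helper names.

def mark_tree_edges(parent, V):
    """Mark every edge of T_V, i.e., every edge on some path pi(s, v), v in V.

    Walk from each v toward the root, stopping as soon as an already
    marked edge is met; each edge is thus marked exactly once, so the
    total running time is linear in the size of the tree.
    """
    marked = set()  # marked edges, stored as (child, parent) pairs
    for v in V:
        u = v
        while parent[u] is not None:
            e = (u, parent[u])
            if e in marked:      # the rest of the path is already marked
                break
            marked.add(e)
            u = parent[u]
    return marked

# Example: a tiny tree rooted at 's' whose leaves 'a', 'b' share the prefix s-x.
parent = {'s': None, 'x': 's', 'a': 'x', 'b': 'x'}
print(sorted(mark_tree_edges(parent, ['a', 'b'])))
# -> [('a', 'x'), ('b', 'x'), ('x', 's')]
```

The early stop is what gives the O(n) bound: the second traversal (from 'b') stops after one step because the edge (x, s) is already marked.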

The Quickest Visibility Queries: The Preliminary Result
In this section, we give our preliminary result on quickest visibility queries, which sets the stage for our improved result in Section 5.
For any subset A of P, a point p ∈ A is called a closest point of A (with respect to s) if d(s, A) = d(s, p).
Given any query point q in P, our goal is to find a shortest path from s to Vis(q). Let q * be a closest point of Vis(q). To answer the query, it is sufficient to determine q * . Thus we will focus on finding q * . Note that if q is visible to s, then q * = s. We can determine whether s is visible to q in O(log n) time by checking whether q is in the cell of SPM (s) whose root is s. In the following, we assume s is not visible to q.
We define the windows of q and Vis(q), which were used for studying the visibility polygons, e.g., [5,10]. Consider an obstacle vertex u that is visible to q such that the two incident obstacle edges of u are on the same side of the line through q and u (e.g., see Fig. 3). Let q(u) denote the first point on ∂P hit by the ray from u along the direction from q to u. Then uq(u) is called a window of q; we say that the window is defined by u. Further, we call qq(u) the extended window of uq(u).
Each window of q is an edge of Vis(q), and thus the number of windows of q is O(K), where K = |Vis(q)|. Further, there must be a closest point q * that is on a window of q [1]. Hence, as in [1], a straightforward algorithm to compute q * is to compute shortest paths from s to all windows of q, and the path of minimum length determines q * . To compute shortest paths from s to all windows, if we apply our segment queries on all windows using Theorem 1, then the total time would be O(K · h · log(n/h)). In the rest of this section, we present an algorithm that can compute q * in O((K + h) log h log n) time, without having to compute shortest paths to all windows. The key idea is to prune some (portions of) windows such that q * is still in the remaining windows and the shortest paths from s to all remaining windows can be computed efficiently.

The Algorithm Overview
As the first step, we compute Vis(q), which can be done in O(K log n) time after O(n + h 2 log h) time and O(n + h 2 ) space preprocessing [9]. Then, we can find all windows and extended-windows in O(K) time. For ease of exposition, we make a general position assumption for q that q is not collinear with any two obstacle vertices. The assumption implies that q is in the interior of P and no two windows are collinear.
Let u 0 be the root of the cell of SPM (s) containing q (if q is on the boundary of multiple cells, then we take an arbitrary such cell). Hence, π(s, u 0 )∪u 0 q is a shortest path π(s, q) from s to q. Note that u 0 must define a window u 0 q(u 0 ) of q [27]. Let u 0 q(u 0 ), u 1 q(u 1 ), . . . , u k q(u k ) be all windows of q ordered clockwise around q. Clearly, k = O(K). For each 0 ≤ i ≤ k, let q i = q(u i ).
Note that the window u 0 q 0 is special in the sense that u 0 is in π(s, q). So we first apply our algorithm in Theorem 1 on u 0 q 0 to compute the closest point q * 0 of u 0 q 0 . Clearly, if q * ∈ u 0 q 0 , then q * = q * 0 . In the following, we assume q * ∉ u 0 q 0 . Let Q = {q, q 1 , q 2 , . . . , q k }. Note that Q contains q but not q 0 . If q * ∈ Q, then we can find q * by computing d(s, p) for all p ∈ Q, which can be done in O(k log n) time using SPM (s). In the following, we assume q * ∉ Q. Note that the above assumption that q * ∉ u 0 q 0 ∪ Q is only for arguing the correctness of our following algorithm, which actually proceeds without knowing whether the assumption is true or not.
Let W = {w 1 , w 2 , . . . , w k }, where w i denotes the extended window qq i for each i ∈ [1, k]. For convenience of discussion, we assume that each w i of W does not contain its two endpoints q and q i (but the endpoints of w i still refer to q and q i ). Since q * ∉ u 0 q 0 ∪ Q, q * must be on an extended window of W . Clearly, q * is also a closest point of W . Since no two windows of q are collinear, no extended-window of W contains another. We assign each window w i ∈ W a direction from q to q i , so that we can talk about its left or right side.
Suppose q * is on w i ∈ W . Since w i is an open segment, by the definition of q * , the shortest path π(s, q * ) must reach q * from either the left side or the right side of w i . Formally, we say that π(s, q * ) reaches q * from the left side (resp., right side) of w i if there is a small neighborhood of q * such that all points of π(s, q * ) in the neighborhood are on the left side (resp., right side) of w i . Let w l i (resp., w r i ) denote the set of points p on w i whose shortest path from s to p is from the left (resp., right) side of w i . Hence, q * is either on w l i or on w r i . Our algorithm will find two points q * l and q * r such that if q * is on w l i for some i ∈ [1, k], then q * = q * l , and otherwise (i.e., q * is on w r i for some i ∈ [1, k]), q * = q * r . In the following, we will only present our algorithm for finding q * l since the case for q * r is symmetric. In the following discussion, we assume q * is on w l i for some i ∈ [1, k]. Note that this assumption is only for arguing the correctness of our algorithm, which actually proceeds without knowing whether the assumption is true.
The rest of this section is organized as follows. In Section 4.2, we discuss some observations, based on which we describe our pruning algorithm in Section 4.3 to prune some (portions of) segments of W such that q * (= q * l ) is still in the remaining segments of W . In Section 4.5, we will finally compute q * l (which will be q * ) on the remaining segments of W . Some implementation details of the algorithm are given in Sections 4.4 and 4.6. Section 4.7 summarizes the overall algorithm.
As will be clear later, our algorithm uses extended windows instead of windows because extended windows can help us with the pruning.

Observations
For any point t ∈ P with t ≠ s, and its shortest path π(s, t), we use t + to denote a point on π(s, t) infinitely close to t (but t + ≠ t). If t is on w l i for some i ∈ [1, k], then t + must be on the left side of w i .
For any segment w of W , we say that w or a sub-segment of w can be pruned if it does not contain q * . Our pruning algorithm, albeit somewhat involved, is based on the following simple observation.
Observation 4 For any point t ∈ w l i for some i ∈ [1, k], if π(s, t + ) intersects any segment w ∈ W or an endpoint of it, then t can be pruned (i.e., t cannot be q * ).
Proof. Let t ′ be a point on π(s, t + ) that lies on some segment w ∈ W or is an endpoint of one. Clearly, t ′ ∈ Vis(q) and d(s, t ′ ) < d(s, t). Thus, t cannot be q * . ⊓ ⊔
Fig. 4. Illustrating the paths π i and the list L Q (e.g., f (6) = 3). Note that the paths could be "below" π 0 , but for ease of exposition, we "flip" them above π 0 , and this flip operation does not change the topology of these paths.
Consider the shortest paths π(s, q i ) for i = 1, 2, . . . , k. To simplify the notation, let π i = π(s, q i ) for each i ∈ [1, k]. In particular, let π 0 = π(s, q) (not π(s, q 0 )). Recall that Q = {q, q 1 , . . . , q k }. The union of all paths π i for 0 ≤ i ≤ k forms a planar tree, denoted by T Q , with root at s. Consider the canonical cycle C(T Q ) as defined in Section 2. Let C Q be the circular list of the points of Q following their relative order in C(T Q ). We further break C Q into a list L Q at q, such that L Q starts from q and all other points of L Q follow the counterclockwise order in C Q . Let q f (1) , q f (2) , . . . , q f (k) denote the points of L Q other than q, in order; thus, f (1), f (2), . . . , f (k) is a permutation of 1, 2, . . . , k. Later in Section 4.6 we will give the implementation details for computing L Q .

Observation 5 For any i ∈ [1, k], π 0 does not contain q i and π i does not contain q.
Proof. Assume to the contrary that π 0 contains q i for some i ∈ [1, k]. Since q is in π 0 , by Observation 1(2), π 0 = π i ∪ q i q. Recall that qq 0 ∈ π 0 . This implies that either qq 0 contains q i or qq i contains q 0 , which further implies that the two windows u 0 q 0 and u i q i are collinear, a contradiction since no two windows are collinear. Hence, π 0 does not contain q i .

Assume to the contrary that π i contains q. Then, since both q and q i are in π i , by Observation 1(2), qq i is in π i . Hence, π i = π 0 ∪ qq i . Recall that u 0 is the root of the cell of SPM (s) containing q, and π 0 = π(s, u 0 ) ∪ u 0 q. Since q is in the interior of P, u 0 q and qq i must be collinear, since otherwise there would be a shorter path from u 0 to q i without containing qq i . Recall that u i ∈ qq i . Since u 0 q and qq i are collinear, the three points q, u 0 , and u i are collinear. But this contradicts our general position assumption that q is not collinear with any two obstacle vertices. ⊓ ⊔

Lemma 9. Suppose π j contains q i with i ≠ j and i, j ∈ [1, k]. If i < j, then w j can be pruned; otherwise, w i can be pruned.
Proof. We first discuss the case i < j. Consider the region D bounded by the closed curve that is the union of w i , w j , and the subpath of π j between q i and q j (e.g., see Fig. 5(a)). By Observation 1(1), π j does not cross π 0 . Since i < j, w j is clockwise from w i with respect to w 0 (which is the last edge of π 0 ). Hence, D must be locally on the left side of w j . Consider any point t ∈ w l j . We show that t cannot be q * . Recall that w j is an open segment, so t is not q or q j . Since t ∈ w l j , the point t + must be in D. By the definition of D, s is not in the interior of D. Hence, π(s, t + ) must intersect the boundary of D. Since π(s, t + ) cannot cross the subpath of π j between q i and q j , π(s, t + ) must intersect w i , w j , or a point of {q, q i , q j }. By Observation 4, t cannot be q * .
The above shows that t cannot be q * . Thus, w j can be pruned. For the case i > j, the argument is similar (e.g., see Fig. 5(b)). Since i > j, D must be locally on the left side of w i . For any point t ∈ w l i , using a similar argument as above, we can show that t cannot be q * . Thus, w i can be pruned.

⊓ ⊔
Lemma 10 provides an algorithm to remove all extended-windows of W that can be pruned by Lemma 9.
Lemma 10. Given SPM (s) and with O(n) time preprocessing, we can find in O(k log n) time all segments of W that can be pruned by Lemma 9.
Proof. The task is to determine those pairs of indices i and j with i ≠ j such that q i is contained in π j , after which we can determine whether w i or w j should be pruned by Lemma 9. Recall that f (1), f (2), . . . , f (k) is a permutation of the indices of {1, 2, . . . , k}. Therefore, equivalently, we can determine those pairs of indices i and j such that q f (i) is contained in π f (j) . We actually do not need to explicitly find all such pairs, as shown below.
A key observation is that if q f (i) is contained in π f (j) for some j < i, then q f (i) is also contained in π f (m) for every m with j < m < i. Based on the above observation, our algorithm works as follows. We consider the points q f (i) in the order of i = 1, 2, . . . , k. Suppose we are about to process q f (i) . The algorithm maintains a stack S of indices in [1, i − 1] in increasing order (from bottom to top of S) such that for each j ∈ [1, i − 1], if j ∉ S, then w f (j) has been pruned. Initially we set S = ∅ before we process q f (1) . In general, our algorithm processes q f (i) for any i ≥ 1 as follows.
If S = ∅, then we push i on top of S and proceed to process q f (i+1) . Otherwise, we first check whether q f (i) is contained in π f (m) , where m is the top index on S.
1. If q f (i) ∉ π f (m) , then q f (i) is not in any path π f (j) with j < m by the above observation. We push i on top of S and then proceed on processing q f (i+1) .
2. If q f (i) ∈ π f (m) , then depending on whether f (i) < f (m), there are two cases.
(a) If f (i) < f (m), then by Lemma 9, we prune w f (m) and pop m from S. Then, we repeat the same algorithm as above (i.e., first check whether S = ∅, and if not, check whether q f (i) ∈ π f (m ′ ) for the new top index m ′ of S).
(b) If f (i) > f (m), then by Lemma 9, we prune w f (i) and proceed on processing q f (i+1) .
The algorithm finishes once q f (k) has been processed. It is not difficult to see that if we can check whether q f (i) is in π f (m) in O(c) time, then the algorithm runs in O(k ·c) time since each index of [1, k] can be pushed or popped from S at most once. In the following, we show that c = O(log n) after O(n) time preprocessing, and this will prove the lemma.
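The processing loop above can be sketched as follows. The oracle `in_path(i, m)`, which decides whether q f (i) lies on π f (m) , stands for the O(log n)-time test developed below; the function name and the representation of f are illustrative only.

```python
# A sketch of the stack-based pruning of Lemma 10 (hypothetical names).
# `f` maps positions 1..k to window indices; `in_path(i, m)` is an
# oracle telling whether q_{f(i)} lies on pi_{f(m)}.

def prune_by_lemma9(k, f, in_path):
    pruned = set()   # indices j such that w_{f(j)} has been pruned
    stack = []       # indices j in [1, i-1] whose windows are not pruned
    for i in range(1, k + 1):
        while True:
            if not stack:
                stack.append(i)
                break
            m = stack[-1]
            if not in_path(i, m):   # q_{f(i)} not on pi_{f(m)}: push i
                stack.append(i)
                break
            if f(i) < f(m):         # Lemma 9: prune w_{f(m)}, pop, repeat
                pruned.add(m)
                stack.pop()
            else:                   # Lemma 9: prune w_{f(i)}
                pruned.add(i)
                break
    return pruned
```

Each index is pushed and popped at most once, so with an O(log n)-time oracle the loop runs in O(k log n) time, matching the lemma.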
First of all, if both q f (i) and q f (m) are in the same cell σ of SPM (s), then q f (i) ∈ π f (m) if and only if q f (i) is on the segment connecting q f (m) to the root of σ, which can be checked in constant time. Otherwise, let v and r denote the roots of the cells of SPM (s) containing q f (i) and q f (m) , respectively; in this case, q f (i) ∈ π f (m) only if q f (i) is on π(s, r), which requires that the lowest common ancestor of v and r in SPT (s) is v. We can build a data structure on SPT (s) in O(n) time such that given any two nodes of the tree, the lowest common ancestor can be found in constant time [3,18].

Hence, we can determine whether q f (i) ∈ π f (m) in O(log n) time, where the O(log n) factor is for locating the cells of SPM (s) containing the two points by point location. The lemma thus follows.

⊓ ⊔
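The constant-time LCA structures of [3,18] are somewhat involved; as a simple illustrative substitute (not the paper's method), binary lifting answers each LCA query on a tree such as SPT (s) in O(log n) time after O(n log n) preprocessing.

```python
# Binary-lifting LCA on a tree given by parent pointers (root -> None).
# This is an illustrative stand-in for the constant-time structures of
# [3,18]; queries take O(log n) time here.

class LCA:
    def __init__(self, parent, root):
        self.parent, self.root = parent, root
        self.depth = {}
        for u in parent:
            self._depth(u)
        self.LOG = max(self.depth.values()).bit_length() + 1
        # up[j][u] is the 2^j-th ancestor of u (the root points to itself)
        up0 = {u: (p if p is not None else root) for u, p in parent.items()}
        self.up = [up0]
        for j in range(1, self.LOG):
            prev = self.up[-1]
            self.up.append({u: prev[prev[u]] for u in parent})

    def _depth(self, u):
        if u not in self.depth:
            p = self.parent[u]
            self.depth[u] = 0 if p is None else self._depth(p) + 1
        return self.depth[u]

    def query(self, u, v):
        if self.depth[u] < self.depth[v]:
            u, v = v, u
        d = self.depth[u] - self.depth[v]
        for j in range(self.LOG):      # lift u to the depth of v
            if d >> j & 1:
                u = self.up[j][u]
        if u == v:
            return u
        for j in reversed(range(self.LOG)):  # lift both just below the LCA
            if self.up[j][u] != self.up[j][v]:
                u, v = self.up[j][u], self.up[j][v]
        return self.up[0][u]
```

For example, on the tree s → {a, d}, a → {b, c}, the query for b and c returns a, and the query for b and d returns s.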
We apply the algorithm in Lemma 10 to prune the segments of W . But to simplify the notation, we assume that none of the segments of W is pruned since otherwise we could re-index all segments of W . So now W has the following property.
Observation 6 For any i ∈ [1, k], q i is not contained in any π j with j ∈ [0, k] and j ≠ i.
Proof. Suppose to the contrary that q i is contained in π j for some j ∈ [0, k] and j ≠ i. On the one hand, due to Observation 5, j ≠ 0. On the other hand, if j ∈ [1, k], then by Lemma 9 either w i or w j would have already been removed from W , a contradiction. ⊓ ⊔
For each i ∈ [1, k], since π 0 does not cross π i , π 0 ∪ π i ∪ w i forms a closed curve that separates the plane into two regions, one locally on the left of w i and the other locally on the right of w i . We let D i denote the region locally on the left side of w i including π 0 ∪ π i ∪ w i as its boundary (it is possible that D i is unbounded). If π 0 ∩ π i is a sub-path including at least one edge, then it is also considered to be in D i . We have the following observation for D i .
Observation 7 If q * is on w l i for some i ∈ [1, k], then π(s, q * ) is in D i .

Proof. Let t = q * , which is on w l i . Then, t + is in the interior of D i . By Observation 4, π(s, t + ) cannot intersect w i . Also, π(s, t + ) cannot cross either π 0 or π i , and s is on the boundary of D i . Hence, π(s, t + ) must be inside D i . Thus, π(s, q * ) is in D i . ⊓ ⊔
Our pruning algorithm mainly relies on the following lemma, whose proof in turn boils down to Observation 4.
Lemma 11. Suppose i and j are two indices with 1 ≤ i < j ≤ k.
1. If f (i) < f (j), then π f (i) does not cross w f (j) and π f (j) does not cross w f (i) .
2. If f (i) > f (j), then either π f (i) crosses w f (j) or π f (j) crosses w f (i) . Further, in the former case (e.g., see Fig. 6(b)), w f (i) can be pruned, and in the latter case (e.g., see Fig. 6(c)), the sub-segment qp of w f (i) can be pruned, where p is the point at which π f (j) crosses w f (i) .

Proof. We first prove the first part of the lemma; so we assume f (i) < f (j). We begin by showing that q f (j) cannot be in the interior of the region D f (i) (e.g., see Fig. 6(a)). Assume to the contrary that q f (j) is in the interior of D f (i) . Then, since w f (j) is locally outside D f (i) near q, w f (j) must cross the boundary of D f (i) at a point p. Depending on whether p ∈ π f (j) , there are two cases.
1. If p ∈ π f (j) , then since p ∈ w f (j) , we obtain π f (j) = π(s, p) ∪ pq f (j) by Observation 1(2). Since q f (j) is in the interior of D f (i) , we further obtain that π f (i) is counterclockwise from π f (j) with respect to π 0 . Thus, we have i > j, a contradiction.
2. If p ∉ π f (j) , then since i < j and π f (j) is counterclockwise from π f (i) with respect to π 0 , π f (j) must cross an interior point p ′ of qp before reaching q f (j) . This implies that π f (j) = π(s, p ′ ) ∪ p ′ q f (j) by Observation 1(2), and thus, π f (j) contains p since p ∈ p ′ q f (j) . Hence, we again obtain a contradiction.
This proves that q f (j) cannot be in the interior of the region D f (i) . By Observations 5 and 6, q f (j) cannot be in π 0 or π f (i) . Since no segment of W contains another, q f (j) cannot be on w f (i) either. Hence, both q and q f (j) are outside the interior of D f (i) . Indeed, since both endpoints of w f (j) are outside the interior of D f (i) , in order for π f (i) to cross w f (j) , π f (i) must cross w f (j) at least twice, which is not possible by Observation 1(2). Similarly, in order for π f (j) to cross w f (i) , it would have to cross w f (i) at least twice, which is not possible.

Fig. 8. The thick (red) segments are the remaining parts of the segments of W after the pruning algorithms (so that q * l must be on the left side of a red segment). Note that the paths could be "below" π 0 , but for ease of exposition, we "flip" them above π 0 , and this flip operation actually does not change the topology of these paths.
This proves that π f (i) does not cross w f (j) and π f (j) does not cross w f (i) , which establishes the first part of the lemma. For the second part of the lemma, we assume f (i) > f (j). By the same analysis as above, q f (i) cannot be on the boundary of D f (j) . Depending on whether q f (i) is in the interior of D f (j) or outside it, there are two cases.
1. Suppose q f (i) is in the interior of D f (j) . Then, since π f (i) and π f (j) do not cross each other and π f (i) does not contain q (by Observation 5), π f (i) must cross w f (j) . Let p be the point of w f (j) where π f (i) crosses. Let D be the open region bounded by w f (i) , qp, and the subpath π ′ of π f (i) between p and q f (i) . Consider any point t on w l f (i) (if any). The point t + must be in the interior of D. Clearly, s is not in D. Hence, π(s, t + ) must cross the boundary of D. Since π(s, t + ) cannot cross π ′ , it must cross either pq or w f (i) . By Observation 4, t can be pruned. Thus, w f (i) can be pruned.
2. Suppose q f (i) is outside D f (j) . Then, by the same analysis as before, π f (j) must cross w f (i) ; let p be the point of w f (i) where π f (j) crosses. Consider the region D bounded by qp, w f (j) , and the subpath of π f (j) between p and q f (j) . Consider any point t on qp ∩ w l f (i) . By a similar argument as above, we can show that t can be pruned. Thus, qp can be pruned.
The lemma thus follows.
⊓ ⊔ For any 1 ≤ i < j ≤ k, we say π i and π j are consistent if f (i) < f (j). By Lemma 11, if π i and π j are not consistent, then we can do some pruning, based on which we present our pruning algorithm in Section 4.3. Figure 8 gives an example showing the remaining parts of the segments of W after the pruning algorithm.
Fig. 9. Illustrating a bundle sequence; the indices shown are f (8), f (13), f (15), f (16), and f (20). Note that the paths could be "below" π 0 , but for ease of exposition, we "flip" them above π 0 , and this flip operation actually does not change the topology of these paths.

A Pruning Algorithm for Pruning the Segments of W
We process the paths π f (1) , π f (2) , . . . , π f (k) in this order. Assume that π f (i−1) has just been processed and we are about to process π f (i) . Our algorithm maintains a sequence of bundles, denoted by B = {B 1 , B 2 , . . . , B g }. A bundle B is defined recursively as follows. Essentially B is a list of sorted indices of a subset of {1, 2, . . . , i − 1}, but the indices are grouped in a special and systematic way.
There are two types of bundles: atomic and composite. If B has only one index, then it is an atomic bundle. Otherwise, B is a composite bundle consisting of a sequence of at least two bundles B ′ 1 , . . . , B ′ g ′ (with g ′ ≥ 2) such that the last bundle B ′ g ′ must be atomic (others can be either atomic or composite); we call the index contained in B ′ g ′ the wrap index of B, and we call B ′ 1 , . . . , B ′ g ′ −1 the children bundles of B. Every composite bundle B with wrap index j has the following three bundle-properties: (1) the indices of B, following their order in B, are sorted increasingly; (2) f max (B ′ b ) < f min (B ′ b+1 ) for each 1 ≤ b < g ′ − 1, where f min (B ′ ) and f max (B ′ ) denote the smallest and largest values of f (·) over the indices of a bundle B ′ , respectively; (3) f (j) = f min (B) and π f (j) crosses w f (j ′ ) for every other index j ′ of B (intuitively, π f (j) "wraps" all other bundles of B, and this is why we call j a "wrap" index). Refer to Fig. 9 for an example.
For convenience, if the context is clear, we also consider a bundle B as a set of sorted indices. So if an index j is in B, we can write "j ∈ B".
Remark. We use the word "bundle" because each index j of B refers to the shortest path π f (j) . Therefore, B is a "bundle" of shortest paths.
In addition, the bundle sequence B = {B 1 , B 2 , . . . , B g } maintained by our algorithm has the following two B-properties: (1) the indices of the bundles of B, following their order in B, are sorted increasingly; (2) f max (B b ) < f min (B b+1 ) for each 1 ≤ b < g.

Fig. 10. Illustrating the proof of Lemma 12.
Proof. We only prove the first part since the second part is similar. ⊓ ⊔

In the following, we discuss our algorithm for processing the shortest path π f (i) , during which B will be updated. Initially when i = 1, we simply set B to contain the only atomic bundle B = {1} and this finishes our processing for π f (1) . In general when i > 1, we do the following.
We first find the index β such that f max (B β ) < f (i) < f max (B β+1 ). Later in Section 4.4 we will give a data structure to maintain the bundle sequence B such that β can be found in O(log n) time.
If β = g (so B β+1 does not exist in this case), then we add a new atomic bundle B g+1 = {i} to the rear of B and we are done with processing π f (i) . Note that the two B-properties are maintained.
Otherwise, we check whether f min (B β+1 ) < f (i). We have the following lemma.
Lemma 12. If f min (B β+1 ) < f (i), then the extended-window w f (i) can be pruned.
Proof. Let B = B β+1 . By the definition of β, we have f (i) < f max (B β+1 ). Thus, f min (B β+1 ) < f (i) < f max (B β+1 ), which also implies that B β+1 is a composite bundle. Let r be the wrap index of B β+1 . Due to f (r) = f min (B), it follows that f (r) < f (i). Since every index of B is smaller than i, r < i. By Lemma 11, π f (r) does not cross w f (i) . Consider the index j ∈ B with f (j) = f max (B). Hence, f (j) > f (i). By the third bundle-property, π f (r) crosses w f (j) , say, at a point p (e.g., see Fig. 10). Consider the region D bounded by w f (r) , pq, and the subpath of π f (r) between p and q f (r) . Since r < i and f (r) < f (i) < f (j), q f (i) must be in D since otherwise π f (r) would cross w f (i) , contradicting Lemma 11(1). Also, by Observation 6, q f (i) is not on π f (r) . Therefore, q f (i) is in the interior of D. This implies that the shortest path from s to any point t of w f (i) must intersect w f (r) , w f (j) , or their endpoints. Therefore, no point of w f (i) can be q * . Thus, w f (i) can be pruned. ⊓ ⊔
In the following, we assume f (i) < f min (B β+1 ) (note that f (i) = f min (B β+1 ) is not possible since i ∉ B). Next, we are going to find all indices j of B such that π f (j) crosses w f (i) . To this end, the following two lemmas are crucial.

Fig. 11. Illustrating the proof of Lemma 13.

Lemma 13. For any index j of B, the following hold.
1. If j is in B b for some b ∈ [1, β], then π f (j) does not cross w f (i) .
2. If j is in B b for some b ∈ [β + 1, g], then either π f (j) crosses w f (i) , in which case w f (j) can be pruned, or π f (i) crosses w f (j) .
3. If j is in B b for some b ∈ [β + 2, g] and π f (j) crosses w f (i) , then π f (j ′ ) crosses w f (i) for any index j ′ of B b ′ with b ′ ∈ [β + 1, b − 1].
4. If j is in B b for some b ∈ [β + 1, g − 1] and π f (j) does not cross w f (i) , then π f (j ′ ) does not cross w f (i) for any index j ′ of B b ′ with b ′ ∈ [b + 1, g].
Proof. We prove the four parts of the lemma separately.
1. Suppose j ∈ B b with b ∈ [1, β]. Then, f (j) ≤ f max (B β ) < f (i). Since j < i, by Lemma 11(1), π f (j) does not cross w f (i) .
2. Suppose j ∈ B b with b ∈ [β + 1, g]. Then, f (j) > f (i) and j < i. Due to Lemma 11(2), either π f (j) crosses w f (i) or π f (i) crosses w f (j) . If π f (j) crosses w f (i) , by Lemma 11(2), w f (j) can be pruned. Otherwise, π f (i) must cross w f (j) .
3. Let j and j ′ be the indices as in the lemma statement. Our goal is to show that π f (j ′ ) crosses w f (i) . Clearly, j ′ < j and f (j ′ ) < f (j). By Lemma 11(1), D f (j ′ ) is contained in D f (j) (e.g., see Fig. 11).
Since f (i) < f (j ′ ) and f (i) < f (j), if we move from q to q f (i) along w f (i) , we will enter the interior of both D f (j) and D f (j ′ ) . If we keep moving, note that we cannot encounter any point in either w f (j ′ ) or w f (j) . Since π f (j) crosses w f (i) , if we move as above on w f (i) , we will encounter a point on π f (j) , which is part of the boundary of D f (j) . Since D f (j ′ ) is contained in D f (j) , the above moving will also encounter a point p on D f (j ′ ) (e.g., see Fig. 11). Due to Observation 6, p cannot be q f (i) . Hence, π f (j ′ ) must cross w f (i) at p. 4. This part is equivalent to the above third part.
⊓ ⊔ For any bundle B in {B β+1 , B β+2 , . . . , B g }, if B has two indices j and j ′ such that w f (i) crosses π f (j) but does not cross π f (j ′ ) , then we say that B is a mixed bundle, which is necessarily a composite bundle.

Lemma 14. Let B = {B ′ 1 , B ′ 2 , . . . , B ′ g ′ } be a mixed bundle of B with wrap index r (i.e., B ′ g ′ = {r}). The following hold.
1. π f (r) crosses w f (i) .
2. If an index j ∈ B ′ b for some b ∈ [1, g ′ − 1] is such that π f (j) crosses w f (i) , then π f (j ′ ) crosses w f (i) for any j ′ ∈ B ′ b ′ and any b ′ ∈ [1, b − 1].
3. If an index j ∈ B ′ b for some b ∈ [1, g ′ − 1] is such that π f (j) does not cross w f (i) , then π f (j ′ ) does not cross w f (i) for any j ′ ∈ B ′ b ′ and any b ′ ∈ [b + 1, g ′ − 1].
4. If a bundle B ′ of B has two indices j and j ′ such that w f (i) crosses π f (j) but does not cross π f (j ′ ) , then we also say that B ′ is a mixed bundle. This lemma applies to B ′ recursively.
Fig. 12. Illustrating the proof of Lemma 14(1): the path π f (i) is marked with red color in (b).
Proof. 1. Suppose j is an index of B such that π f (j) crosses w f (i) . If j = r, then we are done with the proof. In the following, we assume j ≠ r. Hence, f (j) > f (r).
Assume to the contrary that π f (r) does not cross w f (i) . Since r is the wrap index, π f (r) crosses w f (j) , say, at a point p (e.g., see Fig. 12(a)). Consider the region D bounded by π f (j) , pq f (j) , and the subpath of π f (r) between s and p, such that D is on the right side of the directed segment pq f (j) from p to q f (j) . Since f (i) < f (r) < f (j) and w f (i) crosses π f (j) but does not cross π f (r) , q f (i) must be in the region D. Since i > j and i > r, if we go from q f (i) to s along π f (i) , we will get out of D by crossing pq f (j) , after which we get into the interior of the region D f (j) since π f (i) cannot cross π f (r) (e.g., see Fig. 12(b)). If we keep moving towards s along π f (i) , before reaching s we will need to get out of the interior of D f (j) through w f (j) again. However, due to Observation 1(2), since π f (i) already crosses w f (j) somewhere on pq f (j) , it cannot intersect w f (j) again. Thus, we obtain a contradiction.
2. This part follows from a similar proof as the third part of Lemma 13 and we omit the details.
3. This part is equivalent to the second part of the lemma.
4. Using the same analysis, we can prove that the lemma applies to B ′ recursively.
⊓ ⊔ In light of the preceding two lemmas, in the following we will find the indices j of B such that π f (j) crosses w f (i) and then prune w f (j) by Lemma 13(2) (i.e., remove j from B); we say that such an index j is prunable.
Before describing our algorithm, we first discuss an operation that will be used in the algorithm. Consider a composite bundle B = {B ′ 1 , B ′ 2 , . . . , B ′ g ′ } of B. Let r be a wrap index of B, i.e., B ′ g ′ = {r}. Suppose w f (i) crosses π f (r) . Our algorithm will remove r from B and thus from B. This is done by a wrap-index-removal operation. Further, suppose B is the j-th bundle of B, i.e., B = B j . After r is removed, the operation will implicitly insert the bundles B ′ 1 , B ′ 2 , . . . , B ′ g ′ −1 into the position of B in the bundle list B, i.e., after the operation, B becomes B 1 , . . . , B j−1 , B ′ 1 , . . . , B ′ g ′ −1 , B j+1 , . . . , B g . Note that this new bundle list still has the two B-properties.
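As a plain illustration (not the paper's implicit O(log n)-time data structure, which is deferred to Section 4.4), the wrap-index-removal operation can be sketched with bundles represented as nested Python lists: an atomic bundle is a plain index, and a composite bundle is a list of sub-bundles whose last entry is its (atomic) wrap index. This explicit splice takes linear time.

```python
# A sketch of the wrap-index-removal operation on a bundle sequence.
# Representation (illustrative only): atomic bundle = index; composite
# bundle = list of sub-bundles whose last entry is the wrap index.

def remove_wrap_index(bundles, pos):
    """Remove the wrap index of the composite bundle bundles[pos] and
    splice its remaining children B'_1, ..., B'_{g'-1} into its position."""
    B = bundles[pos]
    assert isinstance(B, list) and not isinstance(B[-1], list)  # wrap index is atomic
    return bundles[:pos] + B[:-1] + bundles[pos + 1:]

# Example: remove the wrap index 5 of the middle (composite) bundle.
print(remove_wrap_index([3, [1, 2, 5], 7], 1))  # -> [3, 1, 2, 7]
```

Note that the spliced children keep their relative order, so the two B-properties of the sequence are preserved, as stated above.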
Later in Section 4.4 we will give a data structure to maintain the bundles of B so that each wrap-index-removal operation can be implemented in O(log n) time.
Another operation that is often used in the algorithm is the following. Given any i, j ∈ [1, k], we want to determine whether w f (i) crosses π f (j) . We call it the shortest path segment intersection (or SP-segment-intersection) query. Later in Section 4.6 we will present an algorithm that can answer each such query in O(log h log n) time, after O(n log h) time and space preprocessing.
We are ready to describe our algorithm for removing all prunable indices from B. By Lemma 13(1), each bundle B b of B for 1 ≤ b ≤ β does not contain any prunable index. For each bundle B of B β+1 , B β+2 , . . . , B g in order, we call a procedure prune(B) until the procedure returns "false".
If all indices of B are prunable, then prune(B) will return "true" and the entire bundle B will be removed from B. Otherwise, the procedure will return false. Further, if B is a mixed bundle, then all prunable indices of B will be removed (and the procedure returns false).
The procedure prune(B) works as follows (see Algorithm 1 for the pseudocode). It is a recursive procedure, which is not surprising since the bundles are defined recursively. As a base case, if B is an atomic bundle {j}, then we call an SP-segment-intersection query to check whether π f (j) crosses w f (i) . If yes, we remove the bundle B and return true; otherwise, we return false. If B is a composite bundle {B ′ 1 , B ′ 2 , . . . , B ′ g ′ } with r as the wrap index (i.e., B ′ g ′ = {r}), then we first call an SP-segment-intersection query to check whether π f (r) crosses w f (i) . If not, by Lemma 14(1), B does not have any prunable index and thus we simply return false. If yes, then we call a wrap-index-removal operation to remove B ′ g ′ . Afterwards, for each b ′ = 1, 2, . . . , g ′ − 1 in order, we call prune(B ′ b ′ ) recursively. If prune(B ′ b ′ ) returns false, then we return false (without calling prune(B ′ b ′ +1 )). If it returns true, we remove B ′ b ′ (in fact all children bundles of B ′ b ′ have been removed by prune(B ′ b ′ )). If b ′ = g ′ − 1, then we return true (since all children bundles of B have been removed); otherwise, we proceed on calling prune(B ′ b ′ +1 ).
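The recursion can be sketched as follows, again using the nested-list bundle representation (atomic bundle = index, composite bundle = list ending in its wrap index). The oracle `crosses(j)` stands for the SP-segment-intersection query "does π f (j) cross w f (i) ?"; all names are illustrative.

```python
# A sketch of the recursive procedure prune(B) (cf. Algorithm 1).
# `crosses(j)` is an oracle for the SP-segment-intersection query,
# answered in O(log h log n) time in the paper; here it is supplied
# by the caller.

def prune(B, crosses, removed):
    """Return True iff every index of B is prunable; each pruned index
    is appended to `removed` (its window w_{f(j)} is pruned)."""
    if not isinstance(B, list):      # atomic bundle {j}
        if crosses(B):
            removed.append(B)
            return True
        return False
    if not crosses(B[-1]):           # Lemma 14(1): no prunable index in B
        return False
    removed.append(B[-1])            # wrap-index-removal
    for child in B[:-1]:
        if not prune(child, crosses, removed):
            return False             # stop at the first non-prunable child
    return True

# Example: wrap index 5 and child 1 are prunable, child 2 is not.
removed = []
print(prune([1, 2, 5], lambda j: j in {1, 5}, removed), removed)
```

The early return on the first non-prunable child is what bounds the number of oracle calls by one more than the number of removed indices, matching the running-time analysis below.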
We show that B has the three properties of composite bundles as follows.
1. Indeed, recall that every index of the original B is smaller than i. Note that although some indices have been removed from B, we never change the relative order of any two indices of B. Further, i is the last index of B. Therefore, the indices of B are sorted increasingly by their order in B. Hence, B has the first property.
2. To show the second property, note again that the bundles B ′ 1 , B ′ 2 , . . . , B ′ g ′ −1 , which are from the original B, never change their relative orders, and by the recursive definition of bundles the required ordering is preserved. Thus, the second property also holds on B.

3. For the third property, recall that f (i) < f min (B b ) for every bundle B b of the original B with b > β. Therefore, f min (B) = f (i). Further, for each j ∈ B \ {i}, since j is not prunable (otherwise j would have already been pruned), π f (j) does not cross w f (i) (by Lemma 13(2)). By Lemma 13(2), π f (i) must cross w f (j) . Hence, the third property holds on B.
To see that the updated bundle sequence B maintains the two B-properties, the first property holds by a similar analysis as above. For the second property, we have proved above that f min (B) = f (i). Further, recall that f max (B β ) < f (i). Therefore, we obtain f max (B β ) < f min (B). Consequently, the second property also holds on B.
⊓⊔

To analyze the running time of the above algorithm, let m be the number of indices that have been removed from B. Then, the algorithm makes at most m + 1 SP-segment-intersection queries. To see this, once a query discovers an index j that is not prunable, the algorithm stops without making any more such queries. On the other hand, each wrap-index-removal operation removes an index, and thus the number of such operations is at most m. Further, observe that for each bundle B, whenever we make a recursive call on a child bundle of B, the wrap index of B is guaranteed to be removed. Therefore, the total number of recursive calls is at most m as well. Hence, the running time of the algorithm is O((m + 1) log h log n).
This finishes our algorithm for processing the path π f (i) . The total time for processing π f (i) is O((m + 1) log h log n). Since once an index is removed from B, it will never be inserted into B again, the sum of all such m in the entire algorithm for processing all paths π f (i) for i = 1, 2, . . . , k is at most k. Hence, the total time of the entire algorithm is O(k log h log n).
Again, Fig. 8 gives an example showing the remaining parts of the segments of W after the pruning algorithm.
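For concreteness, the recursion of prune(B) can be sketched as follows. This is a minimal sketch of our own, not the paper's pseudocode: bundles are modeled as nested Python lists (an atomic bundle {j} is [j], a composite bundle is a list of child bundles whose last child [r] holds the wrap index r), and crosses(j) is a hypothetical stand-in for the SP-segment-intersection query "does π f (j) cross w f (i) ?".

```python
def prune(B, crosses):
    """Return True if every index of B is prunable (B fully emptied);
    otherwise remove the prunable prefix of B in place and return False."""
    if len(B) == 1 and not isinstance(B[0], list):  # atomic bundle {j}
        return crosses(B[0])
    # composite bundle: its last child [r] holds the wrap index r
    r = B[-1][0]
    if not crosses(r):        # Lemma 14(1): B has no prunable index
        return False
    B.pop()                   # wrap-index-removal operation
    while B:
        if not prune(B[0], crosses):
            return False      # stop at the first child that is not fully prunable
        B.pop(0)              # child fully pruned; remove it
    return True

# Example: bundle {{1}, {2}} with wrap index 3; only index 1 is not prunable.
B = [[1], [2], [3]]
prune(B, lambda j: j != 1)    # removes the wrap index 3, then stops at {1}
```

Note how the procedure mirrors the text: a failed crossing test at the wrap index aborts immediately, and child bundles are pruned left to right until one survives.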

The Data Structure for Maintaining the Bundles
In this section, we give a data structure for maintaining the bundle sequence B such that our algorithm runs in the time claimed above. In particular, we show that during our algorithm for processing π f (i) , each of the following operations can be performed in O(log k) (= O(log n)) time: inserting a new bundle {i} at the end of B, the bundle-creation operation, the wrap-index-removal operation, and finding the index β. We first present our data structure and then discuss the operations.

Fig. 13. Illustrating the bundle tree T B for the bundle sequence in Fig. 9.

The Data Structure
Let B = {B 1 , B 2 , . . . , B g }. It is not difficult to see that the bundles of B naturally form a tree structure. So we use a bundle tree T B to represent it, as follows. The tree T B has a root γ, whose children from left to right are exactly the bundles B 1 , B 2 , . . . , B g in this order. For each such bundle B, if B is atomic, then B is a leaf of T B and the index of B is stored at the leaf. Otherwise, suppose B = {B ′ 1 , B ′ 2 , . . . , B ′ g ′ } with B ′ g ′ = {r} as its wrap index. Then, we store the wrap index r of B at the node B, and B has g ′ − 1 children from left to right corresponding to B ′ 1 , B ′ 2 , . . . , B ′ g ′ −1 in this order. If one of these bundles is composite, then its subtree is defined recursively. Refer to Fig. 13 for an example.
For each node µ of T B , let T B (µ) denote the subtree rooted at µ. It is easy to see that if µ is a leaf, then T B (µ) represents an atomic bundle; otherwise, T B (µ) represents a composite bundle. Each node of the tree except the root stores an index. Further, the post-order traversal of each subtree T B (µ) gives exactly the sequence of indices in the bundle represented by T B (µ).
We implement the bundle tree T B as follows. In general, consider any internal node µ. We let µ have two pointers f ront and rear pointing to the leftmost and rightmost children of µ, respectively. In this way, from µ, we can access its leftmost and rightmost children in O(1) time. All children of µ are organized by a doubly linked list: Each child of µ maintains a left (resp., right) pointer pointing to its left (resp. right) sibling, so that we can remove a node in constant time; the left (resp., right) pointer of the leftmost (resp., rightmost) child is empty. In this way, from the leftmost child of µ, we can visit all children of µ in order from left to right in linear time.
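The pointer structure just described might be realized as follows (a sketch of our own with hypothetical names, not the paper's code): each node keeps front/rear pointers to its leftmost and rightmost children, and siblings form a doubly linked list.

```python
class Node:
    """A node of the bundle tree: front/rear point to the leftmost and
    rightmost children; left/right are sibling pointers in the doubly
    linked list of children."""
    def __init__(self, index=None):
        self.index = index              # atomic or wrap index (None at the root)
        self.front = self.rear = None
        self.left = self.right = None

    def append_child(self, child):
        child.left, child.right = self.rear, None
        if self.rear is None:
            self.front = child          # first child
        else:
            self.rear.right = child
        self.rear = child

    def remove_child(self, child):
        # O(1) unlink from the doubly linked list of children
        if child.left:
            child.left.right = child.right
        else:
            self.front = child.right
        if child.right:
            child.right.left = child.left
        else:
            self.rear = child.left

    def children(self):
        c = self.front                  # left-to-right traversal
        while c:
            yield c
            c = c.right
```

With these pointers, removing a child and accessing the leftmost or rightmost child are constant-time operations, as required above.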
In order to compute the index β in O(log k) time, we use another balanced binary search tree T f , whose leaves from left to right correspond to the bundles B 1 , B 2 , . . . , B g of B in this order; each leaf stores the range [f min (B), f max (B)] of its bundle B and maintains a cross pointer to the node of T B whose subtree represents B. This completes our data structure for maintaining the bundles of B, which consists of two trees T B and T f . In the following, we show how to use our data structure to implement the operations on B needed in our algorithm for processing π f (i) .

Performing Operations
First of all, finding the index β can be easily done in O(log k) time by searching the tree T f . Further, by using the cross pointer, we can immediately access the node µ of T B whose subtree T B (µ) represents B β .
If β = g, then our algorithm adds B = {i} at the end of B. To implement it, we first insert B to T f as the rightmost leaf with the range [f (i), f (i)], which can be done in O(log k) time. Then, we add the atomic bundle B to the rear of B by adding a leaf to T B as the rightmost child of the root γ. The tree T B can be updated in constant time with the help of the rear pointer of γ.
If β < g, then we check whether f min (B β+1 ) < f (i) (note that we can find the leaf for B β+1 in T f in O(log k) time). If f min (B β+1 ) < f (i), then we are done for processing π f (i) . In the following, we assume f min (B β+1 ) > f (i).
Our algorithm first calls the procedure prune(B β+1 ). To implement it, note that B β+1 is represented by the subtree T B (µ ′ ), where µ ′ is the right sibling of µ. Since we already have the access to µ, by using the right pointer of µ, we can access µ ′ in constant time. The procedure prune(B β+1 ) begins with checking whether B β+1 is atomic, which can be done in constant time by checking whether µ ′ is a leaf.
If yes, then the procedure stops after an SP-segment-intersection query. Further, if B β+1 needs to be removed, then we simply remove the leaf µ ′ , which can be done in constant time (recall that the children of any node of T B are organized by a doubly linked list). Further, we also remove the corresponding leaf from T f in O(log k) time.
If B β+1 is not atomic, let B ′ 1 , B ′ 2 , . . . , B ′ g ′ −1 be its children bundles and let r be its wrap index. We can obtain the wrap index r of B β+1 in constant time since it is stored at the node µ ′ . To implement the wrap-index-removal operation, essentially, we need to replace the node µ ′ by its children. This can be done in constant time by using the left, right, front, and rear pointers of µ ′ . Depending on whether µ ′ is the leftmost or rightmost child of γ, we may also need to update the front or rear pointer of γ, which can also be easily done in constant time. We omit these details.
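The pointer surgery for replacing µ ′ by its children can be sketched as follows (a self-contained toy with dict-based nodes of our own naming, not the paper's code; parent pointers are deliberately not maintained at the children, which is what keeps the replacement constant-time):

```python
def new_node(index=None):
    return {"index": index, "front": None, "rear": None,
            "left": None, "right": None}

def append(parent, child):
    # append child at the rear of parent's doubly linked child list
    child["left"], child["right"] = parent["rear"], None
    if parent["rear"] is None:
        parent["front"] = child
    else:
        parent["rear"]["right"] = child
    parent["rear"] = child

def wrap_index_removal(parent, mu):
    """Replace the internal node mu (which stores the wrap index) by its
    children in parent's child list, in O(1) time; the index stored at mu
    is discarded, i.e., the wrap index is removed."""
    first, last = mu["front"], mu["rear"]
    first["left"], last["right"] = mu["left"], mu["right"]
    if mu["left"]:
        mu["left"]["right"] = first
    else:
        parent["front"] = first     # mu was the leftmost child
    if mu["right"]:
        mu["right"]["left"] = last
    else:
        parent["rear"] = last       # mu was the rightmost child
```

A usage example: splicing an internal node with children 1, 2 out of a root that also has a right sibling 3 yields the child order 1, 2, 3.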
Next, our algorithm calls the procedure prune(B ′ 1 ). We can access the node of T B whose subtree represents B ′ 1 in constant time after the above wrap-index-removal operation (i.e., by following the front pointer of µ ′ ). The algorithm then works recursively. Note that B ′ 1 now becomes a bundle of B. Hence, the above algorithm description on B β+1 applies to B ′ 1 recursively. The algorithm stops when either we are at the end of B or the procedure prune(B ′ ) returns false for a bundle B ′ in the current B. In the former case, we add {i} to the rear of the current list B in the same way as before. In the latter case, we perform a bundle creation operation by creating a composite bundle B including all bundles of the current B after B β as well as {i} in the rear of B. We implement this bundle creation operation as follows.
Note that we have access to the node µ 1 whose subtree represents B ′ after prune(B ′ ) returns false. Let µ 2 be the rightmost child of γ, which can be accessed in constant time from the root γ. Next, in constant time, we construct a subtree T representing the bundle B and use T to replace the subtrees of γ from µ 1 to µ 2 (e.g., see Fig. 14), as follows. First, we create a new node µ 3 storing the single index i. Second, we set the front pointer of µ 3 to µ 1 and set the rear pointer of µ 3 to µ 2 . Third, if µ 1 has a left sibling, denoted by µ 4 , then we set the left pointer of µ 3 to µ 4 and set the right pointer of µ 4 to µ 3 ; otherwise, we set the front pointer of γ to µ 3 . Fourth, we set the rear pointer of γ to µ 3 . Fifth, we set the left pointer of µ 1 to empty.

Fig. 14. Illustrating the bundle creation operation. Left: the bundle tree before the operation. Right: the bundle tree after the operation (the subtree T represents the bundle B).

Finally, we update the tree T f as follows. Recall that the algorithm stops when either we are at the end of B or prune(B ′ ) returns false for a bundle B ′ in the current B. In the former case, we let B = {i}, and in the latter case, we let B denote the new bundle created by the bundle creation procedure. In either case, since every bundle of the original B after B β has either been removed or become part of the new bundle B, we remove the corresponding leaves from T f and insert a new leaf for B, in O(log k) time. In addition, we set the cross pointer of the new leaf to the node µ ′′ of T B whose subtree represents B, which is done in constant time since we have access to µ ′′ after T B is updated (e.g., µ ′′ is µ 3 in the case of Fig. 14).

Computing the Closest Point q *
Recall that we have assumed that q * is on w l i for some i ∈ [1, k], i.e., q * = q * l . According to our pruning algorithm for computing the bundle sequence B, the point q * must be on w l f (j) for some index j ∈ B. In this section, we will compute q * by using the bundle sequence B. For example, in Fig. 8, our goal is to compute q * on the left sides of those (red) thick segments.
Recall that we have defined in Section 3.2 that R i is the region of P bounded by π(s, v i ), π(s, v i+1 ), and α i , where α i is either a bisector super-curve whose endpoints are v i and v i+1 or a chain of obstacle edges. Also recall that R i consists of a tail and a cell.
Let τ be any segment in P such that R i contains π(s, τ ). With the help of the decomposition D proposed in Section 3, we propose a region-processing algorithm to compute π(s, τ ) in the following lemma.
Lemma 16. Suppose τ is a segment of P such that R i contains π(s, τ ) and R i is known. Then π(s, τ ) can be computed in O(log h log n) time, after O(n log h) time and space preprocessing.
Proof. We first present our region-processing algorithm for computing π(s, τ ), and then argue its correctness. Finally, we will analyze the running time of the algorithm.
The algorithm. For each of π(s, v i ), π(s, v i+1 ), and α i , we check whether it crosses τ . Note that this step is not necessary for α i if α i is a chain of obstacle edges, since τ cannot cross any obstacle edge. By Observation 1(2), τ intersects π(s, v i ) (resp., π(s, v i+1 )) at most once.
To avoid the tedious case analysis, by Observation 1(2), we assume that if τ intersects π(s, v i ) or π(s, v i+1 ), then the intersection is a single point (i.e., not a general sub-segment of τ ). Let a (resp., b) be the intersection between τ and π(s, v i ) (resp., π(s, v i+1 )); if there is no intersection, we simply let a (resp., b) refer to ∅. In general, if α i is a bisector super-curve, τ may intersect α i multiple times, and we let c be an arbitrary such intersection; similarly, if there is no intersection let c refer to ∅.
If a = b and a ≠ ∅, then a is a point on the tail of R i . By Observation 1(2), τ can only intersect the tail once. By the definition of R i , for any point t in the cell of R i , d(s, a) ≤ d(s, t). This implies that π(s, a) is π(s, τ ). So we can finish the algorithm in this case.
Otherwise (i.e., a ≠ b or a = b = ∅), if at least one element of {a, b, c} is not ∅, then for each point p of {a, b, c} with p ≠ ∅, we do the following. Observe that p is not on the tail of R i . By the definition of the decomposition D, regardless of whether p is on π(s, v i ), π(s, v i+1 ), or α i , there is a cell ∆ p of D such that ∆ p contains p and ∆ p is in R i . By Lemma 1(4), ∆ p ∩ τ consists of at most two maximal sub-segments τ 1 and τ 2 . Since ∆ p is a simple polygon, we can build a ray-shooting data structure on each of the inside and the outside of ∆ p . Then, we can compute τ 1 and τ 2 in O(log n) time by using ray-shooting queries. Next, we compute π(s, τ 1 ) and π(s, τ 2 ) in O(log n) time by Lemma 1(5). In this way, we obtain at most six candidate paths (for the at most three non-empty points of {a, b, c}) and return the shortest one as π(s, τ ).
The remaining case is when every element of {a, b, c} is ∅, i.e., τ does not cross any of the three parts of ∂R i . In this case, τ is contained in a single cell ∆ of D. We can determine ∆ by locating the cell of D that contains an arbitrary endpoint of τ . Then, we compute π(s, τ ) by Lemma 1(5).
The correctness. Recall that R i contains π(s, τ ). Let t be a closest point of τ (i.e., π(s, τ ) = π(s, t)). Thus, R i contains t. If t is on the tail of R i , then our algorithm correctly computes π(s, τ ) as discussed above. Otherwise, if τ is in R i , then τ must be in a single cell of D. Clearly, our algorithm correctly computes π(s, τ ) in this case. If τ is not in R i , then since R i contains t, τ must cross the boundary of R i . Suppose we move from t along τ until we cross the boundary of R i at a point p. Let ∆ p be the cell of D that is in R i and contains p. By definition, ∆ p also contains t. If p is on π(s, v i ) (resp., π(s, v i+1 )), then since τ intersects π(s, v i ) (resp., π(s, v i+1 )) at a single point, our algorithm correctly computes π(s, τ ). If p is on α i , then all intersections between τ and α i are in ∆ p since α i is contained in ∆ p . Hence, our algorithm also correctly computes π(s, τ ).
The time analysis. The algorithm needs at most six calls of Lemma 1(5), which take O(log n) time. It also has at most two SP-segment-intersection queries for computing the intersections of τ with π(s, v i ) and π(s, v i+1 ). Again, we will show that each such query can be answered in O(log h log n) time with O(n log h) time and space preprocessing.
In addition, if α i is a bisector super-curve, our algorithm also needs to compute an intersection between τ and α i . This can be done in O(log n) time after linear time preprocessing on α i using the ray-shooting data structure on curved simple polygons or splinegons [26] (indeed, each bisector edge of α i is convex, and thus it is straightforward to make α i a splinegon [26], e.g., by the standard technique as detailed in the proof of Lemma 20). Thus, the total preprocessing time on all such curves α i for i = 1, 2, . . . , h * is O(n). Also, we have mentioned before that we need a constant number of ray-shooting queries on the cells ∆ p to determine the at most two sub-segments of ∆ p ∩ τ . The query time is O(log n) and the total preprocessing time on all cells of D is O(n).
Hence, our region-processing algorithm runs in O(log h log n) time, and the total preprocessing time and space is O(n log h).
⊓⊔

Recall that R = {R 1 , R 2 , . . . , R h * }. Due to our general position assumption that q is not collinear with any two obstacle vertices, none of {q, q 1 , . . . , q k } is an obstacle vertex. Then, for each k ′ ∈ [0, k], there is a unique region R i of R whose cell contains q f (k ′ ) , such that the shortest path π f (k ′ ) is contained in R i , and we let z(k ′ ) refer to the index i of R i . Computing z(0), z(1), . . . , z(k) can be done in O(k log n) time by point location queries on the cells of the regions of R.
For any two indices k 1 and k 2 of the regions of R, we define an index range [k 1 , k 2 ] R as follows. Recall that the regions R 1 , R 2 , . . . , R h * are counterclockwise around s. We use [k 1 , k 2 ] R to refer to the set of indices of the regions of R from R k 1 to R k 2 counterclockwise around s.
Next we compute q * on w l f (j) for j ∈ B, by using our region-processing algorithm in Lemma 16. Consider the bundles of B = {B 1 , B 2 , . . . , B g }. For each b with 1 ≤ b ≤ g, we call a procedure path(B b , z(i)), where i is the last index of B b−1 if b ≥ 2 and i = 0 otherwise. Note that given access to B b , we can obtain i in constant time by using our data structure in Section 4.4. Also note that i < j for any index j ∈ B b . The procedure path(B b , z(i)) works as follows.
Depending on whether B b is atomic or composite, there are two cases.
The atomic case. If B b is atomic, let j be the only index of B b . According to the bundle-properties, i < j and f (i) < f (j). So π f (j) and π f (i) are consistent. By Lemma 11(1), D i is contained in D j . Let D be D j minus the interior of D i . We have the following observation.

Observation 9
If q * is on w l f (j) , then π(s, q * ) must be in D (e.g., see Fig. 15).
Proof. Suppose q * is on w l f (j) . Let t = q * . By definition, the point t + is in the interior of D. Since t = q * , π(s, t + ) does not intersect any point of w f (i) or w f (j) and it does not contain q either. Also, π(s, t + ) cannot cross either π f (i) or π f (j) . Hence, π(s, t) must be in D.
⊓⊔

Observation 9 leads to the following lemma.
Lemma 17. If q * is on w l f (j) , then π(s, q * ) is in R k ′ for some index k ′ ∈ [z(i), z(j)] R , and further, any shortest path π(s, w f (j) ) from s to w f (j) is π(s, q * ).
Proof. Suppose q * is on w l f (j) . Since q * is also a closest point of w f (j) , π(s, w f (j) ) must be π(s, q * ). Note that π(s, q * ) must be contained in a region of R. By Observation 9, π(s, q * ) is in D. Hence, π f (j) is counterclockwise from π(s, q * ) with respect to π f (i) around s. Since π f (j) is in R z(j) and π f (i) is in R z(i) , π(s, q * ) must be in R k ′ for some index k ′ ∈ [z(i), z(j)] R . ⊓⊔

For each k ′ ∈ [z(i), z(j)] R , we apply our region-processing algorithm on R k ′ and w f (j) to obtain a path, and we keep the shortest path π among all such paths; let q l f (j) be the endpoint of π on w f (j) . According to Lemma 17, if q * is on w l f (j) , then q * must be q l f (j) . For the purpose of analyzing the total running time of our algorithm, as will be seen later, for each k ′ ∈ [z(i), z(j)] R with k ′ ≠ z(i) and k ′ ≠ z(j), the region-processing algorithm will not be called on R k ′ again in the entire algorithm for computing q * l . On the other hand, we charge the two algorithm calls on R k ′ for k ′ = z(i) and k ′ = z(j) to the index j of B. In this way, the total number of calls to the region-processing procedure in the entire algorithm is O(h * + k) since the total number of indices of B is at most k and the total number of regions R k ′ is h * .
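The loop over the circular index range in the atomic case can be sketched as follows, with the region-processing algorithm of Lemma 16 abstracted into a hypothetical callback process(k) that returns a (length, endpoint) pair for region R k (an assumption of this sketch, not the paper's interface):

```python
def process_range(z_i, z_j, m, process):
    """Call process(k) for every region index k in the circular range
    [z_i, z_j]_R (indices 1..m, counterclockwise) and return the best,
    i.e., shortest, result found."""
    best, k = None, z_i
    while True:
        cand = process(k)
        if best is None or cand[0] < best[0]:
            best = cand
        if k == z_j:
            break
        k = k % m + 1          # next region counterclockwise, wrapping after m
    return best
```

The wrap-around step k = k % m + 1 reflects that the regions R 1 , . . . , R h * are arranged circularly around s.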
The composite case. If B b is composite, the algorithm is more complicated. Let j be the wrap index of B b . Observation 9 and Lemma 17 still hold on j. However, since now the region D also contains a portion of w f (j ′ ) for each j ′ ∈ B b with j ′ ≠ j (e.g., see Fig. 16), D may also contain the shortest path from s to w f (j ′ ) . In order to avoid calling the region-processing procedure on the same region of R too many times, we use the following approach to process w f (j) .
For any two different indices of k ′ and k ′′ in a range [k 1 , k 2 ] R of indices of the regions of R, we say that k ′′ is ccw-larger than k ′ if [k ′ , k ′′ ] R is a subset of [k 1 , k 2 ] R (e.g., if k 1 < k 2 , then k ′ < k ′′ ).
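Under the convention that the region indices 1, . . . , m wrap around counterclockwise, the ccw-larger test reduces to comparing counterclockwise offsets. A small sketch (function names are ours):

```python
def ccw_larger(k1, k2, lo, hi, m):
    """Is k2 ccw-larger than k1 within the circular range [lo, hi]_R of
    the m region indices, i.e., is [k1, k2]_R a subset of [lo, hi]_R?"""
    off = lambda k: (k - lo) % m    # counterclockwise steps from lo
    return off(k1) < off(k2) <= off(hi)
```

For example, with m = 8 and the range [6, 2] R (which walks 6, 7, 8, 1, 2), index 1 is ccw-larger than 7 but not vice versa.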
Define z ij to be the ccw-largest index in [z(i), z(j)] such that w f (j) crosses ∂R z ij (if no such index exists, then let z ij = z(i)). We first compute z ij (to be discussed later). Then, we call the region-processing procedure on R k ′ for all k ′ ∈ [z(i), z ij ] and return the shortest path π that is found; let q l f (j) be the endpoint of π on w f (j) . By the following lemma, if q * is on w l f (j) , then q l f (j) is q * .
Lemma 18. If q * is on w l f (j) , then π(s, q * ) is in R k ′ for some index k ′ ∈ [z(i), z ij ] R , and further, any shortest path π(s, w f (j) ) from s to w f (j) is π(s, q * ).
Proof. By Lemma 17, the lemma statement holds for some k ′ ∈ [z(i), z(j)] R . In the following we show that k ′ is in [z(i), z ij ] R . Assume to the contrary that k ′ is not in [z(i), z ij ] R . Then, k ′ is ccw-larger than z ij and w f (j) does not cross ∂R k ′ . This implies that w f (j) and q are in R k ′ . Since i < j, π f (j) is counterclockwise from π f (i) with respect to π 0 = π(s, q). This implies that z(j) is k ′ , which leads to a contradiction. The lemma thus follows.
The following lemma makes sure that when we process w f (j ′ ) for any other index j ′ of B b with j ′ ≠ j, we do not need to consider the regions R k ′ for the indices k ′ before z ij .

Lemma 19. Suppose z(i) ≠ z ij and q * is on w l f (j ′ ) for an index j ′ ∈ B b with j ′ ≠ j. Then, π(s, q * ) is in R k ′ for some index k ′ that is z ij or ccw-larger than z ij .

Proof. Consider any such j ′ as in the lemma statement. Since j is the wrap index of B b , π f (j) crosses w f (j ′ ) at a point p (e.g., see Fig. 17). By Lemma 11(2), the portion qp of w f (j ′ ) can be pruned, i.e., q * cannot be on qp. Let D 1 be the region bounded by qp, w f (j) , and the subpath π(p, q f (j) ) of π f (j) between p and q f (j) . Note that D 1 ⊆ D f (j ′ ) .
Since q * ∈ w l f (j ′ ) , π(s, q * ) must be in D f (j ′ ) . We claim that π(s, q * ) is in D 2 = D f (j ′ ) \ D 1 (e.g., see Fig. 17). To see this, D 2 is one of the two sub-regions of D f (j ′ ) partitioned by w f (j) ∪ π(p, q f (j) ). Since q * is not on qp, q * must be in the interior of pq f (j ′ ) , which is in D 2 . Hence, to prove that π(s, q * ) is in D 2 , it is sufficient to show that π(s, q * ) does not cross either w f (j) or π(p, q f (j) ). Indeed, π(s, q * ) does not cross π(p, q f (j) ). On the other hand, π(s, q * ) does not intersect w f (j) since otherwise q * would not be a closest point of Vis(q). This shows that π(s, q * ) is in D 2 .
Since z(i) ≠ z ij , z ij is ccw-larger than z(i). By the definition of z ij , w f (j) crosses ∂R z ij , say, at a point t (e.g., see Fig. 17). Hence, the region R z ij contains a shortest path π(s, t) from s to t. Further, since z ij ∈ [z(i), z(j ′ )] R , π(s, t) is also in D 2 . Since both s and t are on the boundary of D 2 , π(s, t) partitions D 2 into two sub-regions and one of them, denoted by D 3 , contains q * . Since π(s, q * ) does not cross π(s, t), π(s, q * ) is in D 3 , which implies that π(s, q * ) must be in some region R k ′ with k ′ being z ij or ccw-larger than z ij . This proves the lemma.

⊓ ⊔
In order to compute the index z ij , we will use an R-region range query. Namely, given the index range [z(i), z(j)] R as well as w f (j) , the query can be used to compute z ij . In Section 4.6 we will give a data structure that can answer each such query in O(log h log n) time (after O(n log h) time and space preprocessing).
After w f (j) is processed as above, q l f (j) is computed. By Lemma 19, to process w f (j ′ ) for the other indices j ′ of B b \ {j}, we only need to consider the indices of the regions of R after z ij . Let B ′ 1 , B ′ 2 , . . . , B ′ g ′ −1 be the children bundles of B b other than its wrap index; we process them in order, starting with the procedure path(B ′ 1 , z ij ).

Remark. For the procedure path(B ′ 1 , z ij ), the above algorithm still works by replacing z(i) by z ij . To argue the correctness, the region D in Observation 9 and Lemma 17 should be defined to be the region D 3 in the proof of Lemma 19 (with respect to j ′ ); then all observations above (after replacing z(i) by z ij ) still hold for path(B ′ 1 , z ij ).
After w f (j) is processed for each j ∈ B, q l f (j) is computed for every j ∈ B; among these at most k points, we return the point q ′ whose value d(s, q ′ ) is the smallest as q * l , which is q * based on our above analysis (and also due to our assumption that q * is on w l i for some i ∈ [1, k]). The total number of calls on the region-processing procedures is O(k + h * ). The total number of R-region range queries is O(k) since each such query is for a composite bundle and there are at most k bundles in total. Hence, the total time of the algorithm is O ((h + k) log h log n). Recall that k ≤ K.

The Algorithm Implementation
In this section, we discuss some implementation details left out above. Specifically, we will give our algorithm for computing the map f (·), and give our data structures for answering the SP-segment-intersection queries and the R-region range queries.

Computing the Map f (·)
Recall the definitions of Q, C Q , and L Q in Section 4.2. Computing the map f (·) is to compute the list L Q = {q, q f (1) , . . . , q f (k) }. Intuitively, we want to order the paths π 1 , . . . , π k counterclockwise around s with respect to π 0 . Our goal is to prove Lemma 8.
We begin with our preprocessing algorithm. Let Σ(s) denote the decomposition of SPM (s) by the edges of SPT (s), which can be constructed in O(n) time after SPM (s) is given. For each cell σ of Σ(s), we pick an arbitrary point in the interior of σ as the representative point of σ. Let X denote the set of all such representative points. Let T X be the tree that is the union of the shortest paths from s to all points of X, and let s be the root of T X . Clearly, T X has O(n) nodes and can be computed in O(n) time once we have Σ(s). The points of X are exactly the leaves of T X . We find a base leaf p * of T X in O(n) time. Then, we compute in O(n) time the list L l (T X , p * ) of all leaves and the cycle L l (T X ). To simplify the notation, let L X = L l (T X , p * ) and let C X = L l (T X ). This finishes our preprocessing, which takes O(n) time.
In the sequel, we discuss our algorithm for computing the list L Q in O(k log n) time. It is sufficient to compute the circular list C Q since we can obtain L Q from C Q in O(k) time by breaking the cycle at q.
Let q 0 = q (temporarily only for the discussion in this subsection). Recall that for each point q i ∈ Q with 0 ≤ i ≤ k, u i is the root of the cell of SPM (s) that contains q i and determines the shortest path π i , and note that q i u i is in a cell of Σ(s), denoted by σ i (which can be determined in O(log n) time by a point location in Σ(s)). If all cells σ 0 , σ 1 , . . . , σ k are distinct, then the order of the points of Q following the relative order of the representative points of the cells σ 0 , σ 1 , . . . , σ k in C X is exactly C Q , which can be computed in O(k log n) time with the help of the circular list C X .
If σ 0 , σ 1 , . . . , σ k are not distinct, then we first compute the circular list of the cells by the above algorithm. To simplify the notation, let σ 0 , σ 1 , . . . , σ k be the circular list. Then, two cells are the same only if they are adjacent in the list. Hence, we can determine in O(k) time the cycle of unique cells σ ′ 0 , σ ′ 1 , . . . , σ ′ k ′ for k ′ < k, and further, for each cell σ ′ i , the set Q(σ ′ i ) of points of Q in σ ′ i can also be determined. Consider a cell σ ′ i and let u ′ i be its root. Let T (σ ′ i ) be the union of the segments u ′ i q ′ for all q ′ ∈ Q(σ ′ i ), and we consider T (σ ′ i ) as a tree rooted at u ′ i . Since u ′ i is an obstacle vertex, u ′ i is a node in T X . If u ′ i is not s, then let p be the parent of u ′ i in T X ; otherwise let p be the child of s in T X that is an ancestor of the base leaf p * (we compute that particular child of s in the preprocessing). Starting from the counterclockwise first child of u ′ i in T (σ ′ i ) with respect to u ′ i p, let L(σ ′ i ) be the list of the children of u ′ i in T (σ ′ i ) ordered counterclockwise. It can be verified that the concatenation of L(σ ′ 0 ), L(σ ′ 1 ), . . . , L(σ ′ k ′ ) is exactly the circular list C Q . Following the above description, the circular list C Q can be computed in O(k log n) time.
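When the cells are distinct, deriving C Q from C X amounts to sorting the points by the positions of their cells' representative points in the circular list and breaking the cycle at q. A simplified sketch (we use a dictionary for the position lookup instead of the O(log n)-time searches of the actual algorithm):

```python
def circular_order(points, q, rep, C_X):
    """Order the points of Q circularly following the order of their
    representative points rep[.] in the circular list C_X, and break
    the cycle at q so that q comes first."""
    pos = {r: i for i, r in enumerate(C_X)}          # position in C_X
    ordered = sorted(points, key=lambda p: pos[rep[p]])
    k = ordered.index(q)
    return ordered[k:] + ordered[:k]                 # rotate to start at q
```

Here rep maps each point to the representative point of its cell σ i ; both names are ours, introduced only for this sketch.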

The SP-segment-intersection Queries
In this section, we present our data structure for answering the SP-segment-intersection queries. Specifically, given any i, j ∈ [1, k], we want to determine whether w f (i) crosses π f (j) , and if yes, compute an intersection. Here we consider a more general problem. Given a point t and a segment τ in P, we want to compute an intersection between τ and the shortest path π(s, t) (or report none if they do not intersect). In the case where t has multiple shortest paths (and thus π(s, t) is not unique), the root r of a cell of SPM (s) should also be provided so that π(s, t) refers to the one that contains rt. But to simplify the discussion, we assume t always has a unique shortest path (the other case can be solved by our algorithm too). We will show that with O(n log h) time and space preprocessing (with a given SPM (s)), each such query can be answered in O(log h log n) time. When h = O(1), the result is optimal.
Recall the definitions of V , Π, T V , and the list L l (T V , v 1 ) = {v 1 , v 2 , · · · , v h * } in Section 3. In the following, we build up our data structure incrementally: We will first show how to answer queries when t is in V , then show how to answer queries when t is a vertex of T V , and finally discuss the general case where t can be any point in P.
We build a complete binary search tree T 1 as follows. The leaves of T 1 from left to right correspond to the points v 1 , v 2 , . . . , v h * of V in this order. In the following we will consider the points of V and the leaves of T 1 interchangeably. Note that each point of V is also a leaf in the tree T V . Consider any node u of T 1 . We maintain a path P (u) of edges of T V , defined as follows. Let T 1 (u) be the subtree of T 1 rooted at u and let S(u) be the set of the leaves of T 1 (u). If u is the root, then P (u) is the common sub-path (i.e., the intersection) of the shortest paths π(s, p) for all p ∈ S(u) (note that π(s, p) is also the path of T V from p to the root s). Otherwise, P (u) is the portion of the common sub-path of π(s, p) for all p ∈ S(u) that is not stored in P (u ′ ) for any ancestor u ′ of u. In this way, for each leaf v i , the edges of P (u) of all nodes u in the path of T 1 from v i to the root are pairwise disjoint and comprise exactly π(s, v i ). Further, for each node u of T 1 , since P (u) is a path of edges, we build a ray-shooting data structure on P (u) by standard techniques as detailed in the following lemma.

Lemma 20. Let P (u) be a path of m edges. With O(m) time and space preprocessing, each ray-shooting query on P (u) (i.e., finding the first point of P (u) hit by a query ray, if any) can be answered in O(log m) time.

Proof. This can be easily done by using the ray-shooting data structure for simple polygons [6,21]. We provide the details below.
Let R be a big rectangle in the plane that contains all edges of P (u). Let p be the topmost point of P (u). We shoot a ray from p upwards until it hits ∂R at a point p ′ . Then, the union of P (u), pp ′ , and ∂R bounds a simple polygon P . We build a ray-shooting data structure on P with O(m) time and space [6,21].
Consider any ray-shooting query for P (u). Given a ray ρ, we compute the first point a of ∂P hit by ρ in O(log m) time by using the ray-shooting data structure on P . If a is on P (u), then we are done and return a as the answer. If a is on ∂R, then we are also done and report that there is no intersection between ρ and P (u). If a is on pp ′ , then we keep shooting the ray after a and using the ray-shooting data structure again to compute the next point a ′ ∈ ∂P hit by the ray. Similarly as above, if a ′ is on P (u), then we are done and return a ′ . If a ′ is on ∂R, then we report that there is no intersection. Note that a ′ cannot be on pp ′ . Hence, we can answer the ray-shooting query on P (u) in O(log m) time by making at most two ray-shooting queries on P .
⊓⊔

We call the information associated with each node u of T 1 the auxiliary data structure at u.

Lemma 21. The tree T 1 , together with all auxiliary data structures, has O(n log h) size and can be constructed in O(n log h) time.

Proof. Recall that the number of edges of T V is O(n). In the following, we first show that each edge e of T V is stored in P (u) of at most two nodes u in each level of T 1 .
Assume to the contrary that there are three such nodes u in the same level of T 1 that all store the same edge e of T V in P (u). Let the three nodes be u 1 , u 2 , u 3 from left to right. If u 1 , u 2 , u 3 are consecutive, then two of them, say, u 1 and u 2 , must share the same parent u. Since e is in both P (u 1 ) and P (u 2 ), by definition, e should be in P (u ′ ) for an ancestor u ′ of u (including u itself). Thus, e should not be in either P (u 1 ) or P (u 2 ), incurring a contradiction.
In the following we assume u 1 , u 2 , u 3 are not consecutive. If two of them share the same parent, then we can apply the same argument as above. Otherwise, we show below that the sibling u ′ of u 2 (i.e., u 2 and u ′ share the same parent) has P (u ′ ) including e. Consequently, the above proof applies.
Let V e be the set of points of V whose paths from s in T V contain the edge e. Note that V e consists of exactly the leaves in the subtree of T V separated by e. According to the definition of T 1 , the leaves of T 1 corresponding to the points of V e are consecutive in T 1 . Since e is in both P (u 1 ) and P (u 3 ), all leaves of the subtrees T 1 (u 1 ) and T 1 (u 3 ) are in V e . Since u 2 is between u 1 and u 3 , u ′ is also between u 1 and u 3 . Thus, all leaves of T 1 (u ′ ) must also be in V e , implying that e is in the common sub-path of π(s, p) for all p ∈ S(u ′ ). Since e is in P (u 2 ), e is not in P (u ′′ ) for any proper ancestor u ′′ of u 2 . Because u ′ and u 2 share the same parent, we obtain that e is also in P (u ′ ).
This proves that each edge e of T V is stored in P (u) of at most two nodes in each level of T 1 . Since T 1 has O(log h * ) levels and h * = O(h), each edge e is stored in O(log h) nodes. Hence, the size of T 1 is O(n log h).
In the following, we construct the tree T 1 in O(n log h) time. The key is to compute P (u) for each node u of T 1 , after which constructing the ray-shooting data structure on P (u) can be done in linear time by Lemma 20. For each edge e of T V , we compute the range [l e , r e ] ⊆ [1, h * ] that consists of all indices i such that e is contained in the path from v i to s in T V . This can be done in O(n) time as follows. For each vertex v of T V , we define the range [l v , r v ] as the set of all indices i such that v is contained in the path from v i to s in T V . We first compute the ranges for all vertices of T V . This can be easily done by a post-order traversal of T V starting from the leaf v 1 . Specifically, during the traversal for each vertex v, if v is a leaf containing v i ∈ V , we set l v = i and r v = i; otherwise, all children of v have been visited and we set l v (resp., r v ) to be the smallest (resp., largest) l v ′ (resp., r v ′ ) of all children v ′ of v. After the traversal, the ranges for all vertices of T V are computed. Then, for each edge e of T V , it is not difficult to see that the range of e is the same as that of v, where v is the endpoint of e such that the path from s to v in T V contains e.
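The post-order range computation can be sketched as follows. This is a minimal sketch under assumed representations: the tree is a children map, each leaf of V carries its index i, and all leaves are assumed to be points of V.

```python
# Sketch of the range computation on the shortest path tree T_V.
# Assumptions: `children` maps each internal vertex to its children,
# and `leaf_index` maps each leaf to the index i of its point v_i.

def compute_ranges(children, leaf_index, root):
    """Return {vertex: (l, r)}: the set of indices i such that the
    vertex lies on the path in T_V from leaf v_i to the root."""
    ranges = {}

    def visit(v):
        if v in leaf_index:                      # leaf storing some v_i
            ranges[v] = (leaf_index[v], leaf_index[v])
        else:                                    # internal vertex
            sub = [visit(c) for c in children[v]]
            ranges[v] = (min(l for l, _ in sub), max(r for _, r in sub))
        return ranges[v]

    visit(root)
    return ranges

# The range of an edge e = (parent, v) equals the range of its lower
# endpoint v (the endpoint farther from the root s).
```

The bottom-up min/max combination is exactly the post-order rule described above.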
Next we compute P (u) for all nodes u of T 1 as follows. We consider the edges of T V following the post-order traversal from v 1 . For each edge e, by using the range [l e , r e ], we find those nodes u of T 1 whose P (u) contains e. This can be done in a similar way as the standard insertion operation in segment trees [4]. Specifically, for each node u of T 1 , let [l u , r u ] be the range consisting of all indices i such that v i is in S(u). Starting from the root of T 1 , for each node u, if [l u , r u ] ⊆ [l e , r e ], then we insert e to P (u); otherwise, for each child u ′ of u, if [l e , r e ] ∩ [l u ′ , r u ′ ] ̸= ∅, then we proceed on u ′ recursively. As with the standard insertion operations on segment trees, each edge e is processed in O(log h) time since the height of T 1 is O(log h). Hence, the total time of the algorithm is O(n log h). Note that since we consider the edges of T V by following the post-order traversal from v 1 , whenever we insert an edge e to P (u), e is always the edge adjacent to the first edge of the current P (u) and e is then appended to P (u) as the new first edge. After the algorithm finishes, the sub-path P (u) is readily available by following the edges in the order they have been inserted and the first edge is the one closest to s.
This proves the lemma. ⊓ ⊔
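The canonical-node insertion described in the proof above can be sketched in the style of a segment-tree update. This is a sketch under assumed representations; in the paper each edge is additionally appended in post-order so that P(u) stays ordered along the path, which the sketch preserves by simple appending.

```python
# Sketch of inserting an edge's range [le, re] into the nodes of T_1.
# Node u stores the list P(u) of edges whose range covers u's whole
# index range but not the range of u's parent.

class Node:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi          # index range [lo, hi] of S(u)
        self.P = []                        # edges of the path P(u)
        self.left = self.right = None

def build(lo, hi):
    """Build a balanced tree over leaf indices lo..hi."""
    u = Node(lo, hi)
    if lo < hi:
        mid = (lo + hi) // 2
        u.left, u.right = build(lo, mid), build(mid + 1, hi)
    return u

def insert(u, e, le, re):
    """Store edge e at the O(log h) canonical nodes for [le, re]."""
    if le <= u.lo and u.hi <= re:          # [lo,hi] ⊆ [le,re]: stop here
        u.P.append(e)
        return
    for child in (u.left, u.right):
        if child and not (re < child.lo or child.hi < le):
            insert(child, e, le, re)
```

Each insertion touches O(log h) nodes, matching the stated O(n log h) total over all O(n) edges.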
We show how to answer SP-segment-intersection queries by using the tree T 1 . We begin with a special case where the query point t is in V , say t = v i for some i ∈ [1, h * ]. Our goal is to compute an intersection between τ and π(s, v i ). To answer the query, we follow the path of T 1 from the root to the leaf v i . For each node u in the path, we use a ray-shooting query to compute an intersection between P (u) and τ . If we find an intersection, then we report the intersection and stop the algorithm; otherwise, we proceed on the next node. The correctness of the algorithm is based on the fact that the union of P (u) of all nodes u in the above path is exactly π(s, v i ). The query time is O(log h log n) since each ray-shooting query takes O(log n) time and the height of T 1 is O(log h).
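The special-case query above (t = v_i) can be sketched as follows. A naive segment-intersection test stands in for the O(log n) ray-shooting structure of Lemma 20, and the dict-based tree layout is an assumption for illustration.

```python
# Sketch of the special-case SP-segment-intersection query: find an
# intersection between a segment tau and pi(s, v_i) by walking T_1
# from the root to leaf v_i, testing tau against P(u) at each node.

def seg_intersect(p, q, a, b):
    """Naive stand-in for ray shooting: the intersection point of
    segments pq and ab, or None (ignoring degenerate overlaps)."""
    d1 = (q[0] - p[0], q[1] - p[1])
    d2 = (b[0] - a[0], b[1] - a[1])
    den = d1[0] * d2[1] - d1[1] * d2[0]
    if den == 0:
        return None
    t = ((a[0] - p[0]) * d2[1] - (a[1] - p[1]) * d2[0]) / den
    s = ((a[0] - p[0]) * d1[1] - (a[1] - p[1]) * d1[0]) / den
    if 0 <= t <= 1 and 0 <= s <= 1:
        return (p[0] + t * d1[0], p[1] + t * d1[1])
    return None

def query(node, i, tau):
    """Walk from the root to leaf v_i; at each node test tau against
    the stored sub-path P(u). O(log h) oracle calls in total."""
    while node is not None:
        for edge in node['P']:            # one O(log n) oracle call in reality
            hit = seg_intersect(*tau, *edge)
            if hit is not None:
                return hit
        lo, hi = node['range']
        if lo == hi:                      # reached the leaf v_i
            break
        mid = (lo + hi) // 2
        node = node['left'] if i <= mid else node['right']
    return None
```

The union of the tested P(u) along the walk is exactly π(s, v_i), mirroring the correctness argument above.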
We then consider a more general case where the query point t is a vertex v of T V (v is not necessarily in V ). To answer the query, we first pick an arbitrary leaf v i in the subtree of T V rooted at v (for this, in the preprocessing step, for each node v ′ of T V we associate with v ′ an arbitrary leaf of its subtree). Clearly, v must be in the path π(s, v i ). We follow the path of T 1 from the root to the leaf v i . For each node u in the path, we compute an intersection between P (u) and τ by using a ray-shooting query. If there is an intersection p, we check whether p is in the sub-path of π(s, v i ) between s and v (see below for more details about this). If yes, then we report p and stop the algorithm. Otherwise, since τ can cross π(s, v i ) at most once, there cannot be any intersection between τ and π(s, v); thus, in this case we simply return none. If there is no intersection between τ and P (u), then we proceed on the next node in the path. If we do not find any intersection after we reach v i , then we report none.
It remains to discuss how to determine whether p is between s and v. The point p is on an edge e of π(s, v i ), which is also an edge of T V . Let v ′ be the endpoint of e that is farther from s in T V . Observe that p is between s and v if and only if v ′ is between s and v. To determine the latter, observe that v ′ is between s and v if and only if v ′ is after v in the canonical list L(T V , v 1 ), which can be determined in O(log n) time (e.g., by binary search) after L(T V , v 1 ) is computed in the preprocessing.
Hence, the total time for answering the query is O(log h log n). In the following, by making use of the above result, we consider the most general case where t can be any point in P. We first present the result for the simple polygon case.

Lemma 22. Let P be a simple polygon of m vertices and let s be a point in P . In O(m) time, we can build a data structure of O(m) size such that, given any query segment τ and any point t in P , an intersection between τ and the shortest path π(s, t) can be computed in O(log m) time (or report none if there is no intersection).

Proof. Given any query segment τ and a point t in P , the query asks for the intersection between τ and the shortest path π(s, t) from s to t in P (or report none if there is no intersection). In the preprocessing, we compute the shortest path tree SPT (s) and the shortest path map SPM (s) from s in P , which can be done in O(m) time [17]. We then build a point location data structure on SPM (s) in O(m) time [14,25]. Further, we compute the canonical cycle C(SPT (s)) in O(m) time.
Let r t be the root of the cell of SPM (s) containing t such that π(s, t) contains r t t. We first check whether r t t intersects τ . If yes, we return the intersection. Otherwise, we proceed to compute the intersection between τ and the shortest path π(s, r t ) from s to r t .
Let a and b be the two endpoints of τ . We first check whether a is on π(s, r t ), as follows. If a ∈ π(s, r t ), then a must be on an edge e of π(s, r t ) ⊆ SPT (s), and further, r t must be a descendant of v e , where v e is the endpoint of e farther from s in π(s, r t ). Therefore, to check whether a is on π(s, r t ), we can use the following approach. First, we determine whether a is on an edge of SPT (s), which can be done in O(log m) time by a point location query on the decomposition of SPM (s) by the edges of SPT (s). If a is not on an edge of SPT (s), then we know that a cannot be in π(s, r t ). Otherwise, we proceed on determining whether r t is a descendant of v e . To this end, observe that r t is a descendant of v e if and only if the lowest common ancestor of v e and r t in SPT (s) is v e , which can be computed in O(1) time after O(m) time preprocessing on SPT (s) [3,18].
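The descendant test above can be sketched as follows. A simple parent/depth walk stands in for the constant-time LCA structure of [3,18]; the map-based tree representation is an assumption for illustration.

```python
# Sketch of the test "r_t is a descendant of v_e in SPT(s)", using the
# characterization LCA(v_e, r_t) = v_e. `parent` maps each vertex to
# its parent (the root maps to itself) and `depth` gives tree depths.

def lca(parent, depth, x, y):
    """Lowest common ancestor by walking up to equal depth, then in
    lockstep until the two walks meet."""
    while depth[x] > depth[y]:
        x = parent[x]
    while depth[y] > depth[x]:
        y = parent[y]
    while x != y:
        x, y = parent[x], parent[y]
    return x

def is_descendant(parent, depth, node, ancestor):
    """node is in the subtree of ancestor iff their LCA is ancestor."""
    return lca(parent, depth, node, ancestor) == ancestor
```

With the O(1)-time LCA structure in place of this walk, the whole check runs in constant time as stated.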
Hence, we can check whether a is in π(s, r t ) in O(log m) time. Similarly we can check whether b is in π(s, r t ) in O(log m) time. If either a or b is on π(s, r t ), then we stop the algorithm and return it as an intersection of τ and π(s, t). Below, we assume neither a nor b is in π(s, r t ). Thus, our goal is to compute the intersection between π(s, r t ) and the interior of τ .
Let r a be the root of the cell of SPM (s) containing a. Define r b similarly. Let r c be the lowest common ancestor of r a and r b in SPT (s) (e.g., see Fig. 18), which can be found in constant time by a lowest common ancestor query. Let F denote the funnel that is the region of P bounded by π(r c , a), π(r c , b), and ab. Note that both π(r c , a) and π(r c , b) are convex with the convexity towards the interior of F . We assume that if we traverse from r c counterclockwise around ∂F we will be on π(r c , a) before arriving at τ (otherwise we exchange the notation a and b). Observe that π(s, r t ) intersects the interior of τ if and only if there is an edge e of π(s, r t ) such that e intersects the interior of τ and one endpoint of e is in F and the other one is outside F (e.g., see Fig. 18). Let v e be the endpoint of e in F and u e be the endpoint of e outside F . Observe that such an edge e exists if and only if r t is between r a and r b counterclockwise in the circular list C(SPT (s)), which can be determined in O(log m) time by binary search on the list.
Further, if such an edge e = u e v e exists, then we compute the intersection e ∩ τ . To determine the edge e, we first find the vertex v e as follows. We find the lowest common ancestor of r t and r a , denoted by v 1 . If v 1 is not r c , then v 1 must be on π(r c , r a ) and v e is v 1 . Otherwise, the lowest common ancestor of r t and r b is v e . After v e is found, e is the first edge in the shortest path π(v e , r t ) from v e to r t , which can be found in O(log m) time using a two-point shortest path query on the vertex pair (v e , r t ) with O(m) time preprocessing [16,19].
⊓ ⊔ Combining all our results above, the following lemma gives our final result.
Lemma 23. Given SPM (s), we can build a data structure of O(n log h) size in O(n log h) time that can answer each SP-segment-intersection query in O(log h log n) time.
Proof. In the preprocessing, we build the tree T 1 , which takes O(n log h) time and space. For each cell ∆ of the decomposition D, since it is a simple polygon, we build the data structure in Lemma 22 with respect to each super-root of ∆; this takes O(n) time and space in total. Given τ and t, our query algorithm works as follows. We first determine the cell ∆ of D that contains t. We also determine the super-root r of ∆ such that π(s, t) = π(s, r) ∪ π(r, t). All this can be done in O(log n) time. Note that r is a vertex in T V . Hence, we can compute an intersection between τ and π(s, r) in O(log h log n) time using the tree T 1 . If there is an intersection, we return it and stop the algorithm. Otherwise, we compute an intersection between τ and π(r, t) in the cell ∆. To this end, we first compute the at most two sub-segments of τ ∩ ∆ by using ray-shooting queries inside and outside ∆. For this, in the preprocessing, for each cell ∆ of D, we compute ray-shooting data structures on both the inside and outside of ∆ (e.g., by similar techniques as in Lemma 20). Computing these ray-shooting data structures on all cells of D takes O(n) time. Then, for each sub-segment τ ′ of τ ∩ ∆, we compute the intersection (if any) between τ ′ and π(r, t) in O(log n) time by Lemma 22. Hence, the overall query algorithm runs in O(log h log n) time.
The lemma thus follows. ⊓ ⊔

The R-Region Range Queries
In the following, we give our data structure for answering the R-region queries. Specifically, given a range [i, j] R of indices of the regions of R and an extended-window τ ∈ W , the query asks for the ccw-largest index r ∈ [i, j] R such that τ crosses the region boundary ∂R r (or report none if such an index does not exist). We actually consider a more general query where τ can be any segment in P (not necessarily in W ). Our goal is to show the following result.
Lemma 24. Given SPM (s), we can build a data structure in O(n log h) time and space such that each R-region range query can be answered in O(log h log n) time.
Recall that for each region R r ∈ R, its boundary ∂R r consists of three portions: π(s, v r ), π(s, v r+1 ), and α r .
We build a complete binary search tree T 2 as follows. Like T 1 in Section 4.6.2, the leaves of T 2 from left to right correspond to v 1 , v 2 , . . . , v h * . For each node u of T 2 , we construct the same auxiliary data structure P (u) as in T 1 . In addition, we build another auxiliary data structure U (u) for each internal node u of T 2 as follows.
Lemma 25. The tree T 2 , together with its auxiliary data structures, has O(n log h) size and can be constructed in O(n log h) time.

Proof. We use T 2 (u) to denote the subtree of T 2 rooted at u and use S(u) to denote the set of the leaves of T 2 (u). As in T 1 in Section 4.6.2, each point of V corresponds to a leaf of S(u) and is also a leaf of T V . Let p u be the point of the path P (u) in T V that is farthest from s. In the case where P (u) is empty, let p u be p u ′ for the parent u ′ of u if u is not the root, and p u = s otherwise. Note that p u is a node of T V . Let U be the union of the paths of T V from p u to all leaves of S(u); the auxiliary data structure U (u) consists of U together with the bisector super-curves α c for all indices c such that both v c and v c+1 are in S(u). Since the total size of all bisector super-curves is O(n), the space of U (u) over all nodes u of T 2 used to store the bisector super-curves is O(n log h).
Combining the above discussions, the size of T 2 is O(n log h).
For each node u of T 2 , constructing U (u) can be done in linear time in the size of U (u) as follows. Let v a , v a+1 , . . . , v b be the leaves in T 2 (u). We consider the paths from p u to these leaves in T V one by one in a bottom-up manner. Initially we let U (u) contain only the path π(p u , v a ). In general, suppose π(p u , v c−1 ) has been considered (initially, c − 1 = a). Then we process π(p u , v c ) as follows. We traverse on π(p u , v c ) from v c to p u in T V until we meet an obstacle vertex that is on the current U (u), and then add all traversed edges of π(p u , v c ) to U (u). We continue the algorithm as above until π(p u , v b ) is processed. Finally, for each c ∈ [a, b − 1] (if a < b), if α c is a bisector super-curve, then we add α c to U (u). The above algorithm constructs U (u) in linear time.
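The union-of-paths construction can be sketched as follows; the parent map is an assumed representation. The early stopping at already-visited vertices is exactly what makes the running time linear in the output size.

```python
# Sketch of building the union part of U(u): the union of the paths in
# T_V from p_u down to the leaves v_a, ..., v_b. Each upward walk stops
# at the first vertex already in the union, so every edge is traversed
# exactly once.

def union_of_paths(parent, p_u, leaves):
    """Return the edges of T_V in the union of the paths from p_u to
    the given leaves, in overall time linear in the output size."""
    edges, seen = [], {p_u}
    for leaf in leaves:                   # v_a, v_{a+1}, ..., v_b in order
        v = leaf
        while v not in seen:              # walk up until we rejoin U(u)
            seen.add(v)
            edges.append((parent[v], v))  # edge of T_V on this path
            v = parent[v]
    return edges
```

In the paper the stopping condition is a vertex on the current U(u); a visited-set lookup plays that role here.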
Then, we construct the ray-shooting data structures for the cells of U (u), which can also be done in linear time in the size of U (u).
Since the total size of U (u) of all nodes u of T 2 is O(n log h), the total time for constructing the second auxiliary data structures is O(n log h). Therefore, T 2 can be computed in O(n log h) time.
⊓ ⊔ By using the tree T 2 , the following lemma gives our query algorithm, which proves Lemma 24.
Lemma 26. Each R-region range query can be answered in O(log h log n) time.
Proof. Given a range [i, j] R of indices of the regions of R and a segment τ in P, we want to compute the ccw-largest index r ∈ [i, j] R such that τ crosses the boundary ∂R r (if no such index r exists, then we return none). Let r * be the sought index.
Recall that both i ≤ j and i > j are possible. We first consider the case where i ≤ j. In this case, [i, j] R consists of {i, i + 1, . . . , j}. We begin with finding the lowest common ancestor of the two leaves v i and v j in T 2 , denoted by w. Our algorithm consists of four procedures.
The first procedure. The first procedure considers the nodes in the path of T 2 from the root to w. For each node u in the path, we check whether τ crosses P (u) by a ray-shooting query. If yes, then τ crosses the shortest path π(s, v j ) and thus crosses ∂R j . Hence, we can simply return r * = j and stop the algorithm. Otherwise, we proceed on the next node until w is considered.
After w is considered, if r * is not found, then we go to the second procedure.
The second procedure. The second procedure considers the nodes in the path of T 2 from u j up to w in a bottom-up fashion. For each node u, there are three cases.
We check whether τ intersects P (u ′ ). If yes, we return r * as the rightmost index of the leaves in the subtree T 2 (u ′ ). Otherwise, we check whether τ intersects U (u ′ ) by first locating the cell C of U (u ′ ) containing an endpoint of τ and then calling a ray-shooting query on C. If not, we proceed on the parent of u (not u ′ ). Otherwise, we set u = u ′ and go to the fourth procedure.
The third procedure. In this procedure, we consider the vertices on the path of T 2 from the left child of w down to u i , which is symmetric to the second procedure. For each node u, there are two cases.
1. If u ̸= u i , we first check whether τ intersects P (u) by a ray-shooting query. If yes, we return the index of the rightmost leaf of T 2 (u) as r * . Otherwise, if u i is in the right subtree of u, then we proceed on the right child of u. If u i is in the left subtree of u, let u ′ be the right child of u (if u does not have a right child, then we proceed on the left child of u). We first check whether τ intersects P (u ′ ). If yes, we return the index of the rightmost leaf of T 2 (u ′ ) as r * . Otherwise, we check whether τ intersects U (u ′ ). If not, we proceed on the left child of u. Otherwise, we set u = u ′ and go to the fourth procedure.
2. If u = u i , then we check whether τ intersects P (u). If yes, we return r * = i. Otherwise, we return none, i.e., τ does not intersect ∂R r for any r ∈ [i, j] R .
The fourth procedure. In the fourth procedure, we have a vertex u of T 2 such that τ does not intersect P (u) but intersects U (u). Starting from u, the procedure works as follows. If u is a leaf, then we simply return the index of the leaf as r * . Otherwise, let u ′ be the right child of u. If τ intersects P (u ′ ), then we return r * as the index of the rightmost leaf of T 2 (u ′ ). Otherwise, we check whether τ intersects U (u ′ ). If yes, we set u to u ′ and proceed as above. Otherwise, we set u to the left child of u and proceed as above.
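The fourth-procedure descent can be sketched as follows, with `hits_P` and `hits_U` as assumed stand-ins for the ray-shooting tests on P(u′) and U(u′); the dict-based tree layout is also an assumption.

```python
# Sketch of the fourth procedure: u is a node such that tau crosses
# U(u) but not P(u); descend to the rightmost leaf whose region
# boundary is crossed, always trying the right child first.

def fourth_procedure(u, hits_P, hits_U, rightmost_leaf_index):
    while not u['is_leaf']:
        r = u['right']
        if r is not None and hits_P(r):        # tau crosses a shared sub-path
            return rightmost_leaf_index(r)
        if r is not None and hits_U(r):        # answer is in the right subtree
            u = r
        else:                                  # answer is in the left subtree
            u = u['left']
    return u['index']
```

The descent visits one node per level, so it makes O(log h) ray-shooting tests, matching the stated bound.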
For the running time of the algorithm, observe that the algorithm only visits O(log h) vertices of T 2 and makes O(log h) ray-shooting queries as the height of T 2 is O(log h). Each ray-shooting query is either on P (u) or U (u) for some node u of T 2 , which runs in O(log n) time. Hence, the total time of the algorithm is O(log h log n).
The above gives the query algorithm for the case i ≤ j. If i > j, then the index range [i, j] R consists of {i, i + 1, . . . , h * , 1, 2, . . . , j}. For this case, we first apply the above query algorithm on the range [1, j] R . If the query does not return none, then we return r * as the answer to the original query on [i, j] R . Otherwise, if α h * is a bisector super-curve, then we check whether τ intersects α h * by a ray-shooting query; if there is an intersection, then we return r * = h * . Otherwise, we apply the above query algorithm on the range [i, h * ], and the result of the query is the answer to the original query on [i, j] R . The total time of the query algorithm is still O(log h log n).
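The circular-range reduction above can be sketched as follows; `query_linear` and the α_{h*} test are assumed stand-ins for the tree query and the extra ray-shooting query.

```python
# Sketch of reducing a circular index range [i, j]_R with i > j to two
# linear queries: [1, j] first (these are the ccw-largest candidates),
# then the super-curve alpha_{h*}, then [i, h*].

def circular_query(i, j, h_star, query_linear, crosses_alpha_hstar):
    """Return the ccw-largest crossed index in [i, j]_R, or None."""
    if i <= j:                          # ordinary (non-wrapping) range
        return query_linear(i, j)
    r = query_linear(1, j)              # indices 1, ..., j first
    if r is not None:
        return r
    if crosses_alpha_hstar():           # then alpha_{h*} if it exists
        return h_star
    return query_linear(i, h_star)      # finally indices i, ..., h*
```

Each branch makes O(1) linear queries, so the O(log h log n) bound carries over unchanged.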
The lemma thus follows. ⊓ ⊔

Wrapping Things Up
We summarize our overall result in the following theorem.

Theorem 2. With O(n log h + h 2 log h) time and O(n log h + h 2 ) space preprocessing, given any query point q in P, a quickest visibility query can be answered in O((K + h) log h log n) time, where K is the size of Vis(q).

Proof. In the preprocessing, we compute the visibility polygon query data structure in [9] for computing Vis(q), which is of O(n + h 2 ) size and can be built in O(n + h 2 log h) time. The rest of the preprocessing work includes building the decomposition D and the segment query data structure as in Section 3, and performing the preprocessing in Lemmas 8, 10, 16, 23, and 24; this work takes O(n log h) time and space in total. Given any query point q, we first compute Vis(q) in O(K log n) time by the query algorithm in [9]. Then, we obtain the extended window set W . Let k = |W |, which is O(K). Next, we compute a closest point q * on a segment of W in O((k + h) log h log n) time. To this end, we compute a set S of O(k) candidate points as follows. We first add q, q 1 , . . . , q k to S. Then, we compute the closest point q * 0 of u 0 q 0 and add q * 0 to S. Next we compute the point q * l in O((k + h) log h log n) time by using our pruning algorithm in Sections 4.3 and 4.5. By a symmetric algorithm, we can also compute q * r . We add both q * l and q * r to S. By our analysis, q * must be one of the points of S. Since |S| = O(k), we can find q * in S in additional O(k log n) time by using the shortest path map SPM (s).
⊓ ⊔

In fact, we have the following more general result, which might have independent interest.

Theorem 3. With O(n log h) time and space preprocessing, given any set S of k segments in P that all intersect at a common point, we can compute a closest point p * on the segments of S in O((k + h) log h log n) time.

Proof. The preprocessing step is the same as in Theorem 2 except that the visibility polygon query data structure [9] is not necessary any more. Hence, the total preprocessing time and space is O(n log h). Given a set S of k segments intersecting at the same point, denoted by p, we break each segment at p to obtain two segments and we still use S to denote the new set of at most 2k segments. Next we compute a closest point p * on the segments of S. To do so, we can apply the same algorithm as in Theorem 2 for computing q * on the extended-windows of W . Indeed, the only key property of the segments of W we need is that all segments of W have a common endpoint at q. Since all segments of S have a common endpoint p, the same algorithm still works (some degenerate cases may happen, but can be handled easily). ⊓ ⊔

The Quickest Visibility Queries: The Improved Result
In this section, we reduce the query time of Theorem 2 to O(h log h log n), independent of K. The key idea is the following. First, we show that for any query point q, there exists a subset S(q) of O(h) windows such that a closest point q * is on a segment of S(q). Second, we give an algorithm that can compute S(q) in O(h log n) time, without computing Vis(q). Our idea relies on the extended corridor structure [8,9,11] and on modifying the query algorithm for computing Vis(q) in [9]. Below we first review the extended corridor structure in Section 5.1. We then introduce the set S(q) in Section 5.2. Finally we present our algorithm for computing S(q) in Section 5.3.

The Extended Corridor Structure
The corridor structure has been used for solving shortest path problems, e.g., [7,23]. Later some new concepts such as "bays," "canals," and the "ocean" were introduced, e.g., [8,11], referred to as the "extended corridor structure". We review it here for the completeness of this paper and also for introducing the notation that will be needed later.

Fig. 19. Illustrating a triangulation of the free space among two obstacles and the corridors (with red solid curves). There are two junction triangles indicated by the large dots inside them, connected by three solid (red) curves. Removing the two junction triangles results in three corridors.

If the hourglass H C of a corridor C is open, consider two consecutive vertices c and d on one of its two sides such that cd is not an obstacle edge (see the left figure in Fig. 20). Both c and d must be on the same side of the corridor C. The region enclosed by cd and the side of C between c and d is called a bay. We call cd the gate of the bay, which is a common edge of the bay and M.
If the hourglass H C is closed, let x and y be the two apices of its two funnels. Consider two consecutive vertices c and d on a side of a funnel such that cd is not an obstacle edge. If c and d are on the same side of the corridor C, then cd also defines a bay. Otherwise, one of c and d must be a funnel apex, say, c = x, and we call xd a canal gate (see Fig. 20). Similarly, there is also a canal gate at the other funnel apex y, say yz. The region of C bounded by the two canal gates xd and yz that contains the corridor path is the canal of H C .
Each bay or canal is a simple polygon. While the total number of all bays is O(n), the total number of all canals is O(h) since the number of corridors is O(h). The two obstacle vertices of each bay/canal gate are called gate vertices.

Defining the Window Set S(q)
We consider the source point s as an obstacle and build the extended corridor structure. This means that s is on the boundary of the ocean M and thus is not in any bay or canal.
Consider any query point q. For any bay, if q is not in the bay, since the bay has only one gate, q cannot see any point outside the bay "through" its gate. Although a canal has two gates, the next lemma, proved in [11], gives an important property that if q is outside a canal, then q cannot see any point outside the canal through the canal (and its two gates).
If u is on ∂M, then q ′ u is in M; this is because q ′ u cannot traverse through the interior of a canal due to the opaque property of Lemma 27. If we move from q ′ to q(u) on qq(u), since w u is not an ocean window, after we pass u, we must move into the interior of a bay/canal A, and further, regardless of whether A is a bay or a canal, we will never get out of A due to the opaque property, which implies that w u = uq(u) must be in A. In this case, we say that w u is an inner-bay/inner-canal window defined by A (we use "inner" because w u is in A).
If u is not on ∂M, then u is a non-gate vertex of a bay/canal A. This implies that if we move from q ′ to u on qq(u), we must cross a gate of A. Again, regardless of whether A is a bay or a canal, w u = uq(u) must be in A. In this case, we also call w u an inner-bay/inner-canal window (e.g., see Fig. 22 and Fig. 23).
As a summary, a window w u may be an ocean window, an outer-bay/canal window, or an inner-bay/canal window.
A window of q is called a closest window if it contains a closest point q * of Vis(q).
The set S(q) is defined as follows. We first add all O(h) ocean windows to S(q). We will show several observations. First, no inner-bay window can be a closest window. Second, among all inner-canal windows defined by the same canal, there are at most two that can be closest windows and we add them to S(q). Since there are O(h) canals, S(q) has O(h) inner-canal windows. Third, among all outer-bay windows, there are at most two that can be closest windows; we add them to S(q). Fourth, among all outer-canal windows, there are at most four that can be closest windows; we add them to S(q). This finishes the definition of S(q). In summary, S(q) has O(h) ocean windows, O(h) inner-canal windows, at most two outer-bay windows, and at most four outer-canal windows. Thus, the size of S(q) is O(h).
For a window w u = uq(u), we assume it is directed from u to q(u) and also assume qq(u) is directed from q to q(u).
Observation 10 Suppose w u is a closest window, i.e., q * ∈ w u . If the two obstacle edges incident to u are on the left (resp., right) side of qq(u), then the shortest path from s to q * must be from the left (resp., right) side of w u .
Proof. As discussed before, π(s, q * ) is either from the left or from the right side of w u . Without loss of generality, we assume that the two obstacle edges incident to u are on the left side of qq(u).
Assume to the contrary that π(s, q * ) is from the right side of w u . Let p be a point on π(s, q * ) infinitely close to q * but p ̸= q * . Since the two obstacle edges incident to u are on the left side of qq(u), p is visible to q, i.e., p ∈ Vis(q). Since d(s, p) < d(s, q * ), q * cannot be a closest point of Vis(q), a contradiction.
⊓ ⊔ Lemma 28. None of the inner-bay windows is a closest window.
Proof. Suppose w u = uq(u) is an inner-bay window defined by a bay A. By definition, w u is in A.
Assume to the contrary that w u is a closest window. Without loss of generality, assume the two obstacle edges of P incident to u are on the left side of qq(u) (e.g., see Fig. 22). Since both u and q(u) are on the boundary of A, w u partitions A into two sub-polygons and one of them contains the only gate g of A. Let A ′ be the sub-polygon that does not contain g. Observe that A ′ must be locally on the left side of w u . By Observation 10, since q * ∈ w u , π(s, q * ) must be from the left side of w u , implying that p must be in the interior of A ′ , where p is a point on π(s, q * ) infinitely close to q * . Clearly, s is not in A ′ . Thus, π(s, p) must cross w u , but this is not possible since q * is on w u . Thus, w u cannot be a closest window.
⊓ ⊔

Fig. 22. Illustrating an inner-bay window w u = uq(u) in a bay A.

Lemma 29. For any canal A that defines an inner-canal window w u , if u is not an endpoint of the corridor path of A, then w u cannot be a closest window.
Proof. Since w u is an inner-canal window defined by A, w u must be in A and both u and q(u) are on the boundary of A. Further, qq(u) has a point q ′ ∈ M and q ′ u crosses a gate g of A. Let g = xd such that x is the endpoint of the corridor path of A on g (e.g., see Fig. 23). Let C be the corridor that defines the canal A.
Assume without loss of generality that the two obstacle edges of P incident to u are on the left side of qq(u). Since u is not x, according to the results in [11] (see the proof of Lemma 3), u and q(u) must be on the same side of C that contains d (e.g., see Fig. 23). This implies that w u partitions A into two sub-polygons, one of which contains both gates of A; let A ′ be the sub-polygon that does not contain the gates. Then, as in the proof of Lemma 28, A ′ must be locally on the left side of w u , and by a similar analysis we can show that w u cannot be a closest window.

⊓ ⊔
Since each canal has one corridor path, the preceding lemma implies that every canal can define at most two inner-canal windows that are possibly closest windows.
Consider a bay A with gate g that defines an outer-bay window w u . By definition, qu is in A. Let u 1 be the vertex of A such that qu 1 is in the shortest path in A from q to an endpoint of g; similarly, define u 2 with respect to the other endpoint of g.
Lemma 30. If w u is an outer-bay window defined by A and u is neither u 1 nor u 2 , then w u cannot be a closest window.
Proof. By the definitions of u 1 and u 2 , since A is a simple polygon and u is neither u 1 nor u 2 , q(u) must be in ∂A \ {g}. Hence, the window w u partitions A into two sub-polygons and one of them contains g. Let A ′ be the sub-polygon that does not contain g. Then, by using the same analysis as in Lemma 28, w u cannot be a closest window.
⊓ ⊔ Consider a canal A that defines an outer-canal window w u . This case is similar to the above bay case except that we need to consider both gates of A. Again, qu is in A. Define u 1 , u 2 , u 3 , and u 4 similarly as in the bay case but with respect to the four gate vertices of A, respectively.
Lemma 31. If w u is an outer-canal window defined by A and u is not in {u 1 , u 2 , u 3 , u 4 }, then w u cannot be a closest window.
Proof. By the definitions of u i for 1 ≤ i ≤ 4, since A is a simple polygon and u ̸∈ {u 1 , u 2 , u 3 , u 4 }, q(u) must be on ∂A and q(u) is not on a gate of A. Further, it can be verified that the window w u partitions A into two sub-polygons and one of them contains both gates of A. Let A ′ be the sub-polygon that does not contain the gates of A. Then, by using the same analysis as in Lemma 28, w u cannot be a closest window.

⊓ ⊔
The above discussions lead to the following lemma.
Lemma 32. Given any query point q, there is a set S(q) of windows of q such that |S(q)| = O(h) and S(q) contains a closest window.

Computing the Window Set S(q)
In this section we present our algorithm for computing S(q), by modifying the query algorithm in [9] for computing Vis(q). Our result is summarized in the following lemma.
Lemma 33. With O(n + h 2 log h) time and O(n + h 2 ) space preprocessing, given any query point q in P, we can compute the set S(q) in O(h log n) time.
We first do the same preprocessing as in [9], which takes O(n + h 2 log h) time and O(n + h 2 ) space. In the following, we give our query algorithm for computing S(q). Depending on whether q is in the ocean M, a bay, or a canal, there are three cases. In each case, we will first briefly review the algorithm in [9] for computing Vis(q) and then modify it to compute S(q).

The Ocean Case
Suppose q is in M. The algorithm in [9] first computes the region of M visible to q, denoted by Vis(q, M), which is also the visibility polygon of q in M due to the opaque property of canals. Then, the algorithm computes the regions in all bays and canals visible to q. To this end, it traverses the boundary of Vis(q, M). Whenever a gate g of a bay/canal A is encountered, the region of A visible to q through e is computed, where e is a maximal portion of g on the boundary of Vis(q, M). The visible regions computed above for all such e's are pairwise disjoint. Hence, Vis(q) is simply the union of Vis(q, M) and the visible regions in all bays and canals.
We modify the above algorithm to compute S(q), as follows. The algorithm in [9] computes Vis(q, M) by using the visibility complex [29,30]. More specifically, it uses the approach of crossing faces [30]: all rays originating from q in the plane define a curve γ in the visibility complex, and each intersection of γ with the boundary of a cell of the visibility complex corresponds to an outer tangent in M from q to a convex chain of ∂M. Note that such tangents correspond exactly to our ocean windows. If we traverse the curve γ in the visibility complex, each such intersection can be computed in O(log n) time. Hence, if there are h′ convex chains of ∂M that are visible to q, then the endpoints of the maximal sub-chains ξ of these convex chains that are visible to q can be computed in O(h′ log n) time by using the approach of crossing faces. Note that h′ = O(h) [9]. After this, all ocean windows have been computed.
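The tangents underlying the ocean windows can be illustrated with a minimal sketch: for a point q outside a convex chain (here, a full convex polygon for self-containedness), a vertex is a tangent point exactly when both of its neighbors lie on the same side of the line through q and that vertex. The linear scan below is for exposition only; the visibility-complex traversal of [9] finds each tangent in O(log n) time, and all names are illustrative.

```python
def cross(o, a, b):
    """Cross product (a - o) x (b - o); its sign gives the side of b
    relative to the directed line from o through a."""
    return (a[0]-o[0])*(b[1]-o[1]) - (a[1]-o[1])*(b[0]-o[0])

def tangent_vertices(q, poly):
    """Indices of the tangent vertices of convex polygon `poly` (a
    counterclockwise vertex list) as seen from exterior point q.
    A vertex v is a tangent vertex iff both neighbors of v lie on the
    same side of the line through q and v (general position assumed)."""
    n = len(poly)
    tangents = []
    for i in range(n):
        prev_side = cross(q, poly[i], poly[i-1])
        next_side = cross(q, poly[i], poly[(i+1) % n])
        if (prev_side >= 0) == (next_side >= 0):
            tangents.append(i)
    return tangents
```

For instance, from a point to the left of an axis-aligned square, the two left vertices of the square are reported as the tangent vertices.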
Remark. Traversing each such sub-chain ξ can explicitly construct Vis(q, M). But for our problem of computing S(q), we can avoid this step; indeed, this is part of the reason our algorithm avoids the Ω(K) time.

The Bay Case
If u_1 ≠ u_2, then for each u_i with i = 1, 2, the intersection of g with the supporting line of qu_i is an endpoint of g′ [17]. Hence, g′ can be determined immediately once u_1 and u_2 are available. Similarly to the above ocean case, the algorithm in [9] uses the approach of crossing faces to compute Vis(q, M) through g′, which is actually a "cone" visibility query since the visibility of q in M is delimited by the cone bounded by the ray from q to u_1 and the ray from q to u_2. All rays from q in the cone define a segment γ′ of the curve γ (discussed in the ocean case) in the visibility complex. To use the approach of crossing faces, the algorithm in [9] first finds the cell σ of the visibility complex that contains an endpoint of γ′, which is done in O(log n) time by a point location data structure on the visibility complex. After this, the rest of the algorithm is the same as in the ocean case. This also holds for our problem of computing S(q). After locating the cell σ, we can use the crossing-face approach to compute the O(h) maximal sub-chains ξ of the convex chains of ∂M that are visible to q through g′. As in the ocean case, this also computes all ocean windows of S(q). After that, we use the same approach as in the ocean case to compute all inner-canal windows. The total time is O(h log n).
Finally, we compute the two outer-bay windows defined by u_1 and u_2. Namely, we need to compute q(u_1) and q(u_2). For each i = 1, 2, let ρ_i be the ray originating at q in the direction from q to u_i. The above algorithm for computing the sub-chains also determines the point p_i on ∂M first hit by ρ_i. If p_i is on an obstacle edge of P, then p_i is q(u_i). Otherwise, p_i is on a gate g_i of a bay/canal A. In that case, we use a ray-shooting query on A to find the first point p′_i on the boundary of A hit by ρ_i. Regardless of whether A is a bay or a canal, p′_i is always on an obstacle edge, and thus p′_i is q(u_i). Since each ray-shooting query on A takes O(log n) time, the two outer-bay windows can be computed in O(log n) time.
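The ray-shooting step can be illustrated by a naive O(n) scan over the polygon's edges; the paper instead relies on an O(log n)-time ray-shooting structure for simple polygons, and the function name below is illustrative.

```python
def first_hit(q, d, poly):
    """First boundary point of polygon `poly` (a vertex list) hit by the
    ray from q in direction d, found by a naive O(n) edge scan. For each
    edge a->b, solve q + t*d = a + u*(b - a) with t > 0 and 0 <= u <= 1,
    and keep the hit with the smallest t."""
    best_t, best_pt = None, None
    n = len(poly)
    for i in range(n):
        a, b = poly[i], poly[(i+1) % n]
        ex, ey = b[0]-a[0], b[1]-a[1]
        denom = d[0]*ey - d[1]*ex        # cross(d, b - a)
        if abs(denom) < 1e-12:
            continue                     # ray parallel to this edge
        aqx, aqy = a[0]-q[0], a[1]-q[1]
        t = (aqx*ey - aqy*ex) / denom    # parameter along the ray
        u = (aqx*d[1] - aqy*d[0]) / denom  # parameter along the edge
        if t > 1e-12 and -1e-12 <= u <= 1 + 1e-12:
            if best_t is None or t < best_t:
                best_t = t
                best_pt = (q[0] + t*d[0], q[1] + t*d[1])
    return best_pt
```

For example, shooting rightward from a point inside an axis-aligned square hits the square's right edge at the point with the same y-coordinate.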
In summary, the window set S(q) can be computed in O(h log n) time for the bay case.

The Canal Case
If q is in a canal A, then the algorithm is similar to the bay case, with the difference that we apply the same algorithm to the two gates of the canal separately. Specifically, let g = ab be a gate of A. We first compute the vertices u_1 and u_2 with respect to a and b, respectively. Then, we apply exactly the same algorithm as in the bay case. After that, we consider the other gate of A and apply the same algorithm. Then S(q) is computed, and the total time is O(h log n). This proves Lemma 33. After S(q) is computed, we can apply the query algorithm of Theorem 2 (or Corollary 1) to the windows of S(q) to compute q*. Thus we can obtain the following result.

Conclusions
In this paper, we present a new data structure for answering quickest visibility queries. Our result is particularly interesting when h, the number of holes of P, is relatively small. For example, when h = O(1), our result matches the best result for the simple polygon case (i.e., h = 1) and is optimal. To achieve this result, we also solve several other problems that may be interesting in their own right; we highlight some of them below. We assume that the shortest path map SPM(s) of the source point s has been given.