BFS and Reverse Shortest Paths for Ball Intersection Graphs in Three and Higher Dimensions
Abstract
Let be a collection of arbitrary balls in , and let be their intersection graph. We provide an algorithm for performing BFS on , which runs in time, where the notation hides subpolynomial factors. For , let be the intersection graph of the set , where is the ball concentric with whose radius is larger by than the radius of . We provide an efficient algorithm for the reverse shortest path (RSP) problem, where we are given two designated balls , of and a parameter , and seek the smallest value for which contains a path from to of at most edges. For the special case of congruent balls (equivalently, for points in ), the algorithm runs in time. For the general case, the algorithm runs in time. We also extend the technique to handle other measures of expansion and higher dimensions.
Keywords and phrases:
Computational geometry, reverse shortest paths, breadth-first search, shrink-and-bifurcate, intersection graphs
Funding:
Matthew J. Katz: Work partially supported by Grant 2019715/CCF-20-08551 from the US-Israel Binational Science Foundation/US National Science Foundation, and by Grant 495/23 from the Israel Science Foundation.
2012 ACM Subject Classification:
Theory of computation → Computational geometry
Editors:
Ho-Lin Chen, Wing-Kai Hon, and Meng-Tsung Tsai
Series and Publisher:
Leibniz International Proceedings in Informatics, Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
The reverse shortest path (RSP) problem has received considerable attention recently. In general, we are given a set of geometric objects, and a parameter . We define a graph on , whose edges are all the pairs that satisfy some property, given by a predicate , which is monotone in , meaning that if holds then holds for all . That is, the graphs are monotone increasing in . We are given two designated objects and a parameter , and wish to find the smallest value for which contains a path from to with at most edges.
Among the simplest examples is the case where is a set of points in the plane, and , under the Euclidean distance . This can also be interpreted as the intersection graph of the disks of radius centered at the points of (the so-called unit-disk graph, with the unit being ). This case and many variants thereof are reviewed later in the introduction.
One generalization, studied here, is to the case of points in three dimensions. That is, in this case is the intersection graph of congruent balls of radius centered at the points of . This is a special instance of the general problem studied in this work, where the balls have arbitrary radii. That is, the input consists of a set of balls in of arbitrary radii, and is a parameter that measures how the balls of are expanded. The simplest case is where we increase the radius of each ball by adding to it. Let denote the intersection graph of the expanded balls, so is the intersection graph of the original balls. Our technique also applies to any other well-behaved measure of expansion, e.g., multiplying each radius by , but for conciseness we will mainly stick to the additive measure of expansion, as just introduced, and discuss the general setup later in the paper.
A standard high-level approach to the RSP problem is to design a decision procedure which, with as input, determines whether contains a path from to of length at most . If it does then , and if not then . The decision procedure is typically implemented using Breadth-First Search (BFS) on . Once we have such a procedure, we can use binary search on to home in on the correct value of .
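As an illustration of this high-level scheme, the following sketch (hypothetical helper names; a naive quadratic BFS in place of the efficient decision procedures developed below) binary-searches over the sorted pairwise critical values:

```python
import math
from collections import deque
from itertools import combinations

def decide_factory(points, s, t, k):
    """Return the decision procedure: decide(r) is True iff the graph
    with an edge whenever the distance is at most r has an s-t path of
    at most k edges.  Implemented as a truncated BFS, naive O(n^2)."""
    def decide(r):
        dist = {s: 0}
        q = deque([s])
        while q:
            u = q.popleft()
            if dist[u] == k:
                continue          # do not extend paths beyond k edges
            for v in range(len(points)):
                if v not in dist and math.dist(points[u], points[v]) <= r:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return t in dist
    return decide

def rsp_binary_search(points, decide):
    """Since the graph changes only when r crosses a pairwise distance,
    the optimum is one of the O(n^2) critical values; a binary search
    over the sorted list needs O(log n) calls to the decision procedure."""
    crit = sorted(math.dist(p, q) for p, q in combinations(points, 2))
    lo, hi = 0, len(crit) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if decide(crit[mid]):
            hi = mid          # feasible: the optimum is at or below crit[mid]
        else:
            lo = mid + 1      # infeasible: the optimum is above crit[mid]
    return crit[lo]
```

For four collinear points at 0, 1, 2, 4, with the extreme points as source and target, the bottleneck of a best three-edge path is the largest gap, 2.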
There are two major problems with this approach. First, the cost of a naïve implementation of BFS is linear in the number of edges of , which can be in the worst case. Second, BFS appears not to be parallelizable, and thus it is not amenable to parametric search, which is the standard technique for turning the decision procedure into an efficient optimization procedure for solving the RSP problem. In this work we overcome these issues, for the special case of ball-intersection graphs in (and in higher dimensions), and obtain efficient algorithms for solving the problem, albeit they are (not surprisingly) less efficient than in the planar case.
Related work.
Breadth-First Search (BFS) is recognized as one of the fundamental graph algorithms. On general graphs , its running time is , which is inefficient when is large. Its importance has led authors to seek families of graphs in which BFS can be implemented more efficiently. In the geometric context, one such example is the family of disk graphs in the plane. The vertices of a disk graph represent disks in the plane and there exists an edge between two vertices if and only if the corresponding disks intersect. Notice that the number of edges in a disk graph can be quadratic in , the number of disks. Nevertheless, for unit-disk graphs (where all the disks are congruent), Cabello and Jejčič [8] presented an implementation of BFS, and subsequently Chan and Skrepetos [10] presented an alternative implementation (after pre-sorting the points by their - and -coordinates). For arbitrary disks, Klost [14] described an implementation of BFS (see also [11, 12]).
The proximity graph of a set of segments in the plane, for a real parameter , is , where and is the Euclidean distance between and (which is 0 if they intersect). Agarwal et al. [5] devised an implementation of BFS in . (As in the abstract, the notation hides subpolynomial factors.) For the special case where the segments in are pairwise disjoint, Agarwal et al. [4] later provided an implementation.
The bottleneck path problem (also known as the minimax path problem) and its complementary problem, i.e., the widest path problem (or the maximum capacity problem), are well-known problems in graph theory. In suitable contexts, the bottleneck path problem is strongly related to the RSP problem: Given and , if is a path from to of at most edges with the minimum bottleneck , then is the solution to the RSP problem with , , as input, and vice versa.
Abu-Affash et al. [1] studied the problem of placing at most Steiner points to minimize the bottleneck of a path between two designated points in the plane, for a given integer parameter , and developed an -time algorithm for the problem.
The RSP problem for unit disks (as mentioned above) was studied by Wang and Zhao [16], who proposed an -time solution. An improved solution with running time , using the shrink-and-bifurcate technique (briefly reviewed later; see [7]), was subsequently presented by Kaplan et al. [11]. A very recent improvement of this technique by Chan and Huang [9] yields an implementation with running time , which can be further improved, for the case of unit-disk graphs, to time. For arbitrary disks, Kaplan et al. [11] obtained a solution with randomized expected time , which can be improved to randomized expected time , using Chan and Huang’s technique [9].
The RSP problem was also studied for other objects, including wedges of some fixed angle, which are viewed as directional antennas, and unit-height towers placed on a 1.5-dimensional terrain. The former version was solved in time by Agarwal et al. [5], and the latter version was studied in Katz et al. [13] (see also [9]).
Our results.
Our main contributions are efficient algorithms for the BFS and RSP problems in ball-intersection graphs in three and higher dimensions. We first consider the problem of efficiently implementing BFS, as a decision procedure for guiding the search for the optimum in the RSP problem (and also as an interesting problem in itself). In three dimensions we can perform BFS on , for a set of (congruent or arbitrary) balls in , in time, and in dimensions in time, where . As these algorithms are not parallelizable (at least we do not know how to run them in small parallel depth, which is a standard issue in BFS on any graph), we need to use a different approach. Such an approach, already mentioned above and known as the shrink-and-bifurcate technique, was originally developed by Ben Avraham et al. [7] for computing a rather special case of Fréchet distances, but it has recently found applications to the RSP problem in the plane [11]. Very recently, as already mentioned, it has been dramatically improved by Chan and Huang [9].
Once a BFS procedure (serving as the decision procedure) is available, our technique follows closely the improved technique in [9], which however requires certain nontrivial modifications and enhancements to fit into higher-dimensional contexts. The technique of [9] consists of two parts, a general-purpose part, and an additional enhancement which improves the running time further for the special case of unit-disk graphs. We follow the general-purpose part for arbitrary balls, and both parts for congruent balls.
In three dimensions, for the special case of congruent balls (equivalently, of a set of points in ), we obtain an algorithm that runs in randomized expected time. For the case of balls of arbitrary radii, we obtain an algorithm that runs in randomized expected time. More generally, in dimensions, the algorithm for arbitrary balls runs in
randomized expected time, and, for the special case of congruent balls, the algorithm runs in randomized expected time
2 BFS in ball-intersection graphs
Let be a collection of balls in , of arbitrary radii. (The case of congruent balls does not lead to an improved BFS implementation, although the algorithm becomes considerably simpler; see the discussion at the end of the section.) We denote a ball with center and radius as . Let be the intersection graph of , i.e., the vertices of are the balls of , and its edges consist of all the intersecting pairs of balls. The condition for two balls , to intersect is .
Let be a start ball in . Our goal is to run BFS on starting from . We follow the standard approach, of constructing the layers of the BFS in order, starting from layer that contains only . Consider the step of passing from some layer to the next layer . Let denote the set of all balls that the BFS has not yet reached up to this step. We then need to find, for each ball in , all the balls of that it intersects. Each such ball is added to and is immediately deleted from , to ensure that it is not detected again by other balls of (or by balls in future layers). We continue in this way until no more balls of intersect any ball in . At this point has been computed, and we go on to construct the next layer. (For our decision procedure, executed on , we stop when either the target ball has been reached or layer has been constructed, whichever happens first. If has been reached, we report success (i.e., ), otherwise we report failure (i.e., ).)
To implement this algorithm efficiently, we therefore need a dynamic data structure for storing , the set of unreached balls, that supports (efficiently) the following two kinds of operations.
Intersection detection queries: Given a query ball , detect whether it intersects any ball of , and, if so, report such a ball.
Deletions: Delete a ball from .
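A minimal sketch of this layered BFS, with a naive linear-scan stand-in (hypothetical `NaiveBallIndex`) for the dynamic structure described next, might look as follows:

```python
import math

class NaiveBallIndex:
    """Stand-in for the dynamic emptiness structure: supports
    'find some unreached ball intersecting q' and deletion.  This naive
    version costs O(n) per query; the whole point of the real structure
    is to perform both operations much faster."""
    def __init__(self, ids):
        self.alive = set(ids)          # indices of unreached balls

    def intersecting(self, q, balls):
        cq, rq = balls[q]
        for j in self.alive:
            cj, rj = balls[j]
            if math.dist(cq, cj) <= rq + rj:   # centers within radius sum
                return j
        return None

    def delete(self, j):
        self.alive.discard(j)

def ball_bfs(balls, s):
    """BFS layers of the ball intersection graph, built layer by layer.
    Each discovered ball is deleted immediately, so it is found once."""
    index = NaiveBallIndex(range(len(balls)))
    index.delete(s)
    dist = {s: 0}
    layer = [s]
    while layer:
        nxt = []
        for u in layer:
            while True:
                v = index.intersecting(u, balls)
                if v is None:
                    break              # last (empty) query made with u
                index.delete(v)
                dist[v] = dist[u] + 1
                nxt.append(v)
        layer = nxt
    return dist
```

Each query either discovers a new ball (which is then deleted) or is the final query for its ball, which is what bounds the total number of operations by O(n).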
An intersection query with a ball can be rephrased as a range emptiness detection query. For this, we map each ball of to the point in , and map each ball of to the range
Then a ball intersects iff .
Using a standard lifting transform, we further map each point to the point in . Each range is lifted to the halfspace
As is easily checked, iff lies in .
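To make the lifting concrete, the following sketch (illustrative names, written for three dimensions) maps each ball to a point in five dimensions and each query ball to a halfspace, and checks that halfspace membership coincides with ball intersection; the linear form is obtained by squaring the intersection condition and collecting the terms that depend on only one of the two balls:

```python
import math, random

def lift_point(ball):
    """Map a ball b' = (c', r') to the point (c', r', |c'|^2 - r'^2)."""
    (x, y, z), r = ball
    return (x, y, z, r, x*x + y*y + z*z - r*r)

def lift_halfspace(ball):
    """Map a query ball b = (c, r) to coefficients (a, b0) of the
    halfspace a . p <= b0 in R^5, derived from |c - c'|^2 <= (r + r')^2."""
    (x, y, z), r = ball
    a = (-2*x, -2*y, -2*z, -2*r, 1.0)
    b0 = -(x*x + y*y + z*z - r*r)
    return a, b0

def in_halfspace(p, hs):
    a, b0 = hs
    return sum(ai * pi for ai, pi in zip(a, p)) <= b0

# sanity check: halfspace membership == ball intersection
random.seed(1)
for _ in range(1000):
    b1 = (tuple(random.uniform(-3, 3) for _ in range(3)), random.uniform(0.1, 2))
    b2 = (tuple(random.uniform(-3, 3) for _ in range(3)), random.uniform(0.1, 2))
    geo = math.dist(b1[0], b2[0]) <= b1[1] + b2[1]
    assert geo == in_halfspace(lift_point(b2), lift_halfspace(b1))
```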
In other words, in the transformed problem we have a set of points in , which we want to process into a dynamic data structure (under deletions) for answering halfspace range emptiness queries. The algorithm of Agarwal and Matoušek [6] provides such a structure. Specifically, for any given parameter , the structure can be constructed to have size , an initial version of it can be constructed in time, each query takes time, and each deletion takes amortized time. Choosing , we obtain a data structure of size , so that each operation (query or deletion) on the structure takes time.
The number of operations is , since each query either detects a new intersecting ball, which is promptly removed from , or is the last query performed with a ball, and each ball becomes a query only at the layer it belongs to. We conclude that the overall running time of the BFS is , because the cost of maintaining and searching in the data structure dominates the cost of the other steps of the BFS. We thus obtain:
Theorem 1.
BFS on the ball intersection graph of a set of balls in can be performed in time.
The case of congruent balls.
We first state the following theorem, whose results will be useful for handling congruent balls. Its proof is an easy consequence of the analysis in [6, 15], applied to the lifted points and halfspaces in , as above.
Theorem 2.
(a) Given two sets , of and points, respectively, in three dimensions, and a parameter , we can determine, for each , whether there exists a point such that , in overall time
(b) Given two sets , , as above, we can compute, for each , the nearest neighbor of in , in overall time
(Note that (b) subsumes (a), but the algorithm in (a) is simpler, so we state it separately.)
When the balls of are congruent, say all of radius , we do not need the above dynamic (and rather involved) structure. Instead, we follow the approach used in the planar setup of unit-disk graphs [8, 10]. That is, we form a grid of cell size , and distribute the ball centers among its cells. When we perform the step of passing from to , we process each active grid cell, i.e., a cell that contains centers of balls in . For each such cell , all unreached balls with centers in are immediately placed in . Then, for each cell in a suitable punctured constant-size neighborhood of (only such cells, and itself, can contain centers of balls in that can be reached, in one step, from balls with centers in ), we take the set of unreached balls with centers in and test, for each such ball, whether its center lies within distance from the center of some ball in , the set of balls of with centers in . Theorem 2(a) implies that this reverse search can be implemented in time
The remaining straightforward implementation details, and the analysis of correctness and performance of the structure, as given in [8, 10, 11], carry over to the three-dimensional case, and imply that the overall cost of this implementation of the BFS is time.
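A compact sketch of this grid-based BFS for congruent balls (hypothetical names; the batched reverse search of Theorem 2(a) is replaced by direct distance tests) is:

```python
import math
from collections import defaultdict

def grid_bfs_unit_balls(centers, r, s):
    """BFS on the intersection graph of congruent balls of radius r
    (edge iff the centers are within 2r), using a grid of cell size
    2r/sqrt(3): a cell then has diameter exactly 2r, so all centers
    sharing a cell are pairwise adjacent, and since 2r is less than two
    cell widths, neighbors lie within offset 2 along each axis."""
    cell = 2 * r / math.sqrt(3)
    key = lambda p: tuple(math.floor(c / cell) for c in p)
    grid = defaultdict(list)              # cell -> centers not yet reached
    for i, p in enumerate(centers):
        grid[key(p)].append(i)
    grid[key(centers[s])].remove(s)
    dist = {s: 0}
    layer = [s]
    while layer:
        nxt = []
        for u in layer:
            cu = key(centers[u])
            for dx in range(-2, 3):
                for dy in range(-2, 3):
                    for dz in range(-2, 3):
                        c2 = (cu[0] + dx, cu[1] + dy, cu[2] + dz)
                        if c2 not in grid:
                            continue
                        remaining = []
                        for v in grid[c2]:
                            if math.dist(centers[u], centers[v]) <= 2 * r:
                                dist[v] = dist[u] + 1
                                nxt.append(v)     # reached: drop from grid
                            else:
                                remaining.append(v)
                        grid[c2] = remaining
        layer = nxt
    return dist
```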
3 Reverse shortest paths in unit-ball graphs in three dimensions
We can now solve the corresponding optimization problem, which is the reverse shortest path (RSP) problem, or, as it is also called, the bounded-hop bottleneck path problem, for unit-ball graphs in : Given a set of points in , two designated points , and a parameter , find the minimum value of for which the associated graph contains a path from to of at most edges.
We can solve the RSP problem using the aforementioned shrink-and-bifurcate technique [7, 9, 11]. We begin with a brief overview of the technique. It applies in more general setups, but let us restrict it to our setup of unit-ball graphs (and later also to intersection graphs of arbitrary balls, although the description below assumes that the balls are congruent). The shrinking stage constructs an interval that contains and at most other critical values of , each of which is the distance between two centers, where is a prespecified parameter. It does so, in the improved approach of [9], by running a distance selection algorithm on various pairs of random samples of (that is, on the sets of centers of the balls in the samples).
The distance selection algorithm, on which the shrinking procedure is based, takes, ignoring the shrinking aspect, time. More generally, it takes time for selecting distances between two sets of and points, respectively, which is the setup that arises when applying the technique of [9]. Using (a suitable adaptation to three dimensions of) the shrinking mechanism of Chan and Huang [9], the construction of the desired interval can be performed in expected time.
This is followed by a bifurcation stage, in which we simulate the decision procedure (in our setup, the BFS procedure of the preceding section). When we reach a comparison, we resolve it right away if the critical value that it induces lies outside . If however lies in , we bifurcate, exploring both outcomes and . When we have either accumulated enough unresolved comparisons, or moved forward at least some number of steps in the simulation, along each path in the bifurcation tree, we stop and resolve all comparisons using binary search, guided by the (unsimulated) decision procedure. This shrinks further, and we start a new phase of bifurcating simulation. As shown in [7, 11], with a suitable choice of parameters, the overall cost of the bifurcation is , where is the cost of the decision procedure (i.e., of the BFS).
Hence the overall cost of the RSP algorithm is . We choose to balance these terms, that is, , and then the running time is . That is, we have our initial, albeit weaker, result:
Theorem 3.
For a set of points in , two designated points , and an integer parameter , we can find the minimum radius for which the associated graph contains a path from to of at most edges, in time.
3.1 An improved solution
To obtain a faster algorithm, we follow the high-level approach of the second improvement in [9], with many nontrivial adjustments that cater to the three-dimensional nature of the problem. We follow the same setup as in the second implementation of the BFS algorithm. However, for technical reasons that will become clear later on, we first perform the following steps, which have to do with the cell size of the grid. We first note that the precise size of the cells (which we have set to in the BFS algorithm) is not very important, as long as it is smaller than (but not much smaller), so that the propagation within a cell is immediate. We can therefore proceed as follows. Note that, by the nature of the RSP problem, must be between and (or, more precisely, ). We therefore run a binary search through this interval, using any of the BFS decision procedures of Section 2 to guide the search, and in steps we locate within a range , for some suitable small . We then set the cell size accordingly (note that this choice might force us to redefine the neighborhood of a cell so that it contains more cells, but still only a constant number thereof), and keep this size throughout the simulation. See below for how this is used in the algorithm. We emphasize that this step precedes the actual simulation, as well as the step of distributing the points of in the grid cells.
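This preliminary step can be sketched as follows (a factor-2 bracket for concreteness; the function and parameter names are illustrative):

```python
def bracket_lambda(decide, lo, hi):
    """Bracket the optimum within a factor-2 range [x, 2x] by binary
    searching over the powers-of-two scales between lo and hi, using the
    monotone decision procedure decide (True iff its argument is at
    least the optimum).  Logarithmically many calls to decide suffice."""
    scales = [lo]
    while scales[-1] < hi:
        scales.append(scales[-1] * 2)
    a, b = 0, len(scales) - 1
    while a < b:
        m = (a + b) // 2
        if decide(scales[m]):
            b = m              # feasible: optimum is at most scales[m]
        else:
            a = m + 1          # infeasible: optimum exceeds scales[m]
    if a == 0:
        return (lo, lo)        # optimum is already at most lo
    return (scales[a] / 2, scales[a])
```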
We introduce a new threshold parameter , whose concrete value will be set later, and classify each nonempty grid cell as either heavy, if it contains at least points of , or light, otherwise. The number of heavy cells is at most . We now compress each heavy cell into a single (symbolic) “heavy” point , and obtain a new compressed version of , which we denote as , where consists of all the points in the light cells and of all the representative heavy points . The distance between any pair of points in light cells remains the same, namely . The distance between a heavy point and a light point (a point in a light cell) is . Finally, the distance between two heavy points , is the distance determined by the bichromatic closest pair (BCP) in .
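The heavy/light classification and the resulting compressed point set can be sketched as follows (hypothetical names and a generic threshold parameter):

```python
from collections import defaultdict

def compress(points, cell, threshold):
    """Split the grid cells into heavy (at least 'threshold' points) and
    light ones.  Light points are kept individually; each heavy cell is
    compressed into a single symbolic representative carrying its member
    list.  At most n/threshold cells can be heavy."""
    grid = defaultdict(list)
    for i, p in enumerate(points):
        grid[tuple(int(c // cell) for c in p)].append(i)
    light, heavy = [], []
    for c, members in grid.items():
        if len(members) >= threshold:
            heavy.append((c, members))    # one "heavy point" per heavy cell
        else:
            light.extend(members)
    return light, heavy
```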
Running BFS on is performed similarly to the previous implementation. A single propagation step, at some iteration , involving two adjacent cells , , is executed depending on whether one of , , or both, are heavy. If both cells are light, we proceed as before, incurring the cost
(1)
see Theorem 2(a). When is light and is heavy, (if nonempty) is the singleton representative point , and by running the all-nearest-neighbor procedure (Theorem 2(b)) on and , we can determine whether this step should place in . The cost is then
(2)
A similar procedure is applied when is heavy and is light. In this case each point of searches for a near neighbor in , and the cost is
(3)
Finally, when both cells are heavy we need to compute the distance of the BCP in . This step is handled differently, since its naïve implementation is too expensive: it does not exploit the smaller parameter ; see below.
We sum the bounds in (1), (2) and (3), over all iterations and pairs , of adjacent cells that are active at the corresponding iteration: when both and are light (eq. (1)), when is light and is heavy (eq. (2)), and when is heavy and is light (eq. (3)). Note that in each of the bounds (1)–(3), at least one of the sets ( and/or ) is from a light cell, so its size is at most .
Applying Hölder’s inequality, one can show that the sums of the bounds in (1), (2) and (3) are all . For example, summing the bound in (1) over all iterations and active pairs of neighboring cells, and using the facts that (i) , and (ii) each cell is active, or a neighbor of an active cell, at only iterations, we get
It remains to handle the case where both and are heavy. Here we need to compute the distance of the BCP in , over all adjacent heavy pairs (i.e., pairs where both and are heavy). As above, this can be done in a total of time. Since it does not depend on , this cost turns out to be too expensive to pay during the simulation of the procedure. To bypass this issue, we note that, once we have fixed the cell size of the grid (as we did, ahead of the simulation), this step is independent of or , and can also be performed ahead of the simulation. We thus compute the BCPs for all pairs of adjacent heavy cells, once and for all. The overall cost of both the binary search that fixes the cell size of the grid and the BCP computations is easily seen to be . In addition, after having computed all the heavy BCP distances, we run an additional binary search through them, to ensure that the interval with which we start the shrink-and-bifurcate procedure does not contain any heavy BCP distance in its interior.
All these considerations imply that we can run BFS on the compressed graph in time , after an initial phase that takes time, which we will execute only once, outside (and preceding) the actual simulation during the search for .
We now run this simulation of the BFS on , using the shrink-and-bifurcate paradigm, as in [9] and as briefly reviewed above. As before, the shrinking procedure is based on distance selection in , but we apply it separately to each pair of adjacent grid cells, at least one of which is light. (This is justified as in [9], because must arise as a distance within such a pair, unless it is the BCP distance of some heavy pair, which will have been detected during the preliminary stage.) Arguing as in [9], and adapting the setup to , the cost of the shrinking stage is .
The subsequent bifurcation procedure runs, as before, in time , where is the cost of the decision procedure. Ignoring the cost of the preliminary part, which stays outside the simulation, we have . Hence the overall cost of the shrink-and-bifurcate procedure, adding back the preliminary cost as a one-time cost, is
(4)
Of course, so far we have only simulated the BFS on the compressed graph , so the output value is still not the correct value. Denoting it by , it is the smallest value of for which has a path of at most edges between and . As argued in [9], denoting by the graph distance between and a point in , and by the graph distance between and in , we have, for each ,
(5)
where . With some care, this also covers the cases where and/or lie in heavy cells.
Recovering
As in [9], to find the true value , we use dynamic programming, following the high-level approach of [9]. For the sake of completeness, we repeat here some details of this technique, adapted to our setup. We first comment that it is easy to solve the RSP problem using dynamic programming, without using the preceding machinery at all. To do so, let , for , denote the smallest bottleneck value of a path from to consisting of at most edges. Initially, and for any . Then we have, for and ,
(6)
If then , which follows easily from (6). The problem with this approach is that the number of times step (6) is applied is (it is actually , for the number of points and the number of iterations needed to reach, or fail to reach, ). Moreover, without an efficient implementation of the steps in (6), the overall cost can be .
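For concreteness, this straightforward dynamic program (with illustrative names, and with the quadratic per-round cost noted above) can be written as:

```python
import math

def bottleneck_dp(points, s, t, k):
    """Plain dynamic program for the bottleneck path problem: after
    round i, f[p] is the smallest bottleneck of a path from s to p with
    at most i edges; the answer to the RSP instance is f[t] after k
    rounds.  Each round costs quadratic time."""
    n = len(points)
    INF = float('inf')
    f = [0.0 if p == s else INF for p in range(n)]
    for _ in range(k):
        g = list(f)                      # keep the "at most i-1 edges" option
        for p in range(n):
            for q in range(n):
                if q != p and f[q] < INF:
                    g[p] = min(g[p],
                               max(f[q], math.dist(points[p], points[q])))
        f = g
    return f[t]
```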
In our setup, though, we can reduce the number of dynamic programming steps to . Indeed, write . As each point participates in sets , we have .
As in [9], we exploit this property by replacing the function by another function , defined by the following modified iterative process. For each put
(7)
and and for all . As shown in [9], we also have . (This argument is fully general and discrete, and does not depend on any geometric feature of the problem, except for the inequalities (5). In particular, it does not depend on the dimension.)
The number of steps in this modified dynamic programming process is thus . To make these steps (relatively) efficient, we follow a variant of the technique in [9, Lemma 8], adapted to three dimensions. Briefly, the input is a set of points in , where each point carries a positive weight (the weight will be set to at the -th iteration), and we have a set of queries, where each query point seeks the point that minimizes . The algorithm in [9, Lemma 8] is based on Voronoi diagrams, which work well in the plane but are not efficient in three dimensions. We use the following modified approach. As in [9], we store the points of at the leaves of a balanced binary tree , sorted in increasing order of their weights. Each node of processes the set of those points of that are stored in the subtree rooted at . Each query point follows a path in . When reaches a node , with a left child and a right child , it finds its (standard) nearest neighbor in . If we recurse in , and if we recurse in . See [9] for details and justification of this rule.
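The queries that this machinery supports are additively weighted nearest-neighbor queries. The following brute-force sketch (hypothetical name) specifies their semantics; the tree-based procedure above computes the same quantities in batched fashion:

```python
import math

def weighted_nn_batch(sources, weights, queries):
    """For each query point x, return the minimum over source points p
    of weights[p] + |xp| (an additively weighted nearest-neighbor
    query).  Brute force, O(|sources| * |queries|); the text computes
    the same answers with a balanced tree over the sources sorted by
    weight, batching the queries level by level via Theorem 2(b)."""
    return [min(w + math.dist(x, p) for p, w in zip(sources, weights))
            for x in queries]
```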
Instead of using Voronoi diagrams, as in [9], we use the algorithm in Theorem 2(b). To do so, we run all the queries in parallel, proceeding level by level. When we reach a node , we have available the set of all the queries whose paths through reach . Using the algorithm in Theorem 2(b), we find, for each , its nearest neighbor in , in overall time
This allows us, by our search rule, to assign each either to or to , as appropriate. Carrying out this process at each node of the current level, we obtain, for each node of the next level, the set of query points that want to access (actually, ). We continue in this recursive manner until we reach the leaves. The cost at a leaf is . Summing over all nodes of , bearing in mind that
the overall cost is easily seen to be .
We now apply this procedure at each iteration of the dynamic programming, where the input set is , the query set is , and the weight of each input point is . Clearly, the output of this procedure, with this input, implements the iterative process in (7). The cost of iteration is therefore
Since each is at most , and , the overall cost of the dynamic program is (using Hölder’s inequality)
(8)
Altogether, the cost of the full algorithm is therefore
(9)
We first balance the first two terms, choosing , or , and then the bound becomes
We now choose to balance these terms, that is , and the cost of the algorithm is therefore . That is, we have
Theorem 4.
Let be a set of points in , let and be two designated points, and let be a given parameter. Then we can find the smallest value for which contains a path between and that consists of at most edges, in time.
3.2 Higher dimensions
The algorithm given above can be generalized to points in , for any dimension . The geometric components in the algorithm are (i) constructing the grid, (ii) distance selection and the corresponding interval shrinking procedure, (iii) an efficient algorithm for the all-near-neighbor problem, (iv) an efficient algorithm for the all-nearest-neighbor problem, and, as a special case, (v) an efficient algorithm for the BCP problem.
The grid construction can trivially be extended to any fixed dimension. Distance selection in takes time (see, e.g., [3, Appendix]). The bipartite version, where we want to select distances between two given sets of and points, respectively, takes time, and the corresponding interval shrinking procedure, implemented using the technique of [9], takes expected time, as long as and do not deviate significantly from one another. Agarwal and Matoušek [6] present an algorithm that solves problems (iii)–(v) for a pair of sets of respective sizes and , in in time , where . (Recall that these problems are mapped, via the lifting transform given in Section 2, to dynamic halfspace range emptiness queries in dimensions, which explains the value of ; see [6].)
Substituting these components into the algorithm, with parameters and , the running time becomes
where, as above, the third term is dominated by the fourth. We balance the terms in this bound, first by choosing as a function of , and then by choosing as a function of . As the exponents become rather messy, we simplify them somewhat by introducing another parameter . Straightforward, albeit rather tedious, calculations then yield
and the bound then becomes
As a sanity check, we verify that in , where , , and , the bound is indeed . In conclusion, we have
Theorem 5.
Let be a set of points in , for , let and be two designated points, and let be a given parameter. Then we can find the smallest value for which contains a path between and that consists of at most edges, in randomized expected time, where and .
As an additional example, in the case , with and , the algorithm runs in randomized expected time.
4 Reverse shortest paths in ball-intersection graphs in
As before, we use the shrink-and-bifurcate paradigm (see [7, 9, 11]). As we recall, the shrinking stage receives a number as input, and constructs an interval that contains and at most other critical values of . For arbitrary balls, the problem is represented in , where a ball with center and radius is mapped to the point . A value is critical if two expanded balls and become externally tangent to each other, namely when the distance between their centers equals the sum of their expanded radii.
The improved shrinking procedure of [9] performs “distance selection” on various random samples from , where the selection has to find the -th smallest critical value between pairs of balls in the Cartesian product of the samples, for some given . This selection process is obtained in turn from a procedure that counts the number of critical values that are smaller than or equal to some given . To do so, we rephrase the problem as an offline range searching problem, in which each ball is mapped to the point in , and also to a corresponding range. (Here the lifting transform to a halfspace does not seem to lead to a more efficient procedure. Technically, this is because the lifted problem now has to deal with halfspace range counting, whose performance is actually worse than that of the corresponding problem, involving semi-algebraic ranges instead of halfspaces, in the original space.)
Each point has four degrees of freedom, and, for fixed, so does each range. Using known results on semi-algebraic range searching (see [2] and [3, Appendix]), the cost of this procedure is . In fact, running this “distance selection” procedure in a bipartite setting (which is the setting that arises when applying the technique of [9]), where we want to select critical values determined between pairs of balls in a pair of samples, consisting of and balls, respectively, the cost is . Plugging this into the machinery of [9], the cost of the shrinking stage is .
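For the additive expansion measure, the critical values and the counting primitive underlying the selection can be sketched as follows (naive quadratic counting in place of the range-searching machinery; names are illustrative):

```python
import math
from itertools import combinations

def critical_lambda(b1, b2):
    """Expansion at which balls (c1, r1 + lam) and (c2, r2 + lam) become
    externally tangent: |c1 - c2| = r1 + r2 + 2*lam, i.e.
    lam = (|c1 - c2| - r1 - r2) / 2 (negative if they already intersect)."""
    (c1, r1), (c2, r2) = b1, b2
    return (math.dist(c1, c2) - r1 - r2) / 2

def count_critical_at_most(balls, lam):
    """Number of pairs whose critical value is at most lam: the counting
    primitive from which the selection, and hence the shrinking stage,
    is built.  Naive O(n^2); the text implements it with semi-algebraic
    range searching."""
    return sum(1 for b1, b2 in combinations(balls, 2)
               if critical_lambda(b1, b2) <= lam)
```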
The cost of the bifurcation stage, as in all the previous applications, is , where is a bound on the running time of the decision procedure, which is our BFS algorithm on , so . Hence the overall cost of the procedure is
We optimize this bound by choosing , or , and then the running time becomes . That is, we have:
Theorem 6.
The reverse shortest path problem on the ball-intersection graph of a set of $n$ balls in $\mathbb{R}^3$ can be solved in $O^*(n^{16/11})$ time.
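To make the overall pipeline concrete, here is a schematic, quadratic-time rendition of the RSP algorithm (all names ours): since the optimum is a critical value of some pair, one can binary search over the sorted critical values, using a naive BFS in $G_r$ as the monotone decision procedure. The actual algorithm replaces both ingredients by the machinery above; the sketch assumes some candidate value of $r$ is feasible.

```python
import math
from collections import deque

def hop_distance(balls, r, s, t):
    """BFS hop distance from ball s to ball t in G_r; adjacency
    (intersection of the r-expanded balls) is tested naively."""
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        for v in range(len(balls)):
            touches = (math.dist(balls[u][0], balls[v][0])
                       <= balls[u][1] + balls[v][1] + 2 * r)
            if v not in dist and touches:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist.get(t, math.inf)

def reverse_shortest_path(balls, s, t, k):
    """Smallest r for which G_r has an s-t path of at most k edges: binary
    search over the sorted pairwise critical values, with hop_distance as
    the monotone decision procedure.  Assumes some candidate is feasible."""
    vals = sorted(max(0.0, (math.dist(balls[i][0], balls[j][0])
                            - balls[i][1] - balls[j][1]) / 2.0)
                  for i in range(len(balls)) for j in range(i + 1, len(balls)))
    lo, hi = 0, len(vals) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        if hop_distance(balls, vals[mid], s, t) <= k:
            hi = mid
        else:
            lo = mid + 1
    return vals[lo]
```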
4.1 Higher dimensions
Here too, the machinery developed in Section 2 and the preceding part of Section 4 can be extended to any higher dimension $d > 3$.
Breadth-First Search.
We proceed as in the previous algorithm. The lifting transform used in the BFS implementation maps each ball $B = B(c,\rho)$ to the point $\lambda(B) = (c, \rho, \|c\|^2 - \rho^2)$ in $\mathbb{R}^{d+2}$, and also to the halfspace
\[ h(B) = \left\{ (y, s, t) \in \mathbb{R}^{d+2} \mid t - 2c\cdot y - 2(\rho + 2r)s \le 4r^2 + 4r\rho + \rho^2 - \|c\|^2 \right\} \]
in $\mathbb{R}^{d+2}$; by construction, $\lambda(B') \in h(B)$ if and only if $B'^r$ and $B^r$ intersect. The dynamic halfspace range reporting structure of [6] yields, for any parameter $m$, where $n \le m \le n^b$, with $b = \lfloor (d+2)/2 \rfloor = \lfloor d/2 \rfloor + 1$, a dynamic data structure of size $O^*(m)$, with initial construction cost $O^*(m)$, so that each query takes $O^*(n/m^{1/b})$ time, and each deletion takes $O^*(m/n)$ amortized time. Choosing $m = n^{2b/(b+1)}$, the size (and construction cost) becomes $O^*(n^{2b/(b+1)})$, and each operation takes $O^*(n^{(b-1)/(b+1)})$ time, for a total of $O^*(n^{2b/(b+1)})$ time. Since this dominates the running time of the BFS, we obtain:
Theorem 7.
BFS on the ball intersection graph of a set of $n$ balls in $\mathbb{R}^d$ can be performed in $O^*(n^{2b/(b+1)})$ randomized expected time, where $b = \lfloor d/2 \rfloor + 1$.
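The following sketch (all names ours) illustrates both ingredients at once: a linearization under which intersection of the $r$-expanded balls becomes halfspace membership of lifted points, and the BFS pattern that repeatedly queries a pool of undiscovered balls and deletes each reported one. The naive pool below answers queries in linear time; the actual algorithm substitutes the structure of [6]. The particular lifting written out here is one standard choice, stated as an assumption rather than quoted from the text.

```python
import math
from collections import deque

def lift(c, rho):
    """Lift a ball B(c, rho) in R^d to the point (c, rho, |c|^2 - rho^2)."""
    return (*c, rho, sum(x * x for x in c) - rho * rho)

def in_halfspace(c, rho, r, p):
    """Halfspace of B(c, rho): contains lift(c', rho') exactly when
    ||c - c'|| <= rho + rho' + 2r, i.e. when the r-expanded balls meet."""
    *y, s, t = p
    return (t - 2 * sum(a * b for a, b in zip(c, y)) - 2 * (rho + 2 * r) * s
            <= 4 * r * r + 4 * r * rho + rho * rho - sum(x * x for x in c))

def bfs(balls, r, s):
    """BFS on G_r: undiscovered balls sit in a pool of lifted points that is
    queried for 'some point in the halfspace of the current ball' and shrunk
    by deleting every reported point, so each ball is reported only once."""
    pool = {i: lift(c, rho) for i, (c, rho) in enumerate(balls) if i != s}
    dist = {s: 0}
    q = deque([s])
    while q:
        u = q.popleft()
        c_u, rho_u = balls[u]
        while True:
            v = next((i for i, p in pool.items()
                      if in_halfspace(c_u, rho_u, r, p)), None)
            if v is None:
                break
            del pool[v]            # deletion: v has now been discovered
            dist[v] = dist[u] + 1
            q.append(v)
    return dist
```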
Reverse Shortest Paths.
Here the selection procedure, rephrased as an offline semi-algebraic range searching problem, involves $n$ points and $n$ ranges, each point and range with $d+1$ degrees of freedom. Hence the running time of this procedure, in the bipartite setting of sets of $m$ and $n$ (ball-representing) points, respectively, is (again, see [3, Appendix])
\[ O^*\!\left( (mn)^{(d+1)/(d+2)} + m + n \right) . \]
This implies, as in [9], that the cost of the interval shrinking procedure is
\[ O^*\!\left( \frac{n^{2(d+1)/(d+2)}}{L^{d/(d+2)}} \right) . \]
As in all its implementations, the bifurcation procedure runs in time
\[ O^*\!\left( T\sqrt{L} \right) = O^*\!\left( n^{2b/(b+1)}\sqrt{L} \right) , \quad \text{with } b = \lfloor d/2 \rfloor + 1 . \]
The overall running time is thus
\[ O^*\!\left( \frac{n^{2(d+1)/(d+2)}}{L^{d/(d+2)}} + n^{2b/(b+1)}\sqrt{L} \right) . \]
Optimizing the value of $L$, we choose
\[ L = n^{4\lceil d/2 \rceil / ((3d+2)(b+1))} , \]
and the overall running time of the algorithm is
\[ O^*\!\left( n^{\frac{2b}{b+1} + \frac{2\lceil d/2 \rceil}{(3d+2)(b+1)}} \right) . \]
(We verify that, for $d = 3$ and $b = 2$, this bound is indeed $O^*(n^{16/11})$.) That is, we have:
Theorem 8.
The reverse shortest path problem on the ball intersection graph of a set of $n$ balls in $\mathbb{R}^d$ can be solved in $O^*\!\left( n^{\frac{2b}{b+1} + \frac{2\lceil d/2 \rceil}{(3d+2)(b+1)}} \right)$ time, where $b = \lfloor d/2 \rfloor + 1$.
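As a sanity check on the exponent bookkeeping (under our reading of the bounds above; the formula and function name are ours), the optimized exponent can be evaluated exactly:

```python
from fractions import Fraction

def rsp_exponent(d):
    """Exponent of n in the overall RSP bound, read as
    2b/(b+1) + 2*ceil(d/2)/((3d+2)(b+1)) with b = floor(d/2) + 1."""
    b = d // 2 + 1
    return (Fraction(2 * b, b + 1)
            + Fraction(2 * ((d + 1) // 2), (3 * d + 2) * (b + 1)))
```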
(We remind the reader that the second improvement in [9] is not applicable here, since the balls do not all have the same radius.)
For example, in $d = 4$ dimensions, with $b = 3$, the algorithm runs in $O^*(n^{11/7})$ time.
5 Further extensions
Other measures of expansion.
The RSP technique can easily be applied to other well-behaved measures of expansion of the balls. For example, consider, in three dimensions, the case where, for $r \ge 1$, each ball $B = B(c,\rho)$ is expanded to the ball $B^r = B(c, r\rho)$ (all radii are multiplied by $r$). Now $r$ is a critical value iff a pair $B_i^r$, $B_j^r$ of balls become externally tangent to each other, that is,
\[ \|c_i - c_j\| = r(\rho_i + \rho_j) . \]
Hence, in the selection procedure, we map each ball $B = B(c,\rho)$ to the point $(c,\rho) \in \mathbb{R}^4$, as before, and to the range
\[ \sigma_B = \{ (c',\rho') \in \mathbb{R}^4 \mid \|c - c'\| \le r(\rho + \rho') \} . \]
Since these ranges are also semi-algebraic regions of constant complexity with four degrees of freedom, the same previously used offline semi-algebraic range searching machinery (on a different kind of ranges) can be applied here too, with the same asymptotic running time bound. The remainder of the procedure is carried out as before. Any other expansion rule can also be handled in this manner, as long as the corresponding ranges are semi-algebraic of constant complexity (with four degrees of freedom).
The same observations hold in any higher dimension $d > 3$.
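As with the additive rule, the critical value of a pair under the multiplicative rule is explicit; a minimal sketch (function name ours):

```python
import math

def multiplicative_critical_value(c_i, rho_i, c_j, rho_j):
    """Scaling factor r at which B(c_i, r*rho_i) and B(c_j, r*rho_j) become
    externally tangent: the r solving ||c_i - c_j|| = r*(rho_i + rho_j)."""
    return math.dist(c_i, c_j) / (rho_i + rho_j)
```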
References
- [1] A. Karim Abu-Affash, Paz Carmi, Matthew J. Katz, and Michael Segal. The Euclidean bottleneck Steiner path problem and other applications of (α, β)-pair decomposition. Discret. Comput. Geom., 51(1):1–23, 2014. doi:10.1007/S00454-013-9550-9.
- [2] Pankaj K. Agarwal. Simplex range searching and its variants: A review. In Journey through Discrete Mathematics: A Tribute to Jiří Matoušek, pages 1–30. Springer Verlag, Berlin-Heidelberg, 2017.
- [3] Pankaj K. Agarwal, Boris Aronov, Esther Ezra, Matthew J. Katz, and Micha Sharir. Intersection queries for flat semi-algebraic objects in three dimensions and related problems. ACM Trans. Algorithms, 21(3):25:1–25:59, 2025. doi:10.1145/3721290.
- [4] Pankaj K. Agarwal, Haim Kaplan, Matthew J. Katz, and Micha Sharir. Segment proximity graphs and nearest neighbor queries amid disjoint segments. In 32nd Annual European Symposium on Algorithms, ESA 2024, volume 308 of LIPIcs, pages 7:1–7:20. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.7.
- [5] Pankaj K. Agarwal, Matthew J. Katz, and Micha Sharir. On reverse shortest paths in geometric proximity graphs. Comput. Geom., 117:102053, 2024. doi:10.1016/J.COMGEO.2023.102053.
- [6] Pankaj K. Agarwal and Jiří Matoušek. Dynamic half-space range reporting and its applications. Algorithmica, 13(4):325–345, 1995. doi:10.1007/BF01293483.
- [7] Rinat Ben Avraham, Omrit Filtser, Haim Kaplan, Matthew J. Katz, and Micha Sharir. The discrete and semicontinuous Fréchet distance with shortcuts via approximate distance counting and selection. ACM Trans. Algorithms, 11(4):29:1–29:29, 2015. doi:10.1145/2700222.
- [8] Sergio Cabello and Miha Jejcic. Shortest paths in intersection graphs of unit disks. Comput. Geom., 48(4):360–367, 2015. doi:10.1016/J.COMGEO.2014.12.003.
- [9] Timothy M. Chan and Zhengcheng Huang. Faster algorithms for reverse shortest path in unit-disk graphs and related geometric optimization problems: Improving the shrink-and-bifurcate technique. In 41st International Symposium on Computational Geometry, SoCG 2025, volume 332 of LIPIcs, pages 32:1–32:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2025. doi:10.4230/LIPICS.SOCG.2025.32.
- [10] Timothy M. Chan and Dimitrios Skrepetos. All-pairs shortest paths in unit-disk graphs in slightly subquadratic time. In 27th International Symposium on Algorithms and Computation, ISAAC 2016, volume 64 of LIPIcs, pages 24:1–24:13. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.ISAAC.2016.24.
- [11] Haim Kaplan, Matthew J. Katz, Rachel Saban, and Micha Sharir. The unweighted and weighted reverse shortest path problem for disk graphs. In 31st Annual European Symposium on Algorithms, ESA 2023, volume 274 of LIPIcs, pages 67:1–67:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.ESA.2023.67.
- [12] Haim Kaplan, Wolfgang Mulzer, Liam Roditty, Paul Seiferth, and Micha Sharir. Dynamic planar Voronoi diagrams for general distance functions and their algorithmic applications. Discret. Comput. Geom., 64(3):838–904, 2020. doi:10.1007/S00454-020-00243-7.
- [13] Matthew J. Katz, Rachel Saban, and Micha Sharir. Near-linear algorithms for visibility graphs over a 1.5-dimensional terrain. In 32nd Annual European Symposium on Algorithms, ESA 2024, volume 308 of LIPIcs, pages 77:1–77:17. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ESA.2024.77.
- [14] Katharina Klost. An algorithmic framework for the single source shortest path problem with applications to disk graphs. Comput. Geom., 111:101979, 2023. doi:10.1016/J.COMGEO.2022.101979.
- [15] Jiří Matoušek. Reporting points in halfspaces. Comput. Geom., 2:169–186, 1992. doi:10.1016/0925-7721(92)90006-E.
- [16] Haitao Wang and Yiming Zhao. Reverse shortest path problem for unit-disk graphs. J. Comput. Geom., 14(1):14–47, 2023. doi:10.20382/JOCG.V14I1A2.
