Dynamic Maximum Depth of Geometric Objects
Abstract
Given a set of geometric objects in the plane (rectangles, squares, disks, etc.), its maximum depth (or geometric clique) is the largest number of objects with a common intersection. In this paper, we present data structures for dynamically maintaining the maximum depth under insertions and deletions of geometric objects, with sublinear update time. We achieve the following results:
-
a -approximate dynamic maximum-depth data structure for (axis-parallel) rectangles with amortized update time, for any fixed . In particular, when , this gives an exact data structure for rectangles with amortized update time, almost matching the best known bound for the offline version of the problem.
-
a -approximate dynamic maximum-depth data structure for disks with amortized update time, for any constant . Having exact data structures for disks with sublinear update time is unlikely, since the static maximum-depth problem for disks is 3SUM-hard and thus does not admit subquadratic-time algorithms.
Keywords and phrases:
dynamic algorithms, maximum depth
2012 ACM Subject Classification:
Theory of computation → Computational geometry; Theory of computation → Design and analysis of algorithms
Editors:
Oswin Aichholzer and Haitao Wang
1 Introduction
We consider the problem of dynamically maintaining a set of geometric objects (rectangles, squares, disks, etc.) under insertions and deletions so that at any instant we can find its maximum depth. The maximum depth of is the largest depth of any point in the plane, where the depth of a point , denoted , is defined as the number of objects in containing . The maximum depth of the set , written as , is equivalent to its maximum geometric clique, namely, the largest subset of objects in with a nonempty common intersection. It is easy to see that for axis-parallel rectangles and squares in the plane, geometric cliques correspond to graph-theoretic cliques in the intersection graph, because the Helly number of axis-parallel rectangles is two. This equivalence, however, does not hold for general geometric objects (even convex ones), where the size of the maximum geometric clique can be smaller than that of the maximum clique in the intersection graph. Nevertheless, in some cases the maximum depth is closely related to the maximum clique size of the intersection graph. For example, it is well known that for disks, the maximum depth is at least a constant fraction of the size of the maximum clique [8].
The problem of computing a maximum geometric clique for simple planar shapes such as rectangles, squares, or disks (and their higher-dimensional counterparts) is well-studied in computational geometry, and several efficient algorithms are known [14, 2, 9, 12]. The primary focus of our work is to dynamically maintain the maximum depth as well as a point that realizes this depth, i.e., . To the best of our knowledge, the fully dynamic version of the maximum-depth problem has not been studied before, except for the case of intervals on the line, where it can easily be solved optimally with update time [14].
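As a warm-up, the one-dimensional case can be illustrated with a static sweep over interval endpoints (a sketch only; the dynamic structure of [14] maintains the same quantity under updates with logarithmic cost):

```python
def max_depth_intervals(intervals):
    """Maximum depth of a set of closed intervals [l, r] via a sweep:
    +1 at each left endpoint, -1 at each right endpoint; the maximum
    depth is the maximum prefix sum of the sorted event sequence."""
    events = []
    for l, r in intervals:
        events.append((l, 0, +1))   # key 0: opens sort before closes at equal x
        events.append((r, 1, -1))
    events.sort()
    depth = best = 0
    best_x = None
    for x, _, delta in events:
        depth += delta
        if depth > best:
            best, best_x = depth, x
    return best, best_x
```

For example, the intervals [0, 2], [1, 3], [2, 4] all contain the point 2, and the sweep reports depth 3 there.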
The maximum-depth problem in two dimensions provides an interesting setting for dynamic data structures: unlike NP-hard problems such as set cover, hitting set, or independent set for rectangles, which have been the focus of recent work on dynamic data structures [13, 10, 1, 7, 11, 6], the static maximum-depth problem for rectangles is polynomial-time solvable [14]. Yet, in attempting to design a dynamic data structure for the maximum depth, we face challenges similar to those in set cover and related problems, and sometimes have to settle for approximation to achieve fast update time. For example, for the maximum depth of (axis-parallel) rectangles, it is very difficult to achieve an update time for , since the best known static algorithm requires time [9]. Similarly, for the maximum depth of disks, a sublinear update time is unlikely even in the insertion-only setting (if we want exact depth), because the static problem is known to be 3SUM-hard [3] and thus, conjecturally, does not admit a (truly) subquadratic-time algorithm.
Another challenge for dynamic maximum depth is the non-decomposability of the problem. Specifically, even if we can partition the set of geometric objects as , the value of cannot be computed from and . This non-decomposability makes many standard techniques in data structure design inapplicable to the maximum-depth problem.
1.1 Our results
In this paper, we study the dynamic maximum-depth problem for (axis-parallel) rectangles and disks, designing exact and approximate dynamic maximum-depth data structures with sublinear update time. Formally, a dynamic maximum-depth data structure maintains a set of geometric objects in (or more generally ) under insertions and deletions, and can report a point satisfying together with the value of .
Our main result for rectangles is the following theorem.
Theorem 1.
For every , there exists a -approximate dynamic maximum-depth data structure for axis-parallel rectangles with amortized update time.
When , the above theorem gives us an exact data structure for the dynamic maximum depth of rectangles, with update time. Up to logarithmic factors, this bound matches the update time of the best known offline dynamic data structure for the problem [9], which is . Indeed, update time may be the best one can expect, even in the insertion-only version, because it is a longstanding open problem whether the static maximum-depth problem for rectangles can be solved in time. The data structure in Theorem 1 can also be used to dynamically maintain the maximum clique of a rectangle intersection graph with the same update time, because the maximum clique size of a rectangle intersection graph equals the maximum depth of the rectangles.
Our main result for disks is the following theorem.
Theorem 2.
For any constant , there exists a -approximate dynamic maximum-depth data structure for disks with amortized update time.
The static maximum-depth problem for disks is known to be 3SUM-hard [3] with a conjectured lower bound of . It is, therefore, unlikely that an exact dynamic maximum-depth data structure for disks with (truly) sublinear update time exists, even in the insertion-only setting. Theorem 2 also gives a data structure for dynamically maintaining a -approximation of the maximum clique size of a disk graph, with the same update time, because the maximum clique size of a disk graph is at most a constant times the maximum depth of the disks [8]. For convenience, we call such a data structure a dynamic maximum-clique data structure for disk graphs.
Corollary 3.
For any constant , there exists a -approximate dynamic maximum-clique data structure for disk graphs with amortized update time.
The key idea underlying the data structure of Theorem 2 is to reduce the original problem to dynamic discrete maximum depth (for disks), by introducing a multiplicative error. In the discrete version of the problem, besides a dynamic set of disks, we also have a dynamic set of points in , and what we want to maintain is . This kind of reduction works for not only disks, but also general fat objects (where the multiplicative error depends on the fatness of the objects). We believe that this idea is of independent interest and may find applications in other problems as well.
1.2 Related Work
Dynamic data structures have been a focus of research in computational geometry from its very beginning, and there exists a vast literature. Since our work pertains to dynamic geometric optimization, we limit our review to the papers that have considered closely related problems. In recent years, a number of papers have addressed the problem of dynamically maintaining quantities such as the minimum geometric set cover, minimum geometric hitting set, or maximum geometric independent set, for simple geometric shapes such as rectangles, squares, disks, or their higher-dimensional counterparts. In particular, Agarwal et al. [1] were the first to present sublinear-time data structures for maintaining approximate minimum geometric set covers (and hitting sets) of intervals in one dimension and unit squares in two dimensions. The update time bounds were improved, and the results generalized to rectangles as well as weighted objects, by Chan et al. [10, 11]. Henzinger et al. [13] consider the problem of approximately maintaining the maximum independent set of -dimensional rectangles. Bhore and Chan [6] present dynamic data structures for a number of other geometric optimization problems, such as minimum piercing sets, minimum vertex cover, and maximum independent set of rectangles. Yildiz et al. [16] give a data structure to maintain Klee’s measure dynamically in a discrete setting.
2 Preliminaries
In this section, we introduce the basic notions used throughout the paper.
Depth.
Given a set of geometric shapes in , the depth of a point with respect to is the number of objects in that contain . Formally, . The maximum depth of the set is denoted by . We often use to denote a point that maximizes the depth with respect to the set , i.e., , and to denote a maximum geometric clique for , i.e., .
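These definitions can be made concrete with a brute-force sketch for axis-parallel rectangles, given here as hypothetical tuples (x1, y1, x2, y2); it uses the standard fact that some deepest point can be chosen with the x-coordinate of a left edge and the y-coordinate of a bottom edge.

```python
from itertools import product

def depth(point, rects):
    """depth(p, S): the number of rectangles (x1, y1, x2, y2) containing p."""
    px, py = point
    return sum(x1 <= px <= x2 and y1 <= py <= y2 for x1, y1, x2, y2 in rects)

def max_depth(rects):
    """O(n^3) brute force over candidate points: some deepest point has the
    x-coordinate of a left edge and the y-coordinate of a bottom edge."""
    xs = [r[0] for r in rects]
    ys = [r[1] for r in rects]
    return max(((depth((x, y), rects), (x, y)) for x, y in product(xs, ys)),
               default=(0, None))
```

This returns both the maximum depth and a point realizing it, matching the pair (depth value, witness point) that the dynamic structures in this paper maintain.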
Disk and fat object.
For a disk , and denote its center and radius, respectively. This notation also applies to balls. In this paper, fat objects, which generalize both rectangles and disks, are defined as follows. Let be a convex object in . Let be the largest ball contained in . We say that is -fat, for some parameter , if , the ball that is concentric with and has radius , contains . We call the center of this ball the center of the object and its radius the in-radius of the object. (There are different definitions of fat objects in different papers; one can easily show that our definition is equivalent to the others.)
3 Dynamic maximum-depth for rectangles
In this section, we consider the dynamic maximum-depth problem for rectangles. We first describe a data structure for maintaining the exact maximum depth with amortized update time and preprocessing time. This proves the base case of Theorem 1, namely, . We then inductively prove the theorem for all .
Let be a dynamic set of rectangles in , and denote by the initial size of . Our data structure applies the standard reconstruction technique. Specifically, we reconstruct the data structure periodically: the -th reconstruction is performed after handling operations following the -th reconstruction, where is the size of at the time of the -th reconstruction. Thus, during the -th period, i.e., the period between the -th reconstruction and the -th reconstruction, the size of is always . Since our data structure will have preprocessing time, the time cost of the -th reconstruction is , which can be amortized over the operations in the -th period, i.e., per operation. Because of the reconstruction, we only need to consider the first period, i.e., the first operations.
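The reconstruction technique can be sketched as a generic wrapper; the rebuild threshold used below (half the size at the last rebuild) and the `build`/`insert`/`delete` interface are illustrative assumptions, not the paper's exact parameters.

```python
class Rebuilder:
    """Periodic-reconstruction wrapper: rebuild the wrapped structure from
    scratch after n_i // 2 operations, where n_i is the size at the last
    rebuild, so the size stays within a constant factor during each period
    and the rebuild cost amortizes over the period's operations."""
    def __init__(self, build, items):
        self.build = build          # build(items) -> fresh data structure
        self.items = set(items)
        self._rebuild()

    def _rebuild(self):
        self.ds = self.build(self.items)
        self.budget = max(1, len(self.items) // 2)  # ops until next rebuild

    def update(self, item, insert=True):
        (self.items.add if insert else self.items.discard)(item)
        if insert:
            self.ds.insert(item)
        else:
            self.ds.delete(item)
        self.budget -= 1
        if self.budget == 0:
            self._rebuild()
```

Any dynamic structure with `insert`/`delete` can be wrapped this way; queries go directly to `self.ds`.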
Let be a parameter to be determined later. To construct our data structure, we first partition the plane into horizontal slabs using horizontal lines such that each slab contains (in its interior) at most corners of rectangles in . For each slab , we construct a dynamic data structure that can maintain a point that maximizes , as well as the value of , under insertions and deletions on . We call these slab data structures for convenience. The design of the slab data structures will be presented shortly. During the updates, whenever the number of rectangle corners contained in the interior of a slab reaches , we split into two horizontal slabs and , each of which contains corners, and replace with two new slab data structures and . In this way, we guarantee that the number of rectangle corners contained in each slab is always . We observe that slab splitting can happen at most times (in the first period).
Fact 4.
During the first operations, the number of slab splittings is at most .
Proof.
Let be the set of all slabs that occur when processing the first operations. Consider the insertions among these operations. For each inserted rectangle , we charge it to the (at most) two slabs containing the corners of at the time is inserted. Observe that a slab eventually splits only if it gets charged at least times. Indeed, contains rectangle corners when it is created and contains rectangle corners when it splits. Thus, from the time is created to the time it splits, at least rectangles with corners in are inserted, which implies that gets charged at least times. Since there can be at most inserted rectangles, the total number of charges is bounded by , which implies that the number of slab splittings is bounded by .
Clearly, using the information stored in the slab data structures, we can compute in time a point satisfying , as well as the value of . More precisely, we just consider the points stored in the slab data structures (and their depths with respect to ) and define as the one that maximizes its depth with respect to . Therefore, the remaining task now is to design efficient slab data structures.
The slab data structures.
Consider a slab . We want to maintain a point that maximizes , as well as the value of . Clearly, we can ignore the rectangles in that are disjoint from . Thus, in the following we assume that every rectangle in intersects . We classify the rectangles in into two types: long rectangles and short rectangles. A rectangle is long if its four corners are all outside and is short if at least one corner is inside (the interior of) . Let and be the sets of long and short rectangles in , respectively.
To maintain the maximum depth of inside , our main idea is to reduce the problem to maintaining the maximum depth of weighted intervals. Specifically, we shall construct a set of weighted intervals from such that the maximum depth of inside is equal to the maximum (weighted) depth of . The construction is done as follows. For each long rectangle , we include in the interval with weight . For the set of short rectangles, we first define a depth function as where is the vertical line with equation . One can easily observe that is a piecewise-constant function with pieces, since the value of only changes at the -coordinates of the corners of the short rectangles. Therefore, we can partition the real line into intervals such that is a constant function when restricted to each . Also, for each , there is a -coordinate such that and for any and . Define a function as if . We include in and set the weight of each to be for an arbitrary . (See Figure 1 for an illustration.)
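The construction of the weighted intervals for the short rectangles can be sketched as follows. This is a brute-force version for clarity: it recomputes the maximum 1D depth on each piece from scratch (the paper obtains all pieces with a single sweep), and it glosses over the degenerate pieces at the event x-coordinates themselves.

```python
def max_depth_1d(intervals):
    """Maximum depth of closed 1D intervals (sweep; opens before closes)."""
    events = sorted([(l, 0) for l, r in intervals] + [(r, 1) for l, r in intervals])
    depth = best = 0
    for _, kind in events:
        depth += 1 if kind == 0 else -1
        best = max(best, depth)
    return best

def short_to_weighted_intervals(short_rects):
    """For short rectangles (x1, y1, x2, y2), compute the pieces of
    f(x) = maximum depth of the rectangles on the vertical line at x,
    as weighted intervals (x_lo, x_hi, weight): between two consecutive
    corner x-coordinates the set of crossing rectangles is fixed, so f is
    constant there and equals the max 1D depth of the active y-intervals."""
    xs = sorted({x for x1, _, x2, _ in short_rects for x in (x1, x2)})
    pieces = []
    for lo, hi in zip(xs, xs[1:]):
        mid = (lo + hi) / 2   # any x strictly inside the piece works
        active = [(y1, y2) for x1, y1, x2, y2 in short_rects if x1 <= mid <= x2]
        pieces.append((lo, hi, max_depth_1d(active)))
    return pieces
```

Each emitted triple corresponds to one weighted interval added to the set handed to the slab's interval structure.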
We observe the following simple fact.
Fact 5.
If satisfies that , the point satisfies .
Proof.
We first prove that . By definition, . Note that a rectangle in contains iff its corresponding weight- interval in contains . So is contained in exactly weight- intervals corresponding to the rectangles in . Meanwhile, the total weight of the intervals containing which correspond to the rectangles in is equal to . It follows that . As , we have by the definition of . Thus, .
Since , it suffices to show that . Consider a point . Again, a rectangle in contains iff its corresponding weight- interval in contains . Also, by the definition of . Therefore,
Since , we have .
For our slab data structure , we can simply use the following result of Imai and Asano [14], which is a byproduct of their (static) algorithm for the maximum-depth problem for weighted rectangles.
Lemma 6 ([14]).
There exists a dynamic maximum-depth data structure for weighted intervals with update time and preprocessing time.
Given , we first compute the set as stated above (as well as the functions and ), and then build , the interval data structure of the above lemma, on . The structure maintains satisfying , as well as the value of . The point we maintain is , which satisfies by Fact 5. Also, the value of is equal to by Fact 5, and we can maintain the latter from .
Consider an insertion/deletion on , and let be the inserted/deleted rectangle. If is a long rectangle, we just need to insert/delete its corresponding weight- interval on and update . If is a short rectangle, the situation is more complicated and what we do is the following. First, we delete from all intervals corresponding to the rectangles in the old . Then we construct the intervals corresponding to the rectangles in the new (we shall show how to do this efficiently) and insert them to . Also, we re-compute the functions and corresponding to the new . For both deletions and insertions on , we need to update accordingly.
Next, we analyze the preprocessing time and update time for our slab data structure. To this end, we first show that one can compute the weighted intervals in corresponding to the short rectangles, as well as the functions and , in time. Indeed, we can directly apply the classical (static) rectangle maximum-depth algorithm [14] on . The algorithm sweeps a vertical line from left to right, and maintains the maximum depth of on . Therefore, it exactly computes the function and , together with their piece intervals . The running time of the algorithm is . The intervals in corresponding to the long rectangles can be constructed directly. Overall, the time for constructing is , i.e., . By Lemma 6, can also be built in time as . So the preprocessing time of our slab data structure is . For the update time, we need to distinguish long rectangles and short rectangles. Inserting/deleting a long rectangle corresponds to an insertion/deletion on and thus can be done in time. For the insertion/deletion of a short rectangle, we need to delete intervals from , re-compute the intervals corresponding to and the functions and , and insert intervals to . The insertions/deletions on can be done in time by Lemma 6, and the re-computation can be done in time as discussed above. As the number of rectangle corners in a slab is always , we have . Thus, the update time for a short rectangle is .
Lemma 7.
One can construct each slab data structure in time such that for insertions/deletions of long (resp., short) rectangles, the data structure can be updated in time (resp., time).
Overall time complexity.
Using Lemma 7, we can now analyze the overall preprocessing time and update time of our data structure. Partitioning the plane into slabs can be done in time by sorting the corners of the rectangles in . Since there are slabs and each slab data structure can be constructed in time, the preprocessing time is . To analyze the update time, let be the inserted/deleted rectangle. Note that there are at most two slabs in which is a short rectangle. So among the updates of the slab data structures, there can be at most two updates for short rectangles and updates for long rectangles. By Lemma 7, these updates take time. Also, rectangle insertions might cause slab splittings and for each slab splitting we need time to build the new slab data structures. By Fact 4, the time cost for slab splitting during the first operations is , and the amortized time cost is then . Therefore, the overall (amortized) update time of our data structure is . To balance the two terms, we set , yielding update time. The preprocessing time is then . Based on the above discussion, we have the following conclusion, which is the base case of Theorem 1.
Lemma 8.
There exists a dynamic maximum-depth data structure for axis-parallel rectangles with amortized update time, which can be built in time.
Approximate data structures.
Now we are ready to prove Theorem 1 for a general . As mentioned above, we apply induction on . Lemma 8 gives us the base case . Suppose there exists a -approximate dynamic maximum-depth data structure for rectangles with (amortized) update time and preprocessing time. We want to show the existence of a -approximate data structure with update time and preprocessing time.
As before, our data structure partitions the plane into slabs. The only difference occurs in the design of the slab data structures. Consider a slab . We want our slab data structure for to maintain a point satisfying that ; this guarantees that our overall data structure achieves the approximation ratio . Again, let and denote the set of long and short rectangles in with respect to , respectively. Unlike in our exact data structure, we now handle and separately. For long rectangles, we maintain a point that maximizes . This can be done using exactly the same idea as before: map each long rectangle to the interval , and build on these intervals. For short rectangles, we maintain a point satisfying the condition that . To this end, we build a -approximate dynamic maximum-depth data structure on , which can give us the desired point . By our induction hypothesis, there exists such a data structure with update time and preprocessing time. Define if and if . We observe the following.
Lemma 9.
.
Proof.
Let be a point that maximizes . By definition, we have . We consider two cases: and . If , we have
On the other hand, if , we must have . In this case, we have
Therefore, we always have .
The data structure for maintaining can be constructed in time and updated in time, by Lemma 6. Furthermore, the data structure for maintaining can be constructed in time and updated in time. Since and , for each slab data structure, the preprocessing time is and the update time is for long rectangles and for short rectangles. This implies that our overall data structure has preprocessing time and update time . By setting , we have the desired -approximation data structure with amortized update time and preprocessing time.
Theorem 1 (restated). For every , there exists a -approximate dynamic maximum-depth data structure for axis-parallel rectangles with amortized update time.
4 Dynamic maximum-depth for disks
We now describe our dynamic maximum-depth data structure for disks. The main idea underlying our result is to reduce the maximum-depth problem to discrete maximum depth.
Suppose we want to design a -approximate maximum-depth data structure for disks, where is a constant. Without loss of generality, we assume that is sufficiently small, say, . Let be a sufficiently large integer such that . For a disk in with center and radius , we define
and call the points in the companion points of . For a set of disks, write . The set of companion points is the discrete point set for which our data structure dynamically maintains depth. The following fact is straightforward.
Fact 10.
Let be a disk and be a set of points in . There exists a circular sector of with angle such that .
Proof.
Suppose we randomly sample a circular sector of with angle , from the uniform distribution over the space of all such sectors. The probability that a specific point is contained in is if is the center of and is otherwise. Therefore, , which in turn implies that there exists a sector of with angle for which .
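The formula defining the companion points is elided in this version of the text. One construction consistent with the surrounding proofs (the center is a companion point, and every small axis-parallel square inside the disk contains one) is the center together with a grid of spacing radius/m clipped to the disk; this is an assumption for illustration, not necessarily the paper's exact definition.

```python
def companion_points(center, radius, m):
    """Hypothetical companion-point set of a disk: its center plus the
    points of a grid of spacing radius/m that lie inside the disk.
    The set has O(m^2) points, and every axis-parallel square of side
    at least radius/m contained in the disk contains one of them."""
    cx, cy = center
    step = radius / m
    pts = {(cx, cy)}
    for i in range(-m, m + 1):
        for j in range(-m, m + 1):
            x, y = cx + i * step, cy + j * step
            if (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2:
                pts.add((x, y))
    return pts
```

For the unit disk with m = 4 this yields the 49 grid points inside the circle of radius 4 (in grid units), including the center.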
Lemma 11.
Let be a set of congruent disks such that . Then for every , there exists such that .
Proof.
Without loss of generality, we assume that the disks in are unit disks. Let and be the unit disk centered at . Define as the set of the centers of the disks in . Since , we have . By Fact 10, there is a circular sector of with angle such that . Suppose and are the two vertices of on the boundary of . Define as the midpoint of the segment connecting and , and as the disk centered at with radius . We claim that for any point . Consider an arbitrary point . Since the angle of is , the distance between and is . Also, the distance between and (resp., ) is . Recall that by our assumption, which implies that . Therefore, the distance between and is at most , and the distance between and (resp., ) is at most . Therefore, the unit disk centered at contains the points , which further implies that it contains . Consequently, the unit disks in whose centers lie in contain . We have , which then implies that . See Figure 2 for an illustration.
Now it suffices to show for every . Note that the distance between and the center of is at most , since and . Furthermore, contains an axis-parallel square centered at with side-length , as its radius is . Since , this square then contains a point in , by our construction of . Therefore, .
Corollary 12.
For any set of disks in , the following holds:
Proof.
Let such that , and . Suppose is the smallest disk in . For each disk , we can find a smaller disk with the same radius as such that . Let be the set of all these disks. Then is a set of congruent disks containing , and for any . The former implies that . Furthermore, we have . By Lemma 11, there exists such that . This proves the corollary.
Using this corollary, we can reduce the task of maintaining a -approximation of the maximum depth of disks to the dynamic discrete maximum-depth problem for disks. Let be a dynamic set of disks. Corollary 12 implies that , where . Therefore, it suffices to consider the dynamic discrete maximum-depth instance with point set and disk set .
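The reduction can be checked against a brute-force evaluation of the discrete maximum depth (the dynamic structure of Section 4.1 maintains the same quantity in sublinear time); disks are given here as hypothetical pairs (center, radius).

```python
def discrete_max_depth(points, disks):
    """Discrete maximum depth: the maximum, over p in points, of the
    number of disks containing p, together with a witness point."""
    def depth(p):
        px, py = p
        return sum((px - cx) ** 2 + (py - cy) ** 2 <= r * r
                   for (cx, cy), r in disks)
    return max(((depth(p), p) for p in points), default=(0, None))
```

Running it on the companion points of the current disk set gives the approximate maximum depth guaranteed by Corollary 12.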
4.1 Dynamic discrete maximum depth for disks
In this section, we design a dynamic discrete maximum-depth data structure for disks with amortized update time. Via the standard lifting argument, the dynamic discrete maximum-depth problem for disks can be reduced to the same problem for 3D halfspaces. To construct the discrete maximum-depth data structure, we first need a dynamic depth-query data structure for halfspaces. Specifically, such a data structure maintains a dynamic set of halfspaces and can return, for a query point , the value of .
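The lifting argument mentioned above is the standard map (x, y) → (x, y, x² + y²), under which containment in a disk becomes containment in a lower halfspace: a disk with center (a, b) and radius r contains (x, y) iff x² + y² ≤ 2ax + 2by + (r² − a² − b²).

```python
def lift_point(p):
    """Standard lifting map: (x, y) -> (x, y, x^2 + y^2)."""
    x, y = p
    return (x, y, x * x + y * y)

def disk_to_halfspace(center, radius):
    """Return plane coefficients (A, B, C) such that the disk contains
    (x, y) iff the lifted point satisfies z <= A*x + B*y + C."""
    a, b = center
    return (2 * a, 2 * b, radius * radius - a * a - b * b)

def in_disk_via_lifting(p, center, radius):
    x, y, z = lift_point(p)
    A, B, C = disk_to_halfspace(center, radius)
    return z <= A * x + B * y + C
```

Thus each point of the discrete instance is lifted once, and each disk insertion/deletion becomes a halfspace insertion/deletion in 3D.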
Lemma 13.
There exists a dynamic depth-query data structure for 3D halfspaces with query time and amortized update time. The data structure can be built in time.
Proof.
By duality between points and halfspaces, the depth-query problem is equivalent to the range-counting problem: store a dynamic set of points in such that for a query halfspace , one can return the value . A dynamic 3D halfspace range-counting data structure with query time and amortized update time can be directly obtained using the dynamic partition tree given by Matoušek [15].
Let be the dynamic set of points and be the dynamic set of halfspaces, and we want to maintain . We build the dynamic depth-query data structure of Lemma 13 for . We store the points in in a standard partition tree [15]. The tree has nodes and can be built in time. The depth of is , and the leaves of correspond one-to-one to the points in . Each node of corresponds to a subset , which consists of the points at the leaves of the subtree rooted at . Furthermore, given any halfspace in , one can find nodes of in time such that and the sets are pairwise disjoint; these nodes are called the canonical nodes for . For each node , we maintain two fields, and . The fields satisfy the following invariant: for any point , the sum of over all nodes satisfying is equal to . The field is defined as
where is the set of children of . By the invariant for the -fields, we see that the -field of the root of is just equal to .
Halfspace insertion/deletion.
For a halfspace insertion/deletion, we do not need to change the partition tree itself; instead, we need to update the -fields and -fields. Let be a halfspace to be inserted. We find the canonical nodes for . Then we increase the fields by . We also update the -fields of all ancestors of . Note that if a node is not an ancestor of any of , then its -field is not changed. The deletion of a halfspace can be handled in the same way, with the only difference that we decrease the fields by . The update time is . Besides, we also need to update the depth-query data structure for , which takes time as well.
Point insertion/deletion.
To delete a point from , we can directly remove the leaf of corresponding to , and update the -fields of all its ancestors. This takes time. To handle point insertions, we apply the classical logarithmic method [4]. This method, instead of using one partition tree, stores in partition trees, each of which corresponds to a subset of whose size is a power of . The only problem we need to solve here is how to efficiently merge two partition trees storing subsets of with size into a larger partition tree , together with its -fields and -fields. The tree itself can be built in time. To compute the -fields and -fields for the nodes of , we do the following. Let be the subset that stores. We use the depth-query data structure to obtain for all . For each leaf of , we set , where is the point corresponding to . We set for all internal nodes of . Clearly, the invariant for the -fields holds. Using the -fields, we then compute the -field of every node of in bottom-up order. The most time-consuming part of this procedure is the depth queries, which take time. As such, the amortized update time for a point insertion is .
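The c-/s-field mechanics can be illustrated in a one-dimensional toy setting, with a segment tree over a fixed point set standing in for the partition tree and intervals standing in for halfspaces. This is an illustration of the invariant only, not the actual 3D structure.

```python
from bisect import bisect_left, bisect_right

class DepthTree:
    """1D toy analog of the c-/s-field scheme: a segment tree over a fixed,
    sorted point set. Inserting/deleting an interval adds +1/-1 to the
    c-field of its canonical nodes; s[v] = c[v] + max(s over children),
    so the root's s-field equals the maximum discrete depth."""
    def __init__(self, points):
        self.pts = sorted(points)
        self.n = 1
        while self.n < max(1, len(self.pts)):
            self.n *= 2
        self.c = [0] * (2 * self.n)   # canonical-node counters
        self.s = [0] * (2 * self.n)   # subtree maxima

    def _update(self, l, r, w, v, nl, nr):
        if r < nl or nr < l:
            return
        if l <= nl and nr <= r:
            self.c[v] += w            # v is a canonical node of [l, r]
        else:
            mid = (nl + nr) // 2
            self._update(l, r, w, 2 * v, nl, mid)
            self._update(l, r, w, 2 * v + 1, mid + 1, nr)
        kids = 0 if v >= self.n else max(self.s[2 * v], self.s[2 * v + 1])
        self.s[v] = self.c[v] + kids  # restore the s-field invariant

    def update_interval(self, lo, hi, w=+1):
        """Insert (w=+1) or delete (w=-1) the closed interval [lo, hi]."""
        l = bisect_left(self.pts, lo)
        r = bisect_right(self.pts, hi) - 1
        if l <= r:
            self._update(l, r, w, 1, 0, self.n - 1)

    def max_depth(self):
        return self.s[1]
```

The depth of any single point is recovered by summing c over the root-to-leaf path, exactly the invariant stated above for the partition tree.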
Lemma 14.
There exists a dynamic discrete maximum-depth data structure for disks with amortized update time for insertions/deletions of points and disks; here is the total number of points and disks in the instance.
Theorem 2 (restated). For any constant , there exists a -approximate dynamic maximum-depth data structure for disks with amortized update time.
Proof.
We just build the data structure of Lemma 14 on the point set and the disk set . The data structure maintains a point such that and thus by Corollary 12. Each insertion (resp., deletion) on corresponds to insertions (resp., deletions) on and one insertion (resp., deletion) on . Therefore, the update time is in total.
We remark that our data structure for disks can be generalized to balls in higher dimensions. Indeed, the proof of Fact 10 holds for balls in any fixed dimension. To solve dynamic discrete maximum depth for balls in , one can reduce to the problem for halfspaces in and use partition trees in . The update time becomes .
4.2 Generalization to fat objects
In this section, we discuss how our reduction from the dynamic maximum-depth problem to its discrete counterpart works for general fat objects (where the approximation ratio depends on the fatness of the objects).
For a -fat object in with center and in-radius , we define its companion points as
The key observation of the reduction for general fat objects is the following fact.
Fact 15.
A -dimensional ball with center and radius can be covered by balls with centers in and radii .
Corollary 16.
For any set of -fat objects in , the following holds:
Proof.
Let such that , and . Suppose is the smallest object in , i.e., has the smallest in-radius, which we assume to be without loss of generality. For each object , we can find a unit ball as follows: the unit ball centered on the line segment between and that is tangent to all tangent hyperplanes of at . (See Figure 3 for an illustration.)
Since contains , the distance between and is at most . Moreover, it is clear that the distance between and is at most . Therefore, the distance between and is at most , which means that the centers of these unit balls lie within a ball with center and radius . By Fact 15, we can cover this ball by unit balls with centers in . Therefore, by the pigeonhole principle, we have
which completes the proof.
As a concluding remark, we note that our reduction for general fat objects relies on the following question: given a ball of radius in , how many unit balls are needed to cover it? We employ a rather general method in the construction of to illustrate the main idea, and the constant in Corollary 16 can be improved for specific values of and . For example, in the case of disks ( and ), as proved in [5], only unit disks are needed, instead of the bound in Corollary 16. This covering problem for particular values of and may be of independent interest.
References
- [1] Pankaj K. Agarwal, Hsien-Chih Chang, Subhash Suri, Allen Xiao, and Jie Xue. Dynamic geometric set cover and hitting set. ACM Trans. Algorithms, 18(4):40:1–40:37, 2022. doi:10.1145/3551639.
- [2] Helmut Alt and Ludmila Scharf. Computing the depth of an arrangement of axis-aligned rectangles in parallel. In 26th European Workshop on Computational Geometry, pages 33–36, 2010.
- [3] Boris Aronov and Sariel Har-Peled. On approximating the depth and related problems. SIAM J. Comput., 38(3):899–921, 2008. doi:10.1137/060669474.
- [4] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: static-to-dynamic transformation. J. Algorithms, 1(4):301–358, 1980. doi:10.1016/0196-6774(80)90015-2.
- [5] Károly Bezdek. Körök optimális fedései (Optimal Covering of Circles). PhD thesis, Eötvös Loránd University, 1979.
- [6] Sujoy Bhore and Timothy M. Chan. Fast static and dynamic approximation algorithms for geometric optimization problems: Piercing, independent set, vertex cover, and matching. CoRR, abs/2407.20659, 2024. doi:10.48550/arXiv.2407.20659.
- [7] Sujoy Bhore, Guangping Li, and Martin Nöllenburg. An algorithmic study of fully dynamic independent sets for map labeling. ACM J. Exp. Algorithmics, 27:1.8:1–1.8:36, 2022. doi:10.1145/3514240.
- [8] Paz Carmi, Matthew J. Katz, and Pat Morin. Stabbing pairwise intersecting disks by four points. Discret. Comput. Geom., 70(4):1751–1784, 2023. doi:10.1007/S00454-023-00567-0.
- [9] Timothy M. Chan. A (slightly) faster algorithm for Klee’s measure problem. Comput. Geom., 43(3):243–250, 2010. doi:10.1016/J.COMGEO.2009.01.007.
- [10] Timothy M. Chan and Qizheng He. More dynamic data structures for geometric set cover with sublinear update time. J. Comput. Geom., 13(2):90–114, 2021. doi:10.20382/JOCG.V13I2A6.
- [11] Timothy M. Chan, Qizheng He, Subhash Suri, and Jie Xue. Dynamic geometric set cover, revisited. In Joseph (Seffi) Naor and Niv Buchbinder, editors, Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual Conference / Alexandria, VA, USA, January 9 - 12, 2022, pages 3496–3528. SIAM, 2022. doi:10.1137/1.9781611977073.139.
- [12] Bernard Marie Chazelle and D. T. Lee. On a circle placement problem. Computing, 36(1-2):1–16, 1986. doi:10.1007/BF02238188.
- [13] Monika Henzinger, Stefan Neumann, and Andreas Wiese. Dynamic approximate maximum independent set of intervals, hypercubes and hyperrectangles. In Sergio Cabello and Danny Z. Chen, editors, 36th International Symposium on Computational Geometry, SoCG 2020, June 23-26, 2020, Zürich, Switzerland, volume 164 of LIPIcs, pages 51:1–51:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.SOCG.2020.51.
- [14] Hiroshi Imai and Takao Asano. Finding the connected components and a maximum clique of an intersection graph of rectangles in the plane. Journal of Algorithms, 4(4):310–323, December 1983. doi:10.1016/0196-6774(83)90012-3.
- [15] Jirí Matoušek. Efficient partition trees. Discret. Comput. Geom., 8:315–334, 1992. doi:10.1007/BF02293051.
- [16] Hakan Yildiz, John Hershberger, and Subhash Suri. A discrete and dynamic version of Klee’s measure problem. In Proceedings of the 23rd Annual Canadian Conference on Computational Geometry, Toronto, Ontario, Canada, August 10-12, 2011, 2011. URL: http://www.cccg.ca/proceedings/2011/papers/paper28.pdf.