
Dynamic Maximum Depth of Geometric Objects

Subhash Suri, Department of Computer Science, University of California, Santa Barbara, CA, USA; Jie Xue, New York University Shanghai, China; Xiongxin Yang, New York University Shanghai, China; Jiumu Zhu, New York University Shanghai, China
Abstract

Given a set of geometric objects in the plane (rectangles, squares, disks, etc.), its maximum depth (or maximum geometric clique) is the largest number of objects with a common intersection. In this paper, we present data structures for dynamically maintaining the maximum depth under insertions and deletions of geometric objects, with sublinear update time. We achieve the following results:

  • a (1/k)-approximate dynamic maximum-depth data structure for (axis-parallel) rectangles with O(n^{1/(k+1)} log n) amortized update time, for any fixed k ∈ ℤ⁺. In particular, when k = 1, this gives an exact data structure for rectangles with O(√n · log n) amortized update time, almost matching the best known bound for the offline version of the problem.

  • a (1/2 − ε)-approximate dynamic maximum-depth data structure for disks with n^{2/3} log^{O(1)} n amortized update time, for any constant ε > 0. Having exact data structures for disks with sublinear update time is unlikely, since the static maximum-depth problem for disks is 3SUM-hard and thus, conditionally, does not admit subquadratic-time algorithms.

Keywords and phrases:
dynamic algorithms, maximum depth
Copyright and License:
© Subhash Suri, Jie Xue, Xiongxin Yang, and Jiumu Zhu; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Computational geometry; Theory of computation → Design and analysis of algorithms
Editors:
Oswin Aichholzer and Haitao Wang

1 Introduction

We consider the problem of dynamically maintaining a set 𝒪 of geometric objects (rectangles, squares, disks, etc.) under insertions and deletions so that at any instant we can find its maximum depth. The maximum depth of 𝒪 is the largest depth of any point in the plane, where the depth of a point p, denoted 𝖽𝖾𝗉(p,𝒪), is defined as the number of objects in 𝒪 containing p. The maximum depth of the set 𝒪, written as 𝖽𝖾𝗉(𝒪) = max_{p∈ℝ²} 𝖽𝖾𝗉(p,𝒪), is equivalent to its maximum geometric clique, namely, the largest subset of objects in 𝒪 with a nonempty common intersection. It is easy to see that for axis-parallel rectangles and squares in the plane the geometric cliques correspond to graph-theoretic cliques in the intersection graph of 𝒪, because the Helly number of axis-parallel rectangles is two. This equivalence, however, does not hold for general geometric objects (even if the objects are convex), where the size of the maximum geometric clique can be smaller than the size of the maximum clique in the intersection graph. Nevertheless, in some cases, the maximum depth is closely related to the maximum clique size in the intersection graph. For example, it is well-known that for disks, the maximum depth is at least 1/4 of the size of the maximum clique [8].

The problem of computing a maximum geometric clique for simple planar shapes such as rectangles, squares, or disks (and their counterparts in d dimensions) is well-studied in computational geometry, and several efficient algorithms are known [14, 2, 9, 12]. The primary focus of our work is to dynamically maintain the maximum depth, as well as a point p that realizes this depth, i.e., 𝖽𝖾𝗉(p,𝒪) = 𝖽𝖾𝗉(𝒪). To the best of our knowledge, the fully dynamic version of the maximum-depth problem has not been studied before, except for the case of 1-dimensional intervals on the line, where it can be easily solved optimally with O(log n) update time [14].

The maximum-depth problem in two dimensions provides an interesting setting for dynamic data structures: unlike NP-complete problems such as set cover, hitting set, or independent set for rectangles, which have been the focus of recent work on dynamic data structures [13, 10, 1, 7, 11, 6], the static maximum-depth problem for rectangles is polynomial-time solvable [14]. Yet, in attempting to design a dynamic data structure for the maximum depth, we face challenges similar to those in set cover etc., and sometimes have to settle for approximation to achieve fast update time. For example, for the maximum depth of (axis-parallel) rectangles, it is very difficult to achieve an O(n^c) update time for c < 1/2, since the best known algorithm for the offline version of the problem requires n^{1.5}/log^{O(1)} n total time [9]. Similarly, for the maximum depth of disks, a sublinear update time is unlikely even in the insertion-only setting (if we want exact depth), because the static problem is known to be 3SUM-hard [3] and thus does not admit a (truly) subquadratic-time algorithm, assuming the 3SUM conjecture.

Another challenge for dynamic maximum depth is the non-decomposability of the problem. Specifically, even if we partition the set 𝒪 of geometric objects as 𝒪 = 𝒪₁ ∪ 𝒪₂, the value of 𝖽𝖾𝗉(𝒪) cannot be computed from 𝖽𝖾𝗉(𝒪₁) and 𝖽𝖾𝗉(𝒪₂). This non-decomposability makes many standard techniques in data structure design inapplicable to the maximum-depth problem.

1.1 Our results

In this paper, we study the dynamic maximum-depth problem for (axis-parallel) rectangles and disks, designing exact and approximate dynamic maximum-depth data structures with sublinear update time. Formally, a dynamic maximum-depth data structure maintains a set 𝒪 of geometric objects in ℝ² (or more generally ℝ^d) under insertions and deletions, and can report a point p ∈ ℝ² satisfying 𝖽𝖾𝗉(p,𝒪) = 𝖽𝖾𝗉(𝒪) together with the value of 𝖽𝖾𝗉(𝒪).

Our main result for rectangles is the following theorem.

Theorem 1.

For every k ∈ ℤ⁺, there exists a (1/k)-approximate dynamic maximum-depth data structure for axis-parallel rectangles with O(n^{1/(k+1)} log n) amortized update time.

When k = 1, the above theorem gives us an exact data structure for dynamic maximum depth of rectangles, with O(√n · log n) update time. Up to logarithmic factors, this bound matches the update time of the best known offline dynamic data structure for the problem [9], which is O(√(n/log n) · log^{3/2} log n). Indeed, Õ(√n) update time may be the best one can expect, even in the insertion-only version, because it is a longstanding open problem whether the offline version of the maximum-depth problem for rectangles can be solved in O(n^{1.5−ε}) total time. The data structure in Theorem 1 can also be used to dynamically maintain the maximum clique of a rectangle intersection graph with the same update time, because the maximum clique size of a rectangle intersection graph equals the maximum depth of the rectangles.

Our main result for disks is the following theorem.

Theorem 2.

For any constant ε > 0, there exists a (1/2 − ε)-approximate dynamic maximum-depth data structure for disks with n^{2/3} log^{O(1)} n amortized update time.

The static maximum-depth problem for disks is known to be 3SUM-hard [3] with a conjectured lower bound of Ω̃(n²). It is, therefore, unlikely that an exact dynamic maximum-depth data structure for disks with (truly) sublinear update time exists, even in the insertion-only setting. Theorem 2 also gives a data structure for dynamically maintaining a (1/8 − ε)-approximation of the maximum clique size of a disk graph, with the same update time, because the maximum clique size of a disk graph is at most 4 times the maximum depth of the disks [8]. For convenience, we call such a data structure a dynamic maximum-clique data structure for disk graphs.

Corollary 3.

For any constant ε > 0, there exists a (1/8 − ε)-approximate dynamic maximum-clique data structure for disk graphs with n^{2/3} log^{O(1)} n amortized update time.

The key idea underlying the data structure of Theorem 2 is to reduce the original problem to dynamic discrete maximum depth (for disks), at the cost of a multiplicative error of roughly 1/2. In the discrete version of the problem, besides a dynamic set 𝒟 of disks, we also have a dynamic set S of points in ℝ², and what we want to maintain is max_{p∈S} 𝖽𝖾𝗉(p,𝒟). This kind of reduction works not only for disks, but also for general fat objects (where the multiplicative error depends on the fatness of the objects). We believe that this idea is of independent interest and may find applications in other problems as well.

1.2 Related Work

Dynamic data structures have been a focus of research in computational geometry from its very beginning, and there exists a vast literature. Since our work pertains to dynamic geometric optimization, we limit our review to the narrow set of papers that have considered related problems. In recent years, a number of papers have addressed the problem of dynamically maintaining quantities such as the minimum geometric set cover, minimum geometric hitting set, maximum geometric independent set, etc., for simple geometric shapes such as rectangles, squares, disks, or their d-dimensional counterparts. In particular, Agarwal et al. [1] were the first to present sublinear-time data structures for maintaining approximate minimum geometric set covers (and hitting sets) of intervals in one dimension and unit squares in two dimensions. The update time bounds were improved, and the results generalized to rectangles as well as weighted objects, by Chan et al. [10, 11]. Henzinger et al. [13] considered the problem of approximately maintaining the maximum independent set of d-dimensional rectangles. Bhore and Chan [6] presented dynamic data structures for a number of other geometric optimization problems, such as minimum piercing set, minimum vertex cover, and maximum independent set of rectangles. Yildiz et al. [16] gave a data structure to maintain Klee's measure dynamically in a discrete setting.

2 Preliminaries

In this section, we introduce the basic notions used throughout the paper.

Depth.

Given a set 𝒪 of geometric shapes in ℝ^d, the depth of a point p with respect to 𝒪 is the number of objects in 𝒪 that contain p. Formally, 𝖽𝖾𝗉(p,𝒪) = |{O ∈ 𝒪 : p ∈ O}|. The maximum depth of the set 𝒪 is denoted by 𝖽𝖾𝗉(𝒪) = max_{p∈ℝ^d} 𝖽𝖾𝗉(p,𝒪). We often use p* to denote a point that maximizes the depth with respect to 𝒪, i.e., 𝖽𝖾𝗉(p*,𝒪) = 𝖽𝖾𝗉(𝒪), and 𝒪* to denote a maximum geometric clique for 𝒪, i.e., 𝒪* = {O ∈ 𝒪 : p* ∈ O}.
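To make the definitions concrete, the following is a small sketch of ours (the function names and the brute-force approach are illustrative, not the paper's algorithm) that computes 𝖽𝖾𝗉(p,𝒪) and 𝖽𝖾𝗉(𝒪) for closed axis-parallel rectangles. It uses the standard observation that the maximum depth is attained at a point whose x-coordinate is some rectangle's left edge and whose y-coordinate is some bottom edge:

```python
from itertools import product

def depth(p, rects):
    """dep(p, O): the number of rectangles (x1, y1, x2, y2) containing p."""
    x, y = p
    return sum(x1 <= x <= x2 and y1 <= y <= y2 for x1, y1, x2, y2 in rects)

def max_depth(rects):
    """dep(O) for closed axis-parallel rectangles: the maximum is attained
    at a candidate grid point (left-edge x, bottom-edge y), so O(n^2)
    candidates suffice; this naive scan is O(n^3) overall."""
    xs = [x1 for x1, _, _, _ in rects]
    ys = [y1 for _, y1, _, _ in rects]
    return max(depth(p, rects) for p in product(xs, ys))

rects = [(0, 0, 4, 4), (2, 2, 6, 6), (3, 1, 5, 5), (10, 10, 11, 11)]
print(max_depth(rects))  # → 3 (the first three rectangles share the point (3, 2))
```

The efficient data structures developed below replace this cubic-time enumeration while maintaining the same quantity under updates.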

Disk and fat object.

For a disk D, 𝖼𝗍𝗋(D) and 𝗋𝖺𝖽(D) denote its center and radius, respectively. This notation also applies to balls. In this paper, the notion of a fat object, which generalizes both rectangles and disks, is defined as follows. Let O be a convex object in ℝ^d, and let O_in be the largest ball contained in O. We say O is α-fat, for some parameter α ≥ 1, if O_out, the ball that is concentric with O_in and has radius 𝗋𝖺𝖽(O_out) = α · 𝗋𝖺𝖽(O_in), contains O. We call the center of O_in the center of O and 𝗋𝖺𝖽(O_in) the in-radius of O. (There are different definitions of fat objects in different papers; one can easily show that our definition is equivalent to the others.)

3 Dynamic maximum-depth for rectangles

In this section, we consider the dynamic maximum-depth problem for rectangles. We first describe a data structure for maintaining the exact maximum depth with O(√n · log n) amortized update time and O(n^{1.5} log n) preprocessing time. This proves the base case of Theorem 1, namely, k = 1. We then inductively prove the theorem for all k ∈ ℤ⁺.

Let 𝒬 be a dynamic set of rectangles in ℝ², and denote by n the initial size of 𝒬. Our data structure applies the standard reconstruction technique. Specifically, we reconstruct the data structure periodically: the (i+1)-th reconstruction is done after handling n_i/2 operations following the i-th reconstruction, where n_i is the size of 𝒬 at the time of the i-th reconstruction. Thus, during the i-th period, i.e., the period between the i-th reconstruction and the (i+1)-th reconstruction, the size of 𝒬 is always Θ(n_i). Since our data structure has an O(n^{1.5} log n) preprocessing time, the time cost of the (i+1)-th reconstruction is O(n_i^{1.5} log n_i), which can be amortized over the n_i/2 operations in the i-th period, i.e., O(√n_i · log n_i) per operation. Because of the reconstruction, we only need to consider the first period, i.e., the first n/2 operations.

Let r be a parameter to be determined later. To construct our data structure, we first partition the plane into r horizontal slabs L_1, …, L_r using r − 1 horizontal lines such that each slab contains (in its interior) at most 4n/r corners of rectangles in 𝒬. For each slab L_i, we construct a dynamic data structure 𝒟(L_i) that maintains a point p_i ∈ L_i maximizing 𝖽𝖾𝗉(p_i, 𝒬), as well as the value of 𝖽𝖾𝗉(p_i, 𝒬), under insertions and deletions on 𝒬. We call 𝒟(L_1), …, 𝒟(L_r) slab data structures for convenience. The design of the slab data structures will be presented shortly. During the updates, for each i ∈ [r], whenever the number of rectangle corners contained in (the interior of) L_i reaches 8n/r, we split L_i into two horizontal slabs L_i′ and L_i″, each of which contains 4n/r corners, and replace 𝒟(L_i) with two new slab data structures 𝒟(L_i′) and 𝒟(L_i″). In this way, we guarantee that the number of rectangle corners contained in each slab is always O(n/r). We observe that slab splitting can happen at most O(r) times (in the first period).

Fact 4.

During the first t operations, the number of slab splittings is at most 2tr/n.

Proof.

Let ℒ be the set of all slabs that occur when processing the first t operations. Consider the insertions among the t operations. For each inserted rectangle Q, we charge it to the (at most) two slabs containing the corners of Q at the time Q is inserted. Observe that a slab L eventually splits only if it gets charged at least n/r times. Indeed, L contains 4n/r rectangle corners when it is created and contains 8n/r rectangle corners when it splits. Thus, from the time L is created to the time L splits, there are at least n/r inserted rectangles with corners in L, which implies that L gets charged at least n/r times. Since there are at most t inserted rectangles, the total number of charges is bounded by 2t, which implies that the number of slab splittings is bounded by 2tr/n.
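The charging argument can be sanity-checked with a toy simulation (an illustrative model of ours, not the paper's data structure): slabs are reduced to corner counters, each insertion deposits corners into at most two slabs, and a slab splits into two halves upon reaching 8n/r corners. The observed number of splits then stays within the 2tr/n bound of Fact 4:

```python
import random

def simulate(n, r, t, seed=0):
    """Toy model of Fact 4: each slab stores only its corner count. A slab
    starts with 4n/r corners and splits into two halves at 8n/r corners.
    Each of the t insertions adds corners to at most two slabs.
    Returns the number of splits performed."""
    random.seed(seed)
    cap = 8 * n // r
    slabs = [4 * n // r] * r                 # counts right after construction
    splits = 0
    for _ in range(t):
        for i in random.sample(range(len(slabs)), 2):  # <= 2 slabs get corners
            slabs[i] += 2
        while max(slabs) >= cap:
            j = max(range(len(slabs)), key=slabs.__getitem__)
            half = slabs.pop(j) // 2
            slabs += [half, half]
            splits += 1
    return splits

n, r, t = 1024, 32, 512
s = simulate(n, r, t)
print(s, "<=", 2 * t * r // n)  # observed splits vs. the 2tr/n bound
```

Since every split requires accumulating 4n/r fresh corners and each insertion contributes O(1) corners, the bound holds for any insertion sequence, not just random ones.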

Clearly, using the information stored in the slab data structures, we can compute in O(r) time a point p satisfying 𝖽𝖾𝗉(p,𝒬) = 𝖽𝖾𝗉(𝒬), as well as the value of 𝖽𝖾𝗉(𝒬). More precisely, we just consider the points p_1, …, p_r stored in the slab data structures (and their depths with respect to 𝒬) and define p as the one that maximizes its depth with respect to 𝒬. Therefore, the remaining task is to design efficient slab data structures.

The slab data structures.

Consider a slab L. We want to maintain a point p ∈ L that maximizes 𝖽𝖾𝗉(p,𝒬), as well as the value of 𝖽𝖾𝗉(p,𝒬). Clearly, we can ignore the rectangles in 𝒬 that are disjoint from L. Thus, in the following we assume that every rectangle in 𝒬 intersects L. We classify the rectangles in 𝒬 into two types: long rectangles and short rectangles. A rectangle is long if all four of its corners are outside L, and short if at least one corner is inside (the interior of) L. Let 𝒬𝗅𝗈𝗇𝗀 ⊆ 𝒬 and 𝒬𝗌𝗁𝗈𝗋𝗍 ⊆ 𝒬 be the sets of long and short rectangles in 𝒬, respectively.

To maintain the maximum depth of 𝒬 inside L, our main idea is to reduce the problem to maintaining the maximum depth of weighted intervals. Specifically, we shall construct a set ℐ of weighted intervals from 𝒬 such that the maximum depth of 𝒬 inside L is equal to the maximum (weighted) depth of ℐ. The construction is done as follows. For each long rectangle Q = [x⁻, x⁺] × [y⁻, y⁺] ∈ 𝒬𝗅𝗈𝗇𝗀, we include in ℐ the interval [x⁻, x⁺] with weight 1. For the set 𝒬𝗌𝗁𝗈𝗋𝗍 of short rectangles, we first define a depth function f: ℝ → ℕ as f(x₀) = max_{p ∈ ℓ_{x₀} ∩ L} 𝖽𝖾𝗉(p, 𝒬𝗌𝗁𝗈𝗋𝗍), where ℓ_{x₀} is the vertical line with equation x = x₀. One can easily observe that f is a piecewise-constant function with O(|𝒬𝗌𝗁𝗈𝗋𝗍|) pieces, since the value of f only changes at the x-coordinates of the corners of the short rectangles. Therefore, we can partition the real line into s = O(|𝒬𝗌𝗁𝗈𝗋𝗍|) intervals I_1, …, I_s such that f is a constant function when restricted to each I_i. Also, for each I_i, there is a y-coordinate y_i such that I_i × {y_i} ⊆ L and f(x) = 𝖽𝖾𝗉(p, 𝒬𝗌𝗁𝗈𝗋𝗍) for any x ∈ I_i and p ∈ I_i × {y_i}. Define a function g: ℝ → ℝ as g(x) = y_i if x ∈ I_i. We include I_1, …, I_s in ℐ and set the weight of each I_i to be f(x) for an arbitrary x ∈ I_i. (See Figure 1 for an illustration.)

Figure 1: Construction of the weighted intervals corresponding to 𝒬𝗌𝗁𝗈𝗋𝗍.

We observe the following simple fact.

Fact 5.

If x* satisfies 𝖽𝖾𝗉(x*, ℐ) = 𝖽𝖾𝗉(ℐ), then the point p* = (x*, g(x*)) ∈ L satisfies 𝖽𝖾𝗉(p*, 𝒬) = 𝖽𝖾𝗉(x*, ℐ) = max_{p∈L} 𝖽𝖾𝗉(p, 𝒬).

Proof.

We first prove that 𝖽𝖾𝗉(p*, 𝒬) = 𝖽𝖾𝗉(x*, ℐ). By definition, 𝖽𝖾𝗉(p*, 𝒬) = 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) + 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍). Note that a rectangle in 𝒬𝗅𝗈𝗇𝗀 contains p* iff its corresponding weight-1 interval in ℐ contains x*. So x* is contained in exactly 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) weight-1 intervals corresponding to the rectangles in 𝒬𝗅𝗈𝗇𝗀. Meanwhile, the total weight of the intervals containing x* which correspond to the rectangles in 𝒬𝗌𝗁𝗈𝗋𝗍 is equal to f(x*). It follows that 𝖽𝖾𝗉(x*, ℐ) = 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) + f(x*). As p* = (x*, g(x*)), we have 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍) = f(x*) by the definition of g. Thus, 𝖽𝖾𝗉(x*, ℐ) = 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) + 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍) = 𝖽𝖾𝗉(p*, 𝒬).

Since 𝖽𝖾𝗉(p*, 𝒬) ≤ max_{p∈L} 𝖽𝖾𝗉(p, 𝒬), it suffices to show that 𝖽𝖾𝗉(x*, ℐ) ≥ max_{p∈L} 𝖽𝖾𝗉(p, 𝒬). Consider a point p ∈ L, and let x(p) denote its x-coordinate. Again, a rectangle in 𝒬𝗅𝗈𝗇𝗀 contains p iff its corresponding weight-1 interval in ℐ contains x(p). Also, f(x(p)) ≥ 𝖽𝖾𝗉(p, 𝒬𝗌𝗁𝗈𝗋𝗍) by the definition of f. Therefore,

𝖽𝖾𝗉(x(p), ℐ) = 𝖽𝖾𝗉(p, 𝒬𝗅𝗈𝗇𝗀) + f(x(p)) ≥ 𝖽𝖾𝗉(p, 𝒬𝗅𝗈𝗇𝗀) + 𝖽𝖾𝗉(p, 𝒬𝗌𝗁𝗈𝗋𝗍) = 𝖽𝖾𝗉(p, 𝒬).

Since 𝖽𝖾𝗉(x*, ℐ) = 𝖽𝖾𝗉(ℐ), we have 𝖽𝖾𝗉(x*, ℐ) ≥ 𝖽𝖾𝗉(x(p), ℐ) ≥ 𝖽𝖾𝗉(p, 𝒬).

For our slab data structure 𝒟(L), we can simply use the following result of Imai and Asano [14], which is a byproduct of their (static) algorithm for the maximum-depth problem for weighted rectangles.

Lemma 6 ([14]).

There exists a dynamic maximum-depth data structure 𝒜 for weighted intervals with O(log n) update time and O(n log n) preprocessing time.

Given 𝒬, we first compute the set ℐ as stated above (as well as the functions f and g), and then build 𝒜(ℐ), i.e., the interval data structure of the above lemma on ℐ. 𝒜(ℐ) maintains x* satisfying 𝖽𝖾𝗉(x*, ℐ) = 𝖽𝖾𝗉(ℐ), as well as the value of 𝖽𝖾𝗉(x*, ℐ). The point we maintain is p* = (x*, g(x*)), which satisfies 𝖽𝖾𝗉(p*, 𝒬) = max_{p∈L} 𝖽𝖾𝗉(p, 𝒬) by Fact 5. Also, the value of 𝖽𝖾𝗉(p*, 𝒬) is equal to 𝖽𝖾𝗉(x*, ℐ) by Fact 5, and we can maintain the latter from 𝒜(ℐ).
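For intuition, here is a minimal sketch of a structure with the interface of Lemma 6, under a simplifying assumption of ours: all interval endpoints come from a fixed, known coordinate set (the structure of [14] needs no such assumption, and also reports the maximizing coordinate x*). Insertion and deletion become range-updates on a segment tree with lazy additive weights, and the root stores the maximum weighted depth:

```python
import bisect

class IntervalDepth:
    """Dynamic weighted-interval maximum depth over a FIXED endpoint
    universe: a segment tree over the elementary x-coordinates, where each
    node stores a lazy additive weight and the max depth in its range."""

    def __init__(self, coords):
        self.xs = sorted(set(coords))
        self.n = max(1, len(self.xs))
        self.add = [0] * (4 * self.n)    # pending additions on each node
        self.mx = [0] * (4 * self.n)     # max depth within the node's range

    def _update(self, node, lo, hi, l, r, w):
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.add[node] += w          # [lo, hi] fully covered: stay lazy
        else:
            mid = (lo + hi) // 2
            self._update(2 * node, lo, mid, l, r, w)
            self._update(2 * node + 1, mid + 1, hi, l, r, w)
        kids = 0 if lo == hi else max(self.mx[2 * node], self.mx[2 * node + 1])
        self.mx[node] = self.add[node] + kids

    def insert(self, x1, x2, w=1):
        l = bisect.bisect_left(self.xs, x1)
        r = bisect.bisect_right(self.xs, x2) - 1
        self._update(1, 0, self.n - 1, l, r, w)

    def delete(self, x1, x2, w=1):
        self.insert(x1, x2, -w)          # deletion = inserting weight -w

    def max_depth(self):
        return self.mx[1]

S = IntervalDepth([1, 2, 3, 5, 8])
S.insert(1, 5); S.insert(2, 8); S.insert(3, 5, w=2)
print(S.max_depth())  # → 4 (x = 3 lies in intervals of total weight 1+1+2)
```

Both insert and delete touch O(log n) nodes, matching the update bound of Lemma 6 up to the fixed-universe assumption.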

Consider an insertion/deletion on 𝒬, and let Q be the inserted/deleted rectangle. If Q is a long rectangle, we just need to insert/delete its corresponding weight-1 interval in ℐ and update 𝒜(ℐ). If Q is a short rectangle, the situation is more complicated, and we do the following. First, we delete from ℐ all intervals corresponding to the rectangles in the old 𝒬𝗌𝗁𝗈𝗋𝗍. Then we construct the intervals corresponding to the rectangles in the new 𝒬𝗌𝗁𝗈𝗋𝗍 (we shall show how to do this efficiently) and insert them into ℐ. Also, we re-compute the functions f and g corresponding to the new 𝒬𝗌𝗁𝗈𝗋𝗍. For both the deletions and the insertions on ℐ, we update 𝒜(ℐ) accordingly.

Next, we analyze the preprocessing and update time of our slab data structure. To this end, we first show that one can compute the weighted intervals in ℐ corresponding to the short rectangles, as well as the functions f and g, in O(|𝒬𝗌𝗁𝗈𝗋𝗍| log |𝒬𝗌𝗁𝗈𝗋𝗍|) time. Indeed, we can directly apply the classical (static) rectangle maximum-depth algorithm [14] to 𝒬𝗌𝗁𝗈𝗋𝗍. The algorithm sweeps a vertical line ℓ from left to right, and maintains the maximum depth of 𝒬𝗌𝗁𝗈𝗋𝗍 on ℓ. Therefore, it exactly computes the functions f and g, together with their piece intervals I_1, …, I_s. The running time of the algorithm is O(|𝒬𝗌𝗁𝗈𝗋𝗍| log |𝒬𝗌𝗁𝗈𝗋𝗍|). The intervals in ℐ corresponding to the long rectangles can be constructed directly. Overall, the time for constructing ℐ is O(|𝒬| log |𝒬|), i.e., O(n log n). By Lemma 6, 𝒜(ℐ) can also be built in O(n log n) time, as |ℐ| = O(|𝒬|). So the preprocessing time of our slab data structure is O(n log n). For the update time, we need to distinguish long rectangles from short rectangles. Inserting/deleting a long rectangle corresponds to an insertion/deletion on 𝒜(ℐ) and thus can be done in O(log n) time. For the insertion/deletion of a short rectangle, we need to delete O(|𝒬𝗌𝗁𝗈𝗋𝗍|) intervals from ℐ, re-compute the intervals corresponding to 𝒬𝗌𝗁𝗈𝗋𝗍 and the functions f and g, and insert O(|𝒬𝗌𝗁𝗈𝗋𝗍|) intervals into ℐ. The O(|𝒬𝗌𝗁𝗈𝗋𝗍|) insertions/deletions on 𝒜(ℐ) can be done in O(|𝒬𝗌𝗁𝗈𝗋𝗍| log n) time by Lemma 6, and the re-computation can be done in O(|𝒬𝗌𝗁𝗈𝗋𝗍| log |𝒬𝗌𝗁𝗈𝗋𝗍|) time as discussed above. As the number of rectangle corners in a slab is always O(n/r), we have |𝒬𝗌𝗁𝗈𝗋𝗍| = O(n/r). Thus, the update time for a short rectangle is O((n/r) log n).
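The static sweep just described can be sketched as follows. This simplified version (ours) recomputes the 1D maximum depth of the active y-intervals from scratch at every event, giving quadratic time overall, whereas the actual algorithm of [14] maintains it in a segment tree for O(n log n):

```python
def sweep_max_depth(rects):
    """Left-to-right sweep over rectangles (x1, y1, x2, y2): at each
    x-event, compute the maximum depth of the active y-intervals; in 1D
    the maximum is attained at some interval's bottom endpoint y1.
    Returns (best_depth, witness_point)."""
    events = sorted({x for x1, _, x2, _ in rects for x in (x1, x2)})
    best, best_pt = 0, None
    for x in events:
        active = [(y1, y2) for x1, y1, x2, y2 in rects if x1 <= x <= x2]
        for y in sorted({y1 for y1, _ in active}):
            d = sum(y1 <= y <= y2 for y1, y2 in active)
            if d > best:
                best, best_pt = d, (x, y)
    return best, best_pt

print(sweep_max_depth([(0, 0, 4, 4), (2, 2, 6, 6), (3, 1, 5, 5)]))  # → (3, (3, 2))
```

Recording the depth value between consecutive events would yield exactly the piecewise-constant function f (and a witness y-coordinate for g) used by the slab data structure.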

Lemma 7.

One can construct each slab data structure in O(n log n) time such that for insertions/deletions of long (resp., short) rectangles, the data structure can be updated in O(log n) time (resp., O((n/r) log n) time).

Overall time complexity.

Using Lemma 7, we can now analyze the overall preprocessing and update time of our data structure. Partitioning the plane into slabs can be done in O(n log n) time by sorting the corners of the rectangles in 𝒬. Since there are r slabs and each slab data structure can be constructed in O(n log n) time, the preprocessing time is O(rn log n). To analyze the update time, let Q be the inserted/deleted rectangle. Note that there are at most two slabs in which Q is a short rectangle. So among the updates of the slab data structures, there are at most two updates for short rectangles and O(r) updates for long rectangles. By Lemma 7, these updates take O((n/r) log n + r log n) time. Also, rectangle insertions might cause slab splittings, and for each slab splitting we need O(n log n) time to build the new slab data structures. By Fact 4, the time cost of slab splittings during the first t operations is O(tr log n), and the amortized time cost is then O(r log n). Therefore, the overall (amortized) update time of our data structure is O((n/r) log n + r log n). To balance the two terms, we set r = √n, yielding O(√n · log n) update time. The preprocessing time is then O(n^{1.5} log n). Based on the above discussion, we have the following conclusion, which is the base case k = 1 of Theorem 1.

Lemma 8.

There exists a dynamic maximum-depth data structure for axis-parallel rectangles with O(√n · log n) amortized update time, which can be built in O(n^{1.5} log n) time.

Approximate data structures.

Now we are able to prove Theorem 1 for general k. As mentioned above, we apply induction on k. Lemma 8 gives us the base case k = 1. Suppose there exists a (1/k)-approximate dynamic maximum-depth data structure for rectangles with O(n^{1/(k+1)} log n) (amortized) update time and O(n^{1+1/(k+1)} log n) preprocessing time. We want to show the existence of a (1/(k+1))-approximate data structure with O(n^{1/(k+2)} log n) update time and O(n^{1+1/(k+2)} log n) preprocessing time.

As before, our data structure partitions the plane into r slabs. The only difference lies in the design of the slab data structures. Consider a slab L. We want our slab data structure for L to maintain a point p ∈ L satisfying 𝖽𝖾𝗉(p, 𝒬) ≥ (1/(k+1)) · max_{p′∈L} 𝖽𝖾𝗉(p′, 𝒬); this guarantees that our overall data structure achieves the approximation ratio 1/(k+1). Again, let 𝒬𝗅𝗈𝗇𝗀 and 𝒬𝗌𝗁𝗈𝗋𝗍 denote the sets of long and short rectangles in 𝒬 with respect to L, respectively. Unlike in our exact data structure, we now handle 𝒬𝗅𝗈𝗇𝗀 and 𝒬𝗌𝗁𝗈𝗋𝗍 separately. For long rectangles, we maintain a point p_1 ∈ L that maximizes 𝖽𝖾𝗉(p_1, 𝒬𝗅𝗈𝗇𝗀). This can be done using exactly the same idea as before: map each long rectangle Q = [x⁻, x⁺] × [y⁻, y⁺] ∈ 𝒬𝗅𝗈𝗇𝗀 to the interval [x⁻, x⁺], and build 𝒜 on these intervals. For short rectangles, we maintain a point p_2 ∈ L satisfying 𝖽𝖾𝗉(p_2, 𝒬𝗌𝗁𝗈𝗋𝗍) ≥ (1/k) · max_{p∈L} 𝖽𝖾𝗉(p, 𝒬𝗌𝗁𝗈𝗋𝗍). To this end, we build a (1/k)-approximate dynamic maximum-depth data structure on 𝒬𝗌𝗁𝗈𝗋𝗍, which gives us the desired point p_2. By our induction hypothesis, there exists such a data structure with O(n^{1/(k+1)} log n) update time and O(n^{1+1/(k+1)} log n) preprocessing time. Define p = p_1 if 𝖽𝖾𝗉(p_1, 𝒬𝗅𝗈𝗇𝗀) ≥ 𝖽𝖾𝗉(p_2, 𝒬𝗌𝗁𝗈𝗋𝗍), and p = p_2 otherwise. We observe the following.

Lemma 9.

𝖽𝖾𝗉(p, 𝒬) ≥ (1/(k+1)) · max_{p′∈L} 𝖽𝖾𝗉(p′, 𝒬).

Proof.

Let p* ∈ L be a point that maximizes 𝖽𝖾𝗉(p*, 𝒬). By definition, we have 𝖽𝖾𝗉(p*, 𝒬) = 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) + 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍). We consider two cases: 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) ≥ (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬) and 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) < (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬). In the first case, note that 𝖽𝖾𝗉(p, 𝒬) ≥ 𝖽𝖾𝗉(p_1, 𝒬𝗅𝗈𝗇𝗀) by the choice of p, so we have

𝖽𝖾𝗉(p, 𝒬) ≥ 𝖽𝖾𝗉(p_1, 𝒬𝗅𝗈𝗇𝗀) ≥ 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) ≥ (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬).

On the other hand, if 𝖽𝖾𝗉(p*, 𝒬𝗅𝗈𝗇𝗀) < (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬), we must have 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍) > (k/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬). In this case, since 𝖽𝖾𝗉(p, 𝒬) ≥ 𝖽𝖾𝗉(p_2, 𝒬𝗌𝗁𝗈𝗋𝗍) by the choice of p, we have

𝖽𝖾𝗉(p, 𝒬) ≥ 𝖽𝖾𝗉(p_2, 𝒬𝗌𝗁𝗈𝗋𝗍) ≥ (1/k) · 𝖽𝖾𝗉(p*, 𝒬𝗌𝗁𝗈𝗋𝗍) > (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬).

Therefore, we always have 𝖽𝖾𝗉(p, 𝒬) ≥ (1/(k+1)) · 𝖽𝖾𝗉(p*, 𝒬).

The data structure for maintaining p_1 can be constructed in O(|𝒬𝗅𝗈𝗇𝗀| log |𝒬𝗅𝗈𝗇𝗀|) time and updated in O(log |𝒬𝗅𝗈𝗇𝗀|) time, by Lemma 6. Furthermore, the data structure for maintaining p_2 can be constructed in O(|𝒬𝗌𝗁𝗈𝗋𝗍|^{1+1/(k+1)} log |𝒬𝗌𝗁𝗈𝗋𝗍|) time and updated in O(|𝒬𝗌𝗁𝗈𝗋𝗍|^{1/(k+1)} log |𝒬𝗌𝗁𝗈𝗋𝗍|) time. Since |𝒬𝗅𝗈𝗇𝗀| = O(n) and |𝒬𝗌𝗁𝗈𝗋𝗍| = O(n/r), for each slab data structure, the preprocessing time is O(n log n + (n/r)^{1+1/(k+1)} log(n/r)) and the update time is O(log n) for long rectangles and O((n/r)^{1/(k+1)} log(n/r)) for short rectangles. This implies that our overall data structure has preprocessing time O(rn log n + r(n/r)^{1+1/(k+1)} log(n/r)) and update time O(r log n + (n/r)^{1/(k+1)} log(n/r)). By setting r = n^{1/(k+2)}, we obtain the desired (1/(k+1))-approximate data structure with O(n^{1/(k+2)} log n) amortized update time and O(n^{1+1/(k+2)} log n) preprocessing time.
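For concreteness, the choice r = n^{1/(k+2)} is the standard balancing of the two update-time terms:

```latex
r\log n=\left(\frac{n}{r}\right)^{\frac{1}{k+1}}\log n
\;\iff\; r^{1+\frac{1}{k+1}}=n^{\frac{1}{k+1}}
\;\iff\; r^{\frac{k+2}{k+1}}=n^{\frac{1}{k+1}}
\;\iff\; r=n^{\frac{1}{k+2}},
```

and with this r, both terms equal n^{1/(k+2)} log n, since (n/r)^{1/(k+1)} = (n^{(k+1)/(k+2)})^{1/(k+1)} = n^{1/(k+2)}. (The difference between log n and log(n/r) is absorbed in the O-notation.)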

This completes the proof of Theorem 1.

4 Dynamic maximum-depth for disks

We now describe our dynamic maximum-depth data structure for disks. The main idea underlying our result is to reduce the maximum-depth problem to discrete maximum depth.

Suppose we want to design a (1/2 − ε)-approximate maximum-depth data structure for disks, where ε > 0 is a constant. Without loss of generality, we assume that ε is sufficiently small, say, ε < 0.1. Let δ be a sufficiently large integer such that 1/δ ≤ 1 − cos(επ). For a disk D in ℝ² with center (x, y) and radius r, we define

Γ_δ(D) = {(x − 3r + i·r/δ, y − 3r + j·r/δ) ∈ ℝ² : i, j ∈ [6δ]},

and we call the points in Γ_δ(D) the companion points of D. For a set 𝒟 of disks, write Γ_δ(𝒟) = ⋃_{D∈𝒟} Γ_δ(D). The set of companion points is the discrete point set for which our data structure dynamically maintains the depth. The following fact is straightforward.

Fact 10.

Let D be a disk and S be a set of points in D. There exists a circular sector X of D with angle (1 − 2ε)π such that |S ∩ X| ≥ (1/2 − ε)|S|.

Proof.

Suppose we randomly sample a circular sector X of D with angle (1 − 2ε)π, from the uniform distribution over the space of all such sectors. The probability that a specific point p ∈ S is contained in X is 1 if p is the center of D, and is 1/2 − ε otherwise. Therefore, 𝔼[|S ∩ X|] ≥ (1/2 − ε)|S|, which in turn implies that there exists a sector X of D with angle (1 − 2ε)π for which |S ∩ X| ≥ (1/2 − ε)|S|.
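Fact 10 can also be checked numerically with the following sketch (the helper best_sector_fraction and the sample point set are ours): sliding a window of angular width (1 − 2ε)π over the sorted point angles enumerates all sectors that matter, and the best window always captures at least a (1/2 − ε) fraction of the points:

```python
import math

def best_sector_fraction(pts, center, eps):
    """Largest fraction of `pts` (all inside a disk around `center`)
    captured by a single circular sector of angle (1 - 2*eps)*pi."""
    theta = (1 - 2 * eps) * math.pi
    angles = sorted(math.atan2(y - center[1], x - center[0])
                    for x, y in pts if (x, y) != center)
    free = len(pts) - len(angles)        # the center lies in every sector
    m = len(angles)
    doubled = angles + [a + 2 * math.pi for a in angles]  # handle wraparound
    best = 0
    for i in range(m):                   # window [angle_i, angle_i + theta]
        j = i
        while j < i + m and doubled[j] <= doubled[i] + theta:
            j += 1
        best = max(best, j - i)
    return (best + free) / len(pts)

# 12 evenly spaced boundary points plus the center of the unit disk
pts = [(math.cos(2 * math.pi * k / 12), math.sin(2 * math.pi * k / 12))
       for k in range(12)] + [(0.0, 0.0)]
frac = best_sector_fraction(pts, (0.0, 0.0), 0.05)
print(frac >= 0.5 - 0.05)  # → True
```

Note that restricting the sliding window to start at a point angle is without loss of generality: rotating a sector until its boundary hits a point never decreases the count.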

Lemma 11.

Let 𝒟 be a set of congruent disks such that ⋂_{D∈𝒟} D ≠ ∅. Then for every D ∈ 𝒟, there exists p ∈ Γ_δ(D) such that 𝖽𝖾𝗉(p, 𝒟) ≥ (1/2 − ε)|𝒟|.

Proof.

Without loss of generality, we assume that the disks in 𝒟 are unit disks. Let p ∈ ⋂_{D∈𝒟} D, and let P be the unit disk centered at p. Define 𝖼𝗍𝗋(𝒟) as the set of the centers of the disks in 𝒟. Since p ∈ ⋂_{D∈𝒟} D, we have 𝖼𝗍𝗋(𝒟) ⊆ P. By Fact 10, there is a circular sector X of P with angle (1 − 2ε)π such that |𝖼𝗍𝗋(𝒟) ∩ X| ≥ (1/2 − ε)|𝖼𝗍𝗋(𝒟)| = (1/2 − ε)|𝒟|. Suppose u and v are the two vertices of X on the boundary of P. Define o as the midpoint of the segment connecting u and v, and D_o as the disk centered at o with radius 1 − cos(επ). We claim that 𝖽𝖾𝗉(p′, 𝒟) ≥ (1/2 − ε)|𝒟| for any point p′ ∈ D_o. Consider an arbitrary point p′ ∈ D_o. Since the angle of X is (1 − 2ε)π, the distance between o and p is sin(επ). Also, the distance between o and u (resp., v) is cos(επ). Recall that ε < 0.1 by our assumption, which implies sin(επ) < cos(επ). Therefore, the distance between p′ and p is at most sin(επ) + (1 − cos(επ)) < 1, and the distance between p′ and u (resp., v) is at most cos(επ) + (1 − cos(επ)) = 1. Therefore, the unit disk centered at p′ contains the points p, u, v, which further implies that it contains X. Consequently, every unit disk in 𝒟 whose center lies in X contains p′. We have |𝖼𝗍𝗋(𝒟) ∩ X| ≥ (1/2 − ε)|𝒟|, which then implies 𝖽𝖾𝗉(p′, 𝒟) ≥ (1/2 − ε)|𝒟|. See Figure 2 for an illustration.

Figure 2: Illustrating the proof of Lemma 11.

Now it suffices to show that Γ_δ(D) ∩ D_o ≠ ∅ for every D ∈ 𝒟. Note that the distance between o and the center of D is at most 2, since p ∈ D and o ∈ P. Furthermore, D_o contains an axis-parallel square centered at o with side-length 1 − cos(επ), as its radius is 1 − cos(επ). Since 1 − cos(επ) ≥ 1/δ, this square contains a point of Γ_δ(D), by our construction of Γ_δ(D). Therefore, Γ_δ(D) ∩ D_o ≠ ∅.
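The grid Γ_δ(D) itself is straightforward to generate; this small sketch (function name ours) enumerates the (6δ)² companion points of a disk:

```python
def companion_points(cx, cy, r, delta):
    """Gamma_delta(D) for a disk D with center (cx, cy) and radius r:
    a (6*delta) x (6*delta) grid of spacing r/delta, covering the square
    [cx - 3r, cx + 3r] x [cy - 3r, cy + 3r] (indices i, j in [6*delta])."""
    return [(cx - 3 * r + i * r / delta, cy - 3 * r + j * r / delta)
            for i in range(1, 6 * delta + 1)
            for j in range(1, 6 * delta + 1)]

pts = companion_points(0.0, 0.0, 1.0, 2)
print(len(pts))  # → 144, i.e., (6 * delta)**2 points per disk
```

Since δ depends only on ε, each disk contributes O(1) companion points, which is what makes the reduction to the discrete problem cheap.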

Corollary 12.

For any set 𝒟 of disks in ℝ², the following holds:

max_{p∈Γ_δ(𝒟)} 𝖽𝖾𝗉(p, 𝒟) ≥ (1/2 − ε) · 𝖽𝖾𝗉(𝒟).

Proof.

Let p* ∈ ℝ² be a point such that 𝖽𝖾𝗉(p*, 𝒟) = 𝖽𝖾𝗉(𝒟), and let 𝒟* = {D ∈ 𝒟 : p* ∈ D}. Suppose D_0 ∈ 𝒟* is the smallest disk in 𝒟*. For each disk D ∈ 𝒟*, we can find a smaller disk D′ ⊆ D with the same radius as D_0 such that p* ∈ D′. Let 𝒟′ be the set of all these disks. Then 𝒟′ is a set of congruent disks containing p*, and 𝖽𝖾𝗉(p, 𝒟′) ≤ 𝖽𝖾𝗉(p, 𝒟) for any p ∈ ℝ². The former implies that ⋂_{D′∈𝒟′} D′ ≠ ∅. Furthermore, we have D_0 ∈ 𝒟′. By Lemma 11, there exists p ∈ Γ_δ(D_0) such that 𝖽𝖾𝗉(p, 𝒟) ≥ 𝖽𝖾𝗉(p, 𝒟′) ≥ (1/2 − ε)|𝒟′| = (1/2 − ε)|𝒟*| = (1/2 − ε)𝖽𝖾𝗉(𝒟). This proves the corollary.

Using this corollary, we can reduce the task of maintaining a (1/2 − ε)-approximation of the maximum depth of disks to the dynamic discrete maximum-depth problem for disks. Let 𝒟 be a dynamic set of disks. Corollary 12 implies that max_{p∈S} 𝖽𝖾𝗉(p, 𝒟) ≥ (1/2 − ε) · 𝖽𝖾𝗉(𝒟), where S = Γ_δ(𝒟). Therefore, it suffices to consider the dynamic discrete maximum-depth instance with point set S and disk set 𝒟.

4.1 Dynamic discrete maximum depth for disks

In this section, we design a dynamic discrete maximum-depth data structure for disks with n^{2/3} log^{O(1)} n amortized update time. Via the standard lifting argument, the dynamic discrete maximum-depth problem for disks can be reduced to the same problem for 3D halfspaces. To construct the discrete maximum-depth data structure, we first need a dynamic depth-query data structure for halfspaces. Specifically, such a data structure maintains a dynamic set ℋ of halfspaces and can return, for a query point q ∈ ℝ³, the value of 𝖽𝖾𝗉(q, ℋ).

Lemma 13.

There exists a dynamic depth-query data structure for 3D halfspaces with n^{2/3} log^{O(1)} n query time and log^{O(1)} n amortized update time. The data structure can be built in n log^{O(1)} n time.

Proof.

By the duality between points and halfspaces, the depth-query problem is equivalent to the range-counting problem: store a dynamic set S of points in ℝ³ such that for a query halfspace H, one can return the value |S ∩ H|. A dynamic 3D halfspace range-counting data structure with n^{2/3} log^{O(1)} n query time and log^{O(1)} n amortized update time can be directly obtained using the dynamic partition tree given by Matoušek [15].

Let S be the dynamic set of points and ℋ be the dynamic set of halfspaces; we want to maintain max_{p∈S} 𝖽𝖾𝗉(p, ℋ). We build the dynamic depth-query data structure of Lemma 13 for ℋ. We store the points of S in a standard partition tree 𝒯 [15]. The tree 𝒯 has |S| log^{O(1)}|S| nodes and can be built in |S| log^{O(1)}|S| time. The depth of 𝒯 is log^{O(1)}|S|, and the leaves of 𝒯 are in one-to-one correspondence with the points of S. Each node v of 𝒯 corresponds to a subset S(v) ⊆ S, which consists of the points at the leaves of the subtree rooted at v. Furthermore, given any halfspace H in ℝ³, one can find k = |S|^{2/3} log^{O(1)}|S| nodes v_1, …, v_k of 𝒯 in |S|^{2/3} log^{O(1)}|S| time such that S ∩ H = ⋃_{i=1}^{k} S(v_i) and the sets S(v_1), …, S(v_k) are pairwise disjoint; these nodes are called the canonical nodes for H. For each node v, we maintain two fields, σ(v) and μ(v). The field σ(v) satisfies the following invariant: for any point p ∈ S, the sum of σ(v) over all nodes v satisfying p ∈ S(v) equals 𝖽𝖾𝗉(p, ℋ). The field μ(v) is defined as

μ(v) = σ(v) if v is a leaf, and μ(v) = σ(v) + max_{u∈𝖼𝗁(v)} μ(u) otherwise,

where 𝖼𝗁(v) is the set of children of v. By the invariant for the σ-fields, we see that the μ-field of the root of 𝒯 equals max_{p∈S} 𝖽𝖾𝗉(p, ℋ).
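The σ/μ bookkeeping is easiest to see in a 1D toy version (ours; the real structure uses a 3D partition tree and halfspaces): points sit at the leaves of a segment tree, a "halfspace" is a ray [a, +∞), inserting it adds 1 to the σ-fields of the ray's O(log n) canonical nodes, and μ(root) is exactly the discrete maximum depth:

```python
import bisect

class DiscreteDepth1D:
    """1D stand-in for the sigma/mu scheme: sorted points at the leaves
    of a segment tree; inserting the ray [a, +inf) adds +1 to sigma on
    its canonical nodes, and mu(v) = sigma(v) + max over children of mu,
    so mu(root) = max over points p of dep(p, rays)."""

    def __init__(self, points):
        self.xs = sorted(points)
        self.n = len(self.xs)
        self.sigma = [0] * (4 * self.n)
        self.mu = [0] * (4 * self.n)

    def _add(self, node, lo, hi, l, r, w):
        if r < lo or hi < l:
            return
        if l <= lo and hi <= r:
            self.sigma[node] += w        # canonical node for [l, r]
        else:
            mid = (lo + hi) // 2
            self._add(2 * node, lo, mid, l, r, w)
            self._add(2 * node + 1, mid + 1, hi, l, r, w)
        kids = 0 if lo == hi else max(self.mu[2 * node], self.mu[2 * node + 1])
        self.mu[node] = self.sigma[node] + kids   # refresh mu on the path

    def insert_ray(self, a, w=1):
        l = bisect.bisect_left(self.xs, a)       # leaves with x >= a
        if l < self.n:
            self._add(1, 0, self.n - 1, l, self.n - 1, w)

    def delete_ray(self, a):
        self.insert_ray(a, -1)

    def max_depth(self):
        return self.mu[1]

D = DiscreteDepth1D([1, 3, 5, 7])
D.insert_ray(0); D.insert_ray(4); D.insert_ray(6)
print(D.max_depth())  # → 3 (the point 7 lies in all three rays)
```

As in the paper, an update touches only the canonical nodes and their ancestors, so μ(root) is maintained at no asymptotic extra cost over finding the canonical nodes.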

Halfspace insertion/deletion.

For a halfspace insertion/deletion, we do not need to change the partition tree 𝒯 itself; instead, we update the σ-fields and μ-fields. Let H be a halfspace to be inserted. We find the canonical nodes v_1, …, v_k for H. Then we increase the fields σ(v_1), …, σ(v_k) by 1. We also update the μ-fields of all ancestors of v_1, …, v_k. Note that if a node is not an ancestor of any of v_1, …, v_k, then its μ-field does not change. The deletion of a halfspace is handled in the same way, except that we decrease the fields σ(v_1), …, σ(v_k) by 1. The update time is |S|^{2/3} log^{O(1)}|S| = n^{2/3} log^{O(1)} n. Besides, we also need to update the depth-query data structure for ℋ, which takes log^{O(1)} n amortized time by Lemma 13.

Point insertion/deletion.

To delete a point p from S, we can directly remove the leaf v of 𝒯 corresponding to p and update the μ-fields of all ancestors of v, which takes logO(1)n time. To handle point insertions, we apply the classical logarithmic method [4]. Instead of using one partition tree, this method stores S in O(log|S|) partition trees, each of which corresponds to a subset of S whose size is a power of 2. The only problem we need to solve here is how to efficiently merge two partition trees storing subsets of S of size 2i into a larger partition tree 𝒯′, together with its σ-fields and μ-fields. The tree 𝒯′ itself can be built in 2i·iO(1) time. To compute the σ-fields and μ-fields for the nodes of 𝒯′, we proceed as follows. Let S′⊆S be the subset that 𝒯′ stores. We use the depth-query data structure to obtain 𝖽𝖾𝗉(p,ℋ) for all p∈S′. For each leaf v of 𝒯′, we set σ(v)=𝖽𝖾𝗉(p,ℋ), where p∈S′ is the point corresponding to v, and we set σ(v)=0 for all internal nodes v of 𝒯′. Clearly, the invariant for the σ-fields holds. Using the σ-fields, we then compute the μ-field of every node of 𝒯′ in bottom-up order. The most time-consuming part of this procedure is the depth queries, which take |S′|·|ℋ|2/3logO(1)|ℋ|=2i·n2/3logO(1)n time. As such, the amortized update time for a point insertion is n2/3logO(1)n.
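The logarithmic method itself is generic. A minimal insertion-only sketch, with sorted tuples standing in for the rebuilt partition trees and a membership query standing in for the decomposable query, looks as follows; the class and method names are ours.

```python
from bisect import bisect_left

class LogMethod:
    """Bentley-Saxe logarithmic method: buckets behave like a binary counter,
    bucket i (if present) is a rebuilt static structure of size 2**i."""

    def __init__(self):
        self.buckets = []  # buckets[i] is None or a sorted tuple of size 2**i

    def insert(self, x):
        carry = [x]
        i = 0
        while True:
            if i == len(self.buckets):
                self.buckets.append(None)
            if self.buckets[i] is None:
                self.buckets[i] = tuple(sorted(carry))  # "rebuild" the structure
                return
            # merge the carry with bucket i and propagate, like a carry bit
            carry = carry + list(self.buckets[i])
            self.buckets[i] = None
            i += 1

    def contains(self, x):
        # decomposable query: OR the answers over all nonempty buckets
        for b in self.buckets:
            if b:
                j = bisect_left(b, x)
                if j < len(b) and b[j] == x:
                    return True
        return False
```

After n insertions, the nonempty buckets have sizes given by the binary representation of n (e.g. after 10 insertions, sizes 2 and 8), so each element participates in O(log n) rebuilds, giving the amortized bound.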

Lemma 14.

There exists a dynamic discrete maximum-depth data structure for disks with n2/3logO(1)n amortized update time for insertions/deletions of points and disks; here n is the total number of points and disks in the instance.

See 2

Proof.

We simply build the data structure of Lemma 14 on the point set S=Γδ(𝒟) and the disk set 𝒟. The data structure maintains a point p∗∈S such that 𝖽𝖾𝗉(p∗,𝒟)=maxp∈S𝖽𝖾𝗉(p,𝒟), and thus 𝖽𝖾𝗉(p∗,𝒟)≥(1/2−ε)·𝖽𝖾𝗉(𝒟) by Corollary 12. Each insertion (resp., deletion) of a disk in 𝒟 corresponds to O(1) insertions (resp., deletions) of points in S and one insertion (resp., deletion) of a disk in the data structure. Therefore, the update time is n2/3logO(1)n in total.

We remark that our data structure for disks can be generalized to balls in higher dimensions. Indeed, the proof of Fact 10 holds for balls in any fixed dimension. To solve the dynamic discrete maximum-depth problem for balls in ℝd, one can reduce to the problem for halfspaces in ℝd+1 and use partition trees in ℝd+1. The update time becomes nd/(d+1)logO(1)n.
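The reduction from balls to halfspaces one dimension up is the standard lifting map; for d=2 it sends a point (x,y) to (x,y,x²+y²) and turns disk containment into a halfspace test in ℝ3. A quick numerical check of this equivalence (our own sketch, not the paper's code):

```python
import random

def lift(x, y):
    # standard lifting map: (x, y) -> (x, y, x^2 + y^2)
    return (x, y, x * x + y * y)

def disk_to_halfspace(a, b, r):
    # (x-a)^2 + (y-b)^2 <= r^2  <=>  z - 2a*x - 2b*y <= r^2 - a^2 - b^2,
    # where z = x^2 + y^2; return coefficients (cx, cy, cz, bound) of c.q <= bound
    return (-2 * a, -2 * b, 1.0, r * r - a * a - b * b)

def in_halfspace(h, q):
    cx, cy, cz, bound = h
    return cx * q[0] + cy * q[1] + cz * q[2] <= bound

random.seed(0)
for _ in range(1000):
    x, y = random.uniform(-5, 5), random.uniform(-5, 5)
    a, b = random.uniform(-5, 5), random.uniform(-5, 5)
    r = random.uniform(0.1, 5.0)
    margin = (x - a) ** 2 + (y - b) ** 2 - r * r
    if abs(margin) > 1e-9:  # skip near-boundary cases to avoid float ties
        assert (margin <= 0) == in_halfspace(disk_to_halfspace(a, b, r), lift(x, y))
```

In this way, a depth query over disks becomes a halfspace depth query over the lifted points, which is exactly the setting of the data structure above.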

4.2 Generalization to fat objects

In this section, we discuss how our reduction from the dynamic maximum-depth problem to its discrete counterpart works for general fat objects (where the approximation ratio depends on the fatness of the objects).

For an α-fat object O in ℝd with center (x1,…,xd) and in-radius r, we define its companion points as

Γα(O) = {(x1±2i1r, …, xd±2idr) : i1,…,id∈[⌈2α⌉]}.

The key observation of the reduction for general fat objects is the following fact.

Fact 15.

A d-dimensional ball with center (x1,…,xd) and radius 2αr can be covered by balls of radius r centered at the points of Γα(O).
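For intuition, consider the case d=2, α=1 with in-radius r=1: a 4×4 grid of 16 unit disks centered at {±1/2, ±3/2}² covers the disk of radius 2, since every point of that disk is within distance √2/2 < 1 of some grid point. The grid below is our own choice of centers for this sanity check, not necessarily the exact set Γ1(O):

```python
import math

# 4x4 grid of candidate unit-disk centers: {-1.5, -0.5, 0.5, 1.5}^2
centers = [(i, j) for i in (-1.5, -0.5, 0.5, 1.5)
                  for j in (-1.5, -0.5, 0.5, 1.5)]

def covered(x, y):
    # is (x, y) inside some unit disk centered at a grid point?
    return any(math.hypot(x - cx, y - cy) <= 1.0 for (cx, cy) in centers)

# sample the disk of radius 2*alpha = 2 around the origin on a fine grid
steps = [k / 50.0 for k in range(-100, 101)]  # step 0.02 on [-2, 2]
assert all(covered(x, y) for x in steps for y in steps
           if x * x + y * y <= 4.0)
```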

Corollary 16.

For any set 𝒪 of α-fat objects in ℝd, the following holds:

maxp∈Γα(𝒪)𝖽𝖾𝗉(p,𝒪) ≥ (1/(2⌈2α⌉)d)·𝖽𝖾𝗉(𝒪).

Proof.

Let p∈ℝd be a point such that 𝖽𝖾𝗉(p,𝒪)=𝖽𝖾𝗉(𝒪), and let 𝒪′={O∈𝒪 : p∈O}. Suppose O0∈𝒪′ is the smallest object in 𝒪′, i.e., the one with the smallest in-radius, which we assume to be 1 without loss of generality. For each object O∈𝒪′, we can find a unit ball O′⊆O as follows: O′ is the unit ball centered on the line segment between p and 𝖼𝗍𝗋(O) that is tangent to all tangent hyperplanes of O at p. (See Figure 3 for an illustration.)

Figure 3: Illustrating the proof of Corollary 16 in ℝ2.

Since O0 contains p, the distance between p and 𝖼𝗍𝗋(O0) is at most α. Moreover, it is clear that the distance between p and 𝖼𝗍𝗋(O′) is at most α. Therefore, the distance between 𝖼𝗍𝗋(O′) and 𝖼𝗍𝗋(O0) is at most 2α, which means that the centers of these unit balls all lie within the ball of radius 2α centered at 𝖼𝗍𝗋(O0). By Fact 15, we can cover this ball by unit balls centered at the points of Γα(O0). Therefore, by the pigeonhole principle, we have

maxp∈Γα(𝒪)𝖽𝖾𝗉(p,𝒪) ≥ maxp∈Γα(O0)𝖽𝖾𝗉(p,𝒪) ≥ (1/|Γα(O0)|)·𝖽𝖾𝗉(𝒪) = (1/(2⌈2α⌉)d)·𝖽𝖾𝗉(𝒪),

which completes the proof.

As a concluding remark, we note that our reduction for general fat objects hinges on the following question: given a ball of radius 2α in ℝd, how many unit balls are needed to cover it? To illustrate the main idea, we employed a rather general method in the construction of Γα(O); the constant (2⌈2α⌉)d in Corollary 16 can be improved for specific values of α and d. For example, in the case of disks (d=2 and α=1), only 7 unit disks are needed, as proved in [5], instead of the 16 given by Corollary 16. This covering problem for particular values of α and d may be of independent interest.

References

  • [1] Pankaj K. Agarwal, Hsien-Chih Chang, Subhash Suri, Allen Xiao, and Jie Xue. Dynamic geometric set cover and hitting set. ACM Trans. Algorithms, 18(4):40:1–40:37, 2022. doi:10.1145/3551639.
  • [2] Helmut Alt and Ludmila Scharf. Computing the depth of an arrangement of axis-aligned rectangles in parallel. In 26th European Workshop on Computational Geometry, pages 33–36, 2010.
  • [3] Boris Aronov and Sariel Har-Peled. On approximating the depth and related problems. SIAM J. Comput., 38(3):899–921, 2008. doi:10.1137/060669474.
  • [4] Jon Louis Bentley and James B. Saxe. Decomposable searching problems I: static-to-dynamic transformation. J. Algorithms, 1(4):301–358, 1980. doi:10.1016/0196-6774(80)90015-2.
  • [5] Károly Bezdek. Körök optimális fedései (Optimal Covering of Circles). PhD thesis, Eötvös Lorand University, 1979.
  • [6] Sujoy Bhore and Timothy M. Chan. Fast static and dynamic approximation algorithms for geometric optimization problems: Piercing, independent set, vertex cover, and matching. CoRR, abs/2407.20659, 2024. doi:10.48550/arXiv.2407.20659.
  • [7] Sujoy Bhore, Guangping Li, and Martin Nöllenburg. An algorithmic study of fully dynamic independent sets for map labeling. ACM J. Exp. Algorithmics, 27:1.8:1–1.8:36, 2022. doi:10.1145/3514240.
  • [8] Paz Carmi, Matthew J. Katz, and Pat Morin. Stabbing pairwise intersecting disks by four points. Discret. Comput. Geom., 70(4):1751–1784, 2023. doi:10.1007/S00454-023-00567-0.
  • [9] Timothy M. Chan. A (slightly) faster algorithm for Klee’s measure problem. Comput. Geom., 43(3):243–250, 2010. doi:10.1016/J.COMGEO.2009.01.007.
  • [10] Timothy M. Chan and Qizheng He. More dynamic data structures for geometric set cover with sublinear update time. J. Comput. Geom., 13(2):90–114, 2021. doi:10.20382/JOCG.V13I2A6.
  • [11] Timothy M. Chan, Qizheng He, Subhash Suri, and Jie Xue. Dynamic geometric set cover, revisited. In Joseph (Seffi) Naor and Niv Buchbinder, editors, Proceedings of the 2022 ACM-SIAM Symposium on Discrete Algorithms, SODA 2022, Virtual Conference / Alexandria, VA, USA, January 9 - 12, 2022, pages 3496–3528. SIAM, 2022. doi:10.1137/1.9781611977073.139.
  • [12] Bernard Marie Chazelle and D. T. Lee. On a circle placement problem. Computing, 36(1-2):1–16, 1986. doi:10.1007/BF02238188.
  • [13] Monika Henzinger, Stefan Neumann, and Andreas Wiese. Dynamic approximate maximum independent set of intervals, hypercubes and hyperrectangles. In Sergio Cabello and Danny Z. Chen, editors, 36th International Symposium on Computational Geometry, SoCG 2020, June 23-26, 2020, Zürich, Switzerland, volume 164 of LIPIcs, pages 51:1–51:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2020. doi:10.4230/LIPICS.SOCG.2020.51.
  • [14] Hiroshi Imai and Takao Asano. Finding the connected components and a maximum clique of an intersection graph of rectangles in the plane. Journal of Algorithms, 4(4):310–323, December 1983. doi:10.1016/0196-6774(83)90012-3.
  • [15] Jirí Matoušek. Efficient partition trees. Discret. Comput. Geom., 8:315–334, 1992. doi:10.1007/BF02293051.
  • [16] Hakan Yildiz, John Hershberger, and Subhash Suri. A discrete and dynamic version of Klee’s measure problem. In Proceedings of the 23rd Annual Canadian Conference on Computational Geometry, Toronto, Ontario, Canada, August 10-12, 2011, 2011. URL: http://www.cccg.ca/proceedings/2011/papers/paper28.pdf.