Efficient Approximations for the Online Dispersion Problem



Introduction
The problem of assigning elements to locations in a given area comes up only too often in real life: where to seat the customers in a restaurant, where to put certain facilities in a city, where to build nuclear power stations in a country, etc. Different problems have different features and constraints, but one common feature that appears in many of them is not to locate the elements too close to each other: for people's privacy, for environmental safety, and/or for serving more users. Another feature is also common in many applications: that is, not to locate the elements too close to the boundary of the area. Indeed, for security reasons, important national industrial facilities in many countries are built at a safe distance away from the border. Such problems have been widely studied in computational geometry and facility location; see, e.g., [34,3,2,4]. In particular, in the dispersion problem defined by [2], there is a k-dimensional polytope P and an integer n, and the goal is to locate n points in P so as to maximize the minimum distance among them and from them to the boundary of P .
However, there is another important feature in all the scenarios mentioned above and many other real-world scenarios: the presence of elements is time-dependent and decisions need to be made along time, without knowing when the elements will come and go in the future. Indeed, it may be hard to move an element once it is located, making it infeasible for the decision maker to relocate all the present elements according to the optimal static solution when an arrival/departure event occurs. Online dispersion and facility location problems have been studied when the underlying locations are vertices of a graph [30,16,29]. In this paper we consider, for the first time in the literature, the online dispersion problem in Euclidean space. The arrival and departure times of points are chosen by an adversary who knows everything and works adaptively. An online dispersion algorithm decides where to locate a point upon its arrival, without any knowledge about future events.

Main Results
We focus on two natural objectives for the online problem: the all-time worst-case (ATWC) problem, which aims at maximizing the minimum distance that ever appears at any time; and the cumulative distance (CD) problem, which aims at maximizing the integral of the minimum distance throughout the whole time interval. Although polynomial-time constant approximations have been given when time is not involved [2,4], nothing was known about the online problem. As we will show, solutions for the online problem are already complex even on a segment. For cumulative distance, even when the problem is time-dependent but offline, with all the arriving and departure times given in advance, it remains unclear how to efficiently compute the optimal solution. We formally define the problem in Section 2 and summarize our results in Table 1 below.
The most technical parts are the online ATWC problem and the offline time-dependent CD problem. Interestingly, we provide an efficient reduction from the offline CD problem to the online ATWC problem, and show that in order to solve the former, one can use an algorithm for the latter as a black-box. For the online ATWC problem, it is not hard to see that a natural greedy algorithm provides a 2-competitive ratio. Our main contributions for this problem are to provide an efficient algorithm that is optimal for the 1-dimensional case, improve the competitive ratio and prove a lower-bound for squares, and provide an efficient implementation of the greedy algorithm for the general case. We also prove a simple lower-bound for the general case.
To establish our results, we show interesting new connections between dispersion and ball-packing, both uniform packing (i.e., with balls of identical radius) and non-uniform packing (i.e., with balls of different radii). All our algorithms are deterministic and run in polynomial time. Some of them take an arbitrarily small constant ǫ as a parameter, and the running time is polynomial in 1/ǫ. All inapproximability results hold even when running time is not a concern. Most proofs are given in the appendix.
Discussion and future directions. An algorithm for high-dimensional polytopes may not be directly applicable in dimension 1, because all locations are on the boundary when a segment is treated as a high-dimensional polytope, and the minimum distance is always 0. Accordingly, we do not know whether the lower-bound for dimension 1 carries through to higher dimensions, and proving better lower-bounds will be an interesting problem for future studies. In the appendix, we consider online dispersion without the boundary constraint, where the lower-bound for dimension 1 indeed carries through. We show all our algorithms can be adapted for this setting. Another important future direction is to understand the role of randomized algorithms in the online dispersion problem. Finally, improving the (deterministic or randomized) algorithms' competitive ratios in various classes of polytopes is certainly a long-lasting theme for the online dispersion problem. Special classes such as regular polytopes and uniform polytopes may be reasonable starting points. Given the connections between dispersion and ball-packing, it is conceivable that new competitive algorithms for online dispersion may stem from and also imply new findings on ball-packing.

Table 1: Online and offline time-dependent dispersion problems in a k-dimensional polytope P.

ATWC, online:
- k = 1: a 2 ln 2 (≈ 1.386) lower-bound and an optimal algorithm; see Theorems 5 and 8.
- k = 2: a 1.183 lower-bound and a 1.591-competitive algorithm for squares; see Theorems 11 and 12.
- k ≥ 2: a 7/6 lower-bound and a 2/(1−ǫ)-competitive algorithm for arbitrary polytopes P; see Theorems 13 and 14.
ATWC, offline time-dependent: equivalent to dispersion without time; see Claim 2.
CD, online: no constant competitive algorithm even when k = 1; see Claim 1.
CD, offline time-dependent: a black-box reduction to online ATWC for arbitrary k and P, with the competitive ratio scaling up by at most 2; see Theorem 15.

Related Work
Dispersion without time. In dispersion problems in general, the possible locations can be either a continuous region or a set of discrete candidates. Two objectives have been studied in the literature: the max-min distance as considered in this paper, and the maximum total distance. In continuous settings, the authors of [2] consider the max-min distance with the boundary condition. Under the L∞-norm, they give a polynomial-time 1.5-approximation in rectilinear polygons and show that a 14/13-approximation in arbitrary polygons is NP-hard. Moreover, they show there is no PTAS under any norm unless P=NP. [4] considers a similar boundary condition under the L2-norm and provides a 1.5-approximation in polygons with obstacles. [10] considers the problem of selecting n points in n given unit disks, one per disk, where the objective is to maximize the minimum distance.
In discrete settings, [34,5] show that, if the distances among the candidate locations do not satisfy the triangle inequality, then there is no polynomial-time constant approximation for either objective unless P=NP; while if the triangle inequality is satisfied, then there are efficient 2-approximations for both objectives. If the goal is to maximize total distance and the candidate locations are in a k-dimensional space, [14] gives a PTAS under the L1-norm; and [7,8] provide PTASes when locations need to satisfy matroid constraints. Finally, [3,33,27,1] consider various dispersion problems in obnoxious facility allocation.
Packing without time. It is well known that dispersion and packing are "dual" problems of each other [28]. In this paper we show interesting new connections between them and use several important results for packing in our analysis. Thus we briefly introduce this literature. Indeed, the packing problem is one of the most extensively studied problems in geometric optimization, and a huge amount of work has been done on different variants of the problem; see [32,22] for surveys on this topic.
One important problem is to pack circles with identical radius, as many as possible, in a bounded region. [17] shows this problem to be strongly NP-hard and [23] gives a PTAS for it. An APTAS for the circle bin packing problem is given in [31]. The dispersal packing problem tries to maximize the radius of a given number of circles packed in a square. A lot of effort has been made in finding the optimal radius and the corresponding packing when the number of circles is a small constant; see [38,37,39,32]. Heuristic methods have also been used in finding approximations when the number of circles gets large [25,41]. Finally, an important packing problem is to understand the packing density: that is, the maximum fraction of an infinite space covered by a packing of unit circles/spheres. The packing density is solved for dimension 2 in [13] and for dimension 3 in [21]. Very recently, [42] and [9] solve it for dimensions 8 and 24, respectively. Asymptotic lower bounds (as the dimension grows) for the density of the densest packing are provided in [35].
Online geometric optimization. Many important geometric optimization problems have been studied in online settings, although the settings and the objectives are quite different from ours. In particular, the seminal work of [40] provides a nearly-optimal competitive algorithm for the classic online bin-packing problem. Algorithms for variants of the problem have been considered ever since, such as a constant competitive ratio for packing circles in square bins [24], and constant competitive ratios for bin-packing in higher dimensions [11,12].
In online facility location [30], it is the demands rather than the facilities that arrive over time. The facilities have opening costs and the goal is to minimize the total opening cost plus the total distance between demands and facilities. As shown in [30], when the demands arrive adversarially, there is a randomized polynomial-time O(log n)-competitive algorithm, and a constant competitive ratio is impossible. A deterministic O(log n / log log n)-competitive algorithm and a matching lower-bound are provided in [16] for the same problem. In incremental facility location [15], the facilities can be opened, closed or merged, depending on the arriving demands. In [29,36], there is a cost for each location configuration and the goal is to minimize the cost when the facilities arrive online. A constant competitive algorithm for this problem is provided in [29], and [36] gives a reduction from the online problem to the offline version of the problem.
Dynamic resource division. Fair resource division is an important problem in economics [19,20,26]. When the resource is 1-dimensional and homogeneous, dynamic fair division is in some sense the "dual" of online dispersion: locating n points as far as possible from each other and from the boundary is the same as partitioning the segment into n + 1 pieces as evenly as possible. [18] provides an optimal d-disruptive mechanism for a 1-dimensional homogeneous resource. Interestingly, our algorithm for the 1-dimensional case provides an optimal mechanism when d = 1, although the techniques are quite different. Optimal mechanisms for heterogeneous or high-dimensional resources remain unknown. It would be interesting to see if our techniques for dispersion can be used in resource division problems in general.

The Online Dispersion Problem
Given a k-dimensional polytope P, the dispersion problem [2] takes as input a positive integer n and outputs n locations, X_1, ..., X_n ∈ P, specifying how to locate n points. For each point i, let dis(X_i, ∂P) be the distance from X_i to ∂P, the boundary of P, measured by the L2-norm. Also, let dis(X_i, X_j) be the distance between X_i and X_j for any i ≠ j. The objective is

  Disp(n; P) = max_{X_1,...,X_n ∈ P} min_{i ≠ j} {dis(X_i, ∂P), dis(X_i, X_j)}.
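For intuition, the objective value of a fixed placement can be evaluated directly. The sketch below (with `objective` as our own helper name) does so for an axis-aligned unit box, where the distance to the boundary has a closed form as the smallest coordinate-wise distance to a face:

```python
from itertools import combinations
import math

def objective(points, lo=0.0, hi=1.0):
    """min over {dis(X_i, boundary), dis(X_i, X_j)} for P = [lo, hi]^k.

    points: list of k-tuples.  For an axis-aligned box, dis(X, boundary)
    is the smallest coordinate-wise distance to a face.
    """
    boundary = min(min(x - lo, hi - x) for p in points for x in p)
    if len(points) < 2:
        return boundary
    pairwise = min(math.dist(p, q) for p, q in combinations(points, 2))
    return min(boundary, pairwise)

# On the unit segment, n evenly spread points achieve Disp(n; [0,1]) = 1/(n+1):
pts = [(i / 4,) for i in range(1, 4)]   # n = 3 points at 1/4, 2/4, 3/4
print(objective(pts))                   # 0.25
```

An optimal dispersion algorithm would maximize this value over all placements; the helper only scores a given one.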
In Appendix F we also consider the dispersion problem where the distances to the boundary are not taken into consideration. Most of our techniques can be applied there. We now define the online dispersion problem, where each point i arrives at time s_i and departs at time d_i, with d_i > s_i. Without loss of generality, 0 = s_1 ≤ s_2 ≤ · · · ≤ s_n. An online algorithm is notified upon the occurrence of an arrival/departure event. It must decide the location X_i for a point i upon its arrival, knowing neither the future events nor the total number of points n. An adversary knows how the algorithm works and chooses future events after seeing the output of the algorithm so far. In the time-dependent offline version of the problem, the times of all events, denoted by a vector S = ((s_1, d_1), ..., (s_n, d_n)), are given to the algorithm in advance.
Given such a vector S, let T = max_{i∈[n]} d_i be the last departure time. Moreover, given locations X = (X_1, ..., X_n), for any t ≤ T, let

  d_min(X, t) = min over the points i ≠ j present at time t of {dis(X_i, ∂P), dis(X_i, X_j)}

be the minimum distance corresponding to the points that are present at time t. When X is clear from the context, we may write d_min(t) for short. We consider two natural objectives: the all-time worst-case (ATWC) problem, where the objective is

  OPT_A = max_X min_{t ≤ T} d_min(X, t);

and the cumulative-distance (CD) problem, where the objective is

  OPT_C = max_X ∫_0^T d_min(X, t) dt.

Note that both objectives are defined to be the optimum of the corresponding offline problems, the same as the ex-post optimum for the online problems. Below we provide two simple observations about the objectives, proved in Appendix A.
Claim 1. For the CD problem, even when k = 1 and P is the unit segment, no (randomized) online algorithm achieves a competitive ratio to OPT_C better than Ω(n).
Next, given any ex-post instance S = ((s_1, d_1), ..., (s_n, d_n)), let m be the maximum number of points simultaneously present at any time t: that is, m = max_{t≤T} |{i : s_i ≤ t < d_i}|.

Claim 2. For the ATWC problem, the offline time-dependent optimum satisfies OPT_A = Disp(m; P); that is, the offline problem is equivalent to the dispersion problem without time.

In light of the claims above, the online CD problem is highly inapproximable and the offline ATWC problem is equivalent to the dispersion problem without time. Thus we will focus on the online ATWC problem and the offline CD problem, especially the former. Our results will also imply a simple O(n)-competitive algorithm for the online CD problem (Corollary 23 of Appendix 5), matching the lower-bound in Claim 1.
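The quantity m can be computed with a standard event sweep; the sketch below (our own helper, treating each point's presence as the half-open interval [s_i, d_i)) illustrates the definition:

```python
def max_simultaneous(schedule):
    """m = max_{t <= T} |{i : s_i <= t < d_i}| via an event sweep.

    schedule: list of (s_i, d_i) pairs with d_i > s_i.
    """
    events = []
    for s, d in schedule:
        events.append((s, 1))    # arrival
        events.append((d, -1))   # departure
    # At equal times, process departures before arrivals: a point leaving
    # at t is no longer present at t (intervals are half-open).
    events.sort(key=lambda e: (e[0], e[1]))
    best = cur = 0
    for _, delta in events:
        cur += delta
        best = max(best, cur)
    return best

print(max_simultaneous([(0, 3), (1, 4), (3, 5), (2, 6)]))  # 3
```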
Below we point out some connections between dispersion and ball-packing, proved in Appendix A: they are not hard to show, and similar results for the dispersion problem without the boundary condition have been pointed out in [28]. More precisely, the (uniform) ball-packing problem [23] in a polytope P takes as input a non-negative value r and outputs an integer n, the maximum number of balls of radius r that can be packed non-overlappingly in P, together with a corresponding packing. We denote the solution by Pack(r; P). The dispersal packing problem [2] is a "mixture" of dispersion and packing: it takes as input an integer n and outputs the maximum radius for n balls with identical radius that can be packed in P, together with a corresponding packing. That is, DP(n; P) = max{r : Pack(r; P) ≥ n}.
Recall that a k-dimensional convex polytope P has an insphere if the largest ball contained wholly in P is tangent to all the facets (i.e., (k − 1)-faces) of P . Such a ball, if it exists, is unique. It is referred to as the insphere of P . The center of the insphere maximizes the minimum distance for any point in P to its facets, and has the same distance to all facets -the radius of the insphere. We have the following two claims.
Claim 3. For any k ≥ 1 and any k-dimensional convex polytope P with an insphere, letting x be the radius of the insphere, we have Disp(n; P) = 2x·DP(n; P) / (x + DP(n; P)).
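On the unit segment (k = 1) all three quantities have closed forms, which lets one sanity-check Claim 3: the insphere radius is x = 1/2, n balls of radius r fit iff 2rn ≤ 1 so DP(n; [0,1]) = 1/(2n), and the formula should return Disp(n; [0,1]) = 1/(n+1). A quick exact check:

```python
from fractions import Fraction

def disp_via_claim3(n):
    """Claim 3 on P = [0,1]: Disp(n; P) = 2x*DP(n;P) / (x + DP(n;P))."""
    x = Fraction(1, 2)        # insphere radius of the unit segment
    dp = Fraction(1, 2 * n)   # DP(n; [0,1]): n radius-r balls fit iff 2rn <= 1
    return 2 * x * dp / (x + dp)

for n in range(1, 10):
    assert disp_via_claim3(n) == Fraction(1, n + 1)  # Disp(n; [0,1]) = 1/(n+1)
print(disp_via_claim3(3))  # 1/4
```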

Claim 4.
For any k ≥ 1 and any k-dimensional convex polytope P with an insphere, given the radius of the insphere, (1) any polynomial-time algorithm for Disp(n; P) implies such an algorithm for Pack(r; P); (2) any polynomial-time algorithm for Pack(r; P) implies an FPTAS for Disp(n; P).
To the best of our knowledge, it is still unknown whether ball-packing in regular polytopes (which is a special case of convex polytopes with an insphere) is NP-hard or not. Therefore the complexity of dispersion in regular polytopes remains open. Note that ball-packing in arbitrary polytopes is NP-hard [17], and so is a 14/13-approximation for dispersion in rectilinear polygons [2]. Moreover, a claim similar to Claim 4 applies to DP(n; P) and Pack(r; P) in arbitrary polytopes. The relation between dispersion and packing in arbitrary polytopes is not so clear and worth further investigation: for example, it would be interesting to know if there exists a counterpart of Claim 4 when the polytope does not have an insphere.
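The second direction of Claim 4 can be realized by a binary search: given a Pack oracle, one approximates DP(n; P) = max{r : Pack(r; P) ≥ n} to any precision (Pack is monotone non-increasing in r) and plugs it into Claim 3. A sketch under our own naming, instantiated on the unit segment where the oracle has a closed form:

```python
def dp_via_packing(n, pack, r_hi=1.0, eps=1e-9):
    """Approximate DP(n; P) = max{r : pack(r) >= n} by binary search,
    given a Pack(r; P) oracle `pack` (monotone non-increasing in r)."""
    lo, hi = 0.0, r_hi
    while hi - lo > eps:
        mid = (lo + hi) / 2
        if pack(mid) >= n:
            lo = mid
        else:
            hi = mid
    return lo

# Pack(r; [0,1]) = floor(1/(2r)): disjoint radius-r balls occupy length 2r each.
segment_pack = lambda r: int(1 // (2 * r)) if r > 0 else float('inf')

dp3 = dp_via_packing(3, segment_pack)    # ~1/6
x = 0.5                                  # insphere radius of the segment
print(round(2 * x * dp3 / (x + dp3), 6)) # ~0.25 = Disp(3; [0,1]) by Claim 3
```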
Finally, the insert-only model, where all points have the same departure time, is a special case of our general model. Interestingly, as will become clear in our analysis, the difficulty of the general online ATWC problem is captured by the problem under this special model. The insert-only model was also considered by [29,36] in settings different from ours and with a different objective function. We further discuss this model in Appendix G.

The 1-Dimensional Online All-Time Worst-Case Problem
Note that a 1-dimensional polytope is simply a segment. Without loss of generality, we consider the unit segment P = [0, 1]. Below we first provide a lower bound for the competitive ratio of any algorithm, even computationally unbounded ones.

The Lower Bound
Theorem 5. No online algorithm achieves a competitive ratio better than 2 ln 2 (≈ 1.386) for the 1-dimensional ATWC problem.
Proof ideas. Letting σ'_r = Σ_{i=r+1}^{2r} 1/i for any positive integer r, we show that no algorithm achieves a competitive ratio better than 2σ'_r. Roughly speaking, we construct an instance (i.e., an adversary) for the online ATWC problem with three stages. In the first stage, r − 1 points arrive simultaneously; in the second stage, r new points arrive one by one; and finally, all 2r − 1 points depart simultaneously. If an algorithm A is α-competitive to OPT_A with α < 2σ'_r, it must be α-competitive after the arrival of each point, as it does not know the total number of points. Thus for each arriving point, there must exist an interval long enough such that putting the new point inside the interval does not violate the competitive ratio. We show that in order for A to do so, the segment must be longer than P itself, a contradiction. Theorem 5 holds by setting r → ∞. The complete proof is provided in Appendix B.1.
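The bound 2σ'_r approaches 2 ln 2 from below as r grows, since σ'_r = Σ_{i=r+1}^{2r} 1/i is a Riemann sum for ∫_1^2 dx/x = ln 2. A quick numeric check:

```python
import math

def sigma_prime(r):
    # sigma'_r = sum_{i=r+1}^{2r} 1/i
    return sum(1 / i for i in range(r + 1, 2 * r + 1))

for r in (1, 10, 100, 10000):
    print(r, 2 * sigma_prime(r))
# 2*sigma'_r increases toward 2 ln 2 ≈ 1.3863 as r -> infinity
print(2 * math.log(2))
```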

A Polynomial-Time Online Algorithm
Next, we provide a deterministic polynomial-time online algorithm whose competitive ratio to OPT_A can be arbitrarily close to 2 ln 2. Intuitively, a good algorithm should disperse the points as evenly as possible. However, if at some point of time with m points present, the resulting m + 1 intervals on the segment have almost the same length, then the next arriving point will force the minimum distance to drop by a factor of 2, while the optimum only changes from 1/(m+1) to 1/(m+2), causing the competitive ratio to worsen by a factor of almost 2. To overcome this problem, the algorithm must find a balance between two consecutive points, choosing a sub-optimal solution for the former so as to leave enough space for the latter. The difficulty, as for online algorithms in general, is that this balance needs to be kept for arbitrarily many pairs of consecutive points, as the sequence of points is chosen by an adversary who observes the algorithm's output. Inspired by our lower bound, roughly speaking, our algorithm uses a parameter r to pre-fix the locations of the first r points and the resulting r + 1 intervals, and then inserts the next r + 1 points in the middle of these intervals. The idea is that, when done properly, after these 2r + 1 points, the resulting configuration is almost the same as if the algorithm had used parameter 2r + 1 to pre-fix the first 2r + 2 intervals: then the procedure can repeat for arbitrary sequences.
More specifically, given a positive integer r, let Q = {q_1, ..., q_r} be a set of positions on the segment, such that the length ratios of the r + 1 intervals sliced by them are 1/(r+1) : 1/(r+2) : · · · : 1/(2r+1). That is, letting σ_r = Σ_{j=r+1}^{2r+1} 1/j, we have

  q_i = (1/σ_r) · Σ_{j=r+1}^{r+i} 1/j for each i ∈ [r],

as illustrated by Figure 1, with q_0 = 0 and q_{r+1} = 1. Note that σ_r differs from σ'_r in Theorem 5 by 1/(2r+1). Also, σ_r is strictly decreasing in r and lim_{r→∞} σ_r = ln 2. Moreover, for any two intervals (q_{j−1}, q_j) and (q_{j'−1}, q_{j'}) with j < j' ≤ r + 1, the former is strictly longer than the latter. Our algorithm also takes as a parameter an ordering of the positions in Q, denoted by τ = (τ_1, τ_2, ..., τ_r). It is defined in Algorithm 1, and we have the following two lemmas, proved in Appendix B.2 and B.3, respectively. We only sketch the main ideas below. Recall that, given S = ((s_1, d_1), ..., (s_n, d_n)), m is the maximum number of points simultaneously present at any time t.

Algorithm 1. Upon the arrival of a point i:
  if all positions in Q̂ are occupied then
    if Q ⊄ Q̂ then
      choose the first position q according to τ with q ∈ Q \ Q̂, add it to Q̂ and label it occupied; put i at position q;
    else
      find the position q which is the middle of the largest interval created by the positions in Q̂; put i at position q, add q to Q̂ and label it occupied;
    end if
  else
    arbitrarily choose a vacant position q from Q̂ and label it occupied; put i at position q.
  end if
Upon the departure of a point, label its position vacant.

Lemma 6. Given any r and τ, Algorithm 1 is 2σ_r-competitive to OPT_A for any S with m > r.

Proof ideas. Recall that m is the maximum number of points that have appeared simultaneously on the line; thus only m positions are created for instance S. We prove that when m > r, the minimum distance produced by our algorithm, denoted by d_min(m), is 1/(2^{l+1} σ_r (r+i)), where l, i are the unique integers such that l ≥ 0, 0 ≤ i ≤ r + 1 and 2^l(r+1) + 2^l(i−1) ≤ m < 2^l(r+1) + 2^l i. Note that the minimum distance only depends on m. By comparing OPT_A with d_min(m), we show that the competitive ratio 2σ_r holds for all m > r.
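Under our reading of Lemma 6, the closed form for d_min(m) can be checked exactly by simulating the bisection phase with rational arithmetic. In the sketch below (our own helper names) we drop the common normalization factor 1/σ_r, so the initial r + 1 intervals have lengths 1/(r+1), ..., 1/(2r+1) and the claimed minimum distance becomes 1/(2^{l+1}(r+i)):

```python
from fractions import Fraction

def check_dmin(r, m_max):
    # Interval lengths after the first r points, up to the factor 1/sigma_r.
    intervals = [Fraction(1, r + j) for j in range(1, r + 2)]
    for m in range(r + 1, m_max + 1):
        # Each point beyond the first r bisects the currently largest interval.
        big = max(intervals)
        intervals.remove(big)
        intervals += [big / 2, big / 2]
        # Unique l, i with 2^l (r+i) <= m < 2^l (r+i+1) and 1 <= i <= r+1.
        l = (m // (r + 1)).bit_length() - 1
        i = m // 2**l - r
        assert min(intervals) == Fraction(1, 2**(l + 1) * (r + i)), (m, l, i)

check_dmin(r=3, m_max=60)
print("closed form matches simulation")
```

The assertion passing for all m up to m_max is exactly the statement that the minimum distance depends only on m.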
Lemma 7. For any integer l > 0 and r = 2^l − 1, there exists an ordering τ for the corresponding set Q, s.t. Algorithm 1 is 2σ_r-competitive to OPT_A for any S with m ≤ r.
Proof ideas. Interestingly, due to the structure of Algorithm 1, we only need to consider the instance S = ((1, r+1), (2, r+1), ..., (r, r+1)). We construct an ordering τ = (τ_d)_{d∈[r]} for Q such that the competitive ratio at any time d ∈ [r] is smaller than 2σ_r. To do so, we fill in a complete binary tree with r nodes as in Figure 2, and τ is obtained by traversing the tree in a breadth-first manner starting from the root. Given any d = 2^i + s with i ∈ {0, 1, ..., l − 1} and s ∈ {0, 1, ..., 2^i − 1}, we have τ_d = q_{2^{l−i−1}(2s+1)}. Denoting the competitive ratio at time d by apx(d) and writing apx(d) as apx(i, s), we prove that, fixing i, apx(i, s) is strictly increasing in s; and letting s = 2^i − 1, apx(i, s) is strictly increasing in i. Therefore the worst competitive ratio occurs at i = l − 1 and s = 2^{l−1} − 1. Since apx(l − 1, 2^{l−1} − 1) = (2 − 1/2^l)σ_r < 2σ_r, Lemma 7 holds.

The theorem below follows easily from the above two lemmas; see Appendix B.4.

Theorem 8. There exists a deterministic polynomial-time online algorithm for the ATWC problem, whose competitive ratio can be arbitrarily close to 2 ln 2. Moreover, the running time is polynomial in 1/ǫ for competitive ratio 2 ln 2 + ǫ.
Figure 2: The left-hand side shows the top three levels of the binary tree for a general l; the right-hand side shows the complete binary tree for l = 3, with τ = (q_4, q_2, q_6, q_1, q_3, q_5, q_7).
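The ordering of Lemma 7 can be generated directly from the indexing rule τ_d = q_{2^{l−i−1}(2s+1)}; for l = 3 (r = 7) it reproduces the breadth-first order shown in Figure 2. A small sketch (our own helper name):

```python
def tau_indices(l):
    """Indices j with tau_d = q_j, for d = 1, ..., 2^l - 1.

    Writing d = 2^i + s with 0 <= s < 2^i, the rule is j = 2^(l-i-1) * (2s+1),
    i.e., a breadth-first traversal of a complete binary tree on {1,...,2^l - 1}.
    """
    out = []
    for i in range(l):
        for s in range(2**i):
            out.append(2**(l - i - 1) * (2 * s + 1))
    return out

print(tau_indices(3))  # [4, 2, 6, 1, 3, 5, 7] -> tau = (q4, q2, q6, q1, q3, q5, q7)
```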
Remark. When the number of arrived points is large but the maximum number m of simultaneously present points is small, the running time of the algorithm for each arriving point is polynomial in m and can be much faster than being polynomial in the size of the input. Following Theorem 5, Algorithm 1 is essentially optimal. Inspired by our constructions of Q and τ , we actually characterize the optimal solution for the online ATWC problem, whose competitive ratio is exactly 2 ln 2: see Theorem 9 below, proved in Appendix B.5. However, this solution involves irrational numbers and cannot be exactly computed in polynomial time.

Theorem 9. There exists a sequence of positions (τ_d)_{d≥1} such that, if Algorithm 1 creates the d-th new position in Q̂ to be τ_d, the competitive ratio is exactly 2 ln 2.

The 2-Dimensional Online All-Time Worst-Case Problem
We now consider the 2-dimensional online ATWC problem in a square; without loss of generality, P = [0, 1]^2. One difficulty is that, different from the 1-dimensional problem where it is trivial to have Disp(n; P) = 1/(n+1) for any n ≥ 1, here neither Disp(n; P) nor Pack(r; P) has a known closed-form optimal solution (whether polynomial-time computable or not). Accordingly, our lower-bound and our competitive algorithm must rely on some proper upper- and lower-bounds for Disp(n; P), which is part of the reason why the resulting bounds are not tight. In particular, we have the following lemma, proved in Appendix C.1.

The Lower Bound
Interestingly, not only is the dispersion problem closely related to uniform packing (i.e., where the disks all have the same radius), as we have seen in Section 2, but we also obtain a lower bound for the online ATWC problem by carefully fitting a non-uniform packing into the square. The idea is to imagine each position created in an online algorithm as a disk centered at that position. The radius of each disk is a function of the algorithm's competitive ratio and the optimal solutions to specific dispersion problems without time. Note that the area covered by the disks is upper-bounded by the area of the square containing them. Combining these relations together gives us the following theorem, proved in Appendix C.2.
Theorem 11. No online algorithm achieves a competitive ratio better than 1.183 for the 2-dimensional ATWC problem in a square.

A Polynomial-Time Online Algorithm
Now we provide a deterministic polynomial-time online algorithm which is 1.591-competitive to OPT_A. Similar to Algorithm 1, we first construct a set Q of pre-fixed positions. However, it is unclear how to define Q of arbitrary size in the square, and we construct a set of 36 positions, denoted by Q = {q_1, ..., q_36}. It depends on a parameter 1 < c < √2 and x = 1/(3+4c), as illustrated in Figure 3. The indices of the q_i's specify the order according to which they should be occupied, thus we do not need an extra ordering τ. Note that these positions create a grid in P and split it into multiple rectangles. The choice of c (and x, Q) will become clear in the analysis.
Whenever a new position needs to be created, we pick the first position in Q that has never been occupied yet. When all positions in Q are occupied, we may (1) create a new position in the center of a current rectangle with the largest area, split this rectangle into four smaller ones accordingly, and add the vertices of the new rectangles to the grid; or (2) create a new position at a grid point that has never been occupied yet. The main algorithm is similar to Algorithm 1 and is defined in Algorithm 2. It uses a sub-routine, the Position Creation Phase, defined in Algorithm 5 in Appendix C.3. In Appendix C.4 we provide some intuition on the choices of Q, x, and c. By setting c = 1.271, we have the following theorem, proved in Appendix C.5.

Figure 3: More specifically, denoting a rectangle by the position in Q at its lower-left corner, the green area is (3, 10, 2; 18, 5, 9; 1, 19, 4); the two pink areas are (23, 6, 20; 28, 27, 26) and (34, 21; 33, 7; 32, 22); the red area is (31, 8; 30, 29); the two blue areas are (35, 36) and (24, 25); the orange area is (17, 16, 15, 11, 12, 13, 14); and finally the yellow area contains all the remaining rectangles: that is, rectangles adjacent to the left boundary and the bottom boundary.
Theorem 12. Algorithm 2 runs in polynomial time and is 1.591-competitive for the 2-dimensional online ATWC problem in a square.
Note that the upper-bound for Disp(n; P) in Lemma 10 is not tight when n is small. With better upper-bounds for Disp(n; P), better competitive ratios for our algorithm can be directly obtained via a similar analysis. Moreover, we believe the competitive ratio can be improved by using a larger set Q and the best ordering for positions in Q. Such a Q and a rigorous analysis based on it are left for future studies. Finally, similar techniques can be used when P is a rectangle, but the gap between the lower- and upper-bounds will be even larger, and the analysis will be more complicated without adding much new insight to the problem. Thus we leave a thorough study on rectangles for the future.

Algorithm 2. Parameters: c such that 1 < c < √2, the corresponding x = 1/(3+4c), and Q. Input: a sequence of points arriving and departing along time. Upon the arrival of a point w:
  if all positions in Q̂ are occupied then
    if Q ⊄ Q̂ then
      put w at position q_{|Q̂|+1}, add this position to Q̂ and label it occupied;
    else
      compute a position q according to the Position Creation Phase defined in Algorithm 5; put w at position q, add q to Q̂ and label it occupied;
    end if
  else
    arbitrarily choose a vacant position q from Q̂ and label it occupied; put w at position q.
  end if
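The rectangle-splitting step described above (create a position at the center of a largest-area rectangle and cut it into four) can be sketched in isolation; the data layout below is our own and ignores the grid-point reuse handled by Algorithm 5:

```python
def split_largest(rects):
    """rects: list of axis-aligned rectangles (x0, y0, x1, y1).
    Create a position at the center of a largest-area rectangle and replace
    that rectangle by the four sub-rectangles the center induces."""
    x0, y0, x1, y1 = max(rects, key=lambda r: (r[2] - r[0]) * (r[3] - r[1]))
    rects.remove((x0, y0, x1, y1))
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    rects += [(x0, y0, cx, cy), (cx, y0, x1, cy),
              (x0, cy, cx, y1), (cx, cy, x1, y1)]
    return (cx, cy)

rects = [(0.0, 0.0, 1.0, 1.0)]
print(split_largest(rects))  # (0.5, 0.5); rects now holds four quarter-squares
```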

The General k-Dimensional Online ATWC Problem
Although the literature gives us little understanding about the optimal dispersion/packing problem in an arbitrary k-dimensional polytope P with k ≥ 2, we are still able to provide a simple lower-bound and a simple polynomial-time algorithm for the online ATWC problem. Below we only state the theorems; see Appendix D for the proofs.
Theorem 13. For any k ≥ 2, no online algorithm achieves a competitive ratio better than 7/6 for the ATWC problem for arbitrary polytopes.
For any polytope P , letting the covering rate be the ratio between the edge-lengths of the maximum inscribed cube and the minimum bounding cube, we have the following theorem. Note that, although a natural greedy algorithm provides a 2-competitive ratio, the exact greedy solution may not be computable in polynomial time. Here we show the greedy algorithm can be efficiently approximated arbitrarily closely. The geometric problems of finding the minimum bounding cube, deciding whether a position is in P , and finding the distance between a point in P and the boundary of P are given as oracles.
Theorem 14. For any constants γ, ǫ > 0, for any integer k ≥ 2 and any k-dimensional polytope P with covering rate at least γ, there exists a deterministic polynomial-time online algorithm for the ATWC problem, with competitive ratio 2/(1−ǫ) and running time polynomial in 1/(γǫ)^k.
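The greedy algorithm behind this result places each arriving point to (approximately) maximize the minimum distance to the present points and to ∂P; restricting the candidates to a grid of small spacing is one natural way to make it efficient. The sketch below is our own illustration for the special case P = [0,1]^2, with the paper's oracles replaced by closed forms and the grid resolution chosen arbitrarily:

```python
import math
from itertools import product

def greedy_place(present, grid=64):
    """Approximate greedy for P = [0,1]^2: among grid candidates, pick the one
    maximizing min(distance to boundary, distance to each present point)."""
    def score(p):
        b = min(p[0], 1 - p[0], p[1], 1 - p[1])  # distance to the boundary
        if not present:
            return b
        return min(b, min(math.dist(p, q) for q in present))
    cands = [(i / grid, j / grid) for i, j in product(range(1, grid), repeat=2)]
    return max(cands, key=score)

placed = []
for _ in range(3):
    placed.append(greedy_place(placed))
print(placed[0])  # the first point lands at the center (0.5, 0.5)
```

A finer grid (larger `grid`) brings the placement closer to the exact greedy choice, mirroring the 1/ǫ dependence in the running time.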
The General k-Dimensional Offline CD Problem

By Claim 1, no online algorithm provides a good competitive ratio for the CD problem, thus we focus on the offline problem. Given an input sequence S = ((s_1, d_1), ..., (s_n, d_n)), we first slice the whole time interval [0, T] into smaller ones by the arrival times s_i and the departure times d_i. Thus the set of present points only changes at the end-points of the intervals and stays the same within an interval. Our algorithm will be such that, in each time interval, the minimum distance is a good approximation to the optimal dispersion problem without time, for the points present in this interval. Interestingly, this is achieved by reducing the offline CD problem to the online ATWC problem, for any dimension k and polytope P. To carry out this idea, we first provide a polynomial-time algorithm A_I (Algorithm 3) that, given a sequence S, selects a subset I of points from S. The set I satisfies the following properties, which are proved in Claim 24 in Appendix E.
Φ.1 I can be partitioned into two groups I_1 and I_2 such that the points in the same group have disjoint time intervals.
Φ.2 For any time 0 ≤ t ≤ T , if there are points in S present at time t, then at least one of them is selected to I.
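One plausible reading of A_I, consistent with the argmax-departure step below and with Φ.1 and Φ.2, is a sweep that repeatedly picks, among the points present at the current time, one with the latest departure, jumps to that departure time, and alternates consecutive picks between two groups so that each group's intervals stay disjoint. This is our reconstruction, not the paper's exact pseudocode:

```python
def select_I(schedule):
    """schedule: list of (s_i, d_i).  Returns (I1, I2): indices of selected
    points, each group having pairwise-disjoint time intervals (Phi.1) and
    jointly covering every time at which some point is present (Phi.2)."""
    remaining = set(range(len(schedule)))
    groups, g, t = ([], []), 0, 0.0
    T = max(d for _, d in schedule)
    while t < T:
        present = [i for i in remaining if schedule[i][0] <= t < schedule[i][1]]
        if not present:  # nothing alive now: jump to the next arrival
            t = min(schedule[i][0] for i in remaining if schedule[i][0] > t)
            continue
        j = max(present, key=lambda i: schedule[i][1])  # latest departure
        groups[g].append(j)
        remaining.discard(j)
        g, t = 1 - g, schedule[j][1]
    return groups

I1, I2 = select_I([(0, 5), (3, 9), (8, 12), (11, 15), (2, 4)])
print(I1, I2)  # [0, 2] [1, 3]: each group's intervals are disjoint
```

Disjointness within a group follows because a pick two steps later must arrive strictly after the earlier pick's departure, otherwise it would have been the argmax at the intermediate step.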
The key selection step of A_I arbitrarily chooses j ∈ arg max_{i∈Ŝ} d_i and adds j to I. The offline CD algorithm A_CD uses algorithm A_I to select I from its input S, eliminates the selected points from S, and repeats on the remaining S until all points have been eliminated. Recall that m is the maximum number of points simultaneously present at any time. By property Φ.2, this procedure ends in at most m iterations. Based on the partitions constructed by A_I, A_CD constructs an instance of the online ATWC problem and uses any online algorithm A_ATWC for the latter as a black-box, so as to decide how to locate the points. Algorithm A_CD is defined in Algorithm 4 and we have the following theorem, proved in Appendix E. Below we only sketch the main ideas.
Algorithm 4 (A_CD):
1: r = 0.
2: while S ≠ ∅ do
3:   Run A_I on S to obtain two disjoint sets I_{2r+1}, I_{2r+2} ⊆ S.
4:   Remove the points in I_{2r+1} and I_{2r+2} from S.
5:   r = r + 1.
6: end while
7: Run A_ATWC on the following online sequence of 2r points: for all i ∈ {0, 1, . . . , r − 1}, points 2i + 1 and 2i + 2 arrive at time i. All points depart at time r.
8: Letting x_{2i+1}, x_{2i+2} be the two positions returned by A_ATWC at time i, assign all points in I_{2i+1} to x_{2i+1} and all points in I_{2i+2} to x_{2i+2}.
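The control flow of Algorithm 4 can be sketched as follows; `run_A_I` and `run_A_ATWC` are black-box stand-ins for Algorithm 3 and the online ATWC algorithm, and their concrete signatures and the data representation here are our assumptions for illustration, not the paper's code.

```python
# Sketch of Algorithm 4's control flow. run_A_I and run_A_ATWC are black-box
# stand-ins; their signatures here are illustrative assumptions.

def offline_cd(S, run_A_I, run_A_ATWC):
    groups = []                        # I_1, I_2, I_3, I_4, ... (two per iteration)
    remaining = list(S)
    r = 0
    while remaining:                   # at most m iterations, by property Phi.2
        I1, I2 = run_A_I(remaining)    # two groups with pairwise-disjoint lifetimes
        groups += [I1, I2]
        remaining = [p for p in remaining if p not in I1 and p not in I2]
        r += 1
    # Feed A_ATWC the sequence where points 2i+1, 2i+2 arrive at time i and
    # all 2r points depart at time r; it returns positions x_1, ..., x_{2r}.
    positions = run_A_ATWC(2 * r)
    # Every original point inherits the position assigned to its group.
    return {p: positions[idx] for idx, group in enumerate(groups) for p in group}

# Demo with trivial stand-ins (each call peels off the first two points):
place = offline_cd(["a", "b", "c", "d"],
                   lambda rem: ([rem[0]], [rem[1]]),
                   lambda n: list(range(n)))
# place == {"a": 0, "b": 1, "c": 2, "d": 3}
```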
Theorem 15. For any k ≥ 1 and k-dimensional polytope P, given any polynomial-time online algorithm A_ATWC for the ATWC problem with competitive ratio σ, there is a polynomial-time offline algorithm A_CD for the CD problem with competitive ratio σ·max_{i≥1} Disp(i; P)/Disp(2i; P).

Proof ideas. Given an input sequence S, we slice the whole time interval [0, T] into smaller ones according to the arriving time and the departure time of each point. Denote these small intervals by T_1, . . . , T_l, where l is the number of small intervals created. For each interval T_i, let S_i be the set of points that overlap with T_i and n_i = |S_i|. By properties Φ.1 and Φ.2, all points in S_i are eliminated from S in the first n_i iterations of A_I, thus are located at the first 2n_i positions created by A_ATWC. The minimum distance among points in T_i (and from them to the boundary) is at least Disp(2n_i; P)/σ, since algorithm A_ATWC has competitive ratio σ. Thus, within each T_i, the competitive ratio to the optimal solution is upper-bounded by σ·Disp(n_i; P)/Disp(2n_i; P). Summing over all T_i's, the overall competitive ratio is upper-bounded by σ·max_{i≥1} Disp(i; P)/Disp(2i; P).

Remark. Under the insert-only model, it is not hard to see that the online CD problem and the online ATWC problem are equivalent, in the sense that an algorithm is σ-competitive for the online ATWC problem if and only if it is σ-competitive for the online CD problem. Thus, all our algorithms for the online ATWC problem can be directly applied to the online (and also offline) CD problem, with the competitive ratios unchanged. Finally, all our inapproximability results for the online ATWC problem hold under the insert-only model.
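The slicing step used throughout the proof ideas can be made concrete; the function below (names and representation are ours, not the paper's) cuts [0, T] at all arrival and departure times, so that the set of present points is constant on each elementary interval.

```python
# Slice [0, T] into elementary intervals at all arrival/departure times, so
# the set of present points stays fixed within each interval.

def slice_timeline(events, T):
    """events: list of (s_i, d_i) pairs with 0 <= s_i < d_i <= T."""
    cuts = sorted({0.0, T} | {s for s, _ in events} | {d for _, d in events})
    intervals = list(zip(cuts, cuts[1:]))
    # the points whose lifetime [s_i, d_i] covers each elementary interval
    present = [
        [i for i, (s, d) in enumerate(events) if s <= a and b <= d]
        for a, b in intervals
    ]
    return intervals, present

ivals, pres = slice_timeline([(0.0, 2.0), (1.0, 3.0)], T=3.0)
# ivals == [(0.0, 1.0), (1.0, 2.0), (2.0, 3.0)]; pres == [[0], [0, 1], [1]]
```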
A Proofs for Section 2

Claim 1. (restated) For the CD problem, even when k = 1 and P is the unit segment, no (randomized) online algorithm provides a competitive ratio to OPT_C better than Ω(n).
Proof. Consider n points with s_i = 0 for all i and let X_1, . . . , X_n be the locations chosen by an online algorithm at time 0. If there exist two points i, j with dis(X_i, X_j) = d_min(0), then the adversary sets the departure times of i and j to be a large number T, and those of all the other points to be 1. Otherwise, there exists a point i with dis(X_i, ∂P) = d_min(0), and the adversary sets d_i = T and d_j = 1 for all j ≠ i. We only analyze the first case, as the second is almost the same. In the algorithm, d_min(t) = d_min(0) ≤ 1/(n+1) for any t, and ∫_0^T d_min(t) dt ≤ T/(n+1). However, by putting i and j at 1/3 and 2/3 respectively, we have OPT_C(S; P) > (T − 1)/3. Thus the competitive ratio is Ω(n).
Proof. Let X = (X_1, . . . , X_n) be the optimal solution for the offline ATWC problem, and t ∈ [0, T] be such that there exist exactly m points at time t. It is easy to see that OPT_A(S; P) ≤ Disp(m; P), since the m points present at time t have minimum distance (among themselves and to the boundary) at most Disp(m; P). Next, let Y = (Y_1, . . . , Y_m) be the optimal solution for Disp(m; P) and consider the following algorithm for the offline ATWC problem with input S: when a point arrives, arbitrarily pick a location Y_i that is not currently occupied and put it there; when a point leaves, the Y_i occupied by it becomes vacant again. Note that this is an offline algorithm because it knows m (and thus Y). Also note that this algorithm produces a valid solution, because there are at most m points simultaneously present at any time, and m locations are sufficient. Abusing notation slightly and letting d_min(t; Y) be the minimum distance produced by the algorithm at time t, we have d_min(t; Y) ≥ Disp(m; P) for all t ≤ T, thus Disp(m; P) ≤ OPT_A(S; P). Therefore Claim 2 holds.

Claim 3. (restated) For any k ≥ 1 and any k-dimensional polytope P with an insphere, letting x be the radius of the insphere, we have Disp(n; P) = 2x·DP(n; P)/(x + DP(n; P)).
Proof. Let c be the center of the insphere. On the one hand, given the optimal locations X_1, . . . , X_n corresponding to Disp(n; P), let P′ be the polytope obtained from P by moving each facet towards c for distance Disp(n; P)/2. As the distance from c to each facet of P is exactly x, P′ can also be obtained by shrinking P by a factor of (2x − Disp(n; P))/(2x) with respect to c. There is a packing of n balls with radius Disp(n; P)/2 in P′, centered at the X_i's. Indeed, as the distance of each X_i to the facets of P is at least Disp(n; P), its distance to the facets of P′ is at least Disp(n; P)/2 and the n balls are contained wholly in P′. Moreover, as the distance between any two locations X_i and X_j is at least Disp(n; P), the n balls do not overlap with each other. By scaling P′ up by a factor of λ = 2x/(2x − Disp(n; P)) with respect to c, we get a packing of n balls in P with radius r = λ·Disp(n; P)/2 = x·Disp(n; P)/(2x − Disp(n; P)). Thus Pack(r; P) ≥ n. Accordingly, DP(n; P) ≥ r by definition, which implies Disp(n; P) ≤ 2x·DP(n; P)/(x + DP(n; P)).
On the other hand, given the optimal solution for DP(n; P), with the balls centered at Y_1, . . . , Y_n, let P′′ be the polytope obtained from P by moving each facet away from c for distance DP(n; P). It is easy to see that Y_1, . . . , Y_n is a dispersion in P′′ with distance 2·DP(n; P). Again because the distance from c to each facet of P is exactly x, P′′ can be obtained by scaling P up by a factor of λ′ = (x + DP(n; P))/x with respect to c. By scaling P′′ down by a factor of λ′, we obtain a dispersion in P with distance d = 2·DP(n; P)/λ′ = 2x·DP(n; P)/(x + DP(n; P)). Therefore Disp(n; P) ≥ 2x·DP(n; P)/(x + DP(n; P)), and Claim 3 holds.
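The two directions together give an exact correspondence between Disp(n; P) and DP(n; P). In code, this is a direct transcription of Claim 3's formula and its inverse, using exact rationals (the function names are ours):

```python
from fractions import Fraction

# Claim 3: Disp = 2*x*DP/(x + DP), equivalently DP = x*Disp/(2*x - Disp),
# where x is the radius of the insphere of P.

def disp_from_dp(dp, x):
    return 2 * x * dp / (x + dp)

def dp_from_disp(disp, x):
    return x * disp / (2 * x - disp)

x = Fraction(1, 2)               # e.g., the insphere radius of the unit square
dp = Fraction(1, 6)
assert disp_from_dp(dp, x) == Fraction(1, 4)
assert dp_from_disp(Fraction(1, 4), x) == dp     # the two maps are inverses
```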

Claim 4. (restated)
For any k ≥ 1 and any k-dimensional convex polytope P with an insphere, given the radius of the insphere, (1) any polynomial-time algorithm for Disp(n; P ) implies such an algorithm for P ack(r; P ); (2) any polynomial-time algorithm for P ack(r; P ) implies an FPTAS for Disp(n; P ).
Proof. Roughly speaking, given an algorithm for one of the two problems, we can use binary search to find a solution for the other. Without loss of generality, assume P has volume 1. Let x be the radius of the insphere. For the first part of the claim, let Ball(r) be the volume of the k-dimensional ball with radius r and N = ⌊1/Ball(r)⌋. Clearly, Pack(r; P) ≤ N. The binary search over the set {0, 1, . . . , N} works as follows. In each round, find the median of the current set, denoted by n; compute d_n = Disp(n; P) using the polynomial-time algorithm and r_n = d_n·x/(2x − d_n). Note that r_n = DP(n; P) by Claim 3. If r_n < r then continue searching from n − 1 and below; that is, one cannot pack n balls of radius r in P. If r_n ≥ r then continue searching from n and above; that is, one can pack at least n balls of radius r in P. When n is the only number left, output it. The correctness of the algorithm follows from the fact that Disp(n; P), and thus DP(n; P), is non-increasing in n. Accordingly, there exists a unique n* such that r_n ≥ r for all n ≤ n* and r_n < r for all n > n*. It is easy to see that Pack(r; P) = n*.
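A minimal sketch of this binary search, using a Disp oracle for the unit segment (Disp(n) = 1/(n + 1), insphere radius x = 1/2) purely as an illustrative stand-in; N is any upper bound on Pack(r; P):

```python
from fractions import Fraction

# Sketch of Claim 4(1): compute Pack(r; P) by binary search over n, given a
# Disp(n; P) oracle; r_n = d_n*x/(2x - d_n) equals DP(n; P) by Claim 3.

def pack_via_disp(r, disp_oracle, x, N):
    lo, hi = 0, N                       # invariant: Pack(r; P) lies in [lo, hi]
    while lo < hi:
        n = (lo + hi + 1) // 2
        d_n = disp_oracle(n)
        r_n = d_n * x / (2 * x - d_n)   # = DP(n; P)
        if r_n >= r:
            lo = n                      # n balls of radius r do fit
        else:
            hi = n - 1                  # they do not
    return lo

segment_disp = lambda n: Fraction(1, n + 1)   # Disp(n) on the unit segment
# On the segment DP(n) = 1/(2n), so radius 1/10 fits exactly 5 balls:
assert pack_via_disp(Fraction(1, 10), segment_disp, Fraction(1, 2), 100) == 5
```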
For the second part, using x as an upper bound for Disp(n; P), the idea is almost the same. The only difference is that Disp(n; P) may not have a finite representation, and the binary search has to stop when the length of the search interval is no larger than some given ε > 0, leading to an FPTAS rather than an exact solution.
Note that finding the radius of the insphere of an irregular convex polytope is a non-trivial computational problem. Thus we require that it be given as part of the input. Finding the radius is easy for regular polytopes.

For any integer r > 0, let σ′_r = Σ_{i=r}^{2r−1} 1/(i+1). We show that no algorithm achieves a competitive ratio better than 2σ′_r. For the sake of contradiction, assume there exist an r and an online algorithm A with competitive ratio α < 2σ′_r. We construct an instance of the online ATWC problem with three stages. In the first stage, r − 1 points arrive simultaneously; in the second stage, r new points arrive one by one; and then all 2r − 1 points depart simultaneously.
Since no point departs before the last point arrives, by Claim 2 we have OPT_A(S_i; P) = Disp(i; P) for any point i, where S_i is the instance containing only the first i points. Since the online algorithm is α-competitive with respect to OPT_A, it must ensure that for each point i, after its arrival, its distance to all the other present points (and to the boundary) is at least OPT_A(S_i; P)/α = Disp(i; P)/α: otherwise the adversary simply stops adding new points, and the competitive ratio is violated for the instance S_i.
Nevertheless, we show that after the second stage, the claimed competitive ratio must be violated. To do so, note that Disp(i; P) = 1/(i+1) for any point i, because the optimal dispersion without time is to locate the points evenly on the segment, resulting in i + 1 equal-length intervals. Denote by Q = {q_ℓ | 1 ≤ ℓ ≤ r − 1} the positions of the first r − 1 points given by the algorithm, and let q_0 = 0 and q_r = 1. We claim that, in the second stage, no two points can be put into the same interval generated by Q. Assume otherwise, and assume exactly two points i and j are put into the same interval, with r ≤ i < j ≤ 2r − 1. There must exist another interval (q_ℓ, q_{ℓ+1}) with ℓ ≥ 0 which does not contain any new point from stage two, and any other interval contains exactly one new point. Thus the total length of all the intervals generated by Q, denoted by L, satisfies

L > (q_{ℓ+1} − q_ℓ) + [Disp(i; P) + 2·Disp(j; P)]/(2σ′_r) + Σ_{h ∈ {r, . . . , 2r−1} \ {i, j}} 2·Disp(h; P)/(2σ′_r).   (1)

To see why Equation 1 is true, first note that q_{ℓ+1} − q_ℓ is the length of the interval which does not contain any new point. Second, the length of the interval split by i and j must be larger than [Disp(i; P) + 2·Disp(j; P)]/(2σ′_r). Indeed, upon arrival, point i first splits this interval into two smaller intervals, and one of them is further split by j; thus the sub-interval between i and the adjacent end-point of the interval is at least Disp(i; P)/α > Disp(i; P)/(2σ′_r), and each of the two sub-intervals created by j is at least Disp(j; P)/α > Disp(j; P)/(2σ′_r), due to the algorithm's claimed competitive ratio. Moreover, each point h ∈ {r, r + 1, . . . , 2r − 1} \ {i, j} splits one of the remaining intervals, and each of the two resulting sub-intervals is at least Disp(h; P)/α > Disp(h; P)/(2σ′_r). Finally, since the first r − 1 points have minimum distance at least Disp(r − 1; P)/α and Disp is non-increasing,

q_{ℓ+1} − q_ℓ > Disp(i; P)/(2σ′_r).   (2)

Combining Equations 1 and 2, we have

L > Σ_{h=r}^{2r−1} 2·Disp(h; P)/(2σ′_r) = (1/σ′_r)·Σ_{h=r}^{2r−1} 1/(h+1) = 1.   (3)

That is, the total length is larger than 1, a contradiction. In general, having more than two points in the same interval, or having more than one interval containing at least two points, leads to the same contradiction.
More specifically, for every new point i that contributes only one copy of Disp(i; P)/(2σ′_r) to the lower bound of L as in Equation 1, there exists exactly the same number of intervals that are not split by any new point. Thus we can fix an arbitrary bijection between those points and intervals, such that each unsplit interval contributes another copy of Disp(i; P)/(2σ′_r) for its corresponding point i as in Equation 2, and Equation 3 holds again. Accordingly, we conclude that each interval generated by Q is split by exactly one new point in stage two.
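As a numerical sanity check of the telescoping in Equation 3, under our reading that σ′_r is the harmonic sum Σ_{i=r}^{2r−1} 1/(i+1) (an assumption, since the definition is garbled in this copy): the per-point contributions 2·Disp(h; P)/(2σ′_r) sum to exactly 1, σ′_r is increasing in r, and 2σ′_r tends to 2 ln 2.

```python
import math
from fractions import Fraction

# Sanity check, assuming sigma'_r = sum_{i=r}^{2r-1} 1/(i+1): Equation 3's
# lower bound telescopes to exactly 1, and 2*sigma'_r increases to 2 ln 2.

def sigma_prime(r):
    return sum(Fraction(1, i + 1) for i in range(r, 2 * r))

for r in (2, 5, 50):
    contrib = sum(2 * Fraction(1, h + 1) / (2 * sigma_prime(r))
                  for h in range(r, 2 * r))
    assert contrib == 1                       # exactly the bound in Equation 3

assert sigma_prime(2) < sigma_prime(5) < sigma_prime(50)   # increasing in r
approx = 2 * sum(1.0 / (i + 1) for i in range(10**6, 2 * 10**6))
assert abs(approx - 2 * math.log(2)) < 1e-5   # 2*sigma'_r -> 2 ln 2
```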
However, in this case, by a similar argument, the total length of these intervals is again larger than 1, the same contradiction. Therefore such an algorithm A does not exist. In sum, for any integer r > 0, no algorithm achieves a competitive ratio better than 2σ′_r. Since σ′_r is strictly increasing in r and lim_{r→∞} 2σ′_r = 2 ln 2, no algorithm achieves a competitive ratio better than 2 ln 2, and Theorem 5 holds.

By the construction of the algorithm, up to time r, only the positions in Q may be used for the points. At time r, Q̂ = Q, all r positions in Q are occupied, and the segment is sliced into r + 1 intervals. After that, at each time i > r, the arriving point i splits the current largest interval into two equal sub-intervals and creates a new position in Q̂. In fact, since the lengths of the intervals created by Q strictly decrease from left to right, the position creation procedure can be described in "rounds" as follows. Round 0 is τ_1, . . . , τ_r (from time 1 to time r); round 1 splits the existing intervals one by one into halves, from the leftmost to the rightmost (from time r + 1 to time 2(r + 1)); round 2 again splits the existing intervals from the leftmost to the rightmost (from time 2(r + 1) + 1 to time 4(r + 1)); and so on. In particular, by the end of each round l, the number of sub-intervals sliced by Q̂ is 2^l(r + 1) and the number of positions is |Q̂| = 2^l(r + 1) − 1. Also, each interval (q_{i−1}, q_i) with i ∈ [r + 1] has been sliced into 2^l sub-intervals after round l. Moreover, because the maximum interval (q_0, q_1) is less than twice the minimum interval (q_r, q_{r+1}), whenever a point i > r is added with position in (q_{j−1}, q_j) for some j, the resulting minimum distance occurs between i and its right neighbor in Q̂ (and is also the length of all sub-intervals of (q_{j−1}, q_j) before i is added).
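For contrast with the uneven pre-fixed Q, it is instructive to see that a naive rule which always halves the currently largest interval of [0, 1] is only 2-competitive in the worst case; the toy simulation below (ours, not the paper's Algorithm 1) shows the ratio climbing toward 2, which is exactly why the first r positions are skewed.

```python
from fractions import Fraction

# Toy simulation: place each arriving point at the midpoint of the largest
# current interval of [0, 1] and compare d_min(n) with Disp(n) = 1/(n+1).

def halving_ratios(n_points):
    pts, ratios = [], []
    for n in range(1, n_points + 1):
        cuts = [Fraction(0)] + sorted(pts) + [Fraction(1)]
        a, b = max(zip(cuts, cuts[1:]), key=lambda g: g[1] - g[0])
        pts.append((a + b) / 2)                  # split the largest interval
        cuts = [Fraction(0)] + sorted(pts) + [Fraction(1)]
        d_min = min(y - x for x, y in zip(cuts, cuts[1:]))
        ratios.append(Fraction(1, n + 1) / d_min)
    return ratios

assert max(halving_ratios(20)) == Fraction(32, 17)   # ~1.88, tending to 2
```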
Let point m appear during round l + 1 for some l ≥ 0. Then there exists 1 ≤ i ≤ r + 1 such that m splits a sub-interval of (q_{i−1}, q_i), and d_min(m) is the length of the sub-interval immediately after it, namely d_min(m) = 1/(2^{l+1}·σ_r·(r + i)).
Therefore the competitive ratio is smaller than 2σ_r, and Lemma 6 holds.

The worst case happens for instances of the form S = ((1, m + 1), (2, m + 1), . . . , (m, m + 1)) with m ≤ r: whenever a new position is created, the minimum distance so far is incurred by Q̂, and the size of Q̂ only increases. In other words, we can focus on the instance S = ((1, r + 1), (2, r + 1), . . . , (r, r + 1)) and prove that the competitive ratio at any time d ∈ [r] is smaller than 2σ_r. Below we construct the desired ordering τ for Q by filling in a complete binary tree with r nodes. We do so in l rounds, with round j ∈ {0, . . . , l − 1} filling in level j of the tree, level 0 being the root. In round 0, letting the left end-point be 0 and the right end-point be 2^l (corresponding to q_0 = 0 and q_{2^l} = 1), fill the root with the average of the two, namely 2^{l−1}. In each round j > 0, process the nodes in level j from left to right. For each node x, letting its two neighbors in the current tree be filled with x_left and x_right, fill node x with (x_left + x_right)/2. If node x is the leftmost (respectively, rightmost) node in the current tree, then take x_left = 0 (respectively, x_right = 2^l). After the whole tree is filled, τ is obtained by traversing it in a breadth-first manner starting from the root: letting the j-th node visited be filled with x_j, we set τ_j = q_{x_j}. Figure 2 illustrates the structure of the tree and the ordering τ, for general l and for l = 3. We refer to the resulting τ as the binary ordering, and we have the following claim.
We prove Claim 16 after the proof of Lemma 7. Note that OPT_A(S_d; P) = Disp(d; P) = 1/(d + 1). To show apx(d) < 2σ_r, we lower-bound the denominator in two steps, by the following two claims, which are also proved after the proof of Lemma 7.
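The tree-filling construction above can be sketched directly; since each level is processed left to right, generating levels in order already yields the breadth-first ordering. The function below (ours, for illustration) returns the index sequence x_1, x_2, . . . such that τ_j = q_{x_j}, for r = 2^l − 1 pre-fixed positions.

```python
# Sketch of the binary ordering's index sequence: each node is filled with
# the average of its in-order neighbors, with virtual end-points 0 and 2**l;
# levels are emitted left to right, i.e., in BFS order.

def binary_ordering(l):
    order = []
    level = [(0, 2 ** l)]              # (left neighbor, right neighbor) pairs
    for _ in range(l):
        nxt = []
        for left, right in level:
            x = (left + right) // 2    # fill the node with the average
            order.append(x)
            nxt += [(left, x), (x, right)]
        level = nxt
    return order                       # tau_j = q_{order[j-1]}

assert binary_ordering(3) == [4, 2, 6, 1, 3, 5, 7]   # the l = 3 example
```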
Claim 17. Arbitrarily fixing i ∈ {0, 1, . . . , l − 1}, we have that f(s) is strictly decreasing in s.

Remark. Note that the denominator of apx(d) may not be decreasing in d; the jump may happen from d = 2^i + (2^i − 1) to d = 2^{i+1}. We bypass this problem by breaking the analysis into two steps, as above.
We now prove the three claims.
Proof of Claim 16. We first provide some simple facts about τ and Algorithm 1. Since the algorithm adds positions to Q̂ according to τ, the whole procedure can also be considered as l rounds, matching the construction of the binary tree. By induction, for each i = 0, 1, . . . , l − 1, the number of positions added to Q̂ in round i is 2^i (i.e., the number of nodes in level i of the tree), and the number of intervals created by Q̂ by the end of round i is 2^{i+1}. We denote these intervals by I^i_0, I^i_1, . . . , I^i_{2^{i+1}−1} from left to right, and refer to them as the round-i intervals. Moreover, referring to the intervals (q_{j−1}, q_j) with j ∈ [r + 1] as the pre-fixed intervals, we have that each round-i interval contains 2^{l−i−1} pre-fixed intervals: by construction, each position added in round i splits the corresponding round-(i−1) interval into two sub-intervals, not of the same length but containing the same number of pre-fixed intervals; thus all round-i intervals contain the same number of pre-fixed intervals. Next, it is easy to see that the points that arrive in round i are points 2^i + 0, 2^i + 1, . . . , 2^i + (2^i − 1). Thus point d = 2^i + s arrives in round i, and the corresponding position τ_d is in the round-(i−1) interval I^{i−1}_s. Accordingly, there are 2s + 1 round-i intervals to the left of τ_d, corresponding to a total of 2^{l−i−1}(2s + 1) pre-fixed intervals. That is, τ_d = q_{2^{l−i−1}(2s+1)}, as we wanted to show.
Below we compute the minimum distance of the algorithm at time d. Note that, after point d is located, the intervals incurred by Q̂ are I^i_0, I^i_1, . . . , I^i_{2s+1}, I^i_{2s+2}, I^{i−1}_{s+1}, . . . , I^{i−1}_{2^i−1}. By induction, we have that
• the lengths of I^i_0, I^i_1, . . . , I^i_{2s+2} are strictly decreasing, and
• the lengths of I^{i−1}_{s+1}, . . . , I^{i−1}_{2^i−1} are also strictly decreasing,
where the last inequality is because, for any two pre-fixed intervals (q_{j−1}, q_j) and (q_{j′−1}, q_{j′}) with j < j′, the former is longer than the latter. As the left end-point of I^i_{2s+2} is τ_d and the right end-point is q_{2^{l−i−1}(2s+2)}, the claimed formula for d_min(d) follows, and Claim 16 holds.
However, notice that the maximum value of j in Equation 4 is 2 l − 1 + 2 l−i−1 (2s + 2), thus all the inequalities above hold immediately. Accordingly, Claim 17 holds.
Proof of Claim 18. By definition, the left-hand side expands into a summation, and the inequality holds because 1/(j − 2^{l−i−2}) > 1/j for all j in the range of the summation. Therefore Claim 18 holds.

B.4 Proof for Theorem 8
Theorem 8. (restated) There exists a deterministic polynomial-time online algorithm for the ATWC problem, whose competitive ratio can be arbitrarily close to 2 ln 2. Moreover, the running time is polynomial in 1/ε for competitive ratio 2 ln 2 + ε.
Since the selection of l depends only on ε and not on the input sequence, l is a constant, and so is r. Given l and r, the binary ordering τ can be constructed in time O(r) = O(2/ε). Given r and τ, when a point arrives or departs, the running time of Algorithm 1 is polynomial in |Q̂|, and thus polynomial in the size of the input so far. Accordingly, Theorem 8 holds.
Proof. Similar to the proof of Lemma 6, the worst case happens when the points arrive at different times and all depart at the same time. Arbitrarily fixing d ≥ 1, we consider the minimum distance d_min(d) incurred by {τ_1, τ_2, . . . , τ_d} (at time d), and the corresponding competitive ratio at time d.
Remark: Not only can the algorithm in Theorem 9 not be computed in polynomial time, it also cannot be properly approximated by simply rounding each created position to a fixed precision. The reason is that the online algorithm does not know m beforehand and cannot adjust the precision accordingly. When the number of simultaneously present points grows, the optimal minimum distance may become much smaller than the precision, the relative positions of the points may be completely different, and the competitive ratio may be arbitrarily bad. Instead, the polynomial-time algorithm in Algorithm 1 deals with rational numbers and can adjust the precision as the number of present points grows.
C Proofs for Section 4

C.1 Proof for Lemma 10

Proof. We first prove the upper bound. Recall that DP(n; P) is the optimal radius for the dispersal packing problem with n balls. By Claim 3, DP(n; P) = Disp(n; P)/(2(1 − Disp(n; P))), as the radius of the insphere of the unit square P is x = 1/2. The disk packing density in the unit square P is upper-bounded by the disk packing density in the 2-dimensional infinite space, which is π/√12; comparing the total area of the n disks packed in P with P's own area then bounds DP(n; P) from above. Accordingly, the claimed upper bound on Disp(n; P) follows. In fact, the bound on DP(n; P) is asymptotically attained as n → ∞, thus Equation 6 is tight as n → ∞.
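A hedged numeric illustration of the upper bound's shape: combining the planar packing density π/√12 with the area comparison gives DP(n; P) at most roughly 1/(12^{1/4}·√n), which Claim 3 (with x = 1/2) converts into a bound on Disp(n; P). The constants below are our rendering of this argument, not a quoted formula from the paper.

```python
import math

# Heuristic form of the upper bound: n*pi*DP^2 <= pi/sqrt(12) gives
# DP(n) <= 1/(12**0.25 * sqrt(n)); Claim 3 with x = 1/2 then bounds Disp(n).

def dp_upper(n):
    return 1 / (12 ** 0.25 * math.sqrt(n))

def disp_upper(n):
    dp = dp_upper(n)
    return dp / (0.5 + dp)           # Disp = 2*x*DP/(x + DP) with x = 1/2

assert disp_upper(100) < disp_upper(10) < disp_upper(1)     # non-increasing
# For large n, Disp is about twice DP (both tend to 0):
assert abs(disp_upper(10**6) / (2 * dp_upper(10**6)) - 1) < 0.01
```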
Next, we prove the lower bound using a hexagonal packing in P, as illustrated in Figure 4. In particular, we would like to find a radius r* such that each row of the packing contains √(3n)/2 disks and there are 2√(n/3) rows in total. If so, then we can pack n disks in P with radius r*, which implies DP(n; P) ≥ r*. To fit √(3n)/2 disks in a row, the radius r* must satisfy the corresponding width constraint; to pack 2√(n/3) rows, the radius r* must satisfy the corresponding height constraint. It is easy to verify that both constraints are satisfied by r* = 1/(3 + · · ·). Accordingly, DP(n; P) ≥ r*.

C.2 Proof for Theorem 11
Theorem 11. (restated) No online algorithm achieves a competitive ratio better than 1.183 for the 2-dimensional ATWC problem in a square.
Proof. Arbitrarily fixing an online algorithm A and denoting its competitive ratio by σ, we show that σ ≥ 1.183. To do so, arbitrarily fix a positive integer r and consider the set ∆ of positive integers δ′ satisfying the inequality below. Solving the inequality for δ′, and letting δ = max ∆, we have ∆ = {1, . . . , δ}.
Following Lemma 10, it is easy to see that Disp(r + δ; P) ≥ Disp(r; P)/2. We now consider the input sequence S = ((1, r + δ + 1), (2, r + δ + 1), . . . , (r + δ, r + δ + 1)) and prove that

rπ·(Disp(r; P)/(2σ))² + Σ_{i=1}^{δ} π·(Disp(r + i; P)/σ − Disp(r; P)/(2σ))² ≤ 1.

Indeed, after the first r points arrive, d_min(r) ≥ Disp(r; P)/σ by assumption, thus we can pack r disks with radius Disp(r; P)/(2σ) in the square, centered at the r positions where the points are located. Their total area is rπ·(Disp(r; P)/(2σ))². Next, for each arriving point r + i with 1 ≤ i ≤ δ, the distance between itself and any other point j with 1 ≤ j ≤ r is at least

Disp(r + i; P)/σ = Disp(r; P)/(2σ) + (Disp(r + i; P)/σ − Disp(r; P)/(2σ)),   (9)

and the distance between itself and any other point r + j with 1 ≤ j < i is at least

Disp(r + i; P)/σ ≥ (Disp(r + i; P)/σ − Disp(r; P)/(2σ)) + (Disp(r + j; P)/σ − Disp(r; P)/(2σ)),   (10)

where the last inequality is because Disp(n; P) is non-increasing in n and Disp(r; P) ≥ Disp(r + j; P). By the definition of δ, for any i ≤ δ we have Disp(r + i; P)/σ − Disp(r; P)/(2σ) > 0, so this quantity is a well-defined radius. Accordingly, by Equations 9 and 10, if we put a disk with radius Disp(r + i; P)/σ − Disp(r; P)/(2σ) centered at the position of point r + i for each 1 ≤ i ≤ δ, then disk r + i does not overlap with the first r disks, whose radius is Disp(r; P)/(2σ); nor does it overlap with any disk r + j with 1 ≤ j < i, whose radius is Disp(r + j; P)/σ − Disp(r; P)/(2σ). Moreover, since the radius of disk r + i is at most d_min(r + i), it does not cross the boundary of P either. By induction, all r + δ disks do not overlap with each other or with the boundary of P, and they form a (non-uniform) packing in P. The total area of the disks r + i with 1 ≤ i ≤ δ is Σ_{i=1}^{δ} π·(Disp(r + i; P)/σ − Disp(r; P)/(2σ))². Finally, since d_min(r) − Disp(r; P)/(2σ) ≥ Disp(r; P)/(2σ), the first r disks do not cross the boundary of P either, and the total area of all the disks is at most the area of P, which is 1. Replacing δ with 3r − 4/3 and letting r → ∞, we calculate the limits term by term, obtaining σ² ≥ 2(2 ln 2 − 1)π/√3, that is, σ ≥ √(2(2 ln 2 − 1)π/√3) ≈ 1.183. Therefore Theorem 11 holds.
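A quick numeric check of the final constant; note that the square root is needed to match the stated 1.183, since the expression inside it evaluates to about 1.401.

```python
import math

# sigma >= sqrt(2*(2*ln 2 - 1)*pi/sqrt(3)) ~= 1.1838; without the square
# root the expression is ~1.4014, so the root is essential to the constant.

inner = 2 * (2 * math.log(2) - 1) * math.pi / math.sqrt(3)
assert abs(inner - 1.4014) < 1e-3
assert abs(math.sqrt(inner) - 1.1838) < 1e-3
```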

C.3 Algorithm 5
In this section we define Algorithm 5, the Position Creation Phase used by Algorithm 2. It proceeds in rounds i = 0, 1, . . . , and Figure 5 illustrates the colored areas after round 0.

C.4 Some Intuition for the Set Q of Pre-fixed Positions
To help understand the performance of Algorithm 2, we first provide some intuition for the selection of the first 36 positions in Q. Similar to those for Algorithm 1, they cannot be too unevenly distributed, nor too evenly distributed. That is why q_1 is not at the center of the square: without loss of generality, it is closer to the left and the bottom boundaries. Letting y be its distance to the left (and the bottom) boundary, Disp(1; P)/y = 1/(2y) will roughly be the competitive ratio. Again similar to Algorithm 1, q_1's distance to the right and the top boundaries, 1 − y, is larger than y but smaller than 2y.
To make the minimum distance shrink as little as possible, q_2 is put at the center of the larger square, to the upper-right of q_1. Note that the best position for q_2 given q_1 is at equal distance to q_1 and to the boundary, as illustrated in Figure 6. We instead put it at the center of the upper-right square so as to leave enough space for q_3 and q_4. Indeed, q_3 and q_4 are symmetric and at the grid vertices induced by q_1 and q_2, so that adding them to Q̂ does not shrink the minimum distance.
After the first four positions, the square is split into 9 small rectangles, labeled A to I, as shown in Figure 7. Let us now consider q_5. Notice that among the four upper-right squares, E is better than B, C and F, as the latter three have the same size as E but are adjacent to the boundary. If we put q_5 at the center of E, then eventually we will slice the area to the upper-right of q_1 into four intervals in each dimension, all of length (1 − y)/4. Another possibility is to put q_5 in G. However, note that the center of G is not the optimal position in G, for the same reason as in the choice of q_2. We will choose y such that the center of E is better than the optimal position in G. In fact, a better competitive ratio is achieved if we slice G

[Algorithm 5, caption: The Position Creation Phase. In this phase, our algorithm repeatedly splits the rectangles into smaller ones by creating a position at each one's center. The internal areas (green, pink and red) will expand, while the outer areas (orange, yellow and blue) remain strips with the width of a single rectangle.]
After q_36 is added to Q̂, the position creation phase proceeds in rounds i = 0, 1, . . . such that, after each round, each of the previous rectangles has been divided into four sub-rectangles by a position at its center. Accordingly, the number of rectangles created in round i ≥ 0, referred to as round-i rectangles, is 49·4^{i+1}; the number of intervals in each dimension is 7·2^{i+1}; and the number of positions in Q̂ is (7·2^{i+1} − 1)², corresponding to the grid vertices not on the boundary. Moreover, the positions created in round i are positions (7·2^i − 1)² + 1 through (7·2^{i+1} − 1)². The algorithm keeps the round number i and updates it as it proceeds. (Or, given n > 36, we can find in time O(log n) the round i in which position n is created from the formula above.) The location of position n is decided by the following five cases. We provide the ranges of position n for each case, so that it can easily be decided which case position n belongs to.

Case 1. If there is a round-(i − 1) square in the green area whose center is not in Q̂, arbitrarily pick such a square and put position n at its center, splitting it into four round-i rectangles.
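The bookkeeping above is easy to verify mechanically (note that 36 = (7·2^0 − 1)², the number of internal grid positions before round 0):

```python
# Check the round-i counts: 7*2**(i+1) intervals per dimension,
# (7*2**(i+1) - 1)**2 internal grid positions, 49*4**(i+1) rectangles;
# round i creates (7*2**(i+1) - 1)**2 - (7*2**i - 1)**2 positions.

assert (7 * 2 ** 0 - 1) ** 2 == 36             # positions before round 0
for i in range(5):
    per_dim = 7 * 2 ** (i + 1)
    assert per_dim ** 2 == 49 * 4 ** (i + 1)   # rectangles = cells of the grid
    created = (per_dim - 1) ** 2 - (7 * 2 ** i - 1) ** 2
    assert created > 0
print((7 * 2 ** 1 - 1) ** 2 - 36)              # round 0 creates 133 positions
```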
We claim these are positions (7·2^i − 1)² + 1, . . . .

Case 2. Else, if there is a round-(i − 1) rectangle in the pink areas whose center is not in Q̂, arbitrarily pick such a rectangle and put position n at its center, splitting it into four round-i rectangles.
We claim these are positions 65 . . . .

Case 3. Else, if there exists a round-(i − 1) rectangle in the red area (which is actually a square) whose center is not in Q̂, arbitrarily pick such a rectangle and create position n at its center, splitting it into four round-i rectangles (again, squares).
Case 4. Else, we distinguish two sub-cases.
First, if there exists a round-(i − 1) rectangle in the orange area and the two blue areas whose center is not inQ, arbitrarily pick such a rectangle and create position n at its center, splitting it into four round-i rectangles.
Otherwise, if there exists a vertex of a round-i rectangle in the green and the orange areas which is (a) not inQ, (b) not adjacent to the blue or the pink areas and (c) not on the boundary, then arbitrarily pick such a vertex for position n.
We claim these are positions 98 . . . .

Case 5. Else, among all the centers of the round-(i − 1) rectangles in the yellow area and all the vertices of the round-i rectangles that are not in Q̂ and not on the boundary, arbitrarily pick one for position n.
We claim these are positions 130 . . . .

After all positions in round i are created, the yellow, blue, and orange areas shrink by one round-i rectangle towards the boundary: they only contain rectangles adjacent to the boundary. The area released by orange is taken by green; that released by blue is taken by pink; and that released by yellow is taken by blue, pink, and red. Figure 5 illustrates the colored areas after round 0.
[Figure 6: The best position for q_2 given q_1, at equal distance d to q_1 and to the boundary.]

into three intervals instead of four in each dimension, all of length y/3. Let x = y/3 and cx = (1 − y)/4 be the corresponding lengths of the intervals after all the slicing, as illustrated by Figure 3. We have x = 1/(3 + 4c), and the minimum distance incurred by putting q_5 at the center of E is √2·cx. After the grid with 36 internal vertices is created, the positions on the grid appear naturally by always selecting the next optimum. In order to find this optimal position and compute the corresponding minimum distance easily, in a 1-dimensional space we would choose x < cx < 2x as in Algorithm 1. In a 2-dimensional space we instead want x < cx < √2·x, that is, 1 < c < √2, because the distances are in the L_2-norm. The resulting ordering for Q is shown in Figure 3, and the minimum distances after each position n are shown in Table 2, grouped into 7 cases.

C.5 Proof for Theorem 12
Theorem 12. (restated) Algorithm 2 runs in polynomial time and it is 1.591-competitive for the 2-dimensional online ATWC problem in a square.
We first show that the algorithms are well defined; see the claim below. The inductive formula for the number of green squares is obtained because each one of them is split into two after round i; the extra +1 is because, at the end of round i, the orange area shrinks and the green area grows by one round-i rectangle to the top and to the right.
Solving this inductive formula with initial condition N_g^{−1} = 9, we obtain N_g^{i−1} for each i ≥ 0, which is exactly the number of positions created in Case 1 of round i, because one position is created at the center of every such square.

Case 2. Let N_p^{i−1} be the number of round-(i − 1) rectangles in the pink areas. In order to compute N_p^{i−1}, we first compute N_r^{i−1}, the number of round-(i − 1) squares in the red area. Similar to the induction above, we have N_r^{−1} = 4 and N_r^i = (2·√(N_r^{i−1}) + 1)². Accordingly, we obtain N_p^{i−1}, which is exactly the number of positions created in Case 2 of round i, because one position is created at the center of every such rectangle.

Case 4. Furthermore, the number of round-(i − 1) orange rectangles and that of round-(i − 1) blue rectangles can be computed similarly, and together they give the number of positions created in the first sub-case of Case 4 of round i. Combining the two sub-cases, we obtain the number of positions created in Case 4 of round i.

Case 5. Finally, all the remaining positions in round i are created in Case 5. For completeness, we note that the number of round-(i − 1) yellow rectangles can be computed in the same way.

Combining all the cases together, Claim 19 holds.
Finally we are ready to prove Theorem 12.
Proof of Theorem 12. It is easy to see that the running times of Algorithms 2 and 5 are polynomial in |Q| and thus in the number of arrived points. For the competitive ratio, again we can focus on instances of the form S = ((1, n + 1), (2, n + 1), . . . , (n, n + 1)), with n ≥ 1. Letting apx(n; P ) = Disp(n;P ) d min (n) , we would like to find c = arg min c max n apx(n; P ). In the analysis below we distinguish three cases: first n ≤ 36, then n in rounds 0 and 1 of Algorithm 5, and at last n in round i for each i ≥ 2 of Algorithm 5.
For n ≤ 36. Following Table 2 in Section C.4, the minimum distances within each of the seven cases are the same. Since Disp(n; P) is non-increasing, the worst competitive ratio occurs at the first position n in each case. For example, the worst competitive ratio in case 2 is Disp(2; P)/(2cx). For n = 1, 2 and 5, the exact solution for Disp(n; P) can easily be found: one position in the center for n = 1; two positions on a diagonal for n = 2, where the ratios of the three resulting intervals on the diagonal are √2 : 1 : √2; and one position in the center plus four positions in the four resulting squares for n = 5, where the four positions are all on the diagonals and the ratios of the resulting intervals on a diagonal are √2 : 1 : 1 : √2. Although there is no general closed form for Disp(n; P) in the literature, exact solutions for DP(n; P) have been found for small n's, and we use them to get the worst competitive ratios of our algorithm for 6 ≤ n ≤ 36: the works [39,38,37,32] provide exact solutions for the corresponding DP(n; P)'s, and we compute the corresponding Disp(n; P)'s by Claim 3. Extending Table 2, the competitive ratios are shown in Table 3. Note that here we do not need the upper bound on Disp(n; P), as we know the exact solutions.
Round 0. Our construction of Algorithm 5 guarantees that, in each round $i \ge 0$, $d_{\min}(n)$ is the same for all $n$'s in the same case. The general formulas of $d_{\min}(n)$ in all five cases are shown in Table 4, and the competitive ratios are obtained by applying our upper bound on $Disp(n;P)$ from Lemma 10. Again, the worst competitive ratio in each case occurs at the first position $n$ of the case, because $Disp(n;P)$ is non-increasing. The corresponding ratios for round 0 are obtained by setting $i = 0$.
We now verify the formulas of $d_{\min}(n)$ in Table 4 for any $i \ge 0$. If $n$ falls into Case 1, it is at the center of a round-$(i-1)$ square in the green area, whose edge has length $\frac{cx}{2^i}$. Since the minimum distance is half of the diagonal of that square, we have $d_{\min}(n) = \frac{\sqrt{2}cx}{2^{i+1}}$. If $n$ falls into Case 2, it is at the center of a round-$(i-1)$ rectangle in the pink areas, whose length is $\frac{cx}{2^i}$; since the minimum distance is half of the diagonal of that rectangle, $d_{\min}(n)$ follows. If $n$ falls into Case 3, it is at the center of a round-$(i-1)$ square in the red area, whose edge has length $\frac{x}{2^i}$; again the minimum distance is half of the diagonal of that square, so $d_{\min}(n) = \frac{\sqrt{2}x}{2^{i+1}}$. If $n$ falls into Case 4, then it is either at the center of a round-$(i-1)$ rectangle in the orange and the blue areas, or a vertex of a round-$i$ rectangle in the green and the orange areas that is neither on the boundary nor adjacent to the pink or the blue areas. In either sub-case, the minimum distance incurred by position $n$ is $\frac{1}{2}\cdot\frac{cx}{2^i}$, either to the boundary or to the center of a round-$(i-1)$ rectangle next to it. Finally, if $n$ falls into Case 5, it is either at the center of a round-$(i-1)$ rectangle in the yellow area or a vertex of a round-$i$ rectangle in the blue, yellow, pink or red areas. In either sub-case, the minimum distance incurred by position $n$ is $\frac{1}{2}\cdot\frac{x}{2^i}$, either to the boundary or to the center of the round-$(i-1)$ rectangle next to it. In sum, the general formulas of $d_{\min}(n)$ for any $i$ and $n$ are as shown in Table 4.
Round 1. Setting $i = 1$ in Table 4, similar formulas for $apx(n;P)$ can be obtained for all five cases of round 1.
Round i ≥ 2. To find the worst competitive ratio over all rounds $i \ge 2$, the difficulty is that the upper bounds on $apx(n;P)$ in Table 4 are not necessarily monotone, so it is hard to tell where the worst ratio occurs. Instead, we find a universal upper bound for the ratios and a novel way to analyze its monotonicity. More precisely, let $f(n;P)$ be the bound obtained by replacing $Disp(n;P)$ with its upper bound from Lemma 10, so that for any $n \ge 1$, $apx(n;P) = \frac{Disp(n;P)}{d_{\min}(n)} \le f(n;P)$. In Table 5, we replace $apx(n;P)$ by its upper bound $f(n;P)$.
Given Tables 4 and 5, any $n \ge 37$ falls into one entry based on its round number $i$ and case number $j \in \{1, \dots, 5\}$; accordingly, we also write $apx(i, j)$ and $f(i, j)$ for $apx(n;P)$ and $f(n;P)$. When $1 \le n \le 36$, it falls into one of the seven cases in Table 3, and we denote the corresponding quantities by $apx(*, j)$ and $f(*, j)$ with $j \in \{1, \dots, 7\}$. Since we want to find a parameter $c$ minimizing $\max\{\max_{i\ge 0,\, j\le 5} apx(i, j),\ \max_{j\le 7} apx(*, j)\}$, we must find where $apx(i, j)$ or $apx(*, j)$ is maximized.
When $i \ge 2$, for any $j \le 5$, it is easy to see that $f(i, j)$ is decreasing in $i$: that is, $f(n;P)$ may not be monotone overall, but it is monotone across rounds within the same case. Therefore we only need to compare $apx(*, j)$ for $j = 1, \dots, 7$, $apx(i, j)$ for $i = 0, 1$ and $j = 1, \dots, 5$, and $f(2, j)$ for $j = 1, \dots, 5$. Note that $f(i, j)$ is also decreasing at $i = 0, 1$, but it is a loose upper bound there, so we use $apx(i, j)$ for $i = 0, 1$ in order to get a better bound.
What remains is simple. There are in total 22 quantities $apx(\cdot, \cdot)$ and $f(\cdot, \cdot)$ to consider, whose values can be directly calculated from the tables. In particular, each of them is at most $\max\{apx(*, 6), apx(0, 5)\}$. Notice that, so far, we have not used the value of $c$; it is chosen at this point to minimize the maximum above.
Theorem 13. (restated) No online algorithm achieves a competitive ratio better than $\frac{7}{6}$ for the online ATWC problem for arbitrary polytopes.
Proof. We show that even when the given $k$-dimensional "polytope" $P$ is a sphere and there are only two points, no algorithm achieves a competitive ratio better than $\frac{7}{6}$. As a sphere can be approximated arbitrarily closely by a polytope, this implies our theorem.
Without loss of generality, let the center of the sphere be $(0, 0, \dots, 0)$ and the radius be $\frac{1}{2}$. We first consider the optimal solution for the dispersion problem without time. Clearly, $Disp(1;P) = \frac{1}{2}$. When there are two points, we have the following.
Claim 20. For any algorithm $A$ locating two points in $P$, there exists another algorithm $A'$ that locates the two points on a diameter of the sphere and whose competitive ratio with respect to $Disp(2;P)$ is no worse than that of $A$.
Following Claim 20, it is not hard to see that the optimal solution locates the two points on a diameter at coordinates $-\frac{1}{6}$ and $\frac{1}{6}$, thus $Disp(2;P) = \frac{1}{3}$. Now we consider online algorithms for ATWC. Following Claim 20, without loss of generality we focus on algorithms that locate the second point on the same diameter as the first. Again without loss of generality, the two positions are $p_1 = (x, 0, \dots, 0)$ and $p_2 = (y, 0, \dots, 0)$ with $x \le 0 \le y$. Moreover, given $p_1$, the best position for $p_2$ is to set $y$ such that $dis(p_2, \partial P) = dis(p_2, p_1)$: that is, $y - x = \frac{1}{2} - y$, which implies $y = \frac{1+2x}{4}$. Accordingly, $d_{\min}(2;P) = \min\left\{\frac{1}{2}+x,\ \frac{1-2x}{4}\right\}$, and the worst competitive ratio after the first two points is $\max\left\{\frac{1/2}{1/2+x},\ \frac{1/3}{\min\{1/2+x,\, (1-2x)/4\}}\right\}$.
Choosing $x$ to minimize this maximum, we get $x = -\frac{1}{14}$, and the competitive ratio cannot be better than $\frac{7}{6}$. Thus Theorem 13 holds.
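The optimization over $x$ at the end of this proof can be checked numerically. The snippet below is our own sanity check (not the paper's code): it encodes the two minimum distances from the proof, $\frac{1}{2}+x$ after the first point and $\min\{\frac{1}{2}+x, \frac{1-2x}{4}\}$ after the second, together with $Disp(1;P) = \frac{1}{2}$ and $Disp(2;P) = \frac{1}{3}$.

```python
# The adversary's worst ratio after two points, as a function of the first
# point's coordinate x <= 0 on the diameter.
def worst_ratio(x):
    d1 = 0.5 + x                           # d_min after the first point
    d2 = min(0.5 + x, (1 - 2 * x) / 4)     # d_min after the best second point
    return max(0.5 / d1, (1 / 3) / d2)

# The maximum is minimized at x = -1/14, where both terms equal 7/6.
assert abs(worst_ratio(-1 / 14) - 7 / 6) < 1e-12
assert min(worst_ratio(-i / 10000) for i in range(1, 4000)) >= 7 / 6 - 1e-9
```

At $x = -\frac{1}{14}$ both terms balance at exactly $\frac{7}{6}$, and a grid sweep over $x \in (-0.4, 0)$ never falls below that value.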
Similarly, a lower bound of $1 + \frac{1}{4\sqrt{k}+2}$ can be obtained for $k$-dimensional cubes for any $k \ge 2$. To construct an online algorithm for ATWC for arbitrary polytopes, we first consider the straightforward greedy algorithm, and we have the following. Recall that the geometric operations of finding the minimum bounding cube, deciding whether a position is in $P$, and finding the distance between a point in $P$ and the boundary of $P$ are given as oracles.
Proof. Denote by $\mathcal{P}$ the set of points in $P$ whose distance from the boundary is at least $\frac{Disp(n;P)}{2}$. By contradiction, assume that for every $p \in \mathcal{P}$, $\min_{i\in[n]} dis(p, p_i) < \frac{Disp(n;P)}{2}$.
Letting $r = \sup_{p\in\mathcal{P}} \min_{i\in[n]} dis(p, p_i)$, we have $r < \frac{Disp(n;P)}{2}$, because $\mathcal{P}$ is closed. Accordingly, $\mathcal{P}$ is covered by $n$ balls centered at $p_1, \dots, p_n$ with radius $r$, and $vol(\mathcal{P}) \le n \cdot Ball(r)$, where $vol(\mathcal{P})$ is the volume of $\mathcal{P}$ and $Ball(r)$ is the volume of a ball with radius $r$. However, by the definition of $Disp(n;P)$, there exist $n$ positions in $P$ such that $\min_{i,j\in[n]} \{dis(p_i, p_j), dis(p_i, \partial P)\} \ge Disp(n;P)$, contradicting the volume bound above.
Consider the greedy algorithm that, for each $n \ge 1$, creates the $(n+1)$-st position at $p = \arg\max_{p\in P} \min_{i\in[n]}\{dis(p, p_i), dis(p, \partial P)\}$. By Lemma 21 and an inductive argument, it is easy to see that this online algorithm is 2-competitive for the ATWC problem. However, finding the optimal position $p$ may be time-consuming even given the oracles, and we design a polynomial-time algorithm which achieves a competitive ratio of $\frac{2}{1-\epsilon}$ for any $\epsilon > 0$.
Theorem 14. (restated) For any constants $\gamma, \epsilon > 0$, any integer $k \ge 2$, and any $k$-dimensional polytope $P$ with covering rate at least $\gamma$, there exists a deterministic polynomial-time online algorithm for the ATWC problem with competitive ratio $\frac{2}{1-\epsilon}$ and running time polynomial in $\frac{1}{(\gamma\epsilon)^k}$.
Proof. Without loss of generality, the minimum bounding cube of $P$ is the unit cube; thus the edge-length of the maximum inscribed cube $C$ is at least $\gamma$. The idea is simply to slice the unit cube into small cubes and exhaustively search all the cube-centers that are in $P$. The number of cubes that need to be searched depends on the number $n$ of existing positions, the approximation parameter $\epsilon$, and a lower bound on $Disp(n;P)$, which ultimately depends on $\gamma$. In particular, we have the following. Proof. Let $p$ be the optimal position chosen by the greedy algorithm given $p_1, \dots, p_n$, let $C_j$ be the small cube containing $p$, and let $c_j$ be the center of $C_j$. We have $\min_{i\in[n]}\{dis(p, p_i), dis(p, \partial P)\} \ge \frac{Disp(n;P)}{2}$ (11), and, by the choice of the cube side-length, $dis(c_j, p) \le \frac{\epsilon}{2} Disp(n;P)$.
Thus, by going through all the $c_j$'s in $P$ and choosing the one that maximizes $\min_{i\in[n]}\{dis(c_j, p_i), dis(c_j, \partial P)\}$, we find the $(n+1)$-st position in time $O(m^k \cdot n) = O\big(\frac{n \cdot 2^k k^{k/2}}{(\gamma\epsilon)^k}\big)$. Again because $Disp(n;P) \ge Disp(n+1;P)$, applying Claim 22 to each round of the greedy algorithm, by induction the resulting algorithm runs in polynomial time and is $\frac{2}{1-\epsilon}$-competitive. Therefore Theorem 14 holds.
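The grid-search step of this proof can be sketched as follows. This is a minimal 2-dimensional rendering under our own naming (the function `grid_greedy_position` and the oracle parameters `in_P` and `dist_to_boundary` are assumptions standing in for the paper's oracles), not the paper's actual implementation.

```python
import itertools
import math

def grid_greedy_position(existing, in_P, dist_to_boundary, m, k=2):
    """One round of the approximate greedy algorithm (sketch): slice the unit
    bounding cube into m^k small cubes and return the cube-center inside P
    that maximizes the minimum distance to the existing positions and to the
    boundary of P."""
    best, best_score = None, -1.0
    for cell in itertools.product(range(m), repeat=k):
        c = tuple((2 * i + 1) / (2 * m) for i in cell)  # center of the cell
        if not in_P(c):
            continue
        score = dist_to_boundary(c)
        for p in existing:
            score = min(score, math.dist(c, p))
        if score > best_score:
            best, best_score = c, score
    return best

# Toy usage with P = the unit square, where both oracles are trivial.
in_sq = lambda p: all(0.0 <= v <= 1.0 for v in p)
bd_sq = lambda p: min(min(v, 1.0 - v) for v in p)
pts = []
for _ in range(3):
    pts.append(grid_greedy_position(pts, in_sq, bd_sq, m=40))
```

As in the proof, shrinking the cell size (larger `m`) makes the chosen center lose at most an $\epsilon$-fraction of the greedy optimum's minimum distance.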
Corollary 23. For any k-dimensional polytope P with covering rate at least γ, the algorithm in Theorem 14 is O(n)-competitive for the online CD problem.
Proof. Arbitrarily fix a small constant $\epsilon$ in this greedy algorithm. For any $S = ((s_1, d_1), \dots, (s_n, d_n))$ and any time $0 \le t \le T$, the output of the algorithm satisfies the corresponding lower bound on $d_{\min}(t)$, from which the corollary follows.
Proof. Given an input sequence $S = ((s_1, d_1), \dots, (s_n, d_n))$, let $\mathcal{T}$ be the union of all the intervals that contain at least one point. Note that, for the original input sequence, we may assume $\mathcal{T} = [0, T]$ without loss of generality; however, this may change in later iterations.
In each round of $A_I$'s while loop, the algorithm considers all the points that arrive within the sliding window $(s, d]$ and depart after the sliding window. If there are such points, it selects the point $j$ with the latest departure time among them, moves the sliding window to $(d, d_j]$, and puts point $j$ in $I_{index}$, where $index$ alternates between $I_1$ and $I_2$ across rounds. If there is no such point, it finds the earliest arrival time after $d$, $s' = \min_{i \in S: s_i > d} s_i$, and moves the sliding window to $(d, s']$. Note that in either case the new sliding window $(s, d]$ satisfies $s < d$, and thus is well defined. Now we prove that the output $\{I_1, I_2\}$ satisfies Φ.1. Note that the variable $index$ has the same value in any two rounds $l$ and $l+2$. Let $(s, d]$ be the sliding window in round $l$. If point $j$ is added to $I_{index}$ in round $l$ and point $j'$ is added to $I_{index}$ in round $l+2$, then the sliding windows in rounds $l$, $l+1$ and $l+2$ are, respectively, $(s, d]$, $(d, d_j]$ and $(d_j, x]$ for some $x > d_j$, computed either in Step 7 or in Step 9. By the definition of $j'$, $s_{j'} > d_j$, so points $j$ and $j'$ do not overlap. By induction, no two points in $I_1$ overlap, and neither do any two points in $I_2$. Thus property Φ.1 holds.
Next, we prove that the output $\{I_1, I_2\}$ satisfies Φ.2: in other words, the time intervals of points in $I = I_1 \cup I_2$ cover $\mathcal{T}$, the union of all intervals containing at least one point. To do so, note that the sliding windows in algorithm $A_I$ are all disjoint and their union covers $[0, T]$. Accordingly, for any time $t \in [0, T]$ with at least one point $j'$ present at time $t$ in $S$, there exists a unique round $i$ whose sliding window $(s_i, d_i]$ contains $t$: that is, $s_i < t \le d_i$. We distinguish three cases. Case 1. If $s_i = -1$ (i.e., round $i$ is the first round of the algorithm), then it must be that $d_i = 0 = s_{j'} = t$ and $d_{j'} > 0$. By definition, $\hat{S} = \emptyset$ and the chosen point $j$ satisfies $s_j = 0$ and $d_j \ge d_{j'}$. Therefore at least one point present at time $t$ (namely point $j$) is in $I$.
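The sliding-window procedure $A_I$ described above can be rendered as a short sketch. This is our own Python rendering under stated assumptions (the function name and the list representation are ours), not the paper's pseudocode.

```python
def partition_intervals(points):
    """Sketch of A_I: sweep a sliding window (s, d] over time and alternately
    assign the selected points to two classes.  `points` is a list of
    (s_j, d_j) arrival/departure pairs.  Within each returned class no two
    intervals overlap (property Phi.1), and the selected intervals together
    cover every time covered by some point (property Phi.2)."""
    I = ([], [])      # I[0] plays the role of I_1, I[1] of I_2
    index = 0         # alternates between the two classes
    s, d = -1, 0      # the initial sliding window (-1, 0]
    remaining = sorted(points)
    while remaining:
        # points arriving inside the window (s, d] and departing after it
        inside = [(sj, dj) for (sj, dj) in remaining if s < sj <= d and dj > d]
        if inside:
            j = max(inside, key=lambda p: p[1])  # latest departure time
            I[index].append(j)
            remaining.remove(j)
            s, d = d, j[1]                       # new window (d, d_j]
            index = 1 - index
        else:
            later = [sj for (sj, dj) in remaining if sj > d]
            if not later:
                break                            # only covered points remain
            s, d = d, min(later)                 # new window (d, s']
    return I
```

For example, on the input `[(0, 3), (1, 5), (2, 4), (6, 8)]` the sketch puts `(0, 3)` and `(6, 8)` into one class and `(1, 5)` into the other; the point `(2, 4)` is never selected, but its interval is already covered by the selected ones, as property Φ.2 requires.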
Note that the set of present points changes only at the end-points of the intervals and stays the same within each interval. Accordingly, given the optimal offline positions $X_1, \dots, X_n \in P$, the optimal cumulative distance is $\sum_{i\in[l]} |T_i| \cdot d_{\min}(T_i; X_1, \dots, X_n)$, where $d_{\min}(T_i; X_1, \dots, X_n)$ is the minimum distance incurred by $X_1, \dots, X_n$ at any time $t$ within $T_i$ (that is, $t \in (left_i, right_i)$). Now we consider the minimum distance incurred by algorithm $A_{CD}$ in each $T_i$. For each $i \in [l]$, let $S_i = \{j \mid j \in [n], s_j < right_i, d_j > left_i\}$ be the set of points that overlap with $T_i$, and let $n_i = |S_i|$. Again by Claim 24, and in particular by property Φ.2, all points in $S_i$ are removed from $S$ in the first $n_i$ rounds of algorithm $A_{CD}$. Accordingly, all points in $S_i$ are located at the first $2n_i$ positions created by algorithm $A_{ATWC}$ from time $0$ to time $n_i - 1$. Since $A_{ATWC}$ is a $\sigma$-competitive algorithm, it ensures that at time $n_i - 1$ in the online instance, the minimum distance incurred by positions $x_1, \dots, x_{2n_i}$ is at least $\frac{Disp(2n_i;P)}{\sigma}$. Thus the cumulative distance in $T_i$ according to $A_{CD}$ is at least $|T_i| \cdot \frac{Disp(2n_i;P)}{\sigma}$, and the total cumulative distance is obtained by summing over all $i \in [l]$. Again because the set of present points stays the same throughout $T_i$, the optimal cumulative distance within $T_i$ is at most $|T_i| \cdot Disp(n_i;P)$.
Therefore the competitive ratio is no larger than $\sigma \cdot \frac{Disp(n_i;P)}{Disp(2n_i;P)} \le \sigma \cdot \max_{i\ge 1} \frac{Disp(i;P)}{Disp(2i;P)}$, as we wanted to show. Finally, all that remains is to prove the following claim.

F Dispersion Without the Boundary Condition
The literature has also considered the dispersion problem when the distance to the boundary is not taken into consideration, referred to as spreading points [6] or facility dispersion [34]. The objective is $SP(n;P) = \max_{X_1, \dots, X_n \in P}\ \min_{i \ne j} dis(X_i, X_j)$.
In this section we consider the online dispersion problem for this objective.
Recall that for any instance $S = ((s_1, d_1), \dots, (s_n, d_n))$, $T = \max_{i\in[n]} d_i$. Given locations $X = (X_1, \dots, X_n)$, for any $t \le T$, let $d^{SP}_{\min}(t; X) = \min_{i \ne j:\ s_i \le t \le d_i,\ s_j \le t \le d_j} dis(X_i, X_j)$ be the minimum distance among the points that are present at time $t$. When $X$ is clear from the context, we may write $d^{SP}_{\min}(t)$ for short. Here we also consider the two natural objectives: the all-time worst-case (ATWC) problem, where the objective is the worst-case $d^{SP}_{\min}(t)$ over time, and the cumulative-distance (CD) problem. Similar to Claim 2, letting $m = \max_{t\le T} |\{i : s_i \le t \le d_i\}|$, we have $OPT^{SP}_A(S;P) = SP(m;P)$. In the dispersion-without-boundary problem, the optimal solution and the optimal competitive ratio of online algorithms may be very different from the dispersion problem. For instance, consider the polytope shown in Figure 8: when the number of points is small, the optimal dispersion without boundary condition locates most points in the left part of the polytope, while the optimal dispersion locates most points in the right part. However, we show that most of our techniques for the dispersion problem carry over to dispersion without boundary condition. Indeed, for the 1-dimensional case, by locating the first two positions at the two end-points of the segment, we can lower- and upper-bound the optimal competitive ratio for online ATWC by $2\ln 2$. Because there is no boundary condition, this lower bound continues to hold for higher dimensions. Indeed, we have the following.
Theorem 27. For any $k \ge 1$, no algorithm achieves a competitive ratio better than $2\ln 2$ for online ATWC without boundary condition for arbitrary $k$-dimensional polytopes.
Proof. Consider a unit segment in a $k$-dimensional space. The lower bound of $2\ln 2$ can be proved by directly applying Theorem 5. If we restrict ourselves to non-degenerate $k$-dimensional polytopes, we can consider the hyperrectangle with radius $\epsilon$ in $k-1$ dimensions and radius 1 in the remaining dimension, which can be viewed as an approximation of a segment in $k$-dimensional space. When $\epsilon$ is sufficiently small, $SP(n;P) \approx \frac{1}{n-1}$ for $n \ge 2$. Thus, by applying a technique similar to the proof of Theorem 5, the lower bound of $2\ln 2$ holds.
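For concreteness, the quantity $d^{SP}_{\min}(t;X)$ defined earlier in this section can be computed directly from its definition. The helper below is a minimal sketch of our own (the function name is ours, not the paper's): it returns the minimum pairwise distance among the points present at time $t$, and infinity when fewer than two points are present.

```python
import math

def d_sp_min(t, arrivals, departures, X):
    """d_SP_min(t; X): minimum pairwise distance among points i with
    arrivals[i] <= t <= departures[i]; math.inf if fewer than two are present."""
    present = [X[i] for i in range(len(X))
               if arrivals[i] <= t <= departures[i]]
    return min(
        (math.dist(p, q) for a, p in enumerate(present) for q in present[a + 1:]),
        default=math.inf,
    )
```

For example, with $X = ((0,0), (1,0), (0,2))$, arrivals $(0, 0, 1)$ and departures $(2, 3, 3)$, only the first two points are present at $t = 0.5$, so $d^{SP}_{\min}(0.5) = 1$.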
For squares, our technique in designing the online algorithm can still be applied, but the parameters need to be re-calculated. Accordingly, below we only show how to generalize the greedy algorithm for arbitrary polytopes to dispersion without boundary condition.
Theorem 28. For any $k \ge 2$, any polytope $P$ with covering rate at least $\gamma > 0$, and any $\epsilon > 0$, there exists a deterministic polynomial-time algorithm for online ATWC without boundary condition, with competitive ratio $\frac{2}{1-\epsilon}$ and running time polynomial in $\frac{1}{(\gamma\epsilon)^k}$.
Proof. We first prove, by induction, that the greedy algorithm is 2-competitive. When there is only one point in $P$, the optimal distance equals the maximal distance given by the greedy algorithm, since the boundary is not taken into consideration. Now suppose that when there are $n$ points in $P$, the allocation of the $n$ points satisfies the competitive ratio 2; we show how to find a feasible position for the $(n+1)$-st point. Note that for arbitrarily fixed points $p_1, \dots, p_n \in P$, there exists a point $p^*$ such that $dis(p_i, p^*) \ge \frac{SP(n+1;P)}{2}$ for all $i \in [n]$.
This is true because otherwise the $n$ spheres centered at $p_1, \dots, p_n$ with radius $\frac{SP(n+1;P)}{2}$ would form a covering of $P$. Then, for any $n+1$ points allocated in $P$, there would exist $i \in [n+1]$ such that at least two different points lie in the same sphere (excluding its boundary) centered at some $p_i$. Therefore the minimum distance between those two points, and hence the minimum distance among the $n+1$ points, would be less than $SP(n+1;P)$, a contradiction. Thus the above inequality holds, and allocating the $(n+1)$-st point at position $p^*$ gives the competitive ratio 2.
The method to approximate the greedy solution is similar to the proof of Theorem 14, but not exactly the same. When the boundary is not taken into consideration, there may not exist a small cube whose center is in $P$ and whose distance to the greedy solution (denoted by $p^*$) is small; however, the distance from $p^*$ to any point within the cube containing $p^*$ is small. By brute-force searching for the cube that intersects $P$ and whose center has the largest distance to the existing points, and allocating any point in the intersection of that cube and $P$, we obtain a $\frac{2}{1-\epsilon}$-competitive algorithm with running time polynomial in $\frac{1}{(\gamma\epsilon)^k}$.
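The boundary-free search step can be sketched as follows. This is a minimal 2-dimensional sketch under our own naming (the function `grid_greedy_no_boundary` and the oracle parameters `intersects_P` and `point_in_cell` are assumptions, not the paper's interface).

```python
import itertools
import math

def grid_greedy_no_boundary(existing, intersects_P, point_in_cell, m, k=2):
    """Among the m^k grid cells of the unit bounding cube that intersect P,
    pick the one whose center is farthest (in minimum distance) from the
    existing points, then allocate any point of that cell's intersection
    with P; note that no boundary-distance term appears in the score."""
    best_cell, best_score = None, -1.0
    for cell in itertools.product(range(m), repeat=k):
        if not intersects_P(cell, m):
            continue
        c = tuple((2 * i + 1) / (2 * m) for i in cell)  # center of the cell
        score = min((math.dist(c, p) for p in existing), default=math.inf)
        if score > best_score:
            best_cell, best_score = cell, score
    return point_in_cell(best_cell, m)

# Toy usage with P = the unit square: every cell intersects P, and we simply
# return the cell center as the allocated point.
center = lambda cell, m: tuple((2 * i + 1) / (2 * m) for i in cell)
pts = []
for _ in range(2):
    pts.append(grid_greedy_no_boundary(pts, lambda c, m: True, center, m=10))
```

With no points present every candidate ties, so the sketch takes the first cell (a corner, which is fine without the boundary condition); the second point then lands in the opposite corner, far from the first.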
Finally, the offline CD problem without boundary condition is similar to the original CD problem (see Section 6 and Appendix E), and can be reduced to online ATWC without boundary condition with the competitive ratio scaled up by a factor of at most 2.

G The insert-only model
Under the insert-only model, the online CD problem and the online ATWC problem are actually equivalent. On the one hand, if an algorithm is σ-competitive for the online ATWC problem, then directly applying the same algorithm achieves the same competitive ratio σ for the online CD problem. Indeed, since the algorithm is σ-competitive whenever a new point arrives, it is σ-competitive within each time interval; integrating over all time intervals, the competitive ratio σ is preserved.
On the other hand, if an algorithm is σ-competitive for the online CD problem, directly applying the same algorithm is σ-competitive for the online ATWC problem. Indeed, assume otherwise; then there exists a point $i$ such that, in the time interval $[s_i, s_{i+1})$, the minimum distance $d_{\min}(t)$ with $t \in [s_i, s_{i+1})$ is smaller than a $\frac{1}{\sigma}$-fraction of $Disp(i;P)$. We can then construct another instance $S'$ on which the algorithm's σ-competitiveness for the online CD problem is violated. More precisely, $S'$ keeps the arrival times of the first $i$ points unchanged, no other point arrives, and the departure time $T$ of the $i$ points is set to be sufficiently large. By doing so, the ratio between the algorithm's cumulative distance and $d_{\min}(t) \cdot T$ with $t \in [s_i, s_{i+1})$ is arbitrarily close to 1, and the ratio between the optimal cumulative distance and $Disp(i;P) \cdot T$ is arbitrarily close to 1. Accordingly, the algorithm's competitive ratio on $S'$ for the online CD problem is worse than σ. Thus, all our algorithms for the online ATWC problem can be directly applied to the online (and also offline) CD problem with the competitive ratios unchanged. Finally, all our inapproximability results for the online ATWC problem hold under the insert-only model.