Capacitated Covering Problems in Geometric Spaces

We consider the following capacitated covering problem. We are given a set P of n points and a set B of balls from some metric space, and a positive integer U that represents the capacity of each of the balls in B. We would like to compute a subset B′ ⊆ B of balls and assign each point in P to some ball in B′ that contains it, so that the number of points assigned to any ball is at most U. The objective function that we would like to minimize is the cardinality of B′. We consider this problem in arbitrary metric spaces as well as Euclidean spaces of constant dimension.
In the metric setting, even the uncapacitated version of the problem is hard to approximate to within a logarithmic factor. In the Euclidean setting, the best known approximation guarantee in dimensions 3 and higher is logarithmic in the number of points. Thus we focus on obtaining "bi-criteria" approximations. In particular, we are allowed to expand the balls in our solution by some factor, but optimal solutions do not have that flexibility. Our main result is that allowing constant factor expansion of the input balls suffices to obtain constant approximations for this problem. In fact, in the Euclidean setting, only a (1 + ε) factor expansion is sufficient for any ε > 0, with the approximation factor being a polynomial in 1/ε. We obtain these results using a unified scheme for rounding the natural LP relaxation; this scheme may be useful for other capacitated covering problems. We also complement these bi-criteria approximations by obtaining hardness of approximation results that shed light on our understanding of these problems.


Introduction
In this paper, we consider the following capacitated covering problem. We are given a set P of n points and a set B of balls from some metric space, and a positive integer U that represents the capacity of each of the balls in B. We would like to compute a subset B′ ⊆ B of balls and assign each point in P to some ball in B′ that contains it, so that the number of points assigned to any ball is at most U. The objective function that we would like to minimize is the cardinality of B′. We call this the Metric Capacitated Covering (MCC) problem.
An important special case of this problem arises when U = ∞, and we refer to it as Metric Uncapacitated Covering (MUC). The MUC requires us to cover the points in P using a minimum number of balls from B, and we can therefore solve it using the efficient greedy algorithm for Set Cover and obtain an approximation guarantee of O(log n). The approximation factor of O(log n) for Set Cover cannot be improved unless P = NP [14]. The same is true for MUC, as demonstrated by the following reduction from Set Cover. We construct a graph with a vertex corresponding to each set and each element, and an edge of length 1 between a set and an element if the element is contained in the set. In the metric induced by this graph, we create an MUC instance: we include a ball of radius 1 at each set vertex, and let the points that need to be covered be the element vertices. It is easy to see that any solution for this instance of MUC directly gives a solution for the input instance of Set Cover, implying that for MUC, it is not possible to get any approximation guarantee better than the O(log n) bound for Set Cover.
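The reduction above can be made concrete on a toy instance. The following sketch (the instance with sets S1, S2 and elements a, b, c is our own illustration, not from the paper) builds the unit-length incidence graph and computes shortest-path distances, exhibiting the key fact that an element not contained in a set is at distance at least 3 from that set's vertex:

```python
from collections import deque

def bfs_dist(adj, src):
    """Shortest-path distances from src in the unit-length incidence graph."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

# Toy Set Cover instance: S1 = {a, b}, S2 = {b, c}.
sets = {"S1": {"a", "b"}, "S2": {"b", "c"}}
adj = {v: set() for v in list(sets) + ["a", "b", "c"]}
for s, elems in sets.items():
    for e in elems:
        adj[s].add(e)   # edge of length 1 between a set vertex
        adj[e].add(s)   # and each element it contains

d = bfs_dist(adj, "S1")
# The radius-1 ball at S1 covers exactly a and b; the element c, which is
# not in S1, sits at distance 3 (S1 - b - S2 - c). This gap of 3 is what
# the bi-criteria hardness for expansion factors below 3 exploits.
```

Any path between a set vertex and an element vertex alternates between the two sides of the bipartite graph, so a set vertex and a non-contained element are always at odd distance at least 3.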
The MUC in fixed-dimensional Euclidean spaces has been extensively studied. One interesting variant is when the allowed set B of balls consists of all unit balls. Hochbaum and Maass [19] gave a polynomial time approximation scheme (PTAS) for this variant using a grid shifting strategy. When B is an arbitrary finite set of balls, the problem seems to be much harder. An O(1)-approximation algorithm in the 2-dimensional Euclidean plane was given by Brönnimann and Goodrich [9]. More recently, a PTAS was obtained by Mustafa and Ray [27]. In dimensions 3 and higher, the best known approximation guarantee is still O(log n). Motivated by this, Har-Peled and Lee [18] gave a PTAS for a bi-criteria version where the algorithm is allowed to expand the input balls by a (1 + ε) factor. Covering with geometric objects other than balls has also been extensively studied; see [4,10,12,17,23,30] for a sample.
The MCC is a special case of the Capacitated Set Cover (CSC) problem. In the latter problem, we are given a set system (X, F) with n = |X| elements and m = |F| subsets of X. For each set F i ∈ F, we are also given an integer U i , which is referred to as its capacity. We are required to find a minimum size subset F′ ⊆ F and assign each element in X to a set in F′ containing it, so that for each set F i ∈ F′, the number of elements assigned to F i is at most U i . The MCC is obtained as a special case of CSC by setting X = P, F = B, and U i = U for all i. Set Cover is a special case of CSC where the capacity of each set is ∞.
Applications of Set Cover include placement of wireless sensors or antennas to serve clients, VLSI design, and image processing [7,19]. It is natural to consider capacity constraints, which appear in many applications: for instance, an upper bound on the number of clients that can be served by an antenna. Such constraints lead to the natural formulation of CSC. For the CSC problem, Wolsey [31] used a greedy algorithm to give an O(log n) approximation. For the special case of vertex cover (where each element in X belongs to exactly two sets in F), Chuzhoy and Naor [11] presented an algorithm with approximation ratio 3, which was subsequently improved to 2 by Gandhi et al. [15]. The generalization where each element belongs to at most a bounded number f of sets has been studied in a sequence of works, culminating in [20,32]. Berman et al. [7] have considered the "soft" capacitated version of the CSC problem that allows making multiple copies of input sets. Another problem closely related to CSC is the so-called Replica Placement problem. For graphs with treewidth bounded by t, an O(t)-approximation algorithm for this problem is presented in [1]. Finally, PTASs for the Capacitated Dominating Set and Capacitated Vertex Cover problems on planar graphs are presented in [6], under the assumption that the demands and capacities of the vertices are upper bounded by a constant.
Compared to MUC, relatively few geometric versions of the MCC problem have been studied in the literature. We refer to the version of MCC where the underlying metric is Euclidean as the Euclidean Capacitated Covering (ECC) problem. The dimension of the Euclidean space is assumed to be a constant. One such version arises when B comprises all possible unit balls. This problem appeared in the Sloan Digital Sky Survey project [25]. Building on the shifting strategy of Hochbaum and Maass [19], Ghasemi and Razzazi [16] obtained a PTAS for this problem. When the set B of balls is arbitrary, the best known approximation guarantee is O(log n), even in the plane.
Given this state of affairs for MCC and ECC, we focus our efforts on finding a bi-criteria approximation. Specifically, we allow the balls in our solution to expand by at most a constant factor λ, without changing their capacity constraints (the optimal solution, however, does not expand its balls). We formalize this as follows. An (α, β)-approximation for a version of MCC is a solution in which the balls may be expanded by a factor of β (i.e., for any ball B i and any point p j ∈ P that is assigned to B i , d(c i , p j ) ≤ β · r i , where c i and r i are the center and radius of the ball B i ), and whose cost is at most α times that of an optimal solution (which does not expand the balls). From the reduction of Set Cover to MUC described above, we can see that it is NP-hard to obtain an (f(n), λ)-approximation for any λ < 3 and f(n) = o(log n). This follows from the observation that in the constructed instance of MUC, the distance between a set vertex and a vertex corresponding to an element not in that set is at least 3. We note that it is common practice in the wireless network setting to expand the radii of antennas at the planning stage to improve the quality of service. For example, Bose et al. [8] propose a scheme for replacing omni-directional antennas by directional antennas that expands the antennas by a constant factor.
Related Work. Capacitated versions of facility location and clustering type problems have been well-studied over the years. One such clustering problem is the capacitated k-center problem. In the version of this problem with uniform capacities, we are given a set P of points in a metric space, along with an integer capacity U. A feasible solution to this problem is a choice of k centers to open, together with an assignment of each point in P to an open center, such that no center is assigned more than U points. The objective is to minimize the maximum distance of a point to its assigned center.
O(1)-approximations are known for this problem [5,21]; the version with non-uniform capacities is addressed in [2,13]. Notice that the decision version of uniform capacitated k-center with radius parameter r is the same as the decision version of a special case of MCC, where the set B consists of balls of radius r centered at each point of the capacitated k-center instance. The capacity of each ball is the same as the uniform capacity U of the points. We want to determine whether there is a subset of B consisting of k balls that can serve all the points without violating the capacity constraint. There have been recent developments on the capacitated versions of related optimization problems. For the metric facility location problem with non-uniform capacities, constant approximations are known via the local search technique [26,28]. For the special case of the capacitated facility location problem where the opening costs are uniform, Levi et al. [22] gave an LP-rounding-based constant approximation. However, for the general case with non-uniform opening costs, the first LP-based constant approximation was given by An et al. [3], who needed to strengthen the natural LP relaxation. There has also been some recent work on the capacitated k-median problem [24].

Our Results and Contributions
In this article, we make significant progress on both the MCC and ECC problems.
- We present a (21, 6.47)-approximation for the MCC problem. Thus, if we are allowed to expand the input balls by a constant factor, we can obtain a solution that uses at most 21 times the number of balls used by the optimal solution. We note that we have not tried to optimize the approximation guarantee of 21. As noted above, if we are not allowed to expand by a factor of at least 3, we are faced with a hardness of approximation of Ω(log n).
- We present an (O(ε^{−4d} log(1/ε)), 1 + ε)-approximation for the ECC problem in R^d. Thus, assuming we are allowed to expand the input balls by an arbitrarily small constant factor, we can obtain a solution with at most a corresponding constant times the number of balls used by the optimal solution. Without expansion, the best known approximation guarantee for d ≥ 3 is O(log n), even without capacity constraints.
Both results are obtained via a unified scheme for rounding the natural LP relaxation for the problem. This scheme, which is instantiated in different ways to obtain the two results, may be of independent interest for obtaining similar results for related capacitated covering problems. Though LP rounding is a standard tool in the literature on capacitated problems, our actual rounding scheme is different from the existing ones. In fact, the standard rounding schemes for facility location, for example the one in [22], are not useful for our problems, since in that setting a point can be assigned to any facility. In our case, however, each point must be assigned to a ball that contains it (modulo constant factor expansion). This hard constraint makes the covering problems more complicated to deal with.
When the input balls have the same radius, it is easier to obtain the above guarantees for the MCC and the ECC using known results or techniques. For the MCC, this (in fact, even a (1, O(1))-approximation) follows from the results for capacitated k-center [2,5,13,21]. This is because of the connection between capacitated k-center and MCC pointed out before. The novelty in our work lies in handling the challenging scenario where the input balls have widely different radii. For geometric optimization problems, inputs with objects at multiple scales are often more difficult to handle than inputs with objects at the same scale.
As a byproduct of the rounding schemes we develop, the bi-criteria approximations can be extended to a more general capacity model. In this model, the capacities of the balls are not necessarily the same. In particular, suppose ball B i has capacity U i and radius r i . Then for any two balls B i , B j ∈ B, our model assumes that the following holds: r i > r j ⇒ U i ≥ U j . We refer to this capacity model as the monotonic capacity model. We refer to the generalizations of the MCC and the ECC problems with the monotonic capacity model as the Metric Monotonic Capacitated Covering (MMCC) problem and the Euclidean Monotonic Capacitated Covering (EMCC) problem, respectively. We note that the monotonicity assumption on the capacities is reasonable in many applications such as wireless networks: it might be economical to invest in the capacity of an antenna so that it can serve more clients, if it covers a larger area.
Hardness. We complement our algorithmic results with some hardness of approximation results that give a better understanding of the problems we consider. Firstly, we show that for any constant c > 1, there exists a constant c′ > 0 such that it is NP-hard to obtain a (1 + c′, c)-approximation for the MCC problem, even when the capacity of all balls is 3. This shows that it is not possible to obtain a (1, c)-approximation even for an arbitrarily large constant c. In the hardness construction, not all the balls in the hard instance have the same radius. This should be contrasted with the case where the radii of all balls are equal; in this case one can use the results from capacitated k-center (such as [2,13]) to obtain a (1, O(1))-approximation.
It is natural to wonder if our algorithmic results can be extended to weighted versions of the problems. We derive hardness results that indicate that this is not possible. In particular, we show that for any constant c ≥ 1, there exists a constant c′ > 0, such that it is NP-hard to obtain a (c′ log n, c)-approximation for the weighted version of MMCC with a very simple weight function (a constant power of the original radius).
We describe the natural LP relaxation for the MMCC problem in Sect. 2. We describe a unified rounding scheme in Sect. 3, and apply it in two different ways to obtain the algorithmic guarantees for MMCC and EMCC. We present the hardness results in Sect. 4.

LP Relaxation for MMCC
Recall that the input for the MMCC consists of a set P of points and a set B of balls in some metric space, along with an integer capacity U i > 0 for each ball B i ∈ B. We assume that for any two input balls B i , B j ∈ B, it holds that r i > r j ⇒ U i ≥ U j . The goal is to compute a minimum cardinality subset B′ ⊆ B for which each point in P can be assigned to a ball in B′ containing it, in such a way that no more than U i points are assigned to ball B i . Let d(p, q) denote the distance between two points p and q in the metric space. Let B(c, r) denote the ball of radius r centered at point c. We let c i and r i denote the center and radius of ball B i ∈ B; thus, B i = B(c i , r i ).
First we consider an integer programming formulation of MMCC. For each ball B i ∈ B, let y i = 1 if the ball B i is selected in the solution, and 0 otherwise. Similarly, for each point p j ∈ P and each ball B i ∈ B, let the variable x i j = 1 if p j is assigned to B i , and x i j = 0 otherwise. We relax these integrality constraints, and state the corresponding linear program (MMCC-LP) as follows. Subsequently, we will refer to an assignment (x, y) that is feasible or infeasible with respect to Constraints (1)-(6) as just a solution. The cost of an LP solution σ = (x, y) (feasible or otherwise), denoted by cost(σ), is defined as Σ_{B i ∈ B} y i .
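For concreteness, the natural LP relaxation can be written as follows; the exact constraint numbering (1)-(6) is our assumption, chosen so that each constraint matches how it is referenced later (Constraint (1) is the unit-demand constraint, and Constraint (6) is the one relaxed in the soft capacitated version):

```latex
\begin{alignat}{2}
\text{minimize}   \quad & \textstyle\sum_{B_i \in \mathcal{B}} y_i \notag \\
\text{subject to} \quad & \textstyle\sum_{B_i \in \mathcal{B}} x_{ij} = 1
      && \quad \forall\, p_j \in P \tag{1}\\
& \textstyle\sum_{p_j \in P} x_{ij} \le U_i \, y_i
      && \quad \forall\, B_i \in \mathcal{B} \tag{2}\\
& x_{ij} \le y_i && \quad \forall\, B_i \in \mathcal{B},\ p_j \in P \tag{3}\\
& x_{ij} = 0 && \quad \forall\, B_i \in \mathcal{B},\ p_j \notin B_i \tag{4}\\
& x_{ij} \ge 0 && \quad \forall\, B_i \in \mathcal{B},\ p_j \in P \tag{5}\\
& 0 \le y_i \le 1 && \quad \forall\, B_i \in \mathcal{B} \tag{6}
\end{alignat}
```

Here x_{ij} is the (fractional) amount of p_j's demand served by B_i , and y_i is the (fractional) extent to which B_i is opened.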

The Algorithmic Framework
In this section, we describe our framework for extracting an integral solution from a fractional solution to the above LP. The framework consists of two major steps: Preprocessing and Main Rounding. The Main Rounding step is in turn divided into two smaller steps: Cluster Formation and Selection of Objects. For simplicity of exposition, we first describe the framework with respect to the MMCC problem as an algorithm and analyze the approximation factor achieved by this algorithm for MMCC. Later, we show how one or more steps of this algorithm can be modified to obtain the desired results for the EMCC.

The Algorithm for the MMCC Problem
Before we describe the algorithm, we introduce some definitions and notation that will be used heavily throughout this section. For a point p j ∈ P and a ball B i ∈ B, we refer to x i j as the flow from B i to p j ; if x i j > 0, then we say that the ball B i serves the point p j . Each ball B i ∈ B can be imagined as a source of at most y i · U i units of flow, which it distributes to some points in P.
We now define an important operation, called rerouting of flow. "Rerouting of flow for a set P′ ⊆ P of points from a set of balls B′ to a ball B k ∉ B′" means obtaining a new solution (x̂, ŷ) from the current solution (x, y) in the following way: (a) for all points p j ∈ P′, x̂ k j = x k j + Σ_{B i ∈ B′} x i j ; (b) for all points p j ∈ P′ and balls B i ∈ B′, x̂ i j = 0; (c) the other x̂ i j variables are the same as the corresponding x i j variables. The relevant ŷ i variables may also be modified depending on the context in which this operation is used. Let 0 < α ≤ 1/2 be a parameter to be fixed later. A ball B i ∈ B is heavy if the corresponding y i = 1, and light if 0 < y i ≤ α. Corresponding to a feasible LP solution (x, y), let H = {B i ∈ B | y i = 1} denote the set of heavy balls, and L = {B i ∈ B | 0 < y i ≤ α} denote the set of light balls. We emphasize that the set L of light balls and the set H of heavy balls are defined w.r.t. an LP solution; however, the reference to the LP solution may be omitted when it is clear from the context. We now move on to the description of the algorithm. The algorithm, given a feasible fractional solution σ = (x, y), rounds σ to a solution σ̂ = (x̂, ŷ) such that ŷ is integral, and the cost of σ̂ is within a constant factor of the cost of σ. The x̂ variables are non-negative but may be fractional. Furthermore, each point receives unit flow from the balls that are chosen (ŷ values are 1), and the amount of flow each chosen ball sends is bounded by its capacity. Notably, no point gets any non-zero amount of flow from a ball that is not chosen (ŷ value is 0). Moreover, for any ball B i and any p j ∈ P, if B i serves p j , then d(c i , p j ) is at most a constant times r i . We expand each ball by a constant factor so that it contains all the points it serves.
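The rerouting operation can be sketched concretely as follows (a minimal sketch; the dictionary representation of the x-variables and the ball/point names are our own illustration):

```python
# Minimal sketch of "rerouting of flow": move all flow that points in
# P_sub receive from the balls in B_sub onto the single ball B_k (steps
# (a) and (b) above); every other x-variable is left unchanged (step (c)).
def reroute(x, P_sub, B_sub, B_k):
    """x[i][j] = flow from ball i to point j; returns the new solution x_hat."""
    x_hat = {i: dict(flows) for i, flows in x.items()}
    for j in P_sub:
        moved = sum(x[i].get(j, 0.0) for i in B_sub)   # (a) gather the flow
        x_hat[B_k][j] = x[B_k].get(j, 0.0) + moved     #     and send it via B_k
        for i in B_sub:
            x_hat[i][j] = 0.0                          # (b) zero out B_sub
    return x_hat

# Point p receives 0.3 + 0.5 + 0.2 = 1 unit; reroute the B1/B2 flow to B3.
x = {"B1": {"p": 0.3}, "B2": {"p": 0.5}, "B3": {"p": 0.2}}
x_hat = reroute(x, ["p"], ["B1", "B2"], "B3")
# p still receives exactly one unit in total, now entirely from B3.
```

Note that rerouting preserves the total flow into each affected point, which is why invariant (ii) below survives every application of the operation.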
We note that in σ̂, points might receive fractional amounts of flow from the chosen balls. However, since the capacity of each ball is integral, we can find, using a textbook argument for integrality of flow, another solution with the same set of chosen balls, such that the new solution satisfies all the properties of σ̂ and the additional property that, for each point p, there is a single chosen ball that sends one unit of flow to p [11]. Thus, choosing an optimal LP solution as the input σ = (x, y) of the rounding algorithm yields a constant approximation for MMCC by expanding each ball by at most a constant factor.
Our LP rounding algorithm consists of two steps. The first step is a preprocessing step where we construct a fractional LP solution σ′ = (x′, y′) from σ, such that each ball in σ′ is either heavy or light, and for each point p j ∈ P, the amount of flow that p j can potentially receive from the light balls is at most α. The latter property will be heavily exploited in the next step. The second step is the core step of the algorithm, where we round σ′ to the desired integral solution.
We note that throughout the algorithm, for any intermediate LP solution that we consider, we maintain the following two invariants: (i) each ball B i sends at most U i units of flow to the points, and (ii) each point receives exactly one unit of flow from the balls. With respect to a solution σ = (x, y), we define the available capacity of a ball B i ∈ B, denoted AvCap(B i ), to be U i − Σ_{p j ∈ P} x i j . We now describe the preprocessing step.
Lemma 3.1 Given a feasible solution σ = (x, y) to MMCC-LP, one can compute in polynomial time a solution σ′ = (x′, y′) with cost(σ′) ≤ (1/α) · cost(σ) that satisfies the following properties:
1. Any ball B i ∈ B with non-zero y′ i is either heavy (y′ i = 1) or light (0 < y′ i ≤ α).
2. For each point p j ∈ P, we have that
Σ_{B i ∈ L : x′ i j > 0} y′ i ≤ α, (7)
where L is the set of light balls with respect to σ′.
3. For any heavy ball B i , and any point p j ∈ P served by B i , d(c i , p j ) ≤ 3r i .
4. For any light ball B i , and any point p j ∈ P served by B i , d(c i , p j ) ≤ r i .
Proof The algorithm starts off by initializing σ′ to σ. While there is a violation of inequality (7), we perform the following steps.
1. We pick an arbitrary point p j ∈ P for which inequality (7) is not met. Let L j be a subset of the light balls serving p j such that α < Σ_{B i ∈ L j } y′ i ≤ 2α. Note that such a set L j always exists because the y′ i variables corresponding to light balls are at most α ≤ 1/2. Let B k be a ball with the largest radius from the set L j . (If there is more than one ball with the largest radius, we consider one having the largest capacity among those. Throughout the paper we follow this convention.) Since r k ≥ r m for all other balls B m ∈ L j , we have, by the monotonicity assumption, that U k ≥ U m .
2. We set y′ k ← Σ_{B i ∈ L j } y′ i , and y′ m ← 0 for all B m ∈ L j \{B k }. Note that y′ k ≤ 2α ≤ 1. Let A = {p t ∈ P | x′ i t > 0 for some B i ∈ L j \{B k }} be the set of "affected" points. We reroute the flow for all the affected points in A from L j \{B k } to the ball B k . Since U k ≥ U m for all other balls B m ∈ L j , B k has enough available capacity to "satisfy" all affected points. In σ′, all other x′ i j and y′ i variables remain the same as before. (Note: since B k had the largest radius in the set L j , all the points in A are within distance 3r k of its center c k , as seen using the triangle inequality. Also, since y′ k > α, B k is no longer a light ball.)
Finally, for all balls B i such that y′ i > α, we set y′ i = 1, making them heavy. Thus cost(σ′) is at most 1/α times cost(σ), and σ′ satisfies all the conditions stated in the lemma.
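The existence argument for L j in step 1 can be sketched as follows (illustrative; the greedy scan is our own choice, any subset whose y-mass lands in (α, 2α] works):

```python
# Greedily accumulate light balls serving p_j until the y-mass first
# exceeds alpha. Since every light ball has y_i <= alpha, the total can
# overshoot alpha by at most alpha, so it lands in (alpha, 2*alpha].
def pick_Lj(light_balls_serving_pj, alpha):
    """light_balls_serving_pj: list of (ball_id, y_i) pairs with
    0 < y_i <= alpha whose total y-mass exceeds alpha (i.e., (7) is violated)."""
    chosen, total = [], 0.0
    for ball, y in light_balls_serving_pj:
        chosen.append(ball)
        total += y
        if total > alpha:
            break
    assert alpha < total <= 2 * alpha
    return chosen, total

# Toy violation of (7): three light balls of y-mass 0.3 each, alpha = 0.5.
L_j, mass = pick_Lj([("B1", 0.3), ("B2", 0.3), ("B3", 0.3)], alpha=0.5)
```

The scan stops at the first ball that pushes the running total past α, so the total before that ball was at most α and the last ball adds at most α more.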
Remark As a byproduct of Lemma 3.1, we get a simple (4, 3)-approximation algorithm for the soft capacitated version of our problem, which allows making multiple copies of the input balls. We remind the reader that in this variant, we are allowed to open multiple identical copies of a given ball at the same location, and each such copy has the same capacity as the original ball. However, we need to pay a cost of 1 for each copy. The LP corresponding to the soft capacitated version is the same as MMCC-LP, except that Constraint (6) is relaxed to simply y i ≥ 0. We solve this LP, and obtain an optimal solution (x*, y*). Then, using the procedure from Lemma 3.1, we can ensure that the flow that each point receives from the set of non-light balls (B\L) is at least 1 − α. Then, opening ⌈y i /(1 − α)⌉ identical copies of each non-light ball B i ensures that for each point, at least one unit of demand is satisfied exclusively by these balls. We now expand each of the opened balls by a factor of 3. As y i > α for each non-light ball B i , choosing α = 1/2 yields a simple 4-approximation for this version, where each ball is expanded by a factor of at most 3.

The Main Rounding Step
The main rounding step can be logically divided into two stages. The first stage, Cluster Formation, is the crucial stage of the algorithm. Note that there can be many light balls in the preprocessed solution. Including all these balls in the final solution may incur a huge cost. Thus we use a careful strategy based on flow rerouting to select a small number of balls. The idea is to use the capacity of a selected light ball to reroute as much flow as possible from other intersecting balls. This in turn frees up some capacity at those balls. The available capacity of each heavy ball is used, when possible, to reroute all the flow from some light ball intersecting it; this light ball is then added to a cluster centered around the heavy ball. Notably, for each cluster, the heavy ball is the only ball in it that actually serves some points, as we have rerouted flow from the other balls in the cluster to the heavy ball. In the second stage, referred to as Selection of Objects, we select exactly one ball (in particular, a largest ball) from each cluster as part of the final solution, reroute the flow from the heavy ball to this ball, and expand it by the required amount. Together, these two stages ensure that we do not end up choosing many light balls.
We now describe the two stages in detail. Recall that any ball in the preprocessed solution is either heavy or light. Also, L denotes the set of light balls and H the set of heavy balls. Note that a heavy ball B i may serve a point p j that is at distance up to 3r i from c i . We expand each heavy ball by a factor of 3 so that B i contains all the points it serves.
1. Cluster Formation. In this stage, each light ball will be added to either a set O (that will eventually be part of the final solution), or a cluster corresponding to some heavy ball. Until the very end of this stage, the sets of heavy and light balls remain unchanged. The set O is initialized to ∅. For each heavy ball B i , we initialize the cluster of B i , denoted by cluster(B i ), to {B i }. We say a ball is clustered if it is added to a cluster. At any point, let N denote the set consisting of each light ball that is (a) not in O, and (b) not yet clustered. While the set N is non-empty, we perform the following steps.
(a) While there is a heavy ball B i and a light ball B t ∈ N intersecting B i such that the available capacity of B i suffices to absorb all the flow currently sent out by B t , we do the following:
1. For all the points served by B t , we reroute the flow from B t to B i .
2. We add B t to cluster(B i ).
After the execution of this while loop, if the set N becomes empty, we stop and proceed to the Selection of Objects stage. Otherwise, we proceed to the following.
(b) For any ball B j ∈ N, let A j denote the set of points currently being served by B j . Also, for B j ∈ N, let k j = min{U j , |A j |}, i.e., k j denotes the minimum of its capacity and the number of points that it currently serves. We select the ball B t ∈ N with the maximum value of k j , and add it to the set O.
(c) Since we have added B t to the set O that will be among the selected balls, we use the available capacity at B t to reroute flow to it. This is done based on the following three cases, depending on the value of k t .
In the first case, for each point p l ∈ A t served by B t , we reroute the flow of p l from B\O to B t . Note that after the rerouting, p l is no longer being served by a ball in N. The rerouting increases the available capacity of the other balls intersecting B t .
In the second case, we select a point p j ∈ A t arbitrarily, and reroute the flow of p j from B\O to B t . This increases the available capacity of the other balls in B\O that were serving p j . Also note that p j is no longer being served by a ball in N. We repeat the above flow rerouting process for the other points of A t until we encounter a point p l such that rerouting the flow of p l from B\O to B t would violate the capacity of B t ; the flow assignment of p l then remains unchanged. Note that we can reroute the flow of at least k t − 1 such points.
In the third case, we pick a point p j ∈ A t arbitrarily, and then perform the following steps: (i) we reroute the flow of p j from N to B t ; after this, p j is no longer being served by a ball in N. Note that in this step, we reroute at most α amount of flow; this follows from the LP Constraint (1) and Property (7) as maintained in Lemma 3.1. Therefore, at this point we have AvCap(B t ) ≥ 1 − 2α. Let f be the amount of flow p j receives from the balls in O.
When the loop terminates, we have that each light ball is either in O or clustered. We set y i ← 1 for each ball B i ∈ O, thus making it heavy. For convenience, we also set cluster(B i ) to {B i } for each ball B i ∈ O. Note that, throughout the algorithm, we ensure that if a point p j ∈ P is currently served by a ball B i ∈ N, then the amount of flow p j receives from any ball in O ∪ N is the same as that in the preprocessed solution, i.e., the flow assignment of p j w.r.t. the balls in O ∪ N remains unchanged.
2. Selection of Objects. At the start of this stage, we have a collection of clusters, each centered around a heavy ball, such that the light balls in each cluster intersect the heavy ball. We are going to pick exactly one ball from each cluster and add it to a set C. Let C = ∅ initially. For each heavy ball B i , we consider cluster(B i ) and perform the following steps.
(a) If cluster(B i ) consists of only the heavy ball, we add B i to C.
(b) Otherwise, let B j be a largest ball in cluster(B i ). If B j = B i , then we expand it by a factor of 3. Otherwise, B j is a light ball intersecting B i , in which case we expand it by a factor of 5. In this case, we also reroute the flow from the heavy ball to the selected ball B j . Note that since we always choose a largest ball in the cluster, its capacity is at least that of the heavy ball, because of the monotonicity assumption. We add B j to C, and we set y s ← 0 for any other ball B s in the cluster.
Note that we always select a largest ball from cluster(B i ). In the first case, the selected ball B i is also the center of the cluster. From the triangle inequality, it is easy to see that an expansion by a factor of 3 is sufficient to cover all balls in cluster(B i ). Otherwise, in the second case, the selected ball intersects the center of the cluster, B i . Again, from the triangle inequality, it can be seen that an expansion by a factor of 5 suffices.
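The factor-5 bound in the second case can also be checked numerically. The chain is d(c j , p) ≤ d(c j , c i ) + d(c i , p) ≤ (r i + r j ) + 3r i ≤ 5r j , using r i ≤ r j because B j is a largest ball in the cluster. The following sanity check samples the extreme configurations (the one-dimensional random model is our own, not part of the algorithm):

```python
import random

# Sanity check of the factor-5 expansion via the triangle inequality:
# d(c_j, p) <= d(c_j, c_i) + d(c_i, p) <= (r_i + r_j) + 3*r_i <= 5*r_j,
# where r_i <= r_j since B_j is a largest ball in the cluster.
random.seed(0)
for _ in range(1000):
    r_i = random.uniform(0.1, 10.0)
    r_j = random.uniform(r_i, 2 * r_i + 10.0)     # B_j is at least as large
    d_centers = random.uniform(0.0, r_i + r_j)    # B_j intersects B_i
    d_point = random.uniform(0.0, 3 * r_i)        # p served by the (expanded) heavy ball
    assert d_centers + d_point <= 5 * r_j + 1e-9  # p lies in B_j expanded by 5
```

The bound is tight up to the choice of constants: with r_i = r_j, touching balls, and a point at distance exactly 3r i , the left-hand side equals 5r j .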
After processing the clusters, we set y_t ← 1 for each ball B_t ∈ C, and return C as the set of selected balls. Note that the flow out of each such ball is at most its capacity, and each point receives one unit of flow from the (possibly expanded) balls that contain it. As mentioned earlier, this fractional flow can be converted into an integral one.
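The factor-3 and factor-5 claims above can be checked numerically. The following Python sketch is ours and purely illustrative: it builds random clusters in the plane in which every ball intersects a heavy ball at the origin, selects a largest ball, and asserts that the appropriate expansion covers the whole cluster.

```python
import math
import random

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

random.seed(0)
for _ in range(1000):
    r_i = random.uniform(0.1, 1.0)            # heavy ball B_i at the origin
    cluster = [((0.0, 0.0), r_i)]
    for _ in range(5):                        # light balls intersecting B_i
        r_s = random.uniform(0.01, 2.0)
        theta = random.uniform(0.0, 2.0 * math.pi)
        d = random.uniform(0.0, r_i + r_s)    # center distance: the balls intersect
        cluster.append(((d * math.cos(theta), d * math.sin(theta)), r_s))
    c_j, r_j = max(cluster, key=lambda ball: ball[1])   # a largest ball B_j
    factor = 3.0 if (c_j, r_j) == cluster[0] else 5.0   # 3 if B_j = B_i, else 5
    for c_s, r_s in cluster:      # the expanded B_j contains every cluster ball
        assert dist(c_j, c_s) + r_s <= factor * r_j + 1e-9
```

The assertions encode exactly the two triangle-inequality computations: when the heavy ball is largest, dist(0, c_s) + r_s ≤ r_i + 2r_s ≤ 3r_i; otherwise dist(c_j, c_s) + r_s ≤ r_j + 2r_i + 2r_s ≤ 5r_j.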

The Analysis of the Rounding Algorithm
Let OPT be the cost of an optimal solution. We establish a bound on the number of balls our algorithm outputs by bounding the size of the set C. We then conclude by showing that any input ball in our solution is expanded by at most a constant factor to cover the points it serves. For notational convenience, we continue to denote the solution at hand after preprocessing by σ = (x, y). We now bound the size of the set O computed during Cluster Formation. The basic idea is that each light ball added to O creates significant available capacity in the heavy balls. Furthermore, whenever there is enough available capacity, a heavy ball absorbs intersecting light balls into its cluster, thus preventing them from being added to O. The actual argument is more intricate, because we need to work with a notion of y-accumulation, a proxy for available capacity. The order in which the light balls are picked for addition to O plays a crucial role in the argument.
Let H_1 (resp. L_1) be the set of heavy (resp. light) balls after preprocessing, and let I be the total number of iterations in the Cluster Formation stage. Also let L_j be the light ball selected (i.e., added to O) in iteration j, for 1 ≤ j ≤ I. The ball L_t maximizes k_j amongst all candidate light balls in iteration t (recall that k_j was defined as the minimum of the number of points being served by L_j and its capacity). Note that k_1 ≥ k_2 ≥ ⋯ ≥ k_I. For any B_i ∈ H_1, denote by F(L_t, B_i) the total amount of flow rerouted in iteration t from B_i to L_t, corresponding to the points B_i serves; this is the same as the increase in AvCap(B_i) when L_t is added to O. Correspondingly, we define Y(L_t, B_i), the "y-credit contributed by L_t to B_i", to be F(L_t, B_i)/k_t. The approximation guarantee of the algorithm depends crucially on the following simple lemma, which states that in each iteration we make a sufficiently large amount of flow available to the set of heavy balls.

Lemma 3.2 Consider a ball L_t ∈ O processed in step (c) of the Cluster Formation stage, and let F_t = Σ_{B_i ∈ H_1} F(L_t, B_i) denote the total flow rerouted to the balls of H_1 in iteration t. Then F_t ≥ k_t/5.
Proof The algorithm ensures that the flow assignment of each point in A_t w.r.t. the balls in O ∪ L is the same as that in the preprocessed solution. Thus, by Property 2 of Lemma 3.1, each such point gets at most α amount of flow from the balls in O ∪ L. There are three cases, corresponding to the three substeps of step (c).
In each case, at most α amount of flow comes from the balls in O ∪ L, so the remainder is rerouted from the balls in H_1, resulting in a contribution of at least 1 − α towards F_t for each rerouted point. When k_t > 1, the resulting quantity is at least k_t/5.

Consider any heavy ball B_i ∈ H_1. It gains y-credit when a light ball is added to O, and loses y-credit when it adds a light ball to cluster(B_i). We define a quantity called the y-accumulation of B_i, a proxy for the available capacity of B_i, which indicates the "remaining" y-credit of B_i. Formally, at any moment in the Cluster Formation stage, the y-accumulation of a heavy ball B_i ∈ H_1 is

y(B_i) = Σ_{t′ : L_{t′} added to O so far} Y(L_{t′}, B_i) − Σ_{B_j added to cluster(B_i) so far} y_j.

The next lemma gives a relation between the y-accumulation of B_i and its available capacity.

Lemma 3.3 Fix a heavy ball B_i ∈ H_1 and an integer t with 1 ≤ t ≤ I. Just after L_t is added to O, we have AvCap(B_i) ≥ k_t · y(B_i), where y(B_i) denotes the y-accumulation of B_i at that moment.
Proof The proof is by induction on t. Throughout, we abbreviate AvCap(B_i) by A_i. In the first iteration, just after adding L_1, we have A_i ≥ F(L_1, B_i) = k_1 · Y(L_1, B_i) ≥ k_1 · y(B_i). Assume inductively that we have added balls L_1, ..., L_{t−1} to the set O, and that the claim holds just after adding L_{t−1}; that is, if y(B_i) and A_i are, respectively, the y-accumulation and the available capacity of B_i just after adding L_{t−1}, then A_i ≥ k_{t−1} · y(B_i). Consider iteration t. At step (a) of Cluster Formation, B_i uses up some of its available capacity to add zero or more balls to cluster(B_i), after which, at step (b), we add L_t to O. Suppose that at step (a) one or more balls are added to cluster(B_i). Let B_j be the first such ball, and let k and U_j be the number of points B_j serves and the capacity of B_j, respectively. The amount of capacity used by B_j is at most min{U_j · y_j, k · y_j} = min{U_j, k} · y_j ≤ k_{t−1} · y_j, where the last inequality follows from the order in which we add balls to O. After adding B_j to cluster(B_i), the new y-accumulation is y(B_i) − y_j, and the new available capacity is at least A_i − k_{t−1} · y_j ≥ k_{t−1} · y(B_i) − k_{t−1} · y_j = k_{t−1} · (y(B_i) − y_j). Therefore, the claim remains true after the addition of the first ball B_j; since B_i may add multiple balls to cluster(B_i), the same argument applies after each such addition. Now consider the moment when L_t is added to O, and let y(B_i) denote the y-accumulation just before this. The new y-accumulation of B_i becomes y(B_i) + Y(L_t, B_i).

If y(B_i) ≤ 0, then the new available capacity is at least F(L_t, B_i) = k_t · Y(L_t, B_i) ≥ k_t · (y(B_i) + Y(L_t, B_i)). If y(B_i) > 0, the new available capacity, using the inductive hypothesis, is at least k_{t−1} · y(B_i) + k_t · Y(L_t, B_i) ≥ k_t · (y(B_i) + Y(L_t, B_i)), where in the second inequality we use k_t ≤ k_{t−1}. In the next lemma, we show that no ball B_i ∈ H_1 can have too much y-accumulation at any moment during Cluster Formation.
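In display form, the inductive step just completed can be collected into a single chain of inequalities (our reconstruction, consistent with the invariant AvCap(B_i) ≥ k_t · y(B_i) of Lemma 3.3), where y′(B_i) = y(B_i) + Y(L_t, B_i) is the y-accumulation after L_t is added:

```latex
\mathrm{AvCap}'(B_i) \;\ge\; \mathrm{AvCap}(B_i) + F(L_t, B_i)
  \;\ge\; k_{t-1}\, y(B_i) + k_t\, Y(L_t, B_i)
  \;\ge\; k_t \bigl( y(B_i) + Y(L_t, B_i) \bigr)
  \;=\; k_t\, y'(B_i),
% using the inductive hypothesis in the second inequality (for the case
% y(B_i) > 0) and k_t \le k_{t-1} in the third.
```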
Lemma 3.4 At any moment during the Cluster Formation stage, the y-accumulation of any ball B_i ∈ H_1 is less than α.

Proof Suppose for contradiction that at some moment during iteration t we have y(B_i) ≥ α for some B_i ∈ H_1. By Lemma 3.3, the available capacity of B_i is then at least k_t · y(B_i) ≥ α · k_t. However, L_t is a light ball, and so the total flow out of L_t is at most α · k_t. Therefore, the available capacity of B_i is large enough that we can add L_t to cluster(B_i) instead of to the set O, which is a contradiction.
We can now show that the number of light balls added to O during Cluster Formation is small. The idea is that each such ball, by Lemma 3.2, contributes a significant amount of y-credit to the balls in H 1 . On the other hand, Lemma 3.4 upper bounds the total y-accumulation for the heavy balls.

Lemma 3.5 At the end of the Cluster Formation stage, we have |O| ≤ 5(α|H_1| + y(L_1)).
Proof At the end of the Cluster Formation stage, we lower-bound the total y-credit received by the balls of H_1, using Lemma 3.2 to obtain the last inequality of (8). Adding the inequality of Lemma 3.4 over all balls in H_1, and combining this with (8), yields the desired inequality.
The next two lemmas bound the cost of our solution and the expansion factor.

Lemma 3.6 The number of balls output by the algorithm is at most 21 · OPT.

Proof Let σ = (x, y) be the preprocessed LP solution. The total number of balls in the solution is |O| + |H_1|, which we bound using Lemma 3.5.

Lemma 3.7
In the algorithm each input ball is expanded by at most a factor of 9.
Proof Recall that when a light ball becomes heavy in the preprocessing step, it is expanded by a factor of 3. Therefore, after preprocessing, a heavy ball may be either an expanded or an unexpanded input ball. Now consider the selection of balls in the second stage. If a cluster consists of only a heavy ball, then that ball is not expanded any further; since it might already be an expanded ball, its total expansion factor is at most 3.
Otherwise, for a fixed cluster, let r_l and r_h be the radii of the largest light ball and of the heavy ball, respectively. If r_l ≥ r_h, then the largest light ball is chosen and the overall expansion factor is 5. Otherwise, if r_l < r_h, then the heavy ball is chosen and expanded by a factor of at most 3; as the heavy ball might already have been expanded by a factor of 3 during preprocessing, the overall expansion factor is at most 9.
Lemma 3.8 If the capacities of all balls are equal, then one can improve the expansion factor to 6.47 by using an alternative procedure in the Selection of Objects stage.

Proof If the capacities of all balls are equal (say U), then we proceed in the same way up to the Selection of Objects stage, where we use the following scheme that guarantees a smaller expansion factor for this special case. We first describe the scheme and then analyze it.
Fix a cluster obtained after the Cluster Formation stage. If the cluster contains only a heavy ball, then we add it to a set C (initialized to ∅), without expansion.
Otherwise, let r_l denote the radius of a largest light ball in the cluster, and let r_h be the radius of the heavy ball. Let B_l and B_h be the corresponding balls. We consider the following three cases.

r_l ≥ r_h: Let B = B_l. We set its new radius to be 3r_l + 2r_h.

r_h/√3 ≤ r_l < r_h: Let B = B_l. We set its new radius to be 3r_l + 2r_h.

r_l < r_h/√3: Let B = B_h. We set its new radius to be r_h + 2r_l.
If B ≠ B_h, then we reroute the flow from B_h to B and set y_h ← 0; in either case, we add B to the set C. Finally, we set y_i ← 1 for all balls B_i ∈ C, and return C as the solution.
To analyze the scheme, note that a heavy ball at the end of Cluster Formation stage may have been a light ball that was expanded by a factor of 3 in the preprocessing step. Therefore, if a cluster contains only a heavy ball, then the total expansion factor is at most 3. Otherwise, we analyze each of the 3 cases discussed above separately.
In the first case, 3r_l + 2r_h ≤ 5r_l. In the second case, 3r_l + 2r_h ≤ (3 + 2√3)r_l < 6.47r_l. In the third case, r_h + 2r_l ≤ (1 + 2/√3)r_h; but B_h might originally be a light ball that was expanded by a factor of 3 in the preprocessing step, so the total expansion factor is at most 3 · (1 + 2/√3) = 3 + 2√3 < 6.47.
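The three bounds can also be checked mechanically. The following Python snippet is our own illustrative code (the helper `total_expansion` is hypothetical, not from the paper); it samples radius pairs and confirms that the worst-case factor stays below 3 + 2√3 < 6.47.

```python
import math

SQRT3 = math.sqrt(3)
BOUND = 3 + 2 * SQRT3          # = 6.4641... < 6.47

def total_expansion(r_l, r_h):
    """Expansion factor charged to the selected ball in the three cases.

    r_l: radius of a largest light ball in the cluster,
    r_h: radius of the heavy ball (in the third case B_h may already be a
         3x expansion of an original light ball, hence the extra factor 3).
    """
    if r_l >= r_h:                          # case 1: select B_l
        return (3 * r_l + 2 * r_h) / r_l
    elif r_l >= r_h / SQRT3:                # case 2: select B_l
        return (3 * r_l + 2 * r_h) / r_l
    else:                                   # case 3: select B_h
        return 3 * (r_h + 2 * r_l) / r_h

# Sample the parameter space and confirm the 6.47 bound.
n = 400
worst = max(total_expansion(i / n, j / n)
            for i in range(1, n + 1) for j in range(1, n + 1))
assert worst <= BOUND + 1e-9
```

The maximum is attained near the boundary r_l = r_h/√3, where the second case evaluates to exactly 3 + 2√3.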
Lastly, from Lemmas 3.6 and 3.7, we get the following theorem.

Theorem 3.9
There is a polynomial time (21,9)-approximation algorithm for the MMCC problem.

The Algorithm for the EMCC Problem
For the EMCC problem, we can exploit the structure of R^d to restrict the expansion factor of the balls to at most 1 + ε, while paying in terms of the cost of the solution.
In the following, we give an overview of how to adapt the stages of the framework to obtain this result. Recall that in each iteration of the preprocessing stage for MMCC, we consider a point p_j and a cluster L_j of light balls; we select a largest ball from this set and reroute the flow of the other balls in L_j to it. However, to ensure that the selected ball contains all the points it serves, we need to expand this ball by a factor of 3. For the EMCC problem, for the cluster L_j, we consider a bounding hypercube whose side is at most a constant times the maximum radius of the balls in L_j, and subdivide it into multiple cells. The granularity of the cells is carefully chosen to ensure that (1) selecting a maximum radius ball among the balls whose centers lie in a cell, and expanding it by a (1 + ε) factor, is enough to cover all such balls, and (2) the total number of cells is poly(1/ε). From each cell we select a maximum radius ball, expand it by a (1 + ε) factor, and reroute the flow from the other balls in that cell to it. It follows that the cost of the preprocessed solution increases by at most a poly(1/ε) factor. The Cluster Formation stage for the EMCC problem is exactly the same as that for the MMCC problem. In the Selection of Objects stage for MMCC, we select only one ball per cluster and expand it by an O(1) factor to cover all the balls in the cluster. For EMCC, however, we want to restrict the expansion factor of the balls to at most 1 + ε. Hence we select multiple balls per cluster, in such a way that the selected balls, when expanded by a factor of 1 + ε, cover all the balls in the cluster. The ideas are similar to those in the Preprocessing stage, but one needs to be more careful to handle some technicalities that arise. We summarize the result for the EMCC problem in the following theorem, which we prove in the rest of this section.
We describe in detail how the Preprocessing stage and the Selection of Objects stage are modified. For simplicity, we first assume that d = 2, so that we are working in the plane. Our algorithm takes an additional input, a constant ε > 0, and gives an O(ε^{−6} log(1/ε)) approximation, where each ball in the solution may be expanded by a factor of at most 1 + ε. For the EMCC problem, the Preprocessing stage is as follows.

Lemma 3.11 One can compute, in polynomial time, a solution σ = (x, y) whose cost is at most O(ε^{−2} log(1/ε)) times the cost of the initial LP solution, with the following properties:

1. Any ball B_i ∈ B with non-zero y_i is either heavy or light.

2. For each point p_j ∈ P, we have

Σ_{B_i ∈ L : f_i(p_j) > 0} y_i ≤ α,   (9)

where L is the set of light balls with respect to σ.

3. For any heavy ball B_i, and any point p_j ∈ P served by B_i, d(c_i, p_j) ≤ (1 + ε) · r_i.

4. For any light ball B_i, and any point p_j ∈ P served by B_i, d(c_i, p_j) ≤ r_i.
Proof As in Lemma 3.1, in each iteration we pick an arbitrary point p_j ∈ P for which inequality (9) is not met, and consider a set L_j of light balls serving p_j such that α < Σ_{B_i ∈ L_j} y_i ≤ 2α. Let r′ be the radius of a maximum radius ball in the set L_j. Let T ⊂ L_j, the set of tiny balls, consist of those balls whose radius is at most εr′/2. We partition the set L_j \ T of balls into groups such that the centers of the balls in a group are nearby.
To compute these groups, we divide the balls in L_j \ T into O(log(1/ε)) classes, where the ith class contains the balls whose radii lie in the interval (ε2^{i−1}r′, ε2^i r′], for 0 ≤ i ≤ O(log(1/ε)). We consider each class separately. For the ith class, we overlay a grid on the plane in which each grid cell has side length ε²2^{i−2}r′. For each grid cell that contains at least one ball center from the ith class, we create a group consisting of all balls from the ith class whose centers lie within the cell. As all balls in L_j contain the common point p_j, there exists an axis-parallel square of side ε2^{i+2}r′ that contains the centers of the balls in the ith class. Thus the number of groups from the ith class is O(ε^{−2}), and the overall number of groups is O(ε^{−2} log(1/ε)).
Consider a group B′ with balls in the ith class. Note that the radius of each ball in this group is greater than ε2^{i−1}r′. Let B_m ∈ B′ be the ball with the maximum radius r_m; recall that we break ties in favor of higher capacity. We refer to B_m as the leader of its group. Since the center of B_m is within distance ε²2^{i−1}r′ of the center of any ball B_l ∈ B′, all the points contained in a ball B_l ∈ B′ are within distance ε²2^{i−1}r′ + r_l ≤ εr_m + r_m = (1 + ε) · r_m from the center c_m of B_m. Thus a (1 + ε)-expansion of the leader contains every ball in its group.
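The grouping of one class can be sketched in a few lines of Python. This is our own illustrative code; in particular, the grid-cell side ε²2^{i−2}r′ is our reconstruction of the garbled constant in the text, and the exact constant is not essential for the covering property being checked.

```python
import math
import random
from collections import defaultdict

def group_class(balls, i, eps, r_prime):
    """Group the balls of the ith class (radii in (eps*2**(i-1)*r', eps*2**i*r'])
    by grid cell; return a list of (leader, members) pairs. The cell side
    eps**2 * 2**(i-2) * r_prime mirrors the analysis above."""
    side = eps ** 2 * 2 ** (i - 2) * r_prime
    cells = defaultdict(list)
    for center, radius in balls:
        key = (math.floor(center[0] / side), math.floor(center[1] / side))
        cells[key].append((center, radius))
    # The leader of a group is a maximum radius ball in it.
    return [(max(members, key=lambda b: b[1]), members)
            for members in cells.values()]

def leaders_cover(groups, eps):
    """Check that a (1+eps)-expansion of each leader contains its group."""
    for (c_m, r_m), members in groups:
        for c, r in members:
            d = math.hypot(c[0] - c_m[0], c[1] - c_m[1])
            if d + r > (1 + eps) * r_m + 1e-12:
                return False
    return True

# Example: class i = 2 with eps = 0.25 and r' = 1, i.e. radii in (0.5, 1.0].
random.seed(1)
balls = [((random.random(), random.random()), random.uniform(0.51, 1.0))
         for _ in range(50)]
groups = group_class(balls, 2, 0.25, 1.0)
assert leaders_cover(groups, 0.25)
```

The final assertion succeeds because two centers in the same cell are within √2 times the cell side of each other, which is at most ε times any class-i radius.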
For each group B′, we select the group leader B_m and reroute the flow from each of the other balls in B′ to B_m. Note that the capacity U_m of B_m is at least the capacity of any ball in B′, by the monotonicity property. Furthermore, B_m has enough capacity to receive all the redirected flow. We set y_m ← 1, and y_t ← 0 for the remaining balls in B′. Now we address the tiny balls. The ball in L_j with maximum radius is the leader of the group it belongs to, and it is easy to see that a (1 + ε)-expansion of this ball contains every ball in T. We set y_t ← 0 for every ball in T, and redirect its flow to the maximum radius ball. As above, it is easy to see that the capacity constraint for the maximum radius ball is maintained.
This completes the description of one iteration of the preprocessing algorithm. After all iterations are completed, we set y_t ← 1 for any ball B_t with α < y_t < 1. Consider an iteration of the algorithm. At the beginning of the iteration, all balls in L_j were light, and y(L_j) > α; after the iteration, none of the balls in L_j is light. Furthermore, we select O(ε^{−2} log(1/ε)) balls from L_j in the iteration. It follows that the entire preprocessing algorithm increases the cost by a factor of at most O(ε^{−2} log(1/ε)). It is easy to verify that the other properties in the statement of the lemma are also satisfied.
As mentioned before, the Cluster Formation stage for EMCC is exactly the same as the one for MMCC, and it increases the cost of the solution only by a constant factor. We describe and analyze the Selection of Objects stage in the following lemma. The main idea is similar to that of Lemma 3.11, but there is one complicating factor, which we address first.
Fix a cluster centered at a heavy ball B_h. We begin by undoing the flow redirections performed when light balls were added to cluster(B_h): for each light ball B_i in cluster(B_h) and each point p_j with f_i(p_j) > 0, we restore the flow assignment of p_j to its value before B_i was clustered. Let B_1 denote the set of light balls in cluster(B_h) with radius larger than that of B_h, and let B_2 denote the remaining light balls. We first consider B_1, assuming it is non-empty. Let r′ denote the radius of the maximum radius ball in B_1, and let T_1 ⊂ B_1, the set of tiny balls from B_1, consist of the balls with radius at most εr′/4. Observe that for any r > 0, the centers of all balls in B_1 with radius at most r are contained in an axis-parallel square of side length O(r). Thus we may proceed as in Lemma 3.11 to partition B_1 \ T_1 into O(ε^{−2} log(1/ε)) groups, such that the (1 + ε)-expansion of the leader of each group contains every other ball in the group. In each group, we redirect the flow from the non-leaders to the leader. Now assume that T_1 is non-empty, and consider any B_i ∈ T_1. Expand the maximum radius ball of B_1 by an additive amount 2r_h + 2r_i ≤ εr′; the ball B_i is contained in the expanded ball. Thus a (1 + ε)-expansion of the maximum radius ball contains each ball B_i ∈ T_1. Hence we redirect the flow from T_1 to the maximum radius ball, which we may assume is the leader of the group containing it. As the balls in B_1 have capacity at least that of the heavy ball B_h, each such ball has enough capacity to absorb the flow redirected to it.
We select the leaders in B_1. The selection of balls in B_2 is more involved, because the capacity of some of these balls may be much smaller than that of B_h. Let T_2 ⊂ B_2 denote the balls with radius at most εr_h/2. We partition the balls in B_2 \ T_2 into groups using the following slightly different scheme. We overlay a grid on the plane in which each cell has side length ε²r_h/4. For each grid cell that contains the center of at least one ball in B_2 \ T_2, we create a group consisting of the balls in B_2 \ T_2 whose centers lie within the cell. As the centers of all balls in B_2 are contained in a square of side length O(r_h), the number of groups is O(1/ε⁴).
Let B′ denote any such group. We order the balls in B′ by non-increasing radius, breaking ties by capacity. By monotonicity, this is also an ordering in which the capacities are non-increasing. Furthermore, for any two balls B_i and B_j such that B_i occurs before B_j in the ordering, the (1 + ε)-expansion of B_i contains B_j. Let F′ be the first ⌈y(B′)⌉ balls in this ordering, where y(B′) = Σ_{B_i ∈ B′} y_i. Observe that Σ_{B_i ∈ B′} U_i · y_i ≤ Σ_{B_i ∈ F′} U_i, since the capacities are non-increasing in the ordering. We select the balls in F′, and redirect the flow from the balls in B′ \ F′ to the balls in F′ in an arbitrary way that respects the capacities; as observed, the balls in F′ have enough capacity for this.
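The capacity observation for F′ is a small rearrangement fact that can be verified directly. The sketch below is illustrative only (the helper `prefix_covers_flow` is our own name, and the ⌈y(B′)⌉ threshold is our reconstruction of the garbled quantity in the text).

```python
import math
import random

def prefix_covers_flow(caps, ys):
    """caps: group capacities sorted in non-increasing order; ys: the LP
    y-values (in [0, 1]) of the corresponding balls. The LP flow out of the
    group is at most sum(U_i * y_i); check that the first ceil(sum(ys))
    capacities already cover that amount."""
    k = math.ceil(sum(ys))
    return sum(u * y for u, y in zip(caps, ys)) <= sum(caps[:k]) + 1e-9

# Randomized check: the property holds for every sorted capacity profile,
# because with y_i <= 1 and sum(ys) <= k, the weighted sum is dominated by
# the k largest capacities.
random.seed(0)
for _ in range(1000):
    n = random.randint(1, 10)
    caps = sorted((random.randint(1, 100) for _ in range(n)), reverse=True)
    ys = [random.random() for _ in range(n)]
    assert prefix_covers_flow(caps, ys)
```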
Each ball in T_2 is contained in the (1 + ε)-expansion of the heavy ball B_h. We select B_h, and redirect the flow from each ball in T_2 to B_h.
As the number of groups for B_2 is O(ε^{−4}), the number of balls we select for B_2 is O(ε^{−4} + y(B_2)). As the number of balls selected for B_1 is O(ε^{−2} log(1/ε)), the number of balls selected from cluster(B_h) is O(ε^{−4} + y(B_2)) overall. We have reassigned the flow that originally went to B_h to the selected balls in a way that respects the capacity constraints. Furthermore, if a selected ball serves a point, then that point is contained in the (1 + ε)-expansion of the ball.
Summing over all clusters, the number of balls selected is O(ε^{−4}|H| + y(L)), implying the upper bound claimed in the statement of the lemma.
We note that Lemmas 3.11 and 3.12 can be modified to work in R^d; in this case, the dependence on ε becomes O(ε^{−d} log(1/ε)) and O(ε^{−2d}), respectively (where the constants inside the big-Oh may depend exponentially on the dimension d). Therefore, with suitable modifications to Lemma 3.11, the analysis of the Cluster Formation stage from the MMCC algorithm, and Lemma 3.12, Theorem 3.10 follows. Finally, we note that, after appropriate modifications, the algorithm of this section also works in metric spaces of constant doubling dimension, with similar bi-criteria approximation guarantees.

Hardness of Approximation
In this section, we prove the APX-hardness results for the Metric Monotonic Capacitated Covering (MMCC) problem, and a generalization of MMCC with weights.

Hardness of Metric Monotonic Capacitated Covering
We show that for any constant c ≥ 1, there exists a constant ε_c > 0 such that it is NP-hard to obtain a (1 + ε_c, c)-approximation for the MMCC problem. We observed in Sect. 1 that it is NP-hard to obtain an (o(log n), c)-approximation for MMCC for 1 ≤ c < 3. The result we establish in this section shows that even if we allow expansion by a constant factor of 3 or more, it is not possible to obtain a PTAS for this problem.
To show this, we use a gap-preserving reduction from (a version of) the 3-Dimensional Matching problem.
Consider the Maximum Bounded 3-Dimensional Matching (3DM-3) problem (defined in [29]). In this problem, we are given three disjoint sets of elements X, Y, Z, with |X| = |Y| = |Z| = N, and a set of "triples" T ⊆ X × Y × Z, such that each element w ∈ W := X ∪ Y ∪ Z appears in 1, 2 or 3 triples of T. A triple t = (x, y, z) ∈ T is said to cover x ∈ X, y ∈ Y, and z ∈ Z. A subset M ⊆ T of triples such that no element of W appears in more than one triple of M is called a matching, and the set of elements U ⊆ W covered by the triples in M are said to be the matched elements. If U = W, then M is said to be a perfect matching. The goal of the 3DM-3 problem is to find a maximum cardinality matching. We use a hardness result for the 3DM-3 problem due to Petrank [29]. Notice that if the given 3DM-3 instance has a perfect matching, then there is a cover of W with exactly N triples. On the other hand, if at most 3βN elements can be matched in the given instance, then the minimum size cover is significantly larger than N.
Reduction from 3DM-3 to MMCC. Given an instance I of the 3DM-3 problem, we show how to reduce it to an instance I′ of the MMCC problem. Recall that in this version of the MMCC problem, a solution is allowed to expand the balls in the input by the given constant c ≥ 1.
First, we describe the high-level idea of the reduction. For each element w ∈ W, we create an element gadget, a system of balls and points that need to be covered; three of the points in the gadget are designated as ideal points. In an element gadget, (a) the number of points in the gadget is one more than the total capacity of the balls, and (b) if we exclude one ideal point, the remaining points in the gadget can be served by the balls of the gadget. For each triple t = (x, y, z), we create a triple gadget: a single ball that contains exactly one ideal point from each of the element gadgets of x, y, and z (see Fig. 1).
Fig. 1 There is an element gadget (represented here by a polygon) corresponding to each of the elements w, x, y, z, containing three ideal points each. Triple t_1 contains elements x, y, z, so its triple gadget (a ball) contains the ideal points of those elements. Triple t_2 contains elements w, y, z.

We ensure that, for each element w, the balls for the (at most three) triples containing w are incident to different ideal points in the gadget for w.
Furthermore, the constructed instance of MMCC has the property that every ball in every element gadget must be chosen. By property (a) of the element gadget, it follows that the triple gadgets chosen in a feasible solution for MMCC correspond to a cover of all elements of W in the original 3DM-3 instance. In particular, if the given 3DM-3 instance admits a perfect matching, then an optimal solution to the MMCC instance chooses precisely N triple gadgets; if the maximum matching in the given 3DM-3 instance is far from perfect, then an optimal solution to the MMCC instance chooses significantly more than N triple gadgets. The main complication in implementing this high-level idea is that a solution to the target instance can expand the input balls by a factor of up to c. We now construct the metric space (P ∪ C, d) for the MMCC instance I′, induced by the shortest path metric of a certain graph G = (P ∪ C, E). The instance I′ has exactly one ball centered at each point of C, and P is the set of points that need to be covered by the balls; P ∩ C = ∅. We construct the instance by combining fragments, as described next.
Consider a vertex c ∈ C that is connected to four other vertices p_1, ..., p_4 ∈ P (for convenience, we refer to them as the left, right, top, and bottom vertices, respectively), each by an edge of weight 1. We also consider the associated ball of radius 1 centered at c, which covers p_1, ..., p_4. We refer to this object (the graph induced by these five vertices, together with the ball) as a small cluster, and to the ball as a small ball. A large cluster and a large ball are defined in the same way, except that the radius of the ball and the edge weights are c + 1.
Element Gadget. We combine small and large clusters in a careful way to form the element gadget. Consider p = c(c + 1)/2 + 1 copies of small clusters, numbered κ_1, ..., κ_p. For each 1 < i ≤ p, we "glue" the small clusters κ_{i−1} and κ_i by setting the right vertex of κ_{i−1} equal to the left vertex of κ_i, that is, by identifying these two vertices. This forms an object in which two consecutive clusters share exactly one vertex; we refer to this object as a small chain. For a particular small chain, we refer to the left vertex of κ_1 and the right vertex of κ_p as its leftmost and rightmost vertices, respectively. Note that a small chain contains p balls and 3p + 1 points of P. A long chain is obtained by combining two small chains ψ_1 and ψ_2 and a single large cluster, by identifying (a) the rightmost vertex of ψ_1 with the left vertex of the cluster, and (b) the leftmost vertex of ψ_2 with the right vertex of the cluster. The leftmost vertex of ψ_1 and the rightmost vertex of ψ_2 are referred to as the endpoints of the long chain. Note that a long chain has 2p + 1 balls and 6p + 4 points of P. See Fig. 2 for an illustration.

Fig. 2 (a) A small cluster, (b) a small chain obtained by "gluing" p = 4 small clusters, (c) a long chain with its endpoints depicted as red squares.

Now we combine three long chains ψ_1, ψ_2, and ψ_3 and a large cluster to obtain the element gadget. This is done by identifying an endpoint of ψ_1 (resp. ψ_2, ψ_3) with the left (resp. right, bottom) vertex of the large cluster. The endpoints of the three chains that have not been identified are referred to as the ideal points of the element gadget. Note that an element gadget has 6p + 4 balls and 18p + 13 = 3(6p + 4) + 1 points of P.
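The ball and point counts stated above can be sketched as a short Python sanity check (our own illustrative code; the function names are ours), which also verifies property (a) of an element gadget: it has exactly one point more than the total capacity of its balls.

```python
def small_chain(p):
    """p glued small clusters: p balls; 4 + 3*(p - 1) points of P."""
    return p, 3 * p + 1

def long_chain(p):
    """Two small chains joined through one large cluster (two identified
    vertices): 2p + 1 balls and 6p + 4 points of P."""
    balls, points = small_chain(p)
    return 2 * balls + 1, 2 * points + 4 - 2

def element_gadget(p):
    """Three long chains attached to a central large cluster (three
    identified vertices)."""
    balls, points = long_chain(p)
    return 3 * balls + 1, 3 * points + 4 - 3

c = 7                          # an arbitrary expansion constant, for illustration
p = c * (c + 1) // 2 + 1
balls, points = element_gadget(p)
assert (balls, points) == (6 * p + 4, 18 * p + 13)
assert points == 3 * balls + 1   # one point more than the total capacity (3 each)
```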
Triple Gadget. For each element w ∈ W, we add an element gadget. We now describe the triple gadget for a triple t = (x, y, z). This gadget is similar to a large cluster; the only difference is that, in addition to the central point c_t, it contains only three other points p_x, p_y, p_z. Recall that the edges from c_t to these three points have weight c + 1, which is also the radius of the corresponding ball. We add a triple gadget for each triple t = (x, y, z) ∈ T, and identify the point p_x (resp. p_y, p_z) in its gadget with an ideal point from the element gadget of x (resp. y, z); see Fig. 3. Here we ensure that if an element is contained in multiple triples, then a different ideal point from its gadget is assigned to each triple. The total number of balls in all the element gadgets is E := 3N · (6p + 4); we refer to these balls as element balls. Similarly, the total number of balls in all the triple gadgets is |T|; we refer to these as triple balls. Finally, we set the capacity of every ball to be 3. The total number of points in P is 3N · (18p + 13). As mentioned above, the metric is induced by the graph G = (P ∪ C, E) described above.
This completes the description of the instance I′ of the MMCC problem. By construction, the instance I′ satisfies the monotonicity property because all ball capacities are the same. It is also worth highlighting that there are only two distinct radii in the instance I′.

Fig. 3 Three triple gadgets (identified by dashed red balls) corresponding to triples t_1, t_2, t_3 are attached to three different ideal points (identified by squares) of an element gadget.

We are able to show that this restricted version of the MMCC problem remains APX-hard, even when we are allowed to expand the balls by a constant factor c ≥ 1.

Properties of the Constructed Instance
Given a ball of radius r and c ≥ 1, we refer to the concentric ball of radius c · r as its c-expansion. We begin with an observation about the points that the c-expansion of a ball in instance I′ can cover; it follows easily from the length p of a small chain and the radii of the small and large balls.

Lemma 4.2 Let B be any input ball in instance I′ and let p ∈ P be a point contained in the c-expansion of B. Then (a) if B is a large (element or triple) ball, either p ∈ B or p is a point of a small chain incident to B; (b) if B is small, then p is a point of the small chain containing B.
A large element ball has either one or two points that are not part of any small chain. Thus Lemma 4.2 implies that any feasible solution to I′ must choose all the large element balls. Lemma 4.2 is also useful in showing that, in any feasible solution, at least one triple ball incident to each element gadget must be chosen. The next lemma states that this necessary condition is also sufficient.

Lemma 4.4 Let B′ be a subset of the balls in I′ such that, for every element w ∈ W, B′ includes at least one triple ball containing an ideal point of w's gadget. Furthermore, suppose B′ contains all element balls. Then there is a feasible solution to I′ with B′ as the set of chosen balls.
Proof For each element w ∈ W, pick one ideal point q from its gadget that is contained in a triple ball of B′, and assign q to that triple ball. For each element ball in w's gadget, among the four points it contains, assign to it the three points that are furthest from q (in the subgraph corresponding to w's gadget). In this assignment, each ball (element or triple) in B′ is assigned at most three points, and all points of P are assigned. Thus the solution is feasible, and no input ball is expanded.

We now establish a technical lemma that is needed subsequently.

Lemma 4.5 Let B′ be the set of balls chosen by a feasible solution, fix an element w ∈ W, and suppose that each triple ball in B′ is assigned at most two points. Then every small ball in w's gadget belongs to B′.

Proof As we observed already, all large element balls belong to B′. Each small ball in w's gadget belongs to one of six small chains, and there are two types of small chains. The first type is incident to a large element ball B_1 at one end and a (large) triple ball B_2 at the other. Such a small chain ψ has p small balls and 3p + 1 points of P. The large element ball B_1 incident to it has two points that are not in any small chain; by Lemma 4.2, these two points must be assigned to B_1. Thus B_1 can be assigned at most one point from ψ. By assumption, B_2 is assigned at most two points from ψ. This leaves at least 3(p − 1) + 1 points of ψ that must be assigned to the balls of ψ, each of capacity 3; hence all p balls of ψ must be in B′.
The second type of small chain ψ is incident to two large element balls: a large element ball B_1 that is part of the long chain containing ψ, and the central large element ball B_2. As before, B_1 can be assigned at most one point from ψ. The ball B_2 has one point not on any small chain that must be assigned to it; hence it can be assigned at most two points from ψ. Arguing as above, we conclude that in this case too, all p balls of ψ must be in B′. This concludes the proof. For future reference, we note that for the second type of small chain, we did not use the assumption of the lemma that each triple ball is assigned at most two points.
Finally, we argue that in an optimal solution to I′, we may assume without loss of generality that all element balls are chosen.

Lemma 4.6 There is an optimal solution to I′ in which every element ball is chosen.
Proof Fix an optimal solution that maximizes the number of element balls chosen. We claim that such a solution must choose every element ball. Suppose that this is not the case; we derive a contradiction. Suppose an element ball B belonging to the gadget for element w ∈ W is not chosen. The ball B must be small. From the proof of Lemma 4.5, B must belong to a short chain ψ that is incident to a large element ball B_1 at one end and a (large) triple ball B_2 at the other. Furthermore, the triple ball B_2 must be chosen and assigned three points from ψ. As the chain ψ has 3p + 1 points from P, it must be that the p − 1 other small balls in it are chosen, and the large element ball B_1 is assigned exactly one point from ψ. Now, from the optimal solution, we swap out B_2 and swap in B. Assign to the large element ball B_1 the endpoint q of chain ψ that it contains. Assign to each small ball in ψ the three points (among the four it contains) that are furthest from q. This modified assignment is feasible. Thus we have computed another optimal solution that has one more element ball, a contradiction.

Hardness Guarantee
We can now relate an instance I of 3DM-3 with the corresponding instance I′ of MMCC. For the reverse direction, suppose that there is a solution to I′ with cost at most E + K. Such a solution may expand the input balls in I′ by up to a factor of c. Then, we use Lemma 4.6 to obtain another solution of size at most E + K in which all element balls are selected. This solution uses at most K triple balls. By Lemma 4.3, this set of triple balls includes, for each element w ∈ W, at least one triple ball containing an ideal point in w's gadget. Thus the set of triples corresponding to these at most K triple balls covers W.
Using this lemma, we obtain the following two corollaries, which show the gap between the instances that have a perfect matching and those that do not.
Proof In the 3DM-3 instance I, the maximum number of elements that can be matched is at most 3αN, for some 0 < α < 1. If M ⊆ T is a minimum size set of triples that covers all the elements in W, then we first show that |M| ≥ αN + 3(1 − α)N/2. Partition M as M = M_1 ∪ M_2, where M_1 is a maximal set of triples each of which covers three distinct elements, and M_2 is the set of remaining triples. By assumption, the number of matched elements is at most 3αN, and therefore |M_1| ≤ αN. The number of elements left to be covered by M_2 is 3N − 3|M_1|, and since each triple in M_2 covers at most two new elements, |M_2| ≥ 3(N − |M_1|)/2. Therefore, since M_1 and M_2 are disjoint,

|M| = |M_1| + |M_2| ≥ |M_1| + 3(N − |M_1|)/2 = 3N/2 − |M_1|/2,

and the right-hand side is minimized when |M_1| = αN. Therefore, |M| ≥ αN + 3(1 − α)N/2. Now, using Lemma 4.7, we conclude that the cost of an optimal solution to I′ is at least E + αN + 3(1 − α)N/2. It is easy to verify that this quantity is exactly equal to (1 + (1 − α)/(2(18p + 13))) · (E + N), recalling that E = 3N · (6p + 4).
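The algebra behind the gap can be verified mechanically; the sketch below (illustrative only) checks, with exact rational arithmetic, that E + αN + 3(1 − α)N/2 equals (1 + (1 − α)/(2(18p + 13))) · (E + N) when E = 3N(6p + 4).

```python
from fractions import Fraction

def lower_bound(N, p, a):
    # Cost lower bound: all E element balls plus |M| >= aN + 3(1 - a)N/2 triples.
    E = 3 * N * (6 * p + 4)
    return E + a * N + Fraction(3, 2) * (1 - a) * N

def gap_form(N, p, a):
    # The same quantity written as a multiplicative gap over E + N.
    E = 3 * N * (6 * p + 4)
    return (1 + (1 - a) / (2 * (18 * p + 13))) * (E + N)

for N in (3, 10, 100):
    for p in (1, 2, 5):
        for a in (Fraction(1, 4), Fraction(1, 2), Fraction(9, 10)):
            assert lower_bound(N, p, a) == gap_form(N, p, a)
```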

Hardness of Metric Monotonic Capacitated Covering with Weights
We consider a generalization of the Metric Monotonic Capacitated Covering (MMCC) problem. As in the MMCC problem, we are given a set of balls B and a set of points P in a metric space. Each ball has a capacity, and the capacities of the balls are monotonic. Additionally, each ball has a non-negative real number associated with it which denotes its weight. The weight of a subset B′ of B is the sum of the weights of the balls in B′. The goal is to find a minimum weight subset B′ ⊆ B and compute an assignment of the points in P to the balls in B′ such that the number of points assigned to a ball is at most its capacity. We refer to this problem as Metric Monotonic Capacitated Covering with Weights (MMCC-W). In the case where all balls have the same radius and the same capacity, one can get a (1, O(1))-approximation for MMCC-W by using a constant approximation algorithm for the Budgeted Center problem [2]. However, as we prove, there are instances of MMCC-W that consist of balls of only two distinct radii for which it is NP-hard to obtain a (o(log |P|), c)-approximation for any constant c.
The reduction is from the Set Cover problem. Recall that in Set Cover we are given a set system (X, F) with n = |X| elements and m = |F| subsets of X. For each element e_i ∈ X, let m_i be the number of sets in F that contain e_i. Also, for each set X_j ∈ F, let n_j be the number of elements in X_j. Without loss of generality, we assume that n_j ≥ 2. Note that ∑_{i=1}^{n} m_i = ∑_{j=1}^{m} n_j, as both sides count the element–set incidences. Let [t] = {1, . . . , t}. Fix a constant c, which is the factor by which the balls in the solution are allowed to be expanded, and a real 0 < α ≤ 1, which determines the weights of the balls. Let N = max{m, n} and M = c^{1+1/α} · N^{2/α}.
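As a quick sanity check, the incidence identity and the size of the parameter M can be computed on a small, made-up set system (not from the paper); the sketch also records the inequality M ≥ c², which is used later in the proof of Lemma 4.11.

```python
# Made-up set system (illustrative): 4 elements, 3 sets.
X = ["e1", "e2", "e3", "e4"]
F = [{"e1", "e2"}, {"e2", "e3", "e4"}, {"e1", "e4"}]

m = {e: sum(1 for S in F if e in S) for e in X}   # m_i: sets containing e_i
n = [len(S) for S in F]                           # n_j: size of X_j
assert sum(m.values()) == sum(n)  # both sides count (element, set) incidences

c, alpha = 2.0, 0.5
N = max(len(X), len(F))
M = c ** (1 + 1 / alpha) * N ** (2 / alpha)  # the paper's choice of M
assert M >= c ** 2  # holds since 0 < alpha <= 1 and N >= 1
```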
Given any instance I of Set Cover we construct an instance I′ = (P, B) of MMCC-W. Intuitively, in a reduction from Set Cover to a metric covering problem, one should take a point for each element and a ball for each set. Also, if an element is contained in a set, the corresponding point should be contained in the respective ball. This construction shows an o(log n) inapproximability when we are not allowed to expand the balls. However, we would like to prove the same inapproximability even when we are allowed to expand each ball by a factor of c. To achieve this result, we tweak the above construction and take a collection of points for each element, which together form a path-like structure. Now, we formally describe our construction. Our instance I′ is built on top of an underlying weighted graph G = (P ∪ C, E) that we construct below. The set P will be the set of points that need to be covered in I′, and each point in C will be the center of a single ball in the instance. We have P ∩ C = ∅. The gadget for each element e_i ∈ X consists of a path π_i with 2M · m_i − 1 vertices, and we refer to this set of vertices as V_i. Refer to a degree-1 vertex on π_i as its 1st vertex and the vertex adjacent to it as the 2nd vertex; in general, for ℓ ≥ 2, the vertex adjacent to the ℓth vertex other than the (ℓ − 1)th vertex has index ℓ + 1. The odd indexed vertices on this path belong to P, and the even indexed vertices belong to C. Thus M · m_i (resp. M · m_i − 1) vertices of the path are in P (resp. C). The weight of each path edge is set to be cR/M, where R is a positive real. For 1 ≤ ℓ ≤ m_i, we refer to the ((ℓ − 1) · 2M + 1)th vertex of π_i as the ℓth ideal vertex of π_i (or of e_i). Note that all ideal vertices belong to P, and the weighted distance between two consecutive ideal vertices (along π_i) is (cR/M) · 2M = 2cR.
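The index arithmetic of the path gadget can be sketched as follows (illustrative, with small hypothetical values for M, m_i, c, and R): odd vertices form P, even vertices form C, and consecutive ideal vertices are exactly 2cR apart along the path.

```python
M, m_i, c, R = 4, 3, 2.0, 1.0  # hypothetical small parameters
num_vertices = 2 * M * m_i - 1
P_vertices = [v for v in range(1, num_vertices + 1) if v % 2 == 1]
C_vertices = [v for v in range(1, num_vertices + 1) if v % 2 == 0]
assert len(P_vertices) == M * m_i and len(C_vertices) == M * m_i - 1

# The l-th ideal vertex has index (l - 1) * 2M + 1; all ideal vertices are odd.
ideal = [(l - 1) * 2 * M + 1 for l in range(1, m_i + 1)]
assert all(v % 2 == 1 for v in ideal)

edge_weight = c * R / M  # each path edge has weight cR/M
for a, b in zip(ideal, ideal[1:]):
    assert (b - a) * edge_weight == 2 * c * R  # 2M edges apart = distance 2cR
```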
Corresponding to each subset X_j ∈ F, G contains a vertex u_j that belongs to C. Now, for each e_i ∈ X, consider a one-to-one mapping f from the set [m_i] to the set of m_i subsets of X that contain e_i. For 1 ≤ ℓ ≤ m_i, we connect the ℓth ideal vertex of e_i to the vertex corresponding to the set f(ℓ) by an edge of weight R. That is, we connect the first ideal vertex of e_i to the (vertex for the) first set containing e_i, the second ideal vertex of e_i to the (vertex for the) second set containing e_i, and so on. Note that for any set X_j ∈ F, u_j gets connected to n_j vertices of G. This concludes the description of G. See Fig. 4 for an illustration.
We consider the metric space (P ∪ C, d) for I′, where d is the shortest path metric on G. Now we describe the set of balls in I′. For each X_j ∈ F, we add the ball B(u_j, R) to B and set its capacity to n_j. Note that B(u_j, R) contains exactly n_j points of P. Each such ball is called a subset ball. For each e_i ∈ X, consider the set of vertices V_i on path π_i. For each point p of C ∩ V_i, we add the ball B(p, cR/M) to B and set its capacity to 1. We note that B(p, cR/M) contains two points from P. Each such ball is called an element ball. The balls in B have only two distinct radii. It is not hard to see that the capacities of these balls are monotonic w.r.t. their radii. We set the weight of each ball B(p, r) to r^{1+α}. This completes the description of instance I′.
We say that a subset ball corresponding to u_j is incident on path π_i if the ball contains an ideal point of π_i, which happens precisely when e_i ∈ X_j.

Properties of the Constructed Instance
We begin with an observation about the points that a c-expansion of a ball in instance I′ can cover.

Lemma 4.11
Let B be any input ball in instance I′ and let q ∈ P be a point that is contained in the c-expansion of B. We have: (a) if B is an element ball corresponding to element e_i (and therefore centered at a point on π_i), then q belongs to path π_i; and (b) if B is a subset ball corresponding to set X_t (and therefore centered at u_t), then q belongs to path π_i for some e_i ∈ X_t.
Proof For (a), let p ∈ C denote the center of B. Consider any p′ on a path π_j where j ≠ i. The distance between p and p′ is at least 2R, and thus even after a c factor expansion the ball B(p, cR/M) cannot contain p′, as M ≥ c². For (b), consider a point p′ on a path π_j such that e_j ∉ X_t. By construction, the shortest path from u_t to p′ has to traverse at least three edges of weight R plus the distance between two consecutive ideal points in some element gadget. Thus, the distance between u_t and p′ is at least 3R + (cR/M) · 2M = 3R + 2cR > cR. Thus even after a c factor expansion the ball B(u_t, R) cannot contain p′.
The next lemma states a necessary condition for a feasible solution to I′.

Lemma 4.12
Suppose we have a subset B′ of the input balls in I′ such that, for some element e_i ∈ X, B′ includes none of the subset balls incident on path π_i. Then there is no feasible solution to I′ with B′ as the chosen set of balls.
Proof By Lemma 4.11, the only balls whose c-expansion contains a point from π_i are the element balls centered at points in π_i and the subset balls incident on π_i. If none of these subset balls is in B′, there is no feasible assignment with B′, as the total capacity of the element balls centered at points in π_i is one less than the number of points from P on π_i.

The next lemma says that the above necessary condition is sufficient.

Lemma 4.13
Let B′ be a subset of the balls in I′ such that for every element e_i ∈ X, B′ includes at least one subset ball incident on π_i. Furthermore, suppose B′ also contains all element balls. Then there is a feasible solution to I′ with B′ as the set of chosen balls.

Proof For each element e_i ∈ X, pick one ideal point q from π_i that is contained in a subset ball in B′, and assign q to that subset ball. For each element ball centered at a point in π_i, among the two points of P it contains, assign to it the point farther from q (along π_i). This assignment respects the capacity constraint for all balls in B′. Thus, the solution is feasible.
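The assignment in the proof can be simulated on a single path gadget; the sketch below (illustrative, with hypothetical helper names) checks that giving each element ball the one of its two points farther from the ideal point q covers every point on the path exactly once, respecting the capacity-1 constraint.

```python
# Simulate the Lemma 4.13 assignment on one path gadget: points are the odd
# vertices 1, 3, ..., 2*num_points - 1; element balls are centered at the even
# vertices and contain their two odd neighbours; q goes to a subset ball.
def assign_path(num_points, q):
    assignment = {q: "subset ball"}
    for center in range(2, 2 * num_points - 1, 2):  # element ball centers
        lo, hi = center - 1, center + 1
        far = lo if abs(lo - q) > abs(hi - q) else hi  # point farther from q
        assert far not in assignment  # capacity 1 is respected
        assignment[far] = center
    return assignment

for n_pts in (2, 5, 8):
    for q in range(1, 2 * n_pts, 2):  # any ideal/odd point may play the role of q
        a = assign_path(n_pts, q)
        assert len(a) == n_pts  # all points on the path are assigned
```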

The Hardness Guarantee
Lemma 4.14 The elements in X can be covered by at most k sets of F iff there is a solution to MMCC-W on the instance I′ with weight at most (k + 1) · R^{1+α}, where the balls in the solution may be expanded by a factor of c.
Proof Suppose X can be covered by a collection F′ of at most k sets. We construct a set of balls B′ ⊆ B whose weight is at most (k + 1) · R^{1+α}. For each set X_j ∈ F′, we add the subset ball B(u_j, R) to B′. We also add all element balls to B′. Observe that for each element e_i ∈ X, B′ includes a subset ball that is incident on π_i. By Lemma 4.13, there is a feasible solution to I′ with B′ as the set of chosen balls. Now, the total weight of the subset balls is at most k · R^{1+α}. The weight of all the element balls centered at points in path π_i is at most M · m_i · (cR/M)^{1+α}. The total weight of all element balls is then at most n · M · m · (cR/M)^{1+α} ≤ R^{1+α}.
Thus the weight of B′ is at most (k + 1) · R^{1+α}. Now suppose there is a solution, with B′ as the chosen balls, to MMCC-W with c factor expansion of the balls, and the weight of the balls in B′ is at most (k + 1) · R^{1+α}. The total number of points in P is ∑_{i=1}^{n} M · m_i > ∑_{j=1}^{m} n_j. Now, the total capacity of the subset balls is ∑_{j=1}^{m} n_j. Thus there must be at least one element ball in B′. Also, the weight of any subset ball is R^{1+α}; since B′ contains at least one element ball of positive weight, there can be at most k subset balls in B′. We consider the collection F′ ⊆ F of sets corresponding to these balls (at most k in number). By Lemma 4.12, for each element e_i ∈ X, there is a subset ball in B′ centered at some u_t that is incident on path π_i. Thus, e_i ∈ X_t and X_t ∈ F′. Thus F′ covers all the elements of X.
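The final weight calculation can be checked numerically; the sketch below (illustrative only) verifies that with M = c^{1+1/α} · N^{2/α}, the element-ball weight bound n · M · m · (cR/M)^{1+α} simplifies to (nm/N²) · R^{1+α}, which is at most R^{1+α} since N = max{m, n}.

```python
# Illustrative check: total element-ball weight stays within one unit R^(1+alpha).
def element_ball_weight_bound(n, m, c, alpha, R=1.0):
    N = max(n, m)
    M = c ** (1 + 1 / alpha) * N ** (2 / alpha)   # the paper's choice of M
    total = n * M * m * (c * R / M) ** (1 + alpha)  # = (n*m/N^2) * R^(1+alpha)
    return total, R ** (1 + alpha)

for n, m in ((4, 3), (10, 10), (7, 20)):
    for c in (1.0, 2.0, 5.0):
        for alpha in (0.25, 0.5, 1.0):
            total, budget = element_ball_weight_bound(n, m, c, alpha)
            assert total <= budget * (1 + 1e-9)  # tolerance for float rounding
```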
As Set Cover is NP-hard to approximate within a factor of o(log n), from Lemma 4.14, we obtain the following theorem.

Theorem 4.15
There exists a constant c′ > 0 such that, for any constant c ≥ 1, it is NP-hard to obtain a (c′ log |P|, c)-approximation for MMCC-W. This result holds even for the particular weight function where the weight of a ball is equal to a constant power of its original radius.