Covering a Few Submodular Constraints and Applications
Abstract
We consider the problem of covering multiple submodular constraints. Given a finite ground set $N$, a cost function $c: N \to \mathbb{R}_+$, $r$ monotone submodular functions $f_1, f_2, \ldots, f_r$ over $N$ and requirements $b_1, b_2, \ldots, b_r$, the goal is to find a minimum cost subset $S \subseteq N$ such that $f_i(S) \ge b_i$ for $1 \le i \le r$. When $r = 1$ this is the well-known Submodular Set Cover problem. Previous work [8] considered the setting when $r$ is large and developed bi-criteria approximation algorithms, and approximation algorithms for the important special case when each $f_i$ is a weighted coverage function. These are fairly general models and capture several concrete and interesting problems as special cases. The approximation ratios for these problems are at least $\Omega(\log r)$, which is unavoidable when $r$ is part of the input. In this paper, motivated by some recent applications, we consider the problem when $r$ is a fixed constant and obtain two main results. First, when the $f_i$ are weighted coverage functions from a deletion-closed set system we obtain an approximation ratio that is within a constant factor of $\beta$, where $\beta$ is the approximation ratio for the underlying set cover instances via the natural LP. Second, for covering multiple submodular constraints we obtain a randomized bi-criteria approximation algorithm that for any given integer $\alpha \ge 1$ outputs a set $S$ such that $f_i(S) \ge (1 - 1/e^{\alpha} - \epsilon)\, b_i$ for each $i \in [r]$ and $\mathbb{E}[c(S)] \le (1+\epsilon)\,\alpha \cdot \mathrm{OPT}$. These results show that one can obtain nearly as good an approximation for any fixed $r$ as what one would achieve for $r = 1$. We also demonstrate applications of our results to implicit covering problems such as fair facility location.
Keywords and phrases: covering, linear programming, rounding, fairness
Category: APPROX
Funding: Tanvi Bajpai: Supported in part by NSF grant CCF-2402667.
2012 ACM Subject Classification: Theory of computation → Packing and covering problems; Theory of computation → Facility location and clustering; Theory of computation → Rounding techniques
Acknowledgements: CC thanks Sayan Bandyapadhyay, Tanmay Inamdar and Kasturi Varadarajan for some earlier discussions on colorful set covering problems with a fixed number of colors.
Editors: Alina Ene and Eshan Chattopadhyay
Series and Publisher: Leibniz International Proceedings in Informatics (LIPIcs), Schloss Dagstuhl – Leibniz-Zentrum für Informatik
1 Introduction
Covering problems are ubiquitous in algorithms and combinatorial optimization, forming the basis for a wide range of applications and connections. Among the most well-known covering problems are Vertex Cover (VC) and Set Cover (SC). In SC the input consists of a universe $U$ of $n$ elements and a family $\mathcal{S}$ of subsets of $U$. The objective is to find a minimum cardinality sub-collection $\mathcal{S}' \subseteq \mathcal{S}$ such that the union of the subsets in $\mathcal{S}'$ is (i.e. covers) $U$. VC is a special case of SC where $U$ corresponds to the edges of a given graph $G = (V, E)$, and the sets correspond to the vertices of $G$. In the cost versions of these problems, each set or vertex is assigned a non-negative cost, and the objective is to find a covering sub-collection with the minimum total cost. A significant generalization of SC is the Submodular Set Cover (SubmodSC) problem where the input includes a normalized monotone submodular function $f: 2^N \to \mathbb{R}_{\ge 0}$ over a ground set $N$, and the goal is to find a min-cost subset $S$ of $N$ such that $f(S) = f(N)$. Recall that a real-valued set function $f$ is submodular iff $f(A) + f(B) \ge f(A \cup B) + f(A \cap B)$ for all $A, B \subseteq N$.
The aforementioned covering problems are known to be NP-Hard, but classical approximation algorithms offer strong guarantees. For SC, the well-known Greedy algorithm proposed by Johnson [14] achieves a $(1 + \ln n)$-approximation and has also been shown to achieve a comparable logarithmic approximation for SubmodSC by Wolsey [21]. These results are essentially tight, assuming $P \neq NP$ [11]. Nevertheless, VC admits a $2$-approximation, and many other special cases of SC and SubmodSC also admit constant-factor approximations. Moreover, settling for bi-criteria approximations allows for additional flexibility and improvements. For SubmodSC, a slight modification to the standard Greedy algorithm yields the following tradeoff: for any integer $\alpha \ge 1$, we may obtain a set $S$ such that $c(S) \le \alpha \cdot \mathrm{OPT}$ and $f(S) \ge (1 - 1/e^{\alpha})\, f(N)$.
Another related covering problem of importance is the Partial Set Cover (PartialSC) problem, in which the input includes an additional integer $k$, and the goal is to find a minimum-cost collection of sets that cover at least $k$ elements. A similar formulation extends to the submodular setting where the goal is to find a set $S$ such that $f(S) \ge b$ for a given requirement $b \le f(N)$. Partial covering problems are particularly useful as they naturally model outliers for various settings, and approximation algorithms for these problems have been well studied. It is straightforward to show that PartialSC is equivalent to SC in terms of approximation. In the submodular case, this equivalence is even clearer: we can consider the truncated submodular function $f'$ where $f'(S) = \min\{f(S), b\}$. In contrast, understanding the approximability of Partial Vertex Cover (PartialVC) is more nuanced. While a $2$-approximation for PartialVC is known [4], it is not straightforward to see this.
In recent years, emerging applications and connections, particularly to fairness, have sparked interest in covering and clustering problems involving multiple groups or colors for elements. The goal in these problems is to ensure that some specified number of elements from each group or color are covered. The partial covering problems discussed above are a special case where the number of groups is one. In the Colorful Vertex Cover problem (ColorVC), multiple partial covering constraints are imposed. The input consists of a graph $G = (V, E)$ and subsets of edges $E_1, E_2, \ldots, E_r$ where each $E_i \subseteq E$. Each subset $E_i$ represents the edges of color $i$, and an edge can have multiple colors. Each color $i$ has a requirement $k_i$ and the objective is to find a minimum-cost set of vertices that covers at least $k_i$ edges from each $E_i$. Bera et al. [2] were the first to consider (a generalization of) ColorVC, obtaining an $O(\log r)$-approximation. This result is essentially tight when $r$ is large and part of the input, and one can show that SC reduces to ColorVC (see [2]). More recently, Bandyapadhyay et al. [1] considered ColorVC when $r$ is a fixed constant and achieved a $(2+\epsilon)$-approximation for any fixed $\epsilon > 0$.
Chekuri et al. [8] explored a broader class of problems involving the covering of multiple (submodular) functions and various special cases. Their general framework not only recovers the $O(\log r)$-approximation from [2] but also provides results for a variety of geometric set cover problems, and combinatorial optimization problems such as facility location and clustering. However, while powerful and general, their framework is designed for cases where $r$ (i.e., the number of constraints) is part of the input. As a result, it is inherently limited to an $\Omega(\log r)$-approximation due to the above mentioned reduction from SC to ColorVC. In a recent work, Suriyanarayana et al. [16] considered the Joint Replenishment Problem (JRP), a fundamental problem in supply chain optimization. JRP and its variants can be viewed as covering problems where the objective is to satisfy demands over a time horizon for multiple items using orders that incur joint costs; readers can refer to [16] for the formal details of the JRP problem. While constant-factor approximations for JRP are well-established [3], Suriyanarayana et al. [16] introduced a colorful version of JRP (CJRP), similar in spirit to ColorVC. In CJRP, demands are grouped by color, and the goal is to satisfy at least a specified number of demands from each color class. Chekuri et al. [8]'s framework can yield an $O(\log r)$-approximation for CJRP; however, [16] focused on the setting where $r$ is a fixed constant. They developed an intricate algorithm which achieves a constant-factor approximation for any fixed $r$. Note that the approximation ratio does not depend on $r$ but the running time is exponential in $r$. There are several settings in approximation for covering and packing problems where one has a fixed number of constraints or objectives and one obtains similar tradeoffs.
These recent developments inspire us to explore whether the general framework in [8], which applies to a wide variety of covering problems, can be adapted to the setting of a fixed number of submodular covering constraints. The goal is to remove the dependence of the approximation ratio on $r$. Specifically, we consider two settings: (1) the Multiple Submodular Covering problem, and (2) Colorful Set Cover and a generalization.
Multiple Submodular Covering (MSC).
The MSC problem generalizes the above ColorSC problem. Formally, the input to the MSC problem consists of $r$ monotone submodular functions $f_1, \ldots, f_r$ over a finite ground set $N$ (provided as value oracles), a non-negative cost function $c: N \to \mathbb{R}_{\ge 0}$, and non-negative integer requirements $b_1, \ldots, b_r$. The objective is to find a minimum-cost set $S \subseteq N$ such that $f_i(S) \ge b_i$ for each $i \in [r]$. Note that when $r = 1$, this corresponds to SubmodSC. The Greedy algorithm (modified slightly) yields the following tradeoff for SubmodSC: for any integer $\alpha \ge 1$ the algorithm outputs a set $S$ such that $c(S) \le \alpha \cdot \mathrm{OPT}$ and $f(S) \ge (1 - 1/e^{\alpha})\, b$. We note that this tradeoff is essentially tight for any fixed $\alpha$ via the hardness result for SC [11]. It is easy to reduce multiple submodular covering constraints to a single submodular covering constraint [12, 8], but in doing so one is limited to approximating the cost of satisfying all the constraints exactly – it is not feasible to distinguish the individual constraints via this approach. In contrast, Chekuri et al. [8] obtained the following bi-criteria approximation result for MSC: there is an efficient randomized algorithm that outputs a set $S$ such that $f_i(S) \ge (1 - 1/e - \epsilon)\, b_i$ for each $i \in [r]$ and $\mathbb{E}[c(S)] = O(\tfrac{1}{\epsilon}\log r)\cdot \mathrm{OPT}$. In this paper we ask whether it is possible to achieve bi-criteria bounds for MSC instances with a fixed number of constraints that match those obtainable for a single submodular constraint. We prove the following theorem and its corollary which together affirmatively answer this (modulo a small dependence on $\epsilon$).
Theorem 1.
There is a randomized polynomial-time algorithm that given an instance of the MSC problem with fixed $r$ and $\epsilon > 0$ outputs a set $S \subseteq N$ such that (i) $f_i(S) \ge (1 - 1/e - \epsilon)\, b_i$ for all $i \in [r]$ and (ii) $\mathbb{E}[c(S)] \le (1+\epsilon)\,\mathrm{OPT}$ (where $\mathrm{OPT}$ is the cost of the optimal solution to the MSC instance).
Corollary 2.
For any integer $\alpha \ge 1$, there is a randomized polynomial-time algorithm that given an instance of the MSC problem with fixed $r$ and $\epsilon > 0$ outputs a set $S \subseteq N$ such that $f_i(S) \ge (1 - 1/e^{\alpha} - \epsilon)\, b_i$ for all $i \in [r]$ and $\mathbb{E}[c(S)] \le (1+\epsilon)\,\alpha\,\mathrm{OPT}$.
We remark that the underlying algorithms give additive guarantees. In terms of running time, the most expensive part is to convert this additive guarantee into a multiplicative guarantee by guessing sufficiently many elements from some fixed optimum solution. The additive guarantees provide useful insights and are often sufficient if one assumes that the optimum value is large.
Colorful Set Cover (ColorSC) and generalizations.
Prior work of Inamdar and Varadarajan [13] showed a general setting in which positive results for SC can be extended to PartialSC ($r = 1$). We describe the setting first. A class of SC instances (i.e. set systems), $\mathcal{C}$, is called deletion-closed if for any given instance in the class, removing a set or an element of the universe results in an instance that also belongs to $\mathcal{C}$. Many special cases of SC are naturally deletion-closed. For example, VC instances are deletion-closed, as are many geometric SC instances (say covering points by disks in the plane). Suppose one can obtain a $\beta$-approximation to SC on instances from $\mathcal{C}$ via the natural LP relaxation. For example, VC has $\beta = 2$ and there are many geometric settings where $\beta$ is a constant or grows very slowly with $n$ [6, 13]. Inamdar and Varadarajan [13] additionally formalized the concept of deletion-closed set systems and obtained an $O(\beta)$-approximation for PartialSC on a set system from a deletion-closed class $\mathcal{C}$. Their framework gave a straightforward way to approximate geometric partial covering problems by building on existing results for SC. Chekuri et al. [8] extended this framework to address deletion-closed instances of the ColorSC problem (which is analogous to the previously discussed ColorVC). Formally, the input for ColorSC consists of a set system $(U, \mathcal{S})$ from $\mathcal{C}$, costs on the sets, color classes $U_1, \ldots, U_r$ where each $U_i$ is a subset of the universe $U$, and non-negative integer requirements $k_1, \ldots, k_r$. The objective is to find a minimum-cost collection of sets such that for each $i \in [r]$, at least $k_i$ elements from $U_i$ are covered. In this setting, Chekuri et al. [8] achieved two key results. First, when $r = 1$, they improved the approximation bound from [13]. Second, for general $r$ they obtained an approximation with an $O(\log r)$ dependence on $r$, which generalized [2] for ColorVC. Here we consider this same setting but when $r$ is a fixed constant and obtain the following result.
Theorem 3.
For any instance of ColorSC with fixed $r$ from a $\beta$-approximable deletion-closed set system, and any fixed $\epsilon > 0$, there is a randomized polynomial-time algorithm that yields an $O(\beta)$-approximate feasible solution.
Chekuri et al. [8] prove a more general result that applies to covering integer programs induced by deletion-closed set systems (this is similar to the context considered by Bera et al. [2] for ColorVC). This generality is significant in applications such as facility location and clustering. We obtain the preceding result also in this more general setting (see Theorem 15); we defer the formal description of the problem setting to Section 2, and prove the theorem in Section 5.
Implications and Applications.
At a high level, our two main theorems show that approximation results for covering problems with a single submodular constraint, i.e. where $r = 1$, can be extended
to the case with a fixed number of constraints. We believe that these results, in addition to being independently interesting, are broadly applicable and valuable due to the modeling power of submodularity and set covering. However, one should expect additional technical details in applying these results to certain applications such as facility location, clustering, JRP, and others. In this paper, we prove our main technical results and lay down the general framework. We also demonstrate how the framework can be applied and adapted. We discuss these applications next.
Colorful covering problems:
Recall that ColorVC admits a $(2+\epsilon)$-approximation for fixed $r$ [1]. Now consider the problem of covering points in the plane by disks with costs.
This well-studied geometric set cover problem admits an $O(1)$-approximation via the natural LP [6, 18].
[13, 8] obtained an $O(1)$-approximation for the PartialSC version of this problem ($r = 1$),
and
[8] obtained an $O(\log r)$-approximation for the colorful version of this problem (arbitrary $r$). In this paper we obtain an $O(1)$-approximation for any fixed $r$. In effect, all the PartialSC problems considered in [13, 8] for various geometric settings admit essentially the same approximation for the colorful versions with fixed $r$. The colorful problems capture outliers and fairness constraints, and thus, as long as the number of such constraints is small, the approximation ratio is independent of $r$. Our bi-criteria approximation for MSC, in a similar vein, allows tight tradeoffs in cost vs coverage for any fixed number of constraints. See [12, 8] for an application to splitting point sets by lines where a bi-criteria approximation is adequate.
Facility location with multiple outliers: Facility location and clustering problems have long been central in both theory and practice, with a wide variety of objective functions studied in the literature. At a high level, the goal in facility-location problems is to select a subset of facilities and assign clients to them in a way that balances assignment cost with facility opening cost.
Several objectives and variants are studied in the literature.
Motivated by robustness considerations there is also extensive work on these problems when there are outliers. The goal in such problems is to solve the underlying clustering or facility location problem where a certain number of points/clients can be omitted; equivalently, the goal is to connect/cluster at least a given number of points.
These problems do not fall neatly into the framework of (partial) covering problems. Nevertheless it is possible to view them as implicit covering problems; in some settings one needs to consider an exponential sized set system. Thus, although the problems cannot be directly cast as MSC or CCF, some of the ideas can be transferred with additional technical work.
[8] used this perspective and additional ideas to obtain $O(\log r)$-approximations for two facility location problems with multiple outlier classes. In this paper, we
consider these two problems when $r$ is a fixed constant and obtain constant factor approximations. The first one involves facility location to minimize the objective of sum of radii; it is easier to adapt the ideas for CCF to this problem and we discuss this in the full version of our paper.
Our main new technical contribution is the second problem, related to the outlier version of the well-known uncapacitated facility location problem (UCFL). In UCFL the input consists of a facility set $F$ and a client set $C$ that lie in a metric space $(F \cup C, d)$. Each facility $i \in F$ has an opening cost $f_i$ and connecting a client $j$ to a facility $i$ costs $d(i,j)$. The goal is to open a subset $X \subseteq F$ of the facilities to minimize $\sum_{i \in X} f_i + \sum_{j \in C} d(j, X)$, where $d(j, X)$ is the distance of $j$ to the closest open facility. This classical problem has seen extensive work in approximation algorithms; the current best approximation ratio is $1.488$ [15]. Charikar et al. [7] studied the outlier version of this problem (under the name robust facility location) and obtained a constant-factor approximation; in this version the goal is to connect at least some specified number of clients, and this corresponds to having a single color class to cover. In [8], the generalization of this to $r$ outlier classes was studied under the name Facility Location with Multiple Outliers (FLMO). In this paper we prove the following result.
Theorem 4.
Let $\rho$ be the approximation ratio for UCFL via the natural LP relaxation. Then there is a randomized polynomial-time algorithm that given an instance of FLMO with fixed $r$ and $\epsilon > 0$ yields an $O(\rho)$-approximate solution.
We remark that we use LP-based algorithms for UCFL as a black-box and have not attempted to optimize the precise ratio. Using $\rho = 1.488$ from [15] we obtain a modest constant approximation factor. We also note that the case of $r$ being fixed requires new technical ideas beyond those in [8]. We recall the non-trivial work of [16] who consider CJRP for fixed $r$ by adapting the ideas from [8] for an implicit problem. One of our motivations for this paper is to obtain a general framework for CCF for fixed $r$, and to showcase its applicability to concrete problems.
Summary of Technical Contributions.
Our algorithms follow a streamlined framework (outlined in Section 3.2) that builds upon the ideas in [8], while introducing additional new ingredients to obtain improved approximations when $r$ is fixed. The framework consists of four stages; at a high level, these stages are: guess high-cost elements, solve a continuous relaxation, randomly round the fractional solution, and greedily fix in a post-processing step to ensure that constraints are appropriately satisfied. The main new ingredient is the rounding procedure employed in the third stage, which is centered around a rounding lemma (Lemma 10). The lemma is based on a concentration bound for independent rounding of a fractional solution to the multilinear relaxation of a submodular function. The improved approximation for fixed $r$ is based on obtaining a strong concentration bound. In order to obtain this we need to ensure that the underlying submodular function has a Lipschitz property. Our new insight is that one can use a greedy procedure to select a small number of elements to ensure the desired Lipschitz property. However this means that the algorithms will incur an additive approximation for selecting a constant number (that depends only on $r$ and $\epsilon$) of elements. We are able to convert this additive approximation into a multiplicative approximation by guessing a sufficiently large number of elements in the first stage. We note that several packing and covering problems with a fixed number of constraints follow this high-level template. Typically one uses properties of basic feasible solutions to underlying LP relaxations for packing and covering problems. We cannot use standard methods because our constraints are submodular and hence non-linear. Our fractional solutions come from solving non-linear programs or auxiliary LPs with many constraints. This is the reason for relying on our rounding approach. The high-level overview suppresses several non-trivial technical details in the algorithms, some of which are inherited from [8]. There are important differences in the details for the two problems, and these are explained in the relevant sections.
Organization.
Section 2 provides the formal definitions for MSC and CCF, as well as some relevant background on submodularity. In Section 3 we describe the Rounding Lemma (Lemma 10), which is our key ingredient. In the same section, we provide an overview of our algorithmic framework. Section 4 describes our algorithm for Theorem 1, and Section 5 describes our algorithm for CCF which generalizes Theorem 3. Finally, in Section 6, we discuss the application of CCF to Facility Location. We sketch the analysis of the algorithms in the respective sections and refer the reader to the full version for complete details.
2 Notation and Preliminaries
2.1 Problem Definitions
Below we provide the formal problem statements for the Multiple Submodular Covering Constraints (MSC) and Covering Coverage Functions (CCF) problems [8].
Definition 5 (MSC).
Let $N$ be a finite ground set, and suppose we are given $r$ monotone submodular functions $f_i: 2^N \to \mathbb{Z}_{\ge 0}$ for each $i \in [r]$ with corresponding demands $b_1, \ldots, b_r$. Additionally, we are given a non-negative cost function $c: N \to \mathbb{R}_{\ge 0}$. The goal is to find a subset $S \subseteq N$ that solves the following optimization problem:
$$\min\Big\{\, \sum_{e \in S} c(e) \;:\; S \subseteq N, \;\; f_i(S) \ge b_i \;\; \forall i \in [r] \,\Big\}.$$
Definition 6 (CCF).
The CCF problem consists of a set system $(U, \mathcal{S})$ where $U$ is the universe of $n$ elements and $\mathcal{S}$ is a collection of $m$ subsets of $U$. Each set $X \in \mathcal{S}$ has a cost $c(X)$ associated with it, and we are given a set of $r$ covering inequalities specified by a non-negative matrix $A \in \mathbb{R}_{\ge 0}^{r \times n}$ and a requirement vector $b \in \mathbb{R}_{\ge 0}^{r}$. The goal is to optimize the integer program given in (CCF-IP).
$$\begin{aligned}
\min \;& \sum_{X \in \mathcal{S}} c(X)\, x_X & \\
\text{s.t. } & \sum_{j \in U} A_{ij}\, z_j \ge b_i && \forall i \in [r] \\
& z_j \le \sum_{X \in \mathcal{S}:\, j \in X} x_X && \forall j \in U \\
& x_X, z_j \in \{0, 1\} && \forall X \in \mathcal{S},\; j \in U
\end{aligned} \tag{CCF-IP}$$
In this program, the variables $x_X$ for $X \in \mathcal{S}$ are indicator variables for whether the set $X$ is selected. The variables $z_j$ for $j \in U$ are indicators for whether the element $j$ is covered. If the entries of $A$ are in $\{0, 1\}$ then we obtain ColorSC as a special case.
CCF as a special case of MSC.
We view the constraints as submodular functions as follows: for the $i$-th constraint, given by the $i$-th row of the matrix $A$, we define a submodular function $f_i$ over collections of sets as follows: $f_i(\mathcal{T}) = \sum_{j \in U} A_{ij}\, z_j(\mathcal{T})$, where $z_j(\mathcal{T})$ is a variable indicating whether $j \in \bigcup_{X \in \mathcal{T}} X$, i.e., whether $j$ is covered by a set in the collection $\mathcal{T}$ of sets, and $A_{ij}$ is the entry in row $i$ and column $j$ of the matrix $A$. This is a submodular function since it is a weighted coverage function with weights coming from the matrix $A$. In Section 5, we will use either the matrix form or the submodular function form as convenient.
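To make the reduction concrete, the following small Python sketch (ours, not from the paper) evaluates such a weighted coverage function $f_i$ given the $i$-th row of $A$ and a chosen collection of sets; the function name and data layout are illustrative.

```python
def coverage_value(A_row, sets, chosen):
    """Weighted coverage value f_i(chosen) for one CCF constraint (sketch).

    A_row: dict mapping element j -> weight A_{ij}.
    sets: dict mapping set name -> set of elements it contains.
    chosen: iterable of set names (the collection of selected sets).
    An element contributes its weight once if it lies in at least one chosen
    set; this "covered or not" structure is what makes f_i submodular.
    """
    covered = set()
    for X in chosen:
        covered |= sets[X]
    return sum(w for j, w in A_row.items() if j in covered)

# Tiny example: two sets, three weighted elements.
sets = {"X1": {"a", "b"}, "X2": {"b", "c"}}
A_row = {"a": 1.0, "b": 2.0, "c": 3.0}
print(coverage_value(A_row, sets, ["X1"]))        # 3.0
print(coverage_value(A_row, sets, ["X1", "X2"]))  # 6.0
```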
In this paper we assume that input instances of each problem will have a fixed number of constraints, i.e., $r$ is taken to be some fixed positive constant.
2.2 Submodularity
For a submodular function $f$ defined on ground set $N$, we will use $f_A$ to denote the submodular function that gives the marginals of $f$ with respect to $A$, i.e., $f_A(S) = f(S \cup A) - f(A)$. Further, for any $S \subseteq N$ and $e \in N$, we will use $f(e \mid S)$ to denote the marginal value of $e$ when added to $S$, i.e., $f(e \mid S) = f(S \cup \{e\}) - f(S)$.
Multilinear Extensions of Set Functions.
Our main rounding algorithm (discussed in Section 3) makes use of the multilinear extension of set functions. Here are some relevant preliminaries.
Definition 7 (Multilinear extension).
The multilinear extension of a real-valued set function $f: 2^N \to \mathbb{R}$, denoted by $F$, is defined as follows: for $x \in [0,1]^N$,
$$F(x) = \sum_{S \subseteq N} f(S) \prod_{e \in S} x_e \prod_{e \notin S} (1 - x_e).$$
Equivalently, $F(x) = \mathbb{E}[f(R_x)]$ where $R_x$ is a random set obtained by picking each $e \in N$ independently with probability $x_e$.
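Since $F$ is defined as an expectation, it can be estimated by sampling. The following Python sketch (a generic Monte-Carlo estimator, not the paper's implementation) illustrates Definition 7; the sample count needed for a given accuracy depends on the range of $f$.

```python
import random

def multilinear_estimate(f, x, samples=2000, rng=random):
    """Monte-Carlo estimate of the multilinear extension F(x) = E[f(R_x)].

    f: set function given as f(frozenset) -> float.
    x: dict mapping each element to its probability in [0, 1].
    Each sample draws R_x by keeping element e independently with prob. x[e].
    """
    total = 0.0
    for _ in range(samples):
        R = frozenset(e for e, p in x.items() if rng.random() < p)
        total += f(R)
    return total / samples

# Example: f counts distinct covered elements of {a, b} for sets S1={a}, S2={a,b}.
members = {"S1": {"a"}, "S2": {"a", "b"}}
f = lambda T: len(set().union(*(members[s] for s in T)) if T else set())
print(multilinear_estimate(f, {"S1": 0.5, "S2": 0.5}))  # roughly 1.25
```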
Lipschitzness and Concentration Bounds.
One benefit of utilizing the multilinear extension that is relevant to our algorithm is a concentration bound. Before we state that bound, we define Lipschitzness of a submodular function.
Definition 8 ($\lambda$-Lipschitz).
A submodular function $f$ is $\lambda$-Lipschitz if for all $S \subseteq N$ and $e \in N$: $|f(S \cup \{e\}) - f(S)| \le \lambda$. When $f$ is monotone, this amounts to $f(e \mid S) \le \lambda$ for all $S$ and $e$.
Lemma 9 (Vondrák [20]).
Let $f$ be a $\lambda$-Lipschitz monotone submodular function. For $x \in [0,1]^N$, let $R_x$ be a random set where each element $e$ is drawn independently using the marginals induced by $x$. Then for $\delta \ge 0$,
$$\Pr\big[f(R_x) \le (1-\delta)\, F(x)\big] \;\le\; \exp\!\Big(-\frac{\delta^2\, F(x)}{2\lambda}\Big).$$
Discretizing costs and guessing.
Our problems involve minimizing the cost of objects. Via standard guessing and scaling tricks, if we are willing to lose a $(1+\epsilon)$ factor in the approximation, we can assume that all costs are integers that are polynomially bounded in the input size. In particular this implies that there are only polynomially many choices for the optimum value. We will thus assume, in some algorithmic steps, knowledge of the optimum value with the implicit assumption that we are guessing it from this polynomial range of values. This will incur a polynomial overhead in the run-time, the details of which we ignore in this version.
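The following sketch shows the standard scaling trick referred to above; the specific scale factor $n/(\epsilon \cdot \mathrm{OPT})$ is the usual textbook choice and is our assumption, not necessarily the exact constant used in the paper.

```python
import math

def discretize_costs(cost, opt_guess, eps):
    """Standard cost-discretization sketch: after guessing OPT up to a (1+eps)
    factor, scale so that every cost becomes an integer in {0, ..., ceil(n/eps)}.
    Elements costing more than the guessed OPT can be discarded outright.
    The total rounding error is at most n * (eps * OPT / n) = eps * OPT.
    """
    n = len(cost)
    scale = n / (eps * opt_guess)
    return {e: math.ceil(c * scale) for e, c in cost.items() if c <= opt_guess}
```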
3 High-Level Algorithmic Framework and Rounding Lemma
Our algorithms for approximating MSC and CCF share a unified framework. The framework relies on a Rounding Lemma (Lemma 10), which provides a method to convert a fractional solution to an integral one in a way that preserves key properties of the original fractional solution while incurring only a minimal additive cost. In this section, we begin by presenting and proving our Rounding Lemma (Section 3.1). We then provide a high-level overview of the algorithmic framework (Section 3.2) built upon this Rounding Lemma.
3.1 Rounding Lemma
A key step in our framework is the ability to round fractional solutions to integral ones while maintaining important guarantees. Our Rounding Lemma (stated below) provides a formal method for achieving this. It shows that for $r$ monotone submodular functions, a fractional solution satisfying certain constraints with respect to their multilinear extensions can be (randomly) rounded in polynomial time to an integral solution. This rounding incurs only a small (constant-sized) additive increase in cost and ensures that the values of the submodular functions are sufficiently preserved with high probability.
Lemma 10 (Rounding Lemma).
Let $N$ be a finite set, $c: N \to \mathbb{R}_{\ge 0}$ be a non-negative cost function and let $\epsilon \in (0,1)$ be fixed. Let $f_1, \ldots, f_r$ be monotone submodular functions over $N$ with requirements $b_1, \ldots, b_r$. For each $i \in [r]$, let $F_i$ be its corresponding multilinear extension. Suppose we are given a fractional point $x \in [0,1]^N$ such that $F_i(x) \ge b_i$ for each $i \in [r]$. Then there is a randomized polynomial-time algorithm that outputs a set $S \subseteq N$ such that (i) $\mathbb{E}[c(S)] \le \sum_{e \in N} c(e)\, x_e + r\,\ell_\epsilon \cdot \max_{e \in N} c(e)$, where $\ell_\epsilon = O\!\big(\tfrac{1}{\epsilon^2}\log\tfrac{1}{\epsilon}\big)$ depends only on $\epsilon$, and (ii) for each $i \in [r]$, $f_i(S) \ge (1-\epsilon)\, b_i$ with probability at least $1 - \epsilon$.
To prove this lemma, we first establish the following helpful claim, which ensures that after selecting a subset of elements, a monotone function becomes Lipschitz continuous (Definition 8), with a Lipschitz constant that depends on the number of elements chosen.
Claim 11.
Let $N$ be a finite set and $f$ be a non-negative monotone function over $N$ (note that we do not need submodularity in this claim). For any $b > 0$ and integer $\ell \ge 1$, there exists a polynomial time algorithm that returns a set $T \subseteq N$ of cardinality at most $\ell$ such that one of the following conditions holds: (i) $f(T) \ge b$, or (ii) $f(e \mid T) \le b/\ell$ for all $e \in N$.
Proof.
Consider the following greedy procedure to construct $T$: initialize $T$ as the empty set. While there exists an element $e$ satisfying $f(e \mid T) > b/\ell$ and $f(T) < b$, add $e$ to $T$. Finally, return $T$.
We prove that in at most $\ell$ iterations of the while loop, the set $T$ must satisfy $f(T) \ge b$ or $f(e \mid T) \le b/\ell$ for all $e \in N$. Note that, trivially, the returned set satisfies one of these conditions. In the rest of the proof we will bound the cardinality of $T$.
Suppose that condition (ii) is never satisfied in the first $\ell$ iterations. We will show using induction that we must then satisfy condition (i). In particular, let $T_k$ denote the set after $k$ iterations of the while-loop in the given procedure (where $T_0 = \emptyset$), and let $e_{k+1}$ be the element that is to be added to $T_k$ in the $(k+1)$-th iteration. Now, using induction on $k$, we will show that for all $k$ the following inequality will hold: $f(T_k) \ge k \cdot b/\ell$.
For $k = 0$, $T_0$ is the empty set, hence $f(T_0) \ge 0$ and the inequality holds trivially. Assume $f(T_k) \ge k \cdot b/\ell$ for some fixed $k < \ell$. We will show that the inequality must hold for $k+1$. To see this, observe that if condition (ii) does not hold, then there exists an element $e_{k+1}$ such that $f(e_{k+1} \mid T_k) > b/\ell$. Therefore we will have
$$f(T_{k+1}) \;=\; f(T_k) + f(e_{k+1} \mid T_k) \;>\; k\cdot\frac{b}{\ell} + \frac{b}{\ell} \;=\; (k+1)\,\frac{b}{\ell}.$$
Therefore, for $k = \ell$, $f(T_\ell) \ge \ell \cdot b/\ell = b$ holds. Thus, we are guaranteed that after selecting at most $\ell$ elements, if condition (ii) is never met, then condition (i) holds.
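The greedy procedure in the proof is simple to implement; the sketch below is our illustration (with $f$ given as a plain Python function) and returns a set $T$ of size at most $\ell$ satisfying one of the two conditions of Claim 11.

```python
def greedy_lipschitz_set(f, ground, b, ell):
    """Greedy procedure from the proof of Claim 11 (sketch).

    f: monotone non-negative set function, f(frozenset) -> float.
    ground: set of all elements.
    Repeatedly add an element whose marginal value exceeds b / ell, stopping
    once f(T) >= b or no such element remains. Each added element contributes
    more than b / ell, so at most ell elements are ever added.
    """
    T = set()
    threshold = b / ell
    while f(frozenset(T)) < b:
        current = f(frozenset(T))
        e = next((u for u in ground - T
                  if f(frozenset(T | {u})) - current > threshold), None)
        if e is None:  # condition (ii): all remaining marginals are small
            break
        T.add(e)
    return T
```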
Using Claim 11, we prove the Rounding Lemma.
Proof of Rounding Lemma (Lemma 10).
Consider the following algorithm. For a value of $\ell$ that we will determine later, for each $i \in [r]$, create a set $T_i$ according to the greedy algorithm in Claim 11 applied to $f_i$ with target $b_i$ and parameter $\ell$. We are therefore guaranteed that for each constraint $i$ either $f_i(T_i) \ge b_i$ or $f_i(e \mid T_i) \le b_i/\ell$ for all $e \in N$. Note that the $T_i$'s may not be disjoint. Next, perform independent randomized rounding with marginal probabilities given by $x$ to find a set $R$. Let $S = R \cup T_1 \cup \cdots \cup T_r$.
We first prove that each constraint is satisfied to a $(1-\epsilon)$ approximation with probability at least $1-\epsilon$. Fix any constraint $i$. Suppose the set $T_i$ returned from Claim 11's algorithm satisfies $f_i(T_i) \ge b_i$; then we are done. Otherwise, we know that for all $e \in N$, $f_i(e \mid T_i) \le b_i/\ell$. Denote by $g_i$ the submodular function defined by $g_i(A) = f_i(A \cup T_i)$, and let $G_i$ be its multilinear extension. We have that $g_i(e \mid A) \le f_i(e \mid T_i) \le b_i/\ell$ for all $e \in N$ and all $A \subseteq N$ (by submodularity), and therefore $g_i$ is $(b_i/\ell)$-Lipschitz. Then, when we (independently) randomly round using marginal probabilities given by $x$, we get, using Lemma 9 for the function $g_i$ (note that Lemma 9 is applied to the rounding of all elements of $N$; since $g_i$ is monotone this can only increase its value compared to rounding a subset, so we can safely apply the concentration bound),
$$\Pr\big[g_i(R) \le (1-\epsilon)\, G_i(x)\big] \;\le\; \exp\!\Big(-\frac{\epsilon^2\, G_i(x)\, \ell}{2\, b_i}\Big). \tag{1}$$
Now, $f_i(S) \ge f_i(R \cup T_i) = g_i(R)$ by monotonicity. Furthermore,
$$G_i(x) \;=\; \mathbb{E}\big[f_i(R_x \cup T_i)\big] \;\ge\; \mathbb{E}\big[f_i(R_x)\big] \;=\; F_i(x) \;\ge\; b_i,$$
where the equalities are by Definition 7, the first inequality uses the monotonicity of $f_i$, and the last inequality is by the assumption on $x$. Therefore, $G_i(x) \ge b_i$. Substituting this in Equation (1),
$$\Pr\big[g_i(R) \le (1-\epsilon)\, G_i(x)\big] \;\le\; \exp\!\Big(-\frac{\epsilon^2 \ell}{2}\Big).$$
Now, choosing $\ell = \big\lceil \tfrac{2}{\epsilon^2} \ln \tfrac{1}{\epsilon} \big\rceil$, we have $\exp(-\epsilon^2\ell/2) \le \epsilon$. Therefore, with probability at least $1-\epsilon$, $f_i(S) \ge g_i(R) \ge (1-\epsilon)\, G_i(x) \ge (1-\epsilon)\, b_i$.
Now, we will look at the cost of the sets chosen. To get the required probability bound, we chose $\ell = \lceil \tfrac{2}{\epsilon^2}\ln\tfrac{1}{\epsilon}\rceil$. Therefore, following Claim 11, the size of the set $T_i$ for each $i$ is at most $\ell$, and together the total cardinality of $T_1 \cup \cdots \cup T_r$ is bounded by $r\ell$. Finally, the expected cost of $R$ is $\sum_{e \in N} c(e)\, x_e$. Together, we get $\mathbb{E}[c(S)] \le \sum_{e \in N} c(e)\, x_e + r\ell \cdot \max_{e \in N} c(e)$.
It is easy to see that the algorithm runs in polynomial time.
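For completeness, here is the (short) randomized step of the algorithm in the proof, assuming the safety sets $T_1, \ldots, T_r$ from Claim 11 have already been computed (e.g., with the greedy sketch above); the independent rounding and the union with the safety sets are as described.

```python
import random

def round_with_safety_sets(x, safety_sets, rng=random):
    """Rounding step from the proof of Lemma 10 (sketch).

    x: dict mapping each element to its fractional value in [0, 1].
    safety_sets: the sets T_1, ..., T_r computed for the r constraints.
    Each element is kept independently with probability x[e]; the output S is
    the union of the sampled set R with all the safety sets.
    """
    R = {e for e, p in x.items() if rng.random() < p}
    for T in safety_sets:
        R |= set(T)
    return R
```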
3.2 Overview of Algorithmic Framework
Our algorithmic framework consists of four stages. Let $N$ denote the initial set of objects, and let $r$ denote the number of covering constraints. A schematic sketch of the template is given after the list below.
-
Stage 1: Guessing Highest-Cost Objects from the Optimal Solution. First, we "guess" a set $A$ of the $t$ highest-cost objects from the optimal solution, where $t$ is chosen to be some sufficiently large constant that depends on $r$ and $\epsilon$. By guessing we mean enumerating all possible $t$-sized subsets of objects from $N$. We use this subset to construct a residual instance in which all objects with costs higher than any object in $A$ are discarded.
-
Stage 2: Constructing a Fractional Solution. Let $N'$ denote the set of objects that remain after Stage 1. We solve a continuous relaxation to obtain a fractional solution $x$ to the residual instance.
-
Stage 3: Rounding the Fractional Solution. The fractional solution $x$ is rounded into an integral solution $S$ using the Rounding Lemma (Lemma 10). The set $S$ satisfies each covering constraint with adequate probability while incurring a small additive cost.
-
Stage 4: Fixing $S$. Finally, since $S$ only satisfies the constraints probabilistically, additional adjustments are needed to ensure all constraints are satisfied. This is done in a simple way by fixing each constraint individually via greedy methods.
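The Python skeleton below is purely illustrative: the stage subroutines are problem-specific placeholders (our naming), and the code only shows how the four stages fit together.

```python
from itertools import combinations

def framework_sketch(objects, cost, constraints, guess_size,
                     solve_relaxation, round_solution, fix_constraint):
    """Four-stage template (sketch). The callables are problem-specific:
    solve_relaxation returns a fractional point for the residual instance (or
    None if the guess is infeasible), round_solution applies the Rounding
    Lemma, and fix_constraint greedily repairs one unsatisfied constraint.
    """
    best = None
    for A in combinations(sorted(objects), guess_size):           # Stage 1
        cap = min((cost[a] for a in A), default=float("inf"))
        residual = [o for o in objects if o in A or cost[o] <= cap]
        x = solve_relaxation(residual, A)                          # Stage 2
        if x is None:
            continue
        S = set(A) | round_solution(x)                             # Stage 3
        for g in constraints:                                      # Stage 4
            if not g(S):
                S |= fix_constraint(g, S, residual)
        if best is None or sum(cost[o] for o in S) < sum(cost[o] for o in best):
            best = S
    return best
```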
4 Multiple Submodular Cover
We prove Theorem 1 (restated below). In the full version, we show how this algorithm can be extended to prove Corollary 2.
We will use $\mathcal{I} = (N, c, \{(f_i, b_i)\}_{i \in [r]})$ to denote the input instance of MSC, which consists of a finite ground set $N$, a non-negative cost function $c: N \to \mathbb{R}_{\ge 0}$, and monotone submodular functions $f_1, \ldots, f_r$ with associated requirements $b_1, \ldots, b_r$. Recall that we assume $r$ is fixed. We will assume (without loss of generality) that $f_i(N) \ge b_i$ for all $i \in [r]$.
Theorem 1. [Restated, see original statement.]
There is a randomized polynomial-time algorithm that given an instance of the MSC problem with fixed $r$ and $\epsilon > 0$ outputs a set $S \subseteq N$ such that (i) $f_i(S) \ge (1 - 1/e - \epsilon)\, b_i$ for all $i \in [r]$ and (ii) $\mathbb{E}[c(S)] \le (1+\epsilon)\,\mathrm{OPT}$ (where $\mathrm{OPT}$ is the cost of the optimal solution to the MSC instance).
4.1 Algorithm Preliminaries
4.1.1 Constructing residual instances
Recall that Stage 1 of our algorithmic framework (Section 3.2) constructs a residual instance after guessing a subset of elements from the optimal solution. Below is a definition which describes how to construct residual and cost-truncated residual instances.
Definition 12 (Residual and Cost-Truncated Residual Instances).
Given an instance $\mathcal{I} = (N, c, \{(f_i, b_i)\}_{i \in [r]})$ and a set $A \subseteq N$, the residual instance of $\mathcal{I}$ with respect to $A$ is $\mathcal{I}^A = (N \setminus A, c, \{(f_i^A, b_i^A)\}_{i \in [r]})$ where $f_i^A(S) = f_i(S \cup A) - f_i(A)$ and $b_i^A = \max(0,\, b_i - f_i(A))$ for all $i \in [r]$. We do not change $c$ since the cost values remain unchanged, but assume its domain is restricted to $N \setminus A$. A cost-truncated residual instance of $\mathcal{I}$ with respect to $A$ is a residual instance in which elements of $N \setminus A$ that have a higher cost than any element in $A$ are also removed. More precisely, the ground set is $\{e \in N \setminus A : c(e) \le \min_{a \in A} c(a)\}$.
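As a small illustration of Definition 12 (our own sketch; the function and variable names are not from the paper), the residual submodular functions are the marginal functions with respect to $A$ and the requirements shrink by $f_i(A)$:

```python
def cost_truncated_residual(fs, bs, cost, ground, A):
    """Build the cost-truncated residual instance w.r.t. a guessed set A (sketch).

    fs: list of monotone submodular functions f_i(frozenset) -> float.
    Returns residual functions f_i^A(S) = f_i(S | A) - f_i(A), residual
    requirements max(0, b_i - f_i(A)), and the truncated ground set containing
    only elements no more expensive than the cheapest element of A.
    """
    A = frozenset(A)
    res_fs = [(lambda f: (lambda S: f(frozenset(S) | A) - f(A)))(f) for f in fs]
    res_bs = [max(0, b - f(A)) for f, b in zip(fs, bs)]
    cap = min((cost[a] for a in A), default=float("inf"))
    res_ground = {e for e in ground if e not in A and cost[e] <= cap}
    return res_fs, res_bs, res_ground
```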
4.1.2 Continuous relaxation of MSC and approximation results
To construct the desired fractional solution in Stage 2, we use the continuous relaxation of MSC, stated below as a feasibility program (MSC-Relax). For an instance $\mathcal{I}$, the objective of (MSC-Relax) is to find a fractional point $x \in [0,1]^N$ whose cost is at most a given budget $B$ and where the value of the multilinear extension $F_i$ of each constraint function $f_i$ at $x$ satisfies the demand $b_i$.
Note that for $B \ge \mathrm{OPT}$, where $\mathrm{OPT}$ is the cost of the optimal solution to the instance, (MSC-Relax) is guaranteed to be feasible.
$$\sum_{e \in N} c(e)\, x_e \le B, \qquad F_i(x) \ge b_i \;\; \forall i \in [r], \qquad x \in [0,1]^N. \tag{MSC-Relax}$$
Finding a feasible solution to the above continuous optimization problem is NP-Hard. The following result can be used to get an approximate solution.
Theorem 13 (Chekuri, Vondrák, and Zenklusen [10] and Chekuri, Jayram, and Vondrák [9]).
For any fixed $\epsilon > 0$, there is a randomized polynomial-time algorithm that, given an instance of (MSC-Relax) and value oracle access to the submodular functions $f_1, \ldots, f_r$, with high probability either correctly outputs that the instance is not feasible or outputs an $x \in [0,1]^N$ such that $\sum_{e \in N} c(e)\, x_e \le B$ and $F_i(x) \ge (1 - 1/e - \epsilon)\, b_i$ for all $i \in [r]$.
4.1.3 Algorithm to fix constraints
In Stage 4 of our algorithm, we need to fix unsatisfied constraints, and for this we can focus on a single submodular covering constraint. The relevant optimization problem is the following:
$$\min\{\, c(S) \;:\; S \subseteq N, \; f(S) \ge b \,\}.$$
Suppose the optimum cost for this is $B$. Our goal is to show that we can find a set $S$ such that $c(S) \le B$ and $f(S) \ge (1 - 1/e)\, b$. For this purpose we guess $B$ and recast the problem as a submodular function maximization problem subject to a knapsack constraint with budget $B$:
$$\max\{\, \min(f(S), b) \;:\; c(S) \le B \,\}.$$
Note that there is a feasible solution to this maximization problem of value at least $b$ (namely, any optimal solution to the covering problem). We can now use the following result. It is based on a modification of the standard greedy algorithm.
Lemma 14 (Sviridenko [17]).
There is a $(1 - 1/e)$-approximation for monotone submodular function maximization subject to a knapsack constraint.
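A minimal sketch of the cost-density greedy underlying Lemma 14 is below; note that the clean $(1 - 1/e)$ guarantee of Sviridenko [17] additionally requires partial enumeration of small seed sets before running this greedy, which we omit here.

```python
def density_greedy(f, cost, ground, budget):
    """Cost-density greedy for monotone submodular maximization under a
    knapsack constraint (sketch; seed-set enumeration omitted).
    Repeatedly adds the affordable element with the best marginal-gain /
    cost ratio until no affordable element improves the value.
    """
    S, spent = set(), 0.0
    while True:
        base = f(frozenset(S))
        best, best_ratio = None, 0.0
        for e in ground - S:
            if spent + cost[e] > budget or cost[e] <= 0:
                continue
            ratio = (f(frozenset(S | {e})) - base) / cost[e]
            if ratio > best_ratio:
                best, best_ratio = e, ratio
        if best is None:
            return S
        S.add(best)
        spent += cost[best]
```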
4.2 Algorithm for MSC
We provide pseudo-code for our algorithm in Algorithm 1 and demarcate the various steps involved in each of the framework stages outlined in Section 3.2. We additionally provide more detailed descriptions of how each of the four stages is instantiated in the context of our MSC approximation.
Stage 1: Choosing $A$ and constructing $\mathcal{I}^A$.
In the first stage we guess sufficiently many (i.e. $t$) high-cost elements from the optimal solution, denoted by $A$. To properly set $t$, we fix accuracy parameters $\delta_1$ and $\delta_2$, each a sufficiently small constant fraction of $\epsilon$, and let $\ell_{\delta_2}$ be the parameter from the Rounding Lemma for accuracy $\delta_2$. With this, we finally set $t$ large enough (roughly $r\ell_{\delta_2}/\delta_2$) so that the additive cost incurred in the rounding stage is at most a small fraction of $\mathrm{OPT}$. All these choices become clear in the analysis. We additionally assume that the optimal solution contains more than $t$ elements; otherwise our guess is an entire optimal solution and we can simply return it as our final solution.
Now, upon guessing (i.e. enumerating) $A$, we construct the cost-truncated residual instance of $\mathcal{I}$ with respect to $A$, called $\mathcal{I}^A$, per Definition 12, whose ground set does not contain any element whose cost is higher than that of the cheapest element in $A$.
Stage 2: Constructing a fractional solution using the continuous relaxation of $\mathcal{I}^A$.
To construct our fractional solution $x$, we use the continuous relaxation of $\mathcal{I}^A$, as defined in Section 4.1.2. Let $\mathrm{OPT}^A$ denote the cost of an optimal solution to $\mathcal{I}^A$. Using Theorem 13 we obtain a vector $x$ s.t. $\sum_{e} c(e)\, x_e \le \mathrm{OPT}^A$ and $F_i^A(x) \ge (1 - 1/e - \delta_1)\, b_i^A$ for all $i \in [r]$.
Stage 3: Rounding $x$ to construct the set $S_1$.
Now we use our Rounding Lemma (Lemma 10) to round $x$ to an integral solution $S_1$. The precise usage of the Rounding Lemma for this algorithm is as follows: given the residual ground set, the cost function $c$, the residual functions $f_i^A$, and their corresponding multilinear extensions $F_i^A$, our fractional point $x$ satisfies $F_i^A(x) \ge \hat{b}_i$ for all $i \in [r]$, where $\hat{b}_i = (1 - 1/e - \delta_1)\, b_i^A$. The Rounding Lemma, applied with accuracy parameter $\delta_2$, thus outputs an $S_1$ s.t. (i) $\mathbb{E}[c(S_1)] \le \sum_e c(e)\, x_e + r\ell_{\delta_2}\, c_{\max}$ (where $c_{\max}$ is the cost of the most expensive element in the truncated ground set) and (ii) for each $i$, $f_i^A(S_1) \ge (1-\delta_2)\,\hat{b}_i$ with probability at least $1 - \delta_2$. Note that $(1-\delta_2)\,\hat{b}_i$ is at least $(1 - 1/e - \epsilon)\, b_i^A$ per our choice of $\delta_1$ and $\delta_2$.
Stage 4: Greedily fixing unsatisfied constraints.
For each constraint $i$ that has yet to be sufficiently satisfied by $S_1$, i.e., $f_i^A(S_1) < (1 - 1/e - \epsilon)\, b_i^A$, we use the procedure outlined in Section 4.1.3 to reformulate the problem of finding a set $S_i'$ that "fixes" (i.e. satisfies) it as a submodular maximization problem subject to a single knapsack constraint. The budget for the knapsack constraint will be $\mathrm{OPT}^A$. Using the greedy procedure from Lemma 14 we obtain, for each unsatisfied $i$, a set $S_i'$ s.t. $c(S_i') \le \mathrm{OPT}^A$ and $f_i^A(S_i') \ge (1 - 1/e)\, b_i^A$.
Letting $S'$ be the union of the fixing sets $S_i'$, our algorithm returns the elements in $A \cup S_1 \cup S'$. In the following subsection, we show how this output satisfies the coverage and cost bounds in Theorem 1.
Analysis overview.
We defer the formal analysis to the full version and give a brief overview here. The analysis is not difficult modulo the powerful technical tools that we used along the way. Let $\mathrm{OPT}$ be the optimum value of $\mathcal{I}$; the residual instance after picking $A$ has a feasible solution of cost $\mathrm{OPT} - c(A)$. Stage 2 solves a continuous relaxation for the problem which outputs a fractional solution $x$ that satisfies the residual constraints $b_i^A$ to a factor of $(1 - 1/e - \delta_1)$ with $\sum_e c(e)\, x_e \le \mathrm{OPT} - c(A)$. Then we apply our Rounding Lemma to round $x$ to a solution $S_1$ that satisfies each constraint (approximately) with probability at least $1 - \delta_2$ while incurring an additive cost that we can bound as a small fraction of $\mathrm{OPT}$. This is because we guessed sufficiently many high-cost elements for $A$: every remaining element has cost at most $\min_{a \in A} c(a) \le \mathrm{OPT}/t$. Since the probability of a constraint remaining unsatisfied after rounding is at most $\delta_2$, the expected cost of fixing it is at most $\delta_2 \cdot \mathrm{OPT}$ and hence the total expected cost of fixing the constraints is at most $r\delta_2 \cdot \mathrm{OPT}$. Thus the total expected cost is at most $c(A) + (\mathrm{OPT} - c(A)) + O(\epsilon)\,\mathrm{OPT} \le (1 + O(\epsilon))\,\mathrm{OPT}$. Each constraint is satisfied to $(1 - 1/e - O(\epsilon))\, b_i$. The running time is dominated by the time to guess the elements in $A$, which is polynomial since with $r$ and $\epsilon$ fixed, $t$ is a constant.
5 Covering Coverage Functions
In this section, we prove Theorem 15 (stated below). Recall that the CCF problem (Definition 6) is a useful general problem that can model problems such as ColorVC, ColorSC, and various facility location and clustering problems, to name a few. As such, Theorem 15 subsumes Theorem 3, which we discussed in the introduction.
We begin by discussing some technical components that are needed for our algorithm, before proceeding with its description. We conclude the section with a brief overview of the algorithm analysis. Again, complete details of the analysis and proof of Theorem 15 can be found in the full version.
For the remainder of this section, we use $\mathcal{I}$ to denote an instance of the CCF problem with set system $(U, \mathcal{S})$, cost function $c$, constraint matrix $A$, and requirements $b_i$ for $i \in [r]$, where $r$ is assumed to be fixed.
Theorem 15.
Given an instance of the CCF problem such that the underlying set cover problem has a $\beta$-approximation via the natural LP and comes from a deletion-closed set system, and for any fixed $\epsilon > 0$, there exists a randomized polynomial time (in $n$ and $m$) algorithm that returns a collection $\mathcal{F} \subseteq \mathcal{S}$ such that (i) $\mathcal{F}$ is a feasible solution that satisfies all the covering constraints and (ii) $\mathbb{E}[c(\mathcal{F})] \le O(\beta)\cdot\mathrm{OPT}$.
5.1 Algorithm Preliminaries
5.1.1 Operations on constraints
At times, we will restrict our instance to a subset of elements of the universe. We define this as follows.
Definition 16 (Restricting the Universe).
Given an instance $\mathcal{I}$ of the CCF problem and a subset of elements $U' \subseteq U$, we define restricting the universe to the elements of $U'$ as follows: replace each set $X \in \mathcal{S}$ by $X \cap U'$ and update the universe to $U'$. For each $j \in U$ such that $j \notin U'$, delete column $j$ of the matrix $A$.
Since we assume that the underlying set system of our initial instance is deletion closed, the restricted instance is a valid instance. Next, our algorithm chooses sets to be a part of the solution in multiple stages. Accordingly, we update the instance to reflect this selection. This is defined as follows.
Definition 17 (Residual Instance-).
Given an initial instance $\mathcal{I}$ and a collection of sets $\mathcal{A} \subseteq \mathcal{S}$, we define the residual instance of $\mathcal{I}$ with respect to $\mathcal{A}$ as follows: for all $i \in [r]$ and all $j \in U$ covered by $\mathcal{A}$, set $A_{ij} = 0$. For all $i \in [r]$, set the residual requirement to $b_i' = \max(0,\, b_i - f_i(\mathcal{A}))$. In addition, for all $i \in [r]$, let $f_i'$ denote the submodular function that corresponds to the $i$-th residual constraint.
5.1.2 Greedy algorithm to fix CCF constraints
Recall that our framework requires us to have the ability to satisfy a single constraint. For the CCF problem, we use the following from [8]. Consider a universe of $n$ elements with a collection $\mathcal{S}$ of subsets of the universe. Each element $j$ has a weight $w_j$ and each set $X$ has a cost $c(X)$. Suppose we want to pick a sub-collection of sets with maximum weighted coverage such that a budget constraint $B$ is satisfied. We can write the following LP relaxation, (MBC-LP), for this problem.
$$\begin{aligned}
\max \;& \sum_{j} w_j\, z_j & \\
\text{s.t. } & z_j \le \sum_{X:\, j \in X} x_X && \forall j \\
& \sum_{X \in \mathcal{S}} c(X)\, x_X \le B & \\
& x_X, z_j \in [0, 1] && \forall X \in \mathcal{S},\; j
\end{aligned} \tag{MBC-LP}$$
This is called the Max-Budgeted-Cover problem. One can run the standard greedy algorithm for this problem and [8] shows the following guarantee.
Lemma 18 (Chekuri et al. [8]).
Let $\mathrm{OPT}_{LP}$ be the optimum value of (MBC-LP) for a given instance of Max-Budgeted-Cover with budget $B$. Suppose the Greedy algorithm is run until the total cost of the picked sets equals or exceeds $B$ for the first time. Then the weight of the elements covered by Greedy is at least $(1 - 1/e)\,\mathrm{OPT}_{LP}$.
Remark 19.
The proof of the above lemma yields a stronger result: the Greedy algorithm achieves the same guarantee even when restricted to the support of the fractional solution. This will be useful in implicit settings.
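The greedy referred to in Lemma 18 differs from the usual budgeted greedy only in its stopping rule: it keeps adding the best bang-per-buck set until the accumulated cost first reaches or exceeds the budget. A sketch (ours) follows; restricting `candidates` to the support of the fractional solution gives the variant in Remark 19.

```python
def mbc_greedy(weights, sets, cost, budget, candidates):
    """Greedy for Max-Budgeted-Cover (sketch). Picks sets by the ratio of
    newly covered weight to cost, stopping the first time the total cost of
    the picked sets equals or exceeds the budget."""
    covered, picked, spent = set(), [], 0.0
    while spent < budget:
        best, best_ratio = None, 0.0
        for X in candidates:
            if X in picked or cost[X] <= 0:
                continue
            gain = sum(weights[j] for j in sets[X] if j not in covered)
            if gain / cost[X] > best_ratio:
                best, best_ratio = X, gain / cost[X]
        if best is None:
            break
        picked.append(best)
        covered |= sets[best]
        spent += cost[best]
    return picked
```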
5.2 Algorithm for CCF
Stage 1: Guess the high cost elements.
Following the framework, we guess $t$ high-cost sets from some fixed optimal solution (Line 1). The number $t$ is chosen, as in the MSC algorithm, to be a sufficiently large constant depending only on $r$ and $\epsilon$, so that the additive cost incurred later is a small fraction of $\mathrm{OPT}$. After this, we construct a residual instance to reflect this selection and work with the residual instance for the remaining stages.
Stage 2: Construct Fractional Solution using the LP Relaxation.
We solve the natural LP relaxation of (CCF-IP) for the residual instance to obtain a fractional solution $(x, z)$ whose cost is at most the cost of an optimal solution to the residual instance.
Stage 2′: Obtain a slack in covering.
We divide the elements into two categories: the heavy elements ($H$) are those for which the fractional coverage $z_j$ is at least $1 - 1/e$, while the shallow elements are the remaining non-heavy elements.
We will first cover the heavy elements completely (in Line 4). This is done by restricting the universe to $H$, using $x$ scaled up by $\frac{e}{e-1}$ and restricted to $H$ as a fractional solution for the canonical set cover LP, and then using the $\beta$-approximation promised in the problem to obtain an integral covering. Here we use the fact that the instance is deletion-closed. We then scale the fractional solution for the shallow elements by roughly $\frac{e}{e-1}$ to obtain a corresponding slack in covering.
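A small sketch of this step follows (illustrative only; the threshold $1 - 1/e$ and the scale factor are the natural choices described above, and the names are ours).

```python
import math

def split_and_scale(z, eps):
    """Stage 2' sketch: classify elements by LP coverage and scale the shallow part.

    z: dict element -> fractional coverage from the LP.
    Heavy elements (coverage >= 1 - 1/e) are handed to the beta-approximate
    set cover algorithm; shallow coverages are scaled up by e/((e-1)(1-eps))
    to create the slack needed before applying the Rounding Lemma.
    """
    tau = 1.0 - 1.0 / math.e
    scale = 1.0 / (tau * (1.0 - eps))
    heavy = {j for j, v in z.items() if v >= tau}
    shallow = {j: min(1.0, v * scale) for j, v in z.items() if v < tau}
    return heavy, shallow
```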
Stage 3: Round fractional solution.
We apply the Rounding Lemma (Lemma 10) to the scaled fractional solution, viewing the residual constraints on the shallow elements as weighted coverage (and hence monotone submodular) functions.
Stage 4: Greedy fixing of the solution.
For any constraint that is not satisfied, we will fix it via a greedy algorithm. Following the greedy fix for Max-Budgeted-Cover in Section 5.1.2, this works as follows: from the collection of sets not yet picked, pick sets greedily in order of decreasing bang-per-buck (the ratio of the marginal value of the set to its cost).
Analysis overview.
We defer the formal analysis to the full version and give a brief overview here. First, note that $\mathrm{OPT}' \le \mathrm{OPT} - c(\mathcal{A})$, where $\mathrm{OPT}$ and $\mathrm{OPT}'$ are the optimal values for $\mathcal{I}$ and $\mathcal{I}'$ (the residual instance of $\mathcal{I}$ with respect to $\mathcal{A}$), and $\mathcal{A}$ is the collection of sets guessed in Stage 1. In Stage 2 we solve an LP relaxation to obtain a fractional solution $(x, z)$ that satisfies all the constraints exactly with $\sum_X c(X)\, x_X \le \mathrm{OPT}'$.
Following our framework, we want to use our key lemma to round $x$. However, the lemma is based on the multilinear relaxation of the residual submodular functions while $x$ satisfies the constraints based on the LP. Hence, we must address the issue of the correlation gap [5, 19], since working with the multilinear relaxation loses a factor of $(1 - 1/e)$ in covering the constraint and our goal is to cover it exactly. To address this, we scale up the LP solution by a factor of roughly $\frac{e}{e-1}$. Note that in doing so, there will be some points that are already covered beyond $1 - 1/e$ and their coverage should not exceed $1$. Following [13, 8], we first separate out the points that are already covered to more than $1 - 1/e$. Scaling covers them in the LP to $1$, and we can cover all those points via the LP-based Set Cover algorithm promised to us by the deletion-closed set system. The cost of covering them is at most $\beta$ times the scaled LP cost. For points that are covered to less than $1 - 1/e$, the scaling increases their coverage by a factor of $\frac{e}{e-1}$, allowing us to work with the multilinear relaxation and avoid the correlation gap. This allows us to use the Rounding Lemma on these points to fully cover them. A minor technicality to note is that the Rounding Lemma loses a $(1-\epsilon)$-factor in the coverage so we actually must scale up the LP by $\frac{e}{(e-1)(1-\epsilon)}$. Thus, the expected cost of the random rounding is at most $\frac{e}{(e-1)(1-\epsilon)}\,\mathrm{OPT}'$ plus the additive cost, which we bound as an $\epsilon$-fraction of $\mathrm{OPT}$ as in the MSC analysis.
To fix the unsatisfied constraints, and achieve exact coverage, we do the analysis with respect to the scaled-up LP solution and take advantage of Lemma 18. The expected cost of
fixing can be bounded by $O(\epsilon)\,\mathrm{OPT}$ since each constraint is satisfied with probability at least $1-\epsilon$ in the random rounding.
Thus the total cost is at most $c(\mathcal{A}) + (\beta + 1)\,\frac{e}{(e-1)(1-\epsilon)}\,\mathrm{OPT}' + O(\epsilon)\,\mathrm{OPT}$, which is $O(\beta)\cdot\mathrm{OPT}$.
Each constraint is satisfied fully as per the discussion. The running time of our algorithm is dominated by guessing $\mathcal{A}$. With $r$ and $\epsilon$ fixed, the size of $\mathcal{A}$ is a constant and the running time is polynomial.
6 Applications to Facility Location with Multiple Outliers
In this section, we give an overview of applying the framework to the problem of Facility Location with Multiple Outliers (FLMO). For complete details and an application to facility location to minimize the sum of radii, we refer the reader to the full version of the paper. FLMO is a generalization of the classical facility location problem, in which clients belong to $r$ color classes, and the objective is to cover a required number of clients from each class. Specifically, we are given a set of facilities $F$, a set of clients $C$, where each client belongs to at least one of $r$ color classes $C_1, \ldots, C_r$ (i.e. $C = \bigcup_{\ell} C_\ell$), and a metric space $(F \cup C, d)$. Each facility $i \in F$ has an associated non-negative opening cost $f_i$, and each client $j$ can be served by assigning it to an open facility $i$, incurring connection cost $d(i,j)$. For each color class $C_\ell$, we are given a required demand $k_\ell$ (where $k_\ell \le |C_\ell|$). The goal is to open facilities and assign clients to facilities such that at least $k_\ell$ clients from $C_\ell$ are serviced by a facility, and where the total facility and connection costs of opened facilities and their assigned clients is minimized.
FLMO can be naturally expressed as an instance of the CCF problem over an exponentially large set system, where the universe of elements is $C$ and where each set corresponds to a star defined by a facility $i$ and a subset $C'$ of clients assigned to it. The goal is to select a collection of such stars to cover at least $k_\ell$ clients from each color class $C_\ell$, while minimizing the total cost, where the cost of a star $(i, C')$ is given by $f_i + \sum_{j \in C'} d(i,j)$. We can capture this problem via the following exponentially sized CCF-esque IP.
$$\begin{aligned}
\min \;& \sum_{i \in F} \sum_{C' \subseteq C} \Big( f_i + \sum_{j \in C'} d(i,j) \Big)\, x_{i,C'} & \\
\text{s.t. } & \sum_{i \in F} \sum_{C' \ni j} x_{i,C'} \ge z_j && \forall j \in C \\
& \sum_{j \in C_\ell} z_j \ge k_\ell && \forall \ell \in [r] \\
& x_{i,C'}, z_j \in \{0, 1\} &&
\end{aligned} \tag{FLMO-IP}$$
Since there are exponentially many stars to be considered, we apply a refined version of the framework that avoids explicitly enumerating all possible stars. The key change is that instead of guessing the high-cost stars, we guess certain high-cost components of the optimal solution.
We now describe each stage of our refined CCF-based algorithm for FLMO, beginning with a pre-processing step (Stage 0) that enables efficient guessing and a separation oracle.
Stage 0: Scaling and pruning facility and connection costs.
As is standard, we assume that the value of the optimal solution $\mathrm{OPT}$ is guessed up to a $(1+\epsilon)$ factor (we overload $\mathrm{OPT}$ to denote both the true and the guessed optimal cost). This introduces only a polynomial overhead in runtime and at most a $(1+\epsilon)$ loss in the approximation ratio. In addition to guessing $\mathrm{OPT}$, we define a polynomial scaling factor $\sigma = \frac{n}{\epsilon\,\mathrm{OPT}}$, where $n = |F| + |C|$.
Using the guessed $\mathrm{OPT}$ and $\sigma$, we can now eliminate any unnecessary or negligible facility and connection costs. First, we disallow (i.e. remove from the instance) any facility $i$ for which $f_i > \mathrm{OPT}$ since we know it will never be selected in the optimal solution. We also disallow any connection between a facility $i$ and a client $j$ for which $d(i,j) > \mathrm{OPT}$. Next, we scale the remaining facility and connection costs, replacing $f_i$ by $\lceil \sigma f_i \rceil$ and $d(i,j)$ by $\lceil \sigma\, d(i,j) \rceil$ (so, in particular, negligible costs are rounded up to $1$). This will guarantee that all scaled facility and connection costs are integer values in $\{0, 1, \ldots, \lceil n/\epsilon \rceil\}$. Thus, the scaled cost of each star will be some polynomially bounded integer. The discretized distances may no longer form a metric (since they may violate the triangle inequality); however, this is fine since our framework did not require set costs to satisfy such properties. The metric property for distances will only be required in Stage 2', when we must solve the canonical facility location problem to cover heavy clients. For that stage, we will show that we can viably revert back to using the original metric $d$.
This pre-processing step will incur at most an additive $\frac{\epsilon\,\mathrm{OPT}}{n}$ term per cost term, and thus will result in at most a $(1+\epsilon)$ multiplicative increase in the total cost of the solution, but this can be absorbed into our final approximation factor. In the rest of the section we assume that we have guessed $\mathrm{OPT}$ correctly to within a $(1+\epsilon)$ factor.
Stage 1: Guessing high-cost components.
This stage consists of two parts: (1) guess some high-cost components; (2) create a residual instance accounting for the guess.
Guess high-cost components.
In the CCF framework, we needed to guess the $t$ most expensive stars from the optimal solution. This is because our rounding procedure in Lemma 10 and the greedy fix step later choose these many extra sets to satisfy the constraints, and we account for these sets via the initial guesses. For the FLMO setting, explicitly guessing a star would require exponential time. To circumvent this we instead guess partial information about the $t$ highest-cost stars. However, we must now guess more information due to this partial knowledge.
Specifically, we guess tuples of the form $(i_s, D_s, \mathrm{cost}_s)$ for $s = 1, \ldots, t$, where $i_s$ is a facility from one of the $t$ highest-cost stars, $D_s$ is a set of the farthest clients assigned to $i_s$ in the optimal solution (a constant number of them), and $\mathrm{cost}_s$ is the total cost of the full optimal star at facility $i_s$, i.e., $f_{i_s} + \sum_{j \in C^*_s} d(i_s, j)$, where $C^*_s$ denotes the full client set served by $i_s$. We note that if there are fewer than $t$ stars in the optimum solution, the problem becomes simpler; in that case we would have guessed all of the optimum's facilities. This will be clear after a description of the remaining analysis and we remark later as to how to deal with that case. Let $c(\mathcal{A})$ be the cost of the guessed portion of the solution.
Create residual instance.
Let $\mathcal{A}$ denote the collection of guessed partial stars. We define $F_\mathcal{A}$ as the set of guessed high-cost facilities, and $C_\mathcal{A}$ as the set of clients guessed to be served by those stars. Let $\mathrm{cost}_{\min}$ be the smallest guessed star cost across all tuples.
Now, to describe the residual instance with respect to these guessed components, first update the color demands to $k_\ell' = \max(0,\, k_\ell - |C_\ell \cap C_\mathcal{A}|)$. Next, restrict attention to clients in $C \setminus C_\mathcal{A}$ and define for each facility $i$ a restricted family of allowable stars, denoted by $\mathcal{S}_i$, and updated star costs, denoted by $c'$, as follows:
-
For $i_s \in F_\mathcal{A}$, we allow only singleton stars $(i_s, \{j\})$ with $j \in C \setminus C_\mathcal{A}$ and $d(i_s, j)$ at most the largest distance in the guessed set $D_s$ (for the corresponding tuple $s$). For these stars, we define the updated cost as $c'(i_s, \{j\}) = d(i_s, j)$, since after guessing $(i_s, D_s, \mathrm{cost}_s)$, we already account for opening facility $i_s$ and the connection costs for the clients in $D_s$.
-
For $i \notin F_\mathcal{A}$, we allow any star $(i, S)$ with $S \subseteq C \setminus C_\mathcal{A}$ and total cost at most $\mathrm{cost}_{\min}$. For these stars, the cost remains $c'(i, S) = f_i + \sum_{j \in S} d(i, j)$.
Restricted Primal (FLMO-LP):
$$\begin{aligned}
\min \;& \sum_{i \in F} \sum_{S \in \mathcal{S}_i} c'(i, S)\, x_{i,S} & \\
\text{s.t. } & \sum_{i \in F} \sum_{S \in \mathcal{S}_i:\, j \in S} x_{i,S} \ge z_j && \forall j \in C \setminus C_\mathcal{A} \\
& \sum_{j \in C_\ell \setminus C_\mathcal{A}} z_j \ge k_\ell' && \forall \ell \in [r] \\
& 0 \le z_j \le 1, \;\; x_{i,S} \ge 0 &&
\end{aligned}$$
Dual:
$$\begin{aligned}
\max \;& \sum_{\ell \in [r]} k_\ell'\, \alpha_\ell - \sum_{j} \gamma_j & \\
\text{s.t. } & \sum_{j \in S} \mu_j \le c'(i, S) && \forall i \in F,\; S \in \mathcal{S}_i \\
& \sum_{\ell:\, j \in C_\ell} \alpha_\ell \le \mu_j + \gamma_j && \forall j \in C \setminus C_\mathcal{A} \\
& \alpha, \mu, \gamma \ge 0 &&
\end{aligned} \tag{FLMO-LP Primal \& Dual}$$
Stage 2: Constructing a fractional solution via the dual.
We now solve an LP relaxation for the residual instance defined in Stage 1. This may still contain exponentially many variables. To solve this LP efficiently, we apply the ellipsoid method to its dual (see FLMO-LP Primal & Dual), which has polynomially many variables but exponentially many constraints. This requires a polynomial-time separation oracle which, given a candidate dual solution, either certifies feasibility or returns a violated constraint. That is, it identifies a residual star $(i, S)$ such that $\sum_{j \in S} \mu_j > c'(i, S)$. Again, since facility costs and distances are integral and polynomially bounded (from the pre-processing in Stage 0), we can design such an oracle. We formalize this as the following lemma.
Lemma 20.
Assuming that all distances and facility costs are integral and polynomially bounded, there exists a polynomial-time separation oracle for the dual LP given in FLMO-LP Primal & Dual.
Note that the LP may not be feasible if our guess was incorrect. In this case we can discard the guess. For a correct guess, we denote the cost of this LP solution by $\mathrm{LP}^*$. Note that $\mathrm{LP}^* \le \mathrm{OPT} - c(\mathcal{A})$.
Stage 2’: Handling Heavy vs. Shallow Clients.
Given the fractional solution to the residual LP, we now have a polynomial-sized support of stars. We begin by partitioning the clients into heavy clients and shallow clients based on their fractional coverage: let $C_{\mathrm{heavy}} = \{j : z_j \ge 1 - 1/e\}$ and $C_{\mathrm{shallow}} = \{j : z_j < 1 - 1/e\}$, where we use the same threshold as in the CCF framework. To cover the heavy clients, we prove the following lemma.
Lemma 21.
Given $(x, z)$, a feasible solution to the residual LP with cost $\mathrm{LP}^*$, there is an efficient algorithm to cover the heavy clients with cost at most $O(\rho)\cdot\mathrm{LP}^*$ where $\rho$ is the approximation factor of the underlying LP-based approximation algorithm for UCFL.
Now, to proceed with Stages and for the shallow clients, we perform the following steps.
-
Restrict the instance to shallow clients. In our fractional solution for the primal in FLMO-LP Primal & Dual, we update the stars to only contain shallow clients and remove all the heavy clients. The $x$ variables remain unchanged. The $z$ variables are restricted to only shallow clients. We also update the covering requirements to reflect that heavy clients have already been covered: $k_\ell'' = \max(0,\, k_\ell' - |C_\ell \cap C_{\mathrm{heavy}}|)$ for each color class $\ell$.
-
Scale the solution. We scale the solution by $\frac{e}{(e-1)(1-\epsilon)}$. Note that the new scaled solution is a feasible solution for the primal in FLMO-LP Primal & Dual with cost at most $\frac{e}{(e-1)(1-\epsilon)}\,\mathrm{LP}^*$ and each color's residual requirement is over-covered by a factor of roughly $\frac{e}{e-1}$.
Stage 3: Rounding the fractional solution.
We work with the residual instance restricted to shallow clients, scaled as described in the previous stage. We now want to apply Lemma 10 to round this fractional solution. The algorithm in the lemma potentially picks a constant number of extra stars per constraint to ensure that randomized rounding satisfies the constraints with high probability. Algorithmically, we still perform the same step. For each color class, we select (up to) the top stars, greedily by marginal coverage as in Claim 11, from the support of the fractional solution $x$. The proof of Lemma 10 easily extends to picking only from the support. Finally, we randomly round the remaining stars with probabilities given by $x$. Suppose the set of stars chosen in this stage is $\mathcal{S}_3$. We can prove the following key lemma.
Lemma 22.
After selecting the stars $\mathcal{S}_3$, each constraint is satisfied with probability at least $1-\epsilon$ and the expected cost of $\mathcal{S}_3$ is bounded by $\frac{e}{(e-1)(1-\epsilon)}\,\mathrm{LP}^* + \epsilon\,\mathrm{OPT}$.
Stage 4: Greedily fixing remaining unsatisfied constraints.
If any color class constraint remains unsatisfied after Stage 3, we fix it by greedily selecting stars until the constraint becomes satisfied. As in the original framework, we can employ the greedy fix from the Max-Budgeted-Cover heuristic. The Max-Budgeted-Cover instance for this problem is to select a subset of stars that cover the maximum number of clients of the given color at a cost bounded by the budget. Further, as per the remark after Lemma 18, this Greedy can be executed by only looking at the support of the LP solution. Therefore the step can be performed in polynomial time. Similar to the analysis for CCF, this greedy fix is applied to a color class with probability bounded by $\epsilon$. Further, we can show that the total expected cost to fix all constraints is bounded by $O(\epsilon)\cdot(\mathrm{OPT} + t\, c_{\max})$ where $c_{\max}$ is the cost of the highest-cost star in the support. Each star in the support has cost at most $\mathrm{cost}_{\min} \le \mathrm{OPT}/t$ due to the guessing in Stage 1 (as discussed in the proof of Lemma 22). Hence the expected fixing cost is $O(\epsilon)\,\mathrm{OPT}$.
Putting it together, we obtain the following result. The analysis is similar to that of CCF. The running time is polynomial in the input size for fixed $r$ and $\epsilon$.
Theorem 4. [Restated, see original statement.]
Let $\rho$ be the approximation ratio for UCFL via the natural LP relaxation. Then there is a randomized polynomial-time algorithm that given an instance of FLMO with fixed $r$ and $\epsilon > 0$ yields an $O(\rho)$-approximate solution.
References
- [1] Sayan Bandyapadhyay, Aritra Banik, and Sujoy Bhore. On fair covering and hitting problems. In Graph-Theoretic Concepts in Computer Science: 47th International Workshop, WG 2021, Warsaw, Poland, June 23–25, 2021, Revised Selected Papers 47, pages 39–51. Springer, 2021. doi:10.1007/978-3-030-86838-3_4.
- [2] Suman K Bera, Shalmoli Gupta, Amit Kumar, and Sambuddha Roy. Approximation algorithms for the partition vertex cover problem. Theoretical Computer Science, 555:2–8, 2014. doi:10.1016/J.TCS.2014.04.006.
- [3] Marcin Bienkowski, Jaroslowaw Byrka, Marek Chrobak, Neil Dobbs, Tomasz Nowicki, Maxim Sviridenko, Grzegorz Świrszcz, and Neal E Young. Approximation algorithms for the joint replenishment problem with deadlines. Journal of Scheduling, 18(6):545–560, 2015. doi:10.1007/S10951-014-0392-Y.
- [4] Nader H Bshouty and Lynn Burroughs. Massaging a linear programming solution to give a 2-approximation for a generalization of the vertex cover problem. In STACS 98: 15th Annual Symposium on Theoretical Aspects of Computer Science Paris, France, February 25–27, 1998 Proceedings 15, pages 298–308. Springer, 1998. doi:10.1007/BFB0028569.
- [5] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function subject to a matroid constraint. In International Conference on Integer Programming and Combinatorial Optimization, pages 182–196. Springer, 2007.
- [6] Timothy M Chan, Elyot Grant, Jochen Könemann, and Malcolm Sharpe. Weighted capacitated, priority, and geometric set cover via improved quasi-uniform sampling. In Proceedings of the twenty-third annual ACM-SIAM symposium on Discrete Algorithms, pages 1576–1585. SIAM, 2012. doi:10.1137/1.9781611973099.125.
- [7] Moses Charikar, Samir Khuller, David M Mount, and Giri Narasimhan. Algorithms for facility location problems with outliers. In SODA, volume 1, pages 642–651. Citeseer, 2001. URL: http://dl.acm.org/citation.cfm?id=365411.365555.
- [8] Chandra Chekuri, Tanmay Inamdar, Kent Quanrud, Kasturi Varadarajan, and Zhao Zhang. Algorithms for covering multiple submodular constraints and applications. Journal of Combinatorial Optimization, 44(2):979–1010, 2022. doi:10.1007/S10878-022-00874-X.
- [9] Chandra Chekuri, T.S. Jayram, and Jan Vondrak. On multiplicative weight updates for concave and submodular function maximization. In Proceedings of the 2015 Conference on Innovations in Theoretical Computer Science, ITCS ’15, pages 201–210, New York, NY, USA, 2015. Association for Computing Machinery. doi:10.1145/2688073.2688086.
- [10] Chandra Chekuri, Jan Vondrak, and Rico Zenklusen. Dependent randomized rounding via exchange properties of combinatorial structures. In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, pages 575–584, 2010. doi:10.1109/FOCS.2010.60.
- [11] Uriel Feige. A threshold of ln n for approximating set cover. Journal of the ACM (JACM), 45(4):634–652, 1998. doi:10.1145/285055.285059.
- [12] Sariel Har-Peled and Mitchell Jones. Few cuts meet many point sets. Algorithmica, 85(4):965–975, 2023. doi:10.1007/S00453-022-01059-Y.
- [13] Tanmay Inamdar and Kasturi Varadarajan. On Partial Covering For Geometric Set Systems. In Bettina Speckmann and Csaba D. Tóth, editors, 34th International Symposium on Computational Geometry (SoCG 2018), volume 99 of Leibniz International Proceedings in Informatics (LIPIcs), pages 47:1–47:14, Dagstuhl, Germany, 2018. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.SoCG.2018.47.
- [14] David S Johnson. Approximation algorithms for combinatorial problems. In Proceedings of the fifth annual ACM symposium on Theory of computing, pages 38–49, 1973. doi:10.1145/800125.804034.
- [15] Shi Li. A 1.488 approximation algorithm for the uncapacitated facility location problem. Information and Computation, 222:45–58, 2013. doi:10.1016/J.IC.2012.01.007.
- [16] Varun Suriyanarayana, Varun Sivashankar, Siddharth Gollapudi, and David B Shmoys. Improved approximation algorithms for the joint replenishment problem with outliers, and with fairness constraints. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 2793–2828. SIAM, 2024. doi:10.1137/1.9781611977912.99.
- [17] Maxim Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41–43, 2004. doi:10.1016/S0167-6377(03)00062-2.
- [18] Kasturi Varadarajan. Weighted geometric set cover via quasi-uniform sampling. In Proceedings of the forty-second ACM symposium on Theory of computing, pages 641–648, 2010. doi:10.1145/1806689.1806777.
- [19] Jan Vondrák. Submodularity in combinatorial optimization. PhD thesis, Charles University, 2007. Available at https://theory.stanford.edu/~jvondrak/data/KAM_thesis.pdf.
- [20] Jan Vondrák. A note on concentration of submodular functions. arXiv preprint arXiv:1005.2791, 2010. arXiv:1005.2791.
- [21] Laurence A Wolsey. An analysis of the greedy algorithm for the submodular set covering problem. Combinatorica, 2(4):385–393, 1982. doi:10.1007/BF02579435.