
Improved FPT Approximation for Sum of Radii Clustering with Mergeable Constraints

Sayan Bandyapadhyay ORCID Portland State University, OR, USA Tianzhi Chen ORCID Portland State University, OR, USA
Abstract

In this work, we study k-min-sum-of-radii (k-MSR) clustering under mergeable constraints. k-MSR seeks to group data points using a set of up to k balls such that the sum of the radii of the balls is minimized. A clustering constraint is called mergeable if merging two clusters that satisfy the constraint results in a cluster that also satisfies the constraint. Many popularly studied constraints are mergeable, including fairness constraints and lower bound constraints.

In our work, we design a (4+ϵ)-approximation for k-MSR under any given mergeable constraint with runtime $2^{O(\frac{k}{\epsilon}\log^2\frac{k}{\epsilon})}\, n^4$, i.e., fixed-parameter tractable in k for constant ϵ. Our result directly improves upon the FPT (6+ϵ)-approximation by Carta et al. [10]. We also provide a hardness result that excludes the exact solvability of k-MSR under any given mergeable constraint in time $f(k)\, n^{o(k)}$, assuming ETH is true.

Keywords and phrases:
sum-of-radii clustering, mergeable constraints, approximation algorithm
Category:
APPROX
Copyright and License:
© Sayan Bandyapadhyay and Tianzhi Chen; licensed under Creative Commons License CC-BY 4.0
2012 ACM Subject Classification:
Theory of computation → Approximation algorithms analysis
Funding:
This work was supported by the National Science Foundation under Grant No. AF 2311397.
Editors:
Alina Ene and Eshan Chattopadhyay

1 Introduction

Given a set of points in a metric space and an integer k>0, the k-min-sum-of-radii clustering problem (k-MSR for short) seeks to group the points using a set of at most k balls, called clusters, each centered at a point, minimizing the sum of the radii of those balls. Like most clustering objectives, finding an optimal k-MSR clustering is NP-hard, as shown by Gibson et al. [21]. They also designed a quasi-polynomial-time approximation scheme (QPTAS) for k-MSR in the same work, implying it is unlikely that k-MSR is APX-hard under standard complexity-theoretic assumptions. Consequently, substantial attention has been directed toward designing approximation algorithms. Currently, the best-known polynomial-time approximation factor is 3+ϵ [9].

To highlight the motivation for studying k-MSR, we compare it to k-center and k-median, two popular clustering objectives. k-center is similar to k-MSR, but instead of minimizing the sum of the radii of the balls, it minimizes the radius of the largest ball. Simple 2-approximation algorithms exist for k-center [22, 23], and variants of k-center are relatively easy to solve. However, as a trade-off, k-center is prone to outliers, as well as artifacts such as the dissection effect shown in Figure 1, where clusters might be split unnecessarily. k-median, on the other hand, seeks to minimize the sum of each point's distance to its closest center, rather than the radii of the clusters. k-median and its variants are relatively more difficult to solve, but their robustness has given rise to their popularity [3, 16]. Informally, the k-MSR problem lies between k-center and k-median – it reduces the dissection effect seen in k-center, while often allowing for simpler solutions than those typically admitted by k-median, which motivates our study.

Figure 1 (figure omitted): An example of the dissection effect in k-center. In Figure 1(a), k-center splits a group of points that should have been in the left cluster and assigns them to the cluster on the right in an effort to decrease the maximum ball size. In Figure 1(b), k-MSR mitigates the issue and provides better separation between clusters.

k-MSR clustering has also been studied under additional constraints, popularly known as constrained variants. In our work, we focus on a specific type of constraint, namely mergeable constraints. A clustering constraint is said to be mergeable if merging two clusters that satisfy the constraint results in a cluster that also satisfies the constraint. The property has proven useful for analyzing constrained k-MSR clustering, with a number of works dedicated to it [10, 18, 6]. A popular class of mergeable constraints includes many fairness constraints. First introduced by Chierichetti et al. [15], given a partition of the input dataset into groups based on some attribute (commonly referred to as colors), fair clustering aims to ensure that each group in the dataset is proportionally represented within every cluster. For instance, given a set of red and blue points, fair clustering seeks to have a balanced number of red and blue points in each cluster. Carta et al. showed that many common fairness constraints are mergeable [10]; we discuss these constraints further in Section 1.2. There are also non-fair constraints that are mergeable, such as the lower bound constraint, where each cluster is required to have at least L points, for some given L>0 [1].

Constrained versions of k-MSR are often difficult to approximate and lack tractability in polynomial time. A potential approach to address this issue is the design of fixed-parameter tractable (FPT) algorithms. FPT algorithms have a runtime that is polynomial in the input size but may depend arbitrarily (e.g., exponentially) on the chosen parameters. For example, an algorithm running in $O(n^2\, 2^k)$ time is FPT in the parameter k. Fixed-parameter tractability essentially allows efficient computation, assuming the parameters are small in practice. For challenging constrained clustering problems such as constrained k-MSR, FPT algorithms are often the preferred approach, as they offer computational tractability and can yield improved approximation guarantees. In our work, we also adopt this approach to address the complexity of the problem.

1.1 Our Contributions

Our work establishes two main results concerning k-MSR under mergeable constraints: one demonstrating that the problem admits a (4+ϵ)-approximation in FPT time in k for constant ϵ, and another proving its computational hardness. We begin by presenting the following theorem.

Theorem 1.

For any ϵ>0, there is a (4+ϵ)-approximation algorithm for the k-MSR problem under mergeable constraints that runs in $2^{O(\frac{k}{\epsilon}\log^2\frac{k}{\epsilon})}\, n^4$ time.

Our result directly improves upon the FPT (6+ϵ)-approximation by Carta et al. [10]. Moreover, to solve k-MSR under a given mergeable constraint, their algorithm requires the existence of a constant-factor approximation algorithm for k-center under the same constraint, while ours does not have this restriction. There are constraints under which no constant-factor approximation is known for k-center. For instance, no approximation algorithm is known for k-center under the so-called fair representational constraint, which is one of the more general definitions of fairness constraints. Our algorithm also has the advantage of generality. While the FPT algorithms by Drexler et al. [18] and Banerjee et al. [6] achieve an approximation factor of (1+ϵ), they are limited to specific metric spaces such as Euclidean spaces or those with bounded doubling dimension. In contrast, our algorithm applies to general metric spaces.

Next, we give the following theorem regarding the hardness of k-MSR clustering under the fair representational constraint.

Theorem 2.

Fair Representational Clustering cannot be solved in time $f(k)\, n^{o(k)}$ unless ETH is false.

This result also trivially extends to k-MSR under mergeable constraints, as the fair representational constraint is known to be mergeable [18].

1.2 Related Work

Gibson et al. showed that the k-MSR objective is NP-hard, even in planar graphs and in metrics of bounded doubling dimension [21]; consequently, a line of work gave approximation algorithms for the problem. Charikar and Panigrahy first gave a polynomial-time 3.504-approximation for the problem in their seminal work [12]. This result remained the best approximation until Friggstad and Jamshidian followed up with a 3.489-approximation nearly twenty years later [20]. Soon after, Buchem et al. improved the approximation factor to (3+ϵ), which is the current best known in polynomial time [9].

k-MSR has been studied under various constraints, most notably the capacitated constraint and the matroid constraint. A line of work on devising O(1)-approximations for capacitated k-MSR led to an FPT 3-approximation [24, 19, 25, 5]. k-MSR has also been studied under matroid constraints, leading to an FPT (9+ϵ)-approximation by Inamdar and Varadarajan [24] and an improved (3+ϵ)-approximation by Chen et al. [13]. Polynomial-time O(1)-approximations for these variants of k-MSR are not known. Interestingly, polynomial-time O(1)-approximations exist for k-MSR with lower bounds [2, 9].

Another common k-MSR constraint to study is the mergeable constraint. Most of the studies are geared towards fairness constraints, since many fairness constraints are mergeable. Drexler et al. first developed an FPT (1+ϵ)-approximation algorithm for k-MSR under mergeable constraints in Euclidean metrics of arbitrary dimension with constant k [18]. The result was subsequently extended to metrics with bounded doubling dimension by Banerjee et al. [6]. Carta et al. gave a (6+ϵ)-approximation algorithm for k-MSR under mergeable constraints in general metrics [10].

Fair k-MSR has also received attention from researchers. Carta et al. showed that several fairness constraints are mergeable [10]. We name a few of these constraints that are commonly used. The simplest definition is called exact fairness [8], where each cluster must have exactly the same number of points from each color. The next step up is Pairwise Fair Clustering (sometimes called (t,k)-fair clustering) [15]. Given a set of red points and blue points and an integer t>0, Pairwise Fair Clustering ensures that the ratio between blue and red points in each cluster is between $\frac{1}{t}$ and $t$. This definition is sometimes extended to more than two colors. Another common fairness definition is called Fair Representational Clustering [7, 8], where the fraction of points of color i allowed in each cluster is at most $\alpha_i$ and at least $\beta_i$, given parameters $\alpha_i,\beta_i\in[0,1]$. Bandyapadhyay et al. gave polynomial-time O(1)-approximation algorithms for k-MSR under the Pairwise Fair Clustering constraint with two groups and the exact fairness constraint for $\ell\ge 2$ groups [4]. Chen et al. developed an FPT (3+ϵ)-approximation algorithm for k-MSR under an alternate notion of fairness that is a special case of matroid k-MSR [13]. In their definition of fairness, each color i is allowed to have a pre-defined number $k_i$ of points that can be chosen as cluster centers. Banerjee et al. [6] also gave an FPT (1+ϵ)-approximation algorithm for the same problem in Euclidean metrics. For more works on fair clustering, one can refer to the survey by Chhabra et al. [14].

Roadmap.

We first define some notations in Section 2, then describe our algorithm in Section 3, and finally prove our hardness result in Section 4.

2 Preliminaries

In k-MSR clustering, we are given a set P of n points in a metric space with metric d. The goal is to find (i) a set of up to k points $C=\{c_1,\ldots,c_k\}\subseteq P$ called centers, and (ii) a function ϕ assigning each point $p\in P$ to some center $c_i\in C$. We call (C,ϕ) a clustering of P. For each $c_i\in C$, we call $\mathcal{C}_i=\phi^{-1}(c_i)$ the cluster centered at $c_i$, and we define the radius corresponding to $c_i$ as $r_i=\max_{p:\phi(p)=c_i} d(p,c_i)$. We refer to the sum of radii of a clustering (C,ϕ) as the cost of the clustering, and we denote it by cost(C,ϕ); we use the same notation cost(R) to denote the sum of a set of radii R, or cost($\mathcal{C}$) for the sum of radii of a set of clusters $\mathcal{C}$.

We merge two clusters $\mathcal{C}_i=\phi^{-1}(c_i)$ and $\mathcal{C}_j=\phi^{-1}(c_j)$ by setting $\phi(p)=c$ for all points $p\in\mathcal{C}_i\cup\mathcal{C}_j$ and some $c\in P$. We say that a constraint is mergeable if for any two clusters satisfying the constraint, merging the clusters results in a new cluster that also satisfies the constraint. We refer to a clustering that satisfies some (mergeable) constraint as a constrained clustering, and a clustering without any constraint as a vanilla clustering. We say that a clustering (C,ϕ) is feasible if every point in P is assigned to some center, $|C|\le k$, and each cluster satisfies the desired constraint (if any). Moreover, we say that a clustering (C,ϕ) is optimal if it is feasible and cost(C,ϕ) is minimized over all such clusterings.

We denote a ball with center c and radius r by B(c,r). Given a point p and a ball B(c,r), we say that p is contained in B(c,r) if $d(p,c)\le r$, denoted by $p\in B(c,r)$; note that a point p may be contained in a ball B(c,r) but not assigned to its center c, as p may be contained in multiple balls while being assigned to only one cluster.
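To make the notation concrete, the following is a minimal Python sketch of these definitions, assuming points are represented by indices and the metric by a function; all names here are ours, not the paper's.

```python
# Minimal data model for k-MSR clustering in a finite metric space.
from typing import Callable, Dict, List, Tuple

Metric = Callable[[int, int], float]

def cluster_radius(points: List[int], center: int, d: Metric) -> float:
    """Radius r_i = max_{p : phi(p) = c_i} d(p, c_i) of one cluster."""
    return max(d(p, center) for p in points)

def cost(phi: Dict[int, int], d: Metric) -> float:
    """Sum of radii of the clustering (C, phi); phi maps point -> center."""
    clusters: Dict[int, List[int]] = {}
    for p, c in phi.items():
        clusters.setdefault(c, []).append(p)
    return sum(cluster_radius(ps, c, d) for c, ps in clusters.items())

def contained(p: int, ball: Tuple[int, float], d: Metric) -> bool:
    """p is contained in B(c, r) iff d(p, c) <= r."""
    c, r = ball
    return d(p, c) <= r
```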

3 An FPT (4+ϵ)-approximation Algorithm

In this section, we describe an FPT (4+ϵ)-approximation algorithm for k-MSR under mergeable constraints. We first describe the algorithm at a high level, then state the procedures that it calls, present the algorithm, and lastly analyze it.

Our algorithm stems from the idea that we can find a vanilla clustering and expand its balls such that each cluster of any constrained clustering is contained within one of the expanded balls. This allows us to transform a vanilla clustering into a constrained clustering in FPT time in k. Our algorithm has four major steps: (1) Guess the radii of an optimal vanilla clustering up to a (1+ϵ) factor. (2) Compute a (2+ϵ)-approximate vanilla clustering using the guessed radii. (3) Expand the vanilla balls, resulting in a (3+ϵ)-approximate set of balls. (4) Merge the expanded balls to create clusters that satisfy the given mergeable constraint. These steps are accomplished through calls to a series of procedures, which we describe in the following subsections.

Similar to our algorithm, the algorithm of Carta et al. [10] also finds a (3+ϵ)-approximate set of balls with the aforementioned containment property. However, our algorithm finds the balls in a simpler and more general manner, in that it does not rely on a k-center algorithm under the same constraint to obtain the balls. Carta et al. also made the same observation as ours that one can satisfy the mergeable constraint by merging the (3+ϵ)-approximate balls, but our approach differs in how we choose the new center for each set of balls being merged. The algorithm of Carta et al. picks the center of the largest ball, which results in a factor-2 increase in cost, but we show that one can always pick a center that results in at most a factor-4/3 increase in cost.

3.1 Finding a Vanilla Clustering

We first describe a procedure that takes a radii vector $(r_1,\ldots,r_l)$ and a multiplicity vector $(k_1,\ldots,k_l)$ as input. Moreover, if there is a feasible vanilla clustering that uses those radii with the respective multiplicities, the procedure outputs a vanilla clustering that uses those radii values up to a factor of 2, with the same multiplicities. We make use of the fact, shown in Observation 3, that for any point p in a cluster $\mathcal{C}_i$ with radius $r_i$, we can cover all points in the cluster with a ball centered at p with a radius of $2r_i$. This allows us to recursively find a point p that is not yet covered, and guess the cluster containing p and the corresponding radius to cover all points of that cluster. We note that our procedure is essentially the same as the procedure used by Chen et al. in their work on k-MSR [13], but restated to take a pair of radii and multiplicity vectors as an additional input; this allows us to use the same set of radii and multiplicity vectors in both the vanilla clustering step and the cluster expansion step of our algorithm. The procedure is given as Algorithm 1.

Algorithm 1 Vanilla-Clustering(P,Ri,Ki).
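As a rough illustration of the recursion just described, the following Python sketch picks an uncovered point and branches over the remaining radius classes; the function names and signatures are ours, not the paper's pseudocode.

```python
# A hedged sketch of Algorithm 1 (Vanilla-Clustering) as we understand it.
def vanilla_clustering(P, radii, mults, d):
    """P: set of points; radii: distinct guessed radii; mults: multiplicities.
    Returns a list of (center, 2*r) balls covering P, or None if no branch works."""
    if not P:
        return []
    p = next(iter(P))              # an arbitrary not-yet-covered point
    for j, r in enumerate(radii):
        if mults[j] == 0:
            continue
        covered = {q for q in P if d(p, q) <= 2 * r}   # Observation 3
        mults[j] -= 1
        rest = vanilla_clustering(P - covered, radii, mults, d)
        mults[j] += 1              # undo the guess before the next branch
        if rest is not None:
            return [(p, 2 * r)] + rest
    return None
```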
Observation 3.

For any cluster $\mathcal{C}_i$ with center c and radius r, we can cover all points in $\mathcal{C}_i$ using a ball centered at some point $p\in\mathcal{C}_i$ with radius 2r.

Proof.

Consider the point p and some other point $p'\in\mathcal{C}_i$. By the triangle inequality, we have that $d(p,p')\le d(p,c)+d(p',c)$. Furthermore, since $p,p'\in\mathcal{C}_i$, we have $d(p,c),d(p',c)\le r$. It follows that $d(p,p')\le 2r$ for any $p,p'\in\mathcal{C}_i$; therefore we can cover all points in $\mathcal{C}_i$ with a ball centered at any point $p\in\mathcal{C}_i$ with radius 2r.

Lemma 4.

Suppose there is a feasible vanilla clustering of P that uses exactly $k_j$ balls of radius $r_j$ for $1\le j\le l$. Then Algorithm 1 outputs a feasible vanilla clustering of P that uses at most $k_j$ balls of radius $2r_j$ for $1\le j\le l$.

Proof.

Let $\mathcal{C}=\{\mathcal{C}_1,\ldots,\mathcal{C}_k\}$ be a set of feasible vanilla clusters corresponding to the radii vector $R_i$. We consider the following construction of $(C_v,\phi_v)$. Let p be some point in P. Note that p is in some $\mathcal{C}_i\in\mathcal{C}$ with radius $r_j$ for some j. We add p to $C_v$ as a new center and set $\phi_v(q)=p$ for all points q such that $d(p,q)\le 2r_j$, and remove p and all such q from P. We also remove the cluster $\mathcal{C}_i$ from consideration. We can do this because, by Observation 3, p with a radius of $2r_j$ covers all points in $\mathcal{C}_i$. We repeat this process until there are no points left in P. We can always find a clustering within k iterations this way, because each time we repeat the process, we cover all points in some cluster $\mathcal{C}_i$. Specifically, there exists a recursion branch in Algorithm 1 that follows this construction. It follows that Algorithm 1 outputs the desired vanilla clustering of P.

Lemma 5.

Given a vector $R_i$ with $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ radii, Algorithm 1 runs in $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n$ time.

Proof.

First, we consider the depth of the recursive calls in Algorithm 1. Since each clustering is restricted to having at most k clusters, and each recursive call decreases the number of available clusters by one, each branch of recursive calls has at most k levels of recursion. Now, we consider how many additional calls each recursive call makes. By our assumption, we have $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ radii in $R_i$. Since each recursive call makes an additional call for each $r_j\in R_i$ with $k_j>0$, each recursive call can make $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ additional calls. Since each call can be computed in $O(n+k)$ time, it follows that Algorithm 1 runs in $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n$ time.

3.2 Expanding the Vanilla Clustering

Given a feasible vanilla clustering, we describe a process to expand the vanilla balls so that each cluster of a fixed constrained clustering is contained in one of the expanded balls. The idea is to guess how much to expand the vanilla clusters using a guessed set of radii for the constrained clustering. In particular, for each vanilla cluster $\mathcal{C}_{v_i}$, we guess the biggest constrained cluster whose center is contained in $\mathcal{C}_{v_i}$, and expand $\mathcal{C}_{v_i}$ by its radius, thereby fully containing all the constrained clusters whose centers are in $\mathcal{C}_{v_i}$. We assume that the radii vector $R_i=(r_1,\ldots,r_l)$ and the multiplicity vector $K_i=(k_1,\ldots,k_l)$ of the constrained clustering are given. The procedure is given as Algorithm 2.

Algorithm 2 Expand-Balls(Cv,ϕv,Ri,Ki).
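As a minimal sketch of the enumeration as we read it: for every vanilla ball, guess which constrained radius to expand it by, over all combinations. Here `vanilla` is a list of (center, radius) pairs and `radii` the distinct guessed constrained radii; the names are ours.

```python
# A hedged sketch of Algorithm 2 (Expand-Balls).
from itertools import product

def expand_balls(vanilla, radii):
    """Yield one candidate expanded-radii vector (r_1^c, ..., r_k^c) per guess."""
    for guess in product(radii, repeat=len(vanilla)):  # (O(|radii|))^k guesses
        yield [r_v + r_g for (_, r_v), r_g in zip(vanilla, guess)]
```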

Now we show some properties of the output of Algorithm 2.

Lemma 6.

Given a vanilla clustering with balls $B_1(c_1,r_1^v),\ldots,B_k(c_k,r_k^v)$, and the radii vector $R_i=(r_1,\ldots,r_l)$ and multiplicity vector $K_i=(k_1,\ldots,k_l)$ of a fixed constrained clustering, the output $R_e$ of Algorithm 2 contains a vector $(r_1^c,\ldots,r_k^c)$ such that each cluster of the constrained clustering is contained in $B(c_j,r_j^c)$ for some $1\le j\le k$. Moreover, $\sum_{j=1}^k r_j^c\le\sum_{j=1}^k r_j^v+\sum_{t=1}^l k_t r_t$.

Proof.

Let $\mathcal{C}=\{\mathcal{C}_1,\ldots,\mathcal{C}_k\}$ be a fixed constrained clustering that, for each $1\le t\le l$, uses $k_t$ clusters of radius $r_t$. Consider the following construction of a vector $(r_1^c,\ldots,r_k^c)$. For each center $c_j\in C_v$ with radius $r_j^v$, let $\mathcal{C}_{\psi(j)}$ be the largest cluster in $\mathcal{C}$ whose center is in the vanilla cluster $\mathcal{C}_{v_j}$, and let $r_{\psi(j)}$ be its radius. We set each $r_j^c$ to $r_j^v+r_{\psi(j)}$. Note that the expanded ball $B(c_j,r_j^c)$ contains any cluster in $\mathcal{C}$ whose center is in $\mathcal{C}_{v_j}$, satisfying the desired property. As the center of each constrained cluster in $\mathcal{C}$ is contained in a unique vanilla cluster, the radius of the constrained cluster is used at most once to expand the radius of that vanilla cluster. In particular, each $r_t\in R_i$ is used at most $k_t$ times for the expansion. It follows that $(r_1^c,\ldots,r_k^c)$ is in $R_e$. For the same reason, $\sum_{j=1}^k r_j^c=\sum_{j=1}^k (r_j^v+r_{\psi(j)})\le\sum_{j=1}^k r_j^v+\sum_{t=1}^l k_t r_t$, which proves the moreover part.

Now we show the following two lemmas about the output size and runtime of Algorithm 2.

Lemma 7.

Given the vectors $R_i,K_i$ of size $O(\log_{1+\epsilon}\frac{k}{\epsilon})$, the number of guesses $|R_e|$ that Algorithm 2 outputs is bounded by $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k$.

Proof.

Algorithm 2 computes $R_e$ by enumerating a set of guesses E for how much to expand each vanilla cluster. There are k vanilla clusters, and there are $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ ways to expand each cluster, since there are $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ distinct radii in $R_i$. It follows that the number of entries in E and $R_e$ is bounded by $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k$.

Since each entry in E takes O(k) time to compute, we have the following lemma.

Lemma 8.

Given the vectors $R_i,K_i$ of size $O(\log_{1+\epsilon}\frac{k}{\epsilon})$, Algorithm 2 runs in $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k$ time.

3.3 Merging Balls

Suppose we are given a set of balls that cover a set of points P. Moreover, we assume that there is a clustering of P with a mergeable constraint such that each cluster is contained in one of the given balls. We design a procedure that helps find a clustering of P with the mergeable constraint whose cost is at most 4/3 times the sum of the radii of the given balls.

Before describing the algorithm, we define a few notations. Let $B=\{B_1,\ldots,B_k\}$ denote a set of balls. Let G=(B,E) be a graph with the balls in B as vertices and the set of edges $E=\{(B_i,B_j)\mid B_i,B_j\in B$ and there is a point p such that $p\in B_i$ and $p\in B_j\}$. A connected component of G therefore contains a maximal subset of balls in which every pair is connected by a path.

The merging algorithm, Algorithm 3, is quite simple. Given a set of balls B that cover a set of points P, Algorithm 3 first constructs the graph G=(B,E) as defined above. For each connected component of balls $B'$ in G, Algorithm 3 finds a point c contained in the balls of $B'$ such that assigning all points contained in $B'$ to c results in a cluster whose radius is minimized.

Algorithm 3 Merge-Balls(B).
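The following Python sketch illustrates this merging step as described above; `balls` is a list of (center, radius) pairs covering the point set P, `d` is the metric, and all names are ours rather than the paper's.

```python
# A hedged sketch of Algorithm 3 (Merge-Balls).
def merge_balls(balls, P, d):
    k = len(balls)

    def inside(p, i):
        return d(p, balls[i][0]) <= balls[i][1]

    # Two balls are adjacent iff some point of P lies in both.
    adj = {i: set() for i in range(k)}
    for i in range(k):
        for j in range(i + 1, k):
            if any(inside(p, i) and inside(p, j) for p in P):
                adj[i].add(j)
                adj[j].add(i)
    # Merge each connected component into one cluster around the best center.
    phi, seen = {}, set()
    for s in range(k):
        if s in seen:
            continue
        comp, stack = {s}, [s]
        while stack:                       # DFS over the component of s
            u = stack.pop()
            for v in adj[u]:
                if v not in comp:
                    comp.add(v)
                    stack.append(v)
        seen |= comp
        pts = [p for p in P if any(inside(p, i) for i in comp)]
        # Center minimizing the merged cluster's radius (at most 4/3 of the
        # component's sum of radii, by Lemma 11).
        c = min(pts, key=lambda q: max(d(q, p) for p in pts))
        for p in pts:
            phi[p] = c
    return phi
```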

In the following, we show that the output clustering (C,ϕ) of Algorithm 3 satisfies a mergeable constraint based on an additional assumption.

Lemma 9.

Suppose there is a clustering 𝒞 of P with a mergeable constraint such that each cluster is contained in a ball of B. Then the output clustering (C,ϕ) of Algorithm 3 satisfies the mergeable constraint.

Proof.

Since each cluster in 𝒞 is in a ball of B, the union of the balls in each connected component of G is the disjoint union of a subset of clusters of 𝒞. This means that when Algorithm 3 merges the balls in a connected component into a single cluster, the resulting cluster satisfies the constraint, as the constraint is mergeable. It follows that the output clustering (C,ϕ) also satisfies the mergeable constraint.

Lemma 10.

Algorithm 3 runs in $O(kn^2)$ time.

Proof.

The main bottlenecks of Algorithm 3 are the construction of the edge set E of the graph G, and finding a center c in each connected component $B'$ that minimizes the radius when merging the balls in $B'$. Since there are O(n) points in P and O(k) balls, checking whether each point $p\in P$ is in both balls of each pair $B_i,B_j$ to construct E takes $O(nk^2)$ time. To find c, we can simply probe all points in the connected component. Since there are O(k) components, and each component can contain up to O(n) points, we can find c in $O(kn^2)$ time, resulting in an overall runtime of $O(kn^2)$.

Next, we give an upper bound on the cost of the clustering (C,ϕ) computed by Algorithm 3. Consider any connected component of G with a set of balls $B'$. Let $P'$ be the set of points in the union of the balls of $B'$. Denote the sum of the radii of the balls in $B'$ by $\ell$. Then we have the following lemma.

Lemma 11.

There is a point $c\in P'$ such that for any $q\in P'$, $d(c,q)\le\frac{4}{3}\ell$.

Before proving the lemma, we describe a major consequence of it. Recall that for each connected component of G, Algorithm 3 assigns all points in the associated balls to a single cluster center c. Moreover, this center c is chosen to be a point $p\in P'$ that minimizes the maximum distance to all other points. Thus, by the above lemma, the resulting cluster $\mathcal{C}=\phi^{-1}(c)$ has cost$(\mathcal{C})\le\frac{4}{3}\ell$, where $\ell$ is the sum of the radii of the balls in the component. As B is the disjoint union of the balls in the components, we have the following corollary.

Corollary 12.

The cost of the output clustering (C,ϕ) of Algorithm 3 is at most 4/3 times the sum of the radii of the input balls in B.

Now, we move on towards proving Lemma 11. We need a few additional definitions and observations. Let T be a spanning tree of the concerned connected component. We call a path $\pi=\{\pi_1,\ldots,\pi_m\}$ in T a maximum leaf-to-leaf path if π is a simple path from a leaf node of T to another leaf node, and the sum of the radii of the balls along π is maximum among all such paths. Now, consider any path $\pi=\{\pi_1,\ldots,\pi_m\}$ between two leaves. Also, consider the graph $T'=T-\pi$, constructed by removing the vertices and edges of π from T. Note that $T'$ is a collection of connected components, each having exactly one ball $B_i$ connected to exactly one $\pi_i\in\pi$ with an edge of T. We say that such a component is connected to T via $\pi_i$. A set of balls $Z\subseteq B$ is said to form a connected set if the subgraph of G induced by the balls in Z is connected.

Observation 13.

Consider a connected set Z of balls with sum of radii $\ell$. For any two points a,b contained in the balls of Z, $d(a,b)\le 2\ell$.

Proof.

Let $B(c_i,r_i),B(c_j,r_j)\in Z$ be balls containing a and b, respectively. Since Z is a connected set, there is a simple path between $B(c_i,r_i)$ and $B(c_j,r_j)$ in G that uses only balls of Z. Thus, by the triangle inequality, $d(c_i,c_j)\le r_i+2(\ell-r_i-r_j)+r_j=2\ell-r_i-r_j$. Hence, $d(a,b)\le d(a,c_i)+d(c_i,c_j)+d(c_j,b)\le 2\ell$.

Similarly, we also have the following observation.

Observation 14.

Consider a connected set Z of balls with sum of radii $\ell$. For any point p in a ball of Z and any ball $B(c_i,r_i)\in Z$, $d(c_i,p)\le 2\ell-r_i$.

Proof.

Let $B(c_j,r_j)\in Z$ be a ball containing p. Since Z is a connected set, there is a simple path between $B(c_i,r_i)$ and $B(c_j,r_j)$ in G that uses only balls of Z. Thus, by the triangle inequality, $d(c_i,p)\le d(c_i,c_j)+d(c_j,p)\le (2\ell-r_i-r_j)+r_j=2\ell-r_i$.

The proof of Lemma 11 is as follows.

Proof.

We prove the existence of the point c with the desired property by considering Procedure 4 that explicitly computes a center. We clarify that this procedure is only for the purpose of analysis, and we do not run it as a part of our algorithm.

Algorithm 4 Find-Center(B).

Procedure 4 picks the new center according to two main cases: (i) there exists a ball in B with radius at least $\frac{2}{3}\ell$; (ii) all balls in B have radius less than $\frac{2}{3}\ell$.

In case (i), there exists a ball $B(c_i,r_i)$ such that $r_i\ge\frac{2}{3}\ell$. In this case, Procedure 4 picks $c_i$ as the new center. By Observation 14, we have that for any point $p\in P'$, $d(c_i,p)\le 2\ell-r_i\le 2\ell-\frac{2}{3}\ell=\frac{4}{3}\ell$.

In case (ii), each ball in B has radius less than $\frac{2}{3}\ell$. The center selection is split into two subcases: (iia) $\mathrm{Sum}_L\ge\frac{1}{3}\ell$, and (iib) $\mathrm{Sum}_L<\frac{1}{3}\ell$.

We first consider case (iia), where a point p contained in both $\pi_{i-1}$ and $\pi_i$ is chosen as the new center. As each ball has radius less than $\frac{2}{3}\ell$ and $\pi_1$ is a leaf, while loop 7 cannot terminate with i=1, i.e., at its termination $i\ge 2$. Thus, $\pi_{i-1}$ is well-defined. Consider the partition of B into two parts $B_L$ and $B_R$. $B_L$ is the union of the balls $\pi_1,\ldots,\pi_{i-1}$ and any ball that lies in a component of $T-\pi$ that is connected to T via $\pi_j$ for $1\le j\le i-1$. Similarly, $B_R$ is the union of the balls $\pi_i,\ldots,\pi_m$ and any ball that lies in a component of $T-\pi$ that is connected to T via $\pi_j$ for $i\le j\le m$. Let $P_L=\{p_L\mid$ point $p_L$ is in a ball of $B_L\}$ and $P_R=\{p_R\mid$ point $p_R$ is in a ball of $B_R\}$. Note that $P_L\cup P_R=P'$. Next, we bound the distance between the chosen center p and the points in $P_L$ and $P_R$. Note that by our definition of $p$, $P_L$, and $P_R$, $p\in P_L\cap P_R$. First, we consider $d(p,p_L)$ for any $p_L\in P_L$. Note that $\mathrm{Sum}_L<\frac{2}{3}\ell$ at the end of while loop 7. Also, by our definition of $\mathrm{Sum}_L$, it is the sum of the radii of the balls in $B_L$. Then, as $B_L$ is a connected set of balls and $p\in P_L$, by Observation 13, it follows that $d(p,p_L)\le 2\,\mathrm{Sum}_L<\frac{4}{3}\ell$. Now we consider $d(p,p_R)$ for any $p_R\in P_R$. Since $\mathrm{Sum}_L\ge\frac{1}{3}\ell$ in this subcase and $(B_L,B_R)$ is a partition of B, we know that the sum of the radii of the balls in $B_R$ satisfies $\mathrm{Sum}_R\le\frac{2}{3}\ell$. Then, as $B_R$ is a connected set of balls and $p\in P_R$, by Observation 13, it follows that $d(p,p_R)\le 2\,\mathrm{Sum}_R\le\frac{4}{3}\ell$. Thus, the lemma follows in this subcase.

Next, we consider case (iib). Here, we pick the center $c_i$ of ball $\pi_i=B(c_i,r_i)$ as the new center. Similar to case (iia), we partition B into three parts $B_L$, $B_M$, and $B_R$. $B_L$ is the union of the balls $\pi_1,\ldots,\pi_{i-1}$ and any ball that lies in a component of $T-\pi$ that is connected to T via $\pi_j$ for $1\le j\le i-1$. Similarly, $B_M$ is the union of the ball $\pi_i$ and any ball that lies in a component of $T-\pi$ that is connected to T via $\pi_i$. Lastly, $B_R$ is the union of the balls $\pi_{i+1},\ldots,\pi_m$ and any ball that lies in a component of $T-\pi$ that is connected to T via $\pi_j$ for $i+1\le j\le m$. Let $P_L=\{p_L\mid$ point $p_L$ is in a ball of $B_L\}$, $P_M=\{p_M\mid$ point $p_M$ is in a ball of $B_M\}$, and $P_R=\{p_R\mid$ point $p_R$ is in a ball of $B_R\}$. Note that $P_L\cup P_M\cup P_R=P'$, so we can again bound the overall distance by considering $P_L$, $P_M$, and $P_R$ separately. Towards this end, note that by the end of Loop 5, $\mathrm{Sum}_L=\mathrm{cost}(B_L)$ and $\mathrm{Sum}=\mathrm{cost}(B_L)+\mathrm{cost}(B_M)$.

We first consider $d(c_i,p_L)$ for any $p_L\in P_L$. Note that $B_L\cup\{\pi_i\}$ is a connected set of balls with sum of radii $\mathrm{Sum}_L+r_i$. Furthermore, recall that $\mathrm{Sum}_L<\frac{1}{3}\ell$ by the assumption of case (iib), and $r_i<\frac{2}{3}\ell$ by the assumption of case (ii). By Observation 14, we have that $d(c_i,p_L)\le 2(\mathrm{Sum}_L+r_i)-r_i=2\,\mathrm{Sum}_L+r_i<2\cdot\frac{1}{3}\ell+\frac{2}{3}\ell=\frac{4}{3}\ell$.

Next, we consider $d(c_i,p_R)$ for any $p_R\in P_R$. Note that $\mathrm{cost}(B_R)=\ell-\mathrm{Sum}$. Since $\mathrm{Sum}\ge\frac{2}{3}\ell$ by the end of Loop 7, we have that $\mathrm{cost}(B_R)=\ell-\mathrm{Sum}\le\frac{1}{3}\ell$. Note also that $B_R\cup\{\pi_i\}$ is a connected set of balls with sum of radii $\mathrm{cost}(B_R)+r_i$. Since $r_i<\frac{2}{3}\ell$ by the assumption of case (ii), by Observation 14 we have that for any $p_R\in P_R$, $d(c_i,p_R)\le 2(\mathrm{cost}(B_R)+r_i)-r_i=2\,\mathrm{cost}(B_R)+r_i<2\cdot\frac{1}{3}\ell+\frac{2}{3}\ell=\frac{4}{3}\ell$.

Lastly, we consider $d(c_i,p_M)$ for any $p_M\in P_M$. Let $B_l\in B_M$ be a ball that contains $p_M$. Consider any simple path $\pi'$ in T between a leaf ball in $B_M$ and $\pi_i$ such that $\pi'$ also contains $B_l$. Such a path exists, as $B_l$ lies in a component of $T-\pi$ that is connected to T via $\pi_i$. We denote by $\pi'-\pi_i$ the path obtained by removing $\pi_i$ from $\pi'$. Note that

$\mathrm{Sum}_L+\mathrm{cost}(\pi'-\pi_i)+r_i+\mathrm{Sum}_R\le\ell.$   (1)

Since balls along the path π have the maximum sum of radii, we have that

$\mathrm{cost}(\pi'-\pi_i)+r_i=\mathrm{cost}(\pi')\le\mathrm{cost}(\pi)\le\mathrm{Sum}_L+r_i+\mathrm{Sum}_R,$

and therefore

$\mathrm{cost}(\pi'-\pi_i)\le\mathrm{Sum}_L+\mathrm{Sum}_R.$   (2)

Combining Inequalities 1 and 2 we have that

$2\,\mathrm{cost}(\pi'-\pi_i)+r_i\le\ell.$   (3)

Now, note that $\pi'$ is a connected set of balls including $B_l\ni p_M$, with sum of radii $\mathrm{cost}(\pi'-\pi_i)+r_i$. Thus, by Observation 14 we have that $d(c_i,p_M)\le 2(\mathrm{cost}(\pi'-\pi_i)+r_i)-r_i=2\,\mathrm{cost}(\pi'-\pi_i)+r_i$. Applying Inequality 3, we have that $d(c_i,p_M)\le\ell$.

Since $d(p_L,c_i),d(p_M,c_i),d(p_R,c_i)\le\frac{4}{3}\ell$ for all $p_L\in P_L$, $p_M\in P_M$, and $p_R\in P_R$, and $P_L\cup P_M\cup P_R$ covers all points in $P'$, $c_i$ is at distance at most $\frac{4}{3}\ell$ from any point in $P'$ in case (iib). This finishes the proof of the lemma.
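For concreteness, the following Python sketch consolidates the case analysis above (again, for analysis only; we do not run it as part of the algorithm). Here `balls[i] = (c_i, r_i)`, `tree` is an adjacency dict of a spanning tree T of the component's intersection graph, and `common_point(i, j)` is a hypothetical helper returning a point lying in both balls i and j; all of these names are ours, not the paper's.

```python
# A hedged sketch of the analysis-only Procedure 4 (Find-Center).
def find_center(balls, tree, common_point):
    ell = sum(r for _, r in balls)
    # Case (i): some ball has radius at least (2/3) * ell; its center works.
    for c, r in balls:
        if 3 * r >= 2 * ell:
            return c
    # Case (ii): compute a maximum leaf-to-leaf path pi in the spanning tree.
    leaves = [u for u in tree if len(tree[u]) <= 1]

    def path(s, t):  # the unique tree path from s to t, via DFS
        parent, stack = {s: None}, [s]
        while stack:
            u = stack.pop()
            for v in tree[u]:
                if v not in parent:
                    parent[v] = u
                    stack.append(v)
        out, u = [], t
        while u is not None:
            out.append(u)
            u = parent[u]
        return out[::-1]

    pi = max((path(s, t) for s in leaves for t in leaves if s != t),
             key=lambda q: sum(balls[u][1] for u in q))

    def branch_cost(j):  # radius of pi_j plus radii of side components at pi_j
        total, stack, seen = 0.0, [pi[j]], set(pi) - {pi[j]}
        while stack:
            u = stack.pop()
            if u in seen:
                continue
            seen.add(u)
            total += balls[u][1]
            stack.extend(tree[u])
        return total

    acc, i = 0.0, 0  # walk along pi until the accumulated Sum >= (2/3) * ell
    while 3 * acc < 2 * ell:
        acc += branch_cost(i)
        i += 1
    sum_l = acc - branch_cost(i - 1)
    if 3 * sum_l >= ell:  # case (iia): a point shared by pi_{i-1} and pi_i
        return common_point(pi[i - 2], pi[i - 1])
    return balls[pi[i - 1]][0]  # case (iib): the center of ball pi_i
```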

3.4 Guessing Radii

Here we describe a procedure for guessing the radii of any k-MSR clustering up to a (1+ϵ) factor. Formally, one can enumerate a set R of k-sized vectors such that for any k-MSR clustering with radii $r_1\le\cdots\le r_k$, there is a vector $(r'_1,\ldots,r'_k)$ in R where for all $1\le j\le k$, $r'_j\ge r_j$, and if $r_j>\frac{\epsilon r_k}{k}$, then $r'_j\le(1+\epsilon)r_j$. Moreover, the size of R is only $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$. From now on, we call this type of enumeration process a guessing procedure, and we refer to each enumerated entry as a guess. Similar procedures have been used previously for guessing the radii of an optimal capacitated sum of radii clustering [24, 5, 19, 25, 13]. We restate those procedures to obtain a general algorithm that can enumerate the radii of all clusterings. In our algorithm, this procedure will be used to guess both a set of optimal vanilla radii and a set of optimal radii under any given mergeable constraint. Towards this end, we clarify that in the guessed radii vector $(r'_1,\ldots,r'_k)$, not all radii need be distinct; specifically, there are only $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ distinct radii. So, we will enumerate $O(\log_{1+\epsilon}\frac{k}{\epsilon})$-sized vectors instead, while keeping track of the multiplicities.

The procedure works in the following manner. Given a set of n points P in a metric space, and parameters k and ϵ, we first compute a radius $r_{max}$ that can potentially be the maximum radius of some clustering; we can do this in $O(n^2)$ time by trying the distances between all pairs of points in P. For each $r_{max}$, we set $[\frac{\epsilon r_{max}}{k},(1+\epsilon)r_{max}]$ as the range in which we guess the radii. The procedure is described as Algorithm 5.

Algorithm 5 Guess-Radii(P,k,ϵ).
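The following Python sketch illustrates this guessing scheme as we read it: try every pairwise distance as $r_{max}$, take powers of $1+\epsilon$ in the stated range as the distinct radii, and enumerate all multiplicity vectors summing to k over them. The helper names are ours, and the enumeration is deliberately exhaustive, as expected of an FPT guessing step.

```python
# A hedged sketch of Algorithm 5 (Guess-Radii).
import math
from itertools import product

def guess_radii(P, k, eps, d):
    guesses = []
    for a in P:
        for b in P:
            r_max = d(a, b)            # candidate maximum radius
            if r_max == 0:
                continue
            lo, hi = eps * r_max / k, (1 + eps) * r_max
            m = math.ceil(math.log(lo, 1 + eps))
            M = math.floor(math.log(hi, 1 + eps))
            radii = [(1 + eps) ** t for t in range(m, M + 1)]
            # all ways to split the k balls among the O(log_{1+eps}(k/eps)) radii
            for mults in product(range(k + 1), repeat=len(radii)):
                if sum(mults) == k:
                    guesses.append((radii, list(mults)))
    return guesses
```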

Now we show some properties of the output size, runtime, and correctness of the procedure. First, we note the following observation about the number of distinct radii in each RiR.

Observation 15.

Each $R_i\in R$ in the output of Algorithm 5 has $O(\log_{1+\epsilon}\frac{k}{\epsilon})$ distinct radii.

Proof.

Consider the radii vector $R_i=((1+\epsilon)^m,\ldots,(1+\epsilon)^M)$. Since $R_i$ is formed by picking powers of $1+\epsilon$ from the range $[\frac{\epsilon r_{max}}{k},(1+\epsilon)r_{max}]$, the number of radii in $R_i$ is bounded by $M-m+1\le 1+\log_{1+\epsilon}\frac{(1+\epsilon)r_{max}}{\epsilon r_{max}/k}=O(\log_{1+\epsilon}\frac{k}{\epsilon})$.

Furthermore, we have the following observation about the size of K, which will help us bound the runtime of both Algorithm 5 and some other algorithms.

Observation 16.

The size of K is bounded by $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$.

Proof.

First, note that there are a total of $n^2$ choices for $r_{max}$. Next, we consider the number of distinct multiplicity vectors $K_{ij}=(k_m,\ldots,k_M)$ that are computed for each $r_{max}$. By Observation 15 we have that $M-m+1=|R_i|$ is bounded by $O(\log_{1+\epsilon}\frac{k}{\epsilon})$. Since each $k_t\le k$, there are $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}$ possible vectors $K_{ij}$ for each choice of $r_{max}$. It follows that there are a total of $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$ vectors in K.

Since Algorithm 5 enumerates each entry of K, the next lemma follows.

Lemma 17.

Algorithm 5 runs in $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$ time.

Next, we show that for any clustering, Algorithm 5 outputs a solution with a correct guess.

Lemma 18.

For any k-MSR clustering with radii $r_1,\ldots,r_k$, there are vectors $R_i=(r'_1,\ldots,r'_l)$ in R and $K_{ij}=(k_1,\ldots,k_l)$ in K for some i,j such that there exist exactly $k_t$ indices $\kappa\in\{1,\ldots,k\}$ with $r_\kappa\le r'_t<(1+\epsilon)r_\kappa$ for $2\le t\le l$, and $k_1$ indices $\kappa\in\{1,\ldots,k\}$ with $r_\kappa\le r'_1$. Moreover, $\sum_{t=1}^l k_t r'_t\le(1+\epsilon)\sum_{i=1}^k r_i$.

Proof.

Consider the choice of $r_{max}$ made in Algorithm 5 such that $r_{max}=\max_{i=1}^k r_i$. Also consider the vector $R_i=(r'_1,\ldots,r'_l)$ in R corresponding to this choice. By definition, $r'_l\le(1+\epsilon)r_{max}$ and $r'_1\le(1+\epsilon)\frac{\epsilon r_{max}}{k}$. Define $k_1$ to be the number of radii in $\{r_1,\ldots,r_k\}$ that are at most $r'_1$. For $2\le t\le l$, define $k_t$ to be the number of radii in $\{r_1,\ldots,r_k\}$ that lie in $(r'_{t-1},r'_t]$. By definition, $\sum_{t=1}^l k_t=k$, and so $(k_1,\ldots,k_l)$ is a vector in K which is considered w.r.t. $r_{max}$. Also, by the definition of $k_t$ for $2\le t\le l$, there are $k_t$ indices $\kappa\in\{1,\ldots,k\}$ with $r_\kappa\in(r'_{t-1},r'_t]$, i.e., $r_\kappa\le r'_t<(1+\epsilon)r_\kappa$, as desired.

Next, we prove the moreover part. The sum of the $k_1$ radii $r'_1$ is at most $k_1\cdot(1+\epsilon)\frac{\epsilon r_{max}}{k}\le 2\epsilon r_{max}\le 2\epsilon\sum_{i=1}^k r_i$. Also, for $2\le t\le l$, $k_t r'_t\le\sum_{\kappa:\, r_\kappa\in(r'_{t-1},r'_t]}(1+\epsilon)r_\kappa$. Summing over all $2\le t\le l$, we get $\sum_{t=2}^l k_t r'_t\le(1+\epsilon)\sum_{i=1}^k r_i$. Hence, $\sum_{t=1}^l k_t r'_t\le(1+3\epsilon)\sum_{i=1}^k r_i$. Scaling ϵ down by a factor of 3, we obtain the lemma.

Corollary 19.

For any optimal sum of radii clustering (vanilla or constrained) $\mathcal{C}$, there are vectors $R_i=(r'_1,\ldots,r'_l)$ in R and $K_{ij}=(k_1,\ldots,k_l)$ in K for some i,j such that there is a feasible (1+ϵ)-approximate clustering that uses $k_t$ balls of radius $r'_t$ for all $1\le t\le l$.

Proof.

Let $r_1,\ldots,r_k$ be the radii of an optimal clustering. Fix the vectors $R_i=(r'_1,\ldots,r'_l)$ and $K_{ij}=(k_1,\ldots,k_l)$ given by Lemma 18. We show that each cluster of $\mathcal{C}$ can be covered by a ball of radius $r'_\tau$ for some τ, such that $k_t$ balls of radius $r'_t$ are used for all $1\le t\le l$. As the clustering induced by these balls is essentially the same clustering $\mathcal{C}$, it remains feasible. To cover a cluster of $\mathcal{C}$ that uses a ball of radius $r_\kappa$: if $r_\kappa\le r'_1$, use the same center, but with radius $r'_1$; otherwise, as $\sum_{t=1}^l k_t=k$, there is $2\le\tau\le l$ with $r_\kappa\le r'_\tau<(1+\epsilon)r_\kappa$, and in this case use the same center, but with radius $r'_\tau$. By the correspondence of the radii $\{r_\kappa\}$ and $\{r'_t\}$ in Lemma 18, $k_t$ balls of radius $r'_t$ are used for all $1\le t\le l$. Lastly, as $\sum_{t=1}^l k_t r'_t\le(1+\epsilon)\sum_{i=1}^k r_i$, the new set of balls induces a (1+ϵ)-approximate clustering.

3.5 The Algorithm and its Analysis

With all of the procedures defined, we state the main algorithm. Let k>0 and ϵ>0 be given parameters; our algorithm is given as Algorithm 6.

Algorithm 6 Mergeable-Constraint-k-MSR(P,k,ϵ).
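As a rough end-to-end illustration, the following Python sketch wires together the earlier sketches in the order described above; `satisfies_constraint` is a hypothetical oracle for the given mergeable constraint, and none of the names are the paper's.

```python
# A hedged sketch of Algorithm 6 (Mergeable-Constraint-k-MSR), reusing the
# sketches vanilla_clustering, expand_balls, merge_balls, guess_radii, cost.
def mergeable_constraint_k_msr(P, k, eps, d, satisfies_constraint):
    best, best_cost = None, float("inf")
    for radii_v, mults_v in guess_radii(P, k, eps, d):      # vanilla guesses
        balls_v = vanilla_clustering(set(P), radii_v, mults_v, d)
        if balls_v is None:
            continue
        for radii_c, mults_c in guess_radii(P, k, eps, d):  # constrained guesses
            for expanded in expand_balls(balls_v, radii_c):
                balls = [(c, r) for (c, _), r in zip(balls_v, expanded)]
                phi = merge_balls(balls, P, d)
                if satisfies_constraint(phi):
                    c_cost = cost(phi, d)
                    if c_cost < best_cost:
                        best, best_cost = phi, c_cost
    return best
```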

We now analyze Algorithm 6, starting with its correctness and approximation factor. We have the following lemma.

Lemma 20.

Algorithm 6 produces a (4+ϵ)-approximate k-MSR clustering satisfying a given mergeable constraint.

Proof.

Let $OPT_v$ and $OPT_c$ be the respective optimal costs of vanilla k-MSR and of k-MSR with the given mergeable constraint. By Corollary 19, there are vectors $R_i=(r'_1,\ldots,r'_l)$ in R and $K_{ii'}=(k_1,\ldots,k_l)$ in K for some $i,i'$ such that there is a feasible (1+ϵ)-approximate vanilla clustering that uses $k_t$ balls of radius $r'_t$ for all $1\le t\le l$. Fix this choice of $R_i$ and $K_{ii'}$ in Line 2. Then by Lemma 4, Vanilla-Clustering$(P,R_i,K_{ii'})$ (Algorithm 1) outputs a feasible (2+2ϵ)-approximate vanilla clustering $(C_v,\phi_v)$. Let $c_1,\ldots,c_k$ be the centers in $C_v$. Similarly, by Corollary 19, there are vectors $R_j=(r''_1,\ldots,r''_l)$ in R and $K_{jj'}=(k'_1,\ldots,k'_l)$ in K for some $j,j'$ such that there is a feasible (1+ϵ)-approximate clustering $\mathcal{C}$ with the given mergeable constraint that uses $k'_t$ balls of radius $r''_t$ for all $1\le t\le l$. Fix this choice of $R_j$ and $K_{jj'}$ in Line 7. Then by Lemma 6, the output $R_{e_{jj'}}$ of Expand-Balls$(C_v,\phi_v,R_j,K_{jj'})$ (Algorithm 2), and hence $R_e$, contains a vector $(r_1^c,\ldots,r_k^c)$ such that each cluster of $\mathcal{C}$ is contained in $B(c_\mu,r_\mu^c)$ for some $1\le\mu\le k$. Moreover, by the same lemma, $\sum_{\mu=1}^k r_\mu^c\le\sum_{\tau=1}^k r_\tau^v+\sum_{t=1}^l k'_t r''_t\le(2+2\epsilon)OPT_v+(1+\epsilon)OPT_c\le(3+3\epsilon)OPT_c$. This is true, as the optimal vanilla clustering cost is at most the optimal cost of any constrained clustering.

Fix the above choice of $(r_1^c,\ldots,r_k^c)\in R_e$ in Line 10. Then $B=\{B(c_1,r_1^c),\ldots,B(c_k,r_k^c)\}$, and hence by Lemma 9, the clustering $(C_j,\phi_j)$ returned by Merge-Balls(B) (Algorithm 3) satisfies the mergeable constraint. Moreover, by Corollary 12, the cost of $(C_j,\phi_j)$ is at most $\frac{4}{3}\sum_{\mu=1}^k r_\mu^c\le\frac{4}{3}(3+3\epsilon)OPT_c=(4+4\epsilon)OPT_c$. Scaling ϵ by a constant, we obtain the lemma.

Now we analyze the runtime of Algorithm 6. We have the following lemma.

Lemma 21.

Algorithm 6 runs in $2^{O(\frac{k}{\epsilon}\log^2\frac{k}{\epsilon})}\, n^4$ time.

Proof.

We first bound the runtime of each part of the algorithm. By Lemma 17, the Guess-Radii procedure has a runtime of $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$. Next, by Observation 16, Guess-Radii produces $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$ vectors in K. By Lemma 5, the Vanilla-Clustering procedure runs in $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n$ time; since we run Vanilla-Clustering on each pair $R_i\in R$ and $K_{ii'}\in K$, this step of the algorithm has a runtime of

$k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2\cdot(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n = k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^3.$

Moving on to the expansion step, by Lemma 8 the Expand-Balls procedure runs in $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k$ time. Also, Expand-Balls runs for each of the $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2$ pairs $R_j\in R$ and $K_{jj'}\in K$, resulting in a runtime of $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^2$. The next step of the algorithm, Merge-Balls, runs in $O(kn^2)$ time by Lemma 10. We also showed in Lemma 7 that Expand-Balls outputs $(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k$ guesses per pair $R_j\in R$ and $K_{jj'}\in K$, so the total number of guesses for expanded radii is bounded by $k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^2$. Since Algorithm 6 runs Merge-Balls for each vector of guessed radii in $R_e$, this step has a runtime of

$O(kn^2)\cdot k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^2 = k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^4.$

Adding everything together, we have that Algorithm 6 runs in time

$k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\, n^2 + k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^3 + k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^2 + k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^4 = k^{O(\log_{1+\epsilon}\frac{k}{\epsilon})}\,(O(\log_{1+\epsilon}\frac{k}{\epsilon}))^k\, n^4 = 2^{O(\frac{k}{\epsilon}\log^2\frac{k}{\epsilon})}\, n^4.$

Lemma 20 and Lemma 21 together yield Theorem 1.

Theorem 1. [Restated, see original statement.]

For any ϵ>0, there is a (4+ϵ)-approximation algorithm for the k-MSR problem under mergeable constraints that runs in $2^{O(\frac{k}{\epsilon}\log^2\frac{k}{\epsilon})}\, n^4$ time.

4 Hardness of Clustering with Mergeable Constraints

Definition 22 (Fair Representational Clustering).

We are given $\ell$ disjoint groups of points, denoted $P_1,\ldots,P_\ell$, with a total of n points, along with a distance metric d. Each group $P_i$ is associated with fairness parameters $\alpha_i,\beta_i\in[0,1]$, for $1\le i\le\ell$. A clustering is said to be fair representational if, within every cluster, the proportion of points from group i lies between $\alpha_i$ and $\beta_i$, for all i. The objective is to find a fair representational clustering with k clusters that minimizes the sum of the radii of the clusters.

As shown by Carta et al. [11], the fair representational constraint is mergeable. We prove that Fair Representational Clustering cannot be solved in time $f(k)\, n^{o(k)}$ unless ETH is false. Specifically, we show a reduction from the Dominating set problem.

Given an instance of Dominating set consisting of an n-vertex graph G=(V,E) and a parameter k, we construct a new graph $G'$. Initially, $G'=G$. Add two sets $V_1,V_2$ of k vertices each to $G'$, and add the edges $\{v,v_i\},\{v,v_j\}$ for all $v\in V$, $v_i\in V_1$, $v_j\in V_2$. The instance of Fair Representational Clustering is defined as follows: $\ell=3$, $P_1=V$, $P_2=V_1$, $P_3=V_2$, d is the shortest-path metric in $G'$, $\alpha_1=1/3$, $\beta_1=1$, and for $i\in\{2,3\}$, $\alpha_i=1/(n+2)$ and $\beta_i=1$.
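The following Python sketch illustrates this construction, under the assumption that the Dominating set instance G is given as an adjacency dictionary over hashable vertices; the added vertices ("a", i)/("b", i) and all helper names are ours.

```python
# A hedged sketch of the reduction from Dominating set.
def build_fair_instance(G, k):
    V = list(G)
    n = len(V)
    V1 = [("a", i) for i in range(k)]  # the two added groups of k vertices
    V2 = [("b", i) for i in range(k)]
    Gp = {u: set(G[u]) for u in V}     # G' starts as a copy of G
    for w in V1 + V2:
        Gp[w] = set()
    for v in V:                        # connect every original vertex to
        for w in V1 + V2:              # all of V1 and V2
            Gp[v].add(w)
            Gp[w].add(v)
    groups = {1: V, 2: V1, 3: V2}      # P1 = V, P2 = V1, P3 = V2
    alpha = {1: 1 / 3, 2: 1 / (n + 2), 3: 1 / (n + 2)}
    beta = {1: 1.0, 2: 1.0, 3: 1.0}
    # d is taken to be the shortest-path metric of Gp (e.g., BFS per vertex).
    return Gp, groups, alpha, beta
```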

Lemma 23.

Dominating set admits a solution of size k if and only if there is a fair representational clustering of $\bigcup_{i=1}^3 P_i$ with k clusters of cost k.

Proof.

The forward direction is easy to prove. Suppose Dominating set admits a solution $V'=\{v_1,v_2,\ldots,v_k\}$. We construct a clustering (C,ϕ) of $\bigcup_{i=1}^3 P_i$ as follows. Use the points $\{v_1,v_2,\ldots,v_k\}$ in $P_1$ as centers. For each $v\in P_1$, assign v to a center $v_j$ that dominates v. Assign the j-th point of $P_2$ and the j-th point of $P_3$ to $v_j$. It is easy to verify that each cluster has radius 1, so the total cost is k. Also, in each cluster, the proportion of points of $P_1$ is at least 1/3, the proportion of points of $P_2$ is at least 1/(n+2), and the same holds for $P_3$. So, (C,ϕ) is a fair representational clustering.

Now, suppose there is a fair representational clustering of $\bigcup_{i=1}^3 P_i$ with k clusters of cost k. First, we argue that each cluster has a non-zero radius, as otherwise it is a singleton cluster, which is not fair. Since the radii are integers in the shortest-path metric and there are k clusters of total cost k, this implies that the radius of each cluster is exactly 1. Now, each cluster center must be in $P_1=V$, as the distance between any point $u\in P_2$ and any point $u'\in P_3$ is 2, so a radius-1 cluster centered in $P_2$ or $P_3$ would contain no point of the other group, excluding such points from being the center of a fair cluster of radius 1. It follows that any point $v\in P_1$ is at distance at most 1 from one of the k cluster centers. Hence, these centers, which are vertices of V, form a dominating set of size k.

Since it is not possible to solve Dominating set in $f(k)\, n^{o(k)}$ time unless ETH is false [17], we obtain Theorem 2.

Theorem 2. [Restated, see original statement.]

Fair Representational Clustering cannot be solved in time $f(k)\, n^{o(k)}$ unless ETH is false.

References

  • [1] Sara Ahmadian and Chaitanya Swamy. Approximation algorithms for clustering problems with lower bounds and outliers. In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi, editors, 43rd International Colloquium on Automata, Languages, and Programming, ICALP, volume 55 of LIPIcs, pages 69:1–69:15. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2016. doi:10.4230/LIPICS.ICALP.2016.69.
  • [2] Sara Ahmadian and Chaitanya Swamy. Approximation Algorithms for Clustering Problems with Lower Bounds and Outliers. In Ioannis Chatzigiannakis, Michael Mitzenmacher, Yuval Rabani, and Davide Sangiorgi, editors, 43rd International Colloquium on Automata, Languages, and Programming (ICALP 2016), volume 55 of Leibniz International Proceedings in Informatics (LIPIcs), pages 69:1–69:15, Dagstuhl, Germany, 2016. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ICALP.2016.69.
  • [3] Vijay Arya, Naveen Garg, Rohit Khandekar, Adam Meyerson, Kamesh Munagala, and Vinayaka Pandit. Local search heuristics for k-median and facility location problems. SIAM J. Comput., 33(3):544–562, 2004. doi:10.1137/S0097539702416402.
  • [4] Sayan Bandyapadhyay, Eden Chlamtáč, Yury Makarychev, and Ali Vakilian. A polynomial-time approximation for pairwise fair k-median clustering. arXiv preprint arXiv:2405.10378, 2024. doi:10.48550/arXiv.2405.10378.
  • [5] Sayan Bandyapadhyay, William Lochet, and Saket Saurabh. FPT constant-approximations for capacitated clustering to minimize the sum of cluster radii. In Erin W. Chambers and Joachim Gudmundsson, editors, 39th International Symposium on Computational Geometry, SoCG 2023, June 12-15, 2023, Dallas, Texas, USA, volume 258 of LIPIcs, pages 12:1–12:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2023. doi:10.4230/LIPICS.SOCG.2023.12.
  • [6] Sandip Banerjee, Yair Bartal, Lee-Ad Gottlieb, and Alon Hovav. Improved fixed-parameter bounds for min-sum-radii and diameters k-clustering and their fair variants. Proceedings of the AAAI Conference on Artificial Intelligence, 39(15):15481–15488, April 2025. doi:10.1609/aaai.v39i15.33699.
  • [7] Suman Bera, Deeparnab Chakrabarty, Nicolas Flores, and Maryam Negahbani. Fair algorithms for clustering. In Advances in Neural Information Processing Systems, pages 4954–4965, 2019.
  • [8] Ioana O Bercea, Martin Groß, Samir Khuller, Aounon Kumar, Clemens Rösner, Daniel R Schmidt, and Melanie Schmidt. On the cost of essentially fair clusterings. In Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques (APPROX/RANDOM 2019). Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2019. doi:10.4230/LIPIcs.APPROX-RANDOM.2019.18.
  • [9] Moritz Buchem, Katja Ettmayr, Hugo KK Rosado, and Andreas Wiese. A (3+ϵ)-approximation algorithm for the minimum sum of radii problem with outliers and extensions for generalized lower bounds. In Proceedings of the 2024 Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1738–1765. SIAM, 2024.
  • [10] Lena Carta, Lukas Drexler, Annika Hennes, Clemens Rösner, and Melanie Schmidt. FPT Approximations for Fair k-Min-Sum-Radii. In Julián Mestre and Anthony Wirth, editors, 35th International Symposium on Algorithms and Computation (ISAAC 2024), volume 322 of Leibniz International Proceedings in Informatics (LIPIcs), pages 16:1–16:18, Dagstuhl, Germany, 2024. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ISAAC.2024.16.
  • [11] Lena Carta, Lukas Drexler, Annika Hennes, Clemens Rösner, and Melanie Schmidt. Fpt approximations for fair k-min-sum-radii, 2024. doi:10.48550/arXiv.2410.00598.
  • [12] Moses Charikar and Rina Panigrahy. Clustering to minimize the sum of cluster diameters. J. Comput. Syst. Sci., 68(2):417–441, 2004. doi:10.1016/j.jcss.2003.07.014.
  • [13] Xianrun Chen, Dachuan Xu, Yicheng Xu, and Yong Zhang. Parameterized approximation algorithms for sum of radii clustering and variants. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 38, pages 20666–20673, 2024. doi:10.1609/AAAI.V38I18.30053.
  • [14] Anshuman Chhabra, Karina Masalkovaitė, and Prasant Mohapatra. An overview of fairness in clustering. IEEE Access, 2021. doi:10.1109/ACCESS.2021.3114099.
  • [15] Flavio Chierichetti, Ravi Kumar, Silvio Lattanzi, and Sergei Vassilvitskii. Fair clustering through fairlets. In Advances in Neural Information Processing Systems, pages 5029–5037, 2017. URL: https://proceedings.neurips.cc/paper/2017/hash/978fce5bcc4eccc88ad48ce3914124a2-Abstract.html.
  • [16] Vincent Cohen-Addad, Anupam Gupta, Lunjia Hu, Hoon Oh, and David Saulpic. An improved local search algorithm for k-median, 2021. arXiv:2111.04589.
  • [17] Marek Cygan, Fedor V Fomin, Łukasz Kowalik, Daniel Lokshtanov, Dániel Marx, Marcin Pilipczuk, Michał Pilipczuk, and Saket Saurabh. Parameterized algorithms, volume 5. Springer, 2015. doi:10.1007/978-3-319-21275-3.
  • [18] Lukas Drexler, Annika Hennes, Abhiruk Lahiri, Melanie Schmidt, and Julian Wargalla. Approximating fair k-min-sum-radii in euclidean space. In International Workshop on Approximation and Online Algorithms, pages 119–133. Springer, 2023. doi:10.1007/978-3-031-49815-2_9.
  • [19] Arnold Filtser and Ameet Gadekar. Fpt approximations for capacitated sum of radii and diameters. arXiv preprint arXiv:2409.04984, 2024. doi:10.48550/arXiv.2409.04984.
  • [20] Zachary Friggstad and Mahya Jamshidian. Improved polynomial-time approximations for clustering with minimum sum of radii or diameters. In Shiri Chechik, Gonzalo Navarro, Eva Rotenberg, and Grzegorz Herman, editors, 30th Annual European Symposium on Algorithms, ESA 2022, September 5-9, 2022, Berlin/Potsdam, Germany, volume 244 of LIPIcs, pages 56:1–56:14. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2022. doi:10.4230/LIPIcs.ESA.2022.56.
  • [21] Matt Gibson, Gaurav Kanade, Erik Krohn, Imran A. Pirwani, and Kasturi R. Varadarajan. On clustering to minimize the sum of radii. SIAM J. Comput., 41(1):47–60, 2012. doi:10.1137/100798144.
  • [22] Teofilo F Gonzalez. Clustering to minimize the maximum intercluster distance. Theoretical Computer Science, 38:293–306, 1985. doi:10.1016/0304-3975(85)90224-5.
  • [23] Dorit S Hochbaum and David B Shmoys. A best possible heuristic for the k-center problem. Mathematics of operations research, 10(2):180–184, 1985. doi:10.1287/MOOR.10.2.180.
  • [24] Tanmay Inamdar and Kasturi Varadarajan. Capacitated Sum-Of-Radii Clustering: An FPT Approximation. In Fabrizio Grandoni, Grzegorz Herman, and Peter Sanders, editors, 28th Annual European Symposium on Algorithms (ESA 2020), volume 173 of Leibniz International Proceedings in Informatics (LIPIcs), pages 62:1–62:17, Dagstuhl, Germany, 2020. Schloss Dagstuhl – Leibniz-Zentrum für Informatik. doi:10.4230/LIPIcs.ESA.2020.62.
  • [25] Ragesh Jaiswal, Amit Kumar, and Jatin Yadav. FPT approximation for capacitated sum of radii. In Venkatesan Guruswami, editor, 15th Innovations in Theoretical Computer Science Conference, ITCS 2024, January 30 to February 2, 2024, Berkeley, CA, USA, volume 287 of LIPIcs, pages 65:1–65:21. Schloss Dagstuhl – Leibniz-Zentrum für Informatik, 2024. doi:10.4230/LIPICS.ITCS.2024.65.